JAVA/DOT NET PROJECTS ABSTRACT 2016-2017
A PERFORMANCE EVALUATION OF MACHINE LEARNING-BASED STREAMING SPAM TWEET DETECTION

ABSTRACT:
The popularity of Twitter attracts more and more spammers. Spammers send unwanted tweets to Twitter users to promote websites or services, which are harmful to normal users. In order to stop spammers, researchers have proposed a number of mechanisms, and the focus of recent works is on the application of machine learning techniques to Twitter spam detection. However, tweets are retrieved in a streaming way, and Twitter provides the Streaming API for developers and researchers to access public tweets in real time; a performance evaluation of existing machine learning-based streaming spam detection methods is still lacking. In this paper, we bridge this gap by carrying out a performance evaluation from three different aspects: data, feature, and model. A large ground truth of over 600 million public tweets was created by using a commercial URL-based security tool. For real-time spam detection, we further extracted 12 lightweight features for tweet representation. Spam detection was then transformed into a binary classification problem in the feature space, which can be solved by conventional machine learning algorithms. We evaluated the impact of different factors on spam detection performance, including the spam-to-nonspam ratio, feature discretization, training data size, data sampling, time-related data, and machine learning algorithms. The results show that streaming spam tweet detection is still a big challenge and that a robust detection technique should take into account the three aspects of data, feature, and model.

SYSTEM ANALYSIS

Existing System:
Although there are a few works, such as [7] and [14], that are suitable for detecting streaming spam tweets, a performance evaluation of existing machine learning-based streaming spam detection methods is lacking. In this paper, we aim to bridge this gap by carrying out a performance evaluation from three different aspects: data, feature, and model. Other works apply an existing blacklisting service, such as Google Safe Browsing, to label spam tweets. Nevertheless, these services' API limits make it impossible to label a large number of tweets. In the real world, however, around 5% of all existing tweets on Twitter are spam.

Proposed System:
Consequently, the research community, as well as Twitter itself, has proposed spam detection schemes to make Twitter a spam-free platform. For instance, Twitter has applied some "Twitter rules" to suspend accounts that behave abnormally. Accounts that frequently request to be friends with others, send duplicate content, mention other users, or post URL-only content will be suspended by Twitter. Twitter users can also report a spammer to the official @spam account. To detect spam automatically, researchers have applied machine learning algorithms that treat spam detection as a classification problem. Most of these works classify whether a user is a spammer by relying on features that require the user's historical information or the existing social graph. For example, the feature "the fraction of the user's tweets containing a URL" must be retrieved from the user's tweet list, while features such as "average neighbors' tweets" and "distance" cannot be extracted without the built social graph.
However, Twitter data arrive in the form of a stream, and tweets come at very high speed. Although these methods are effective in detecting Twitter spam, they are not applicable to streaming spam tweets, because each streaming tweet does not carry the historical information or social graph needed for detection.

SYSTEM SPECIFICATION

Hardware Requirements:
• System : Pentium IV 2.4 GHz
• Hard Disk : 40 GB
• Floppy Drive : 1.44 MB
• Monitor : 14" Colour Monitor
• Mouse : Optical Mouse
• RAM : 512 MB

Software Requirements:
• Operating System : Windows 7 Ultimate
• Coding Language : ASP.Net with C#
• Front-End : Visual Studio 2010 Professional
• Database : SQL Server 2008

Conclusion:
In this paper, we provide a fundamental evaluation of ML algorithms for the detection of streaming spam tweets. In our evaluation, we found that classifiers' ability to detect Twitter spam is reduced in a near real-world scenario because the imbalanced data introduces bias. We also identified that feature discretization is an important preprocessing step for ML-based spam detection. Second, increasing the training data alone brings no further benefit after a certain number of training samples; more discriminative features or better models are needed to further improve the spam detection rate. Third, classifiers can detect more spam tweets when the training and testing tweets come from the same period of time. On this third point, we thoroughly analyzed, from three points of view, why classifiers' performance drops when training and testing data come from different days. We conclude that the performance decreases because the feature distribution of later days' data changes, whereas the distribution of the training dataset stays the same. This problem persists in streaming spam tweet detection, as new tweets keep arriving as a stream while the training dataset is not updated.
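The abstract above does not enumerate the 12 lightweight features, so the feature names below (account age, URL count, and so on) are hypothetical placeholders. The following Java sketch only illustrates the general idea of representing a single tweet as a per-tweet feature vector (no historical data or social graph) and making a binary spam/non-spam decision with a generic linear model; it is not the paper's actual feature set or any of its evaluated classifiers.

// Illustrative sketch only: the paper's 12 lightweight features are not listed in
// this abstract, so accountAgeDays, numUrls, etc. are hypothetical placeholders.
public class SpamTweetClassifierSketch {

    // A tweet represented by a few lightweight, per-tweet features
    // (no historical information or social graph required).
    static class TweetFeatures {
        double accountAgeDays;   // hypothetical feature
        double numUrls;          // hypothetical feature
        double numHashtags;      // hypothetical feature
        double numMentions;      // hypothetical feature

        double[] toVector() {
            return new double[] { accountAgeDays, numUrls, numHashtags, numMentions };
        }
    }

    // A toy linear model standing in for any conventional ML classifier
    // (the paper evaluates several off-the-shelf algorithms).
    static boolean isSpam(double[] x, double[] weights, double bias) {
        double score = bias;
        for (int i = 0; i < x.length; i++) {
            score += weights[i] * x[i];
        }
        return score > 0.0;      // binary decision: spam vs. non-spam
    }

    public static void main(String[] args) {
        TweetFeatures t = new TweetFeatures();
        t.accountAgeDays = 2; t.numUrls = 3; t.numHashtags = 5; t.numMentions = 8;
        double[] w = { -0.01, 0.8, 0.3, 0.2 };   // made-up weights for illustration
        System.out.println("spam? " + isSpam(t.toVector(), w, -1.0));
    }
}

Factors evaluated in the paper, such as the spam-to-nonspam ratio and feature discretization, would be varied around exactly this kind of feature-space classifier.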
JAVA/DOT NET PROJECTS ABSTRACT 2016-2017
MINING USER-AWARE RARE SEQUENTIAL TOPIC PATTERNS IN DOCUMENT STREAMS

ABSTRACT:
Textual documents created and distributed on the Internet are ever changing in various forms. Most existing works are devoted to topic modeling and the evolution of individual topics, while sequential relations among topics in successive documents published by a specific user are ignored. In this paper, in order to characterize and detect personalized and abnormal behaviors of Internet users, we propose Sequential Topic Patterns (STPs) and formulate the problem of mining User-aware Rare Sequential Topic Patterns (URSTPs) in document streams on the Internet. These patterns are rare on the whole but relatively frequent for specific users, so they can be applied in many real-life scenarios, such as real-time monitoring of abnormal user behaviors. We present a group of algorithms to solve this mining problem through three phases: preprocessing to extract probabilistic topics and identify sessions for different users, generating all the STP candidates with (expected) support values for each user by pattern growth, and selecting URSTPs by performing user-aware rarity analysis on the derived STPs. Experiments on both real (Twitter) and synthetic datasets show that our approach can indeed discover special users and interpretable URSTPs effectively and efficiently, significantly reflecting users' characteristics.

EXISTING SYSTEMS:
Most existing works are devoted to topic modeling and the evolution of individual topics, while sequential relations among topics in successive documents published by a specific user are ignored. Taking advantage of the topics extracted from document streams, most existing works analyzed the evolution of individual topics to detect and predict social events as well as user behaviors. However, few studies paid attention to the correlations among different topics appearing in successive documents published by a specific user, so some hidden but significant information revealing personalized behaviors has been neglected. Correspondingly, unsupervised mining algorithms for this kind of rare pattern need to be designed differently from existing frequent pattern mining algorithms. Most existing works on sequential pattern mining focus on frequent patterns, but for STPs, many infrequent ones are also interesting and should be discovered.

PROPOSED SYSTEMS:
In order to characterize and detect personalized and abnormal behaviors of Internet users, we propose Sequential Topic Patterns (STPs) and formulate the problem of mining User-aware Rare Sequential Topic Patterns (URSTPs) in document streams on the Internet. To characterize user behaviors in published document streams, we study the correlations among topics extracted from these documents, especially their sequential relations, and specify them as Sequential Topic Patterns (STPs). Each STP records the complete and repeated behavior of a user as she publishes a series of documents. Topic mining in document collections has been extensively studied in the literature; the Topic Detection and Tracking (TDT) task aimed to detect and track topics (events) in news streams with clustering-based techniques on keywords.
The experiments conducted on both real (Twitter) and synthetic datasets demonstrate that the proposed approach is very effective and efficient in discovering special users as well as interesting and interpretable URSTPs from Internet document streams, which can well capture users' personalized and abnormal behaviors and characteristics.

ADVANTAGES:
Taking advantage of the extracted topics in document streams, most existing works analyzed the evolution of individual topics to detect and predict social events as well as user behaviors. In order to find significant STPs, a document stream should be divided into independent sessions in advance, according to the definition. In the sketch map of session identification, each ellipse represents a session, and all the sessions in each line constitute a document subsequence for a specific user. We can conclude that the two algorithms have their respective advantages; which one is appropriate for a real task reflects a trade-off between mining accuracy and execution speed, and should depend on the specific requirements of the application scenario. A sketch of the session-identification step appears after the requirements below.

HARDWARE REQUIREMENTS:
• Hardware - Pentium
• Speed - 1.1 GHz
• RAM - 1 GB
• Hard Disk - 20 GB
• Floppy Drive - 1.44 MB
• Key Board - Standard Windows Keyboard
• Mouse - Two or Three Button Mouse
• Monitor - SVGA

SOFTWARE REQUIREMENTS:
• Operating System : Windows
• Technology : Java and J2EE
• Web Technologies : HTML, JavaScript, CSS
• IDE : MyEclipse
• Web Server : Tomcat
• Tool kit : Android Phone
• Database : MySQL
• Java Version : J2SDK 1.5

CONCLUSION:
Mining URSTPs in published document streams on the Internet is a significant and challenging problem. It formulates a new kind of complex event pattern based on document topics and has wide potential application scenarios, such as real-time monitoring of abnormal behaviors of Internet users. In this paper, several new concepts and the mining problem are formally defined, and a group of algorithms are designed and combined to systematically solve this problem. The experiments conducted on both real (Twitter) and synthetic datasets demonstrate that the proposed approach is very effective and efficient in discovering special users as well as interesting and interpretable URSTPs from Internet document streams, which can well capture users' personalized and abnormal behaviors and characteristics. As this paper puts forward an innovative research direction on Web data mining, much future work can be built on it.
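As a rough illustration of the session-identification phase mentioned above, the sketch below groups one user's topic-labeled documents into sessions using a simple time-gap rule. The exact session definition used by the paper is not given in this abstract, so the gap threshold and the TopicDoc representation are assumptions for illustration only.

// Minimal sketch of session identification, assuming a simple time-gap rule
// (the abstract does not spell out the paper's exact session definition).
import java.util.ArrayList;
import java.util.List;

public class SessionIdentificationSketch {

    // A published document reduced to its timestamp and dominant topic id.
    record TopicDoc(long timestampMillis, int topicId) {}

    // Split one user's chronologically ordered documents into sessions:
    // a new session starts whenever the gap to the previous document
    // exceeds maxGapMillis (hypothetical parameter).
    static List<List<TopicDoc>> identifySessions(List<TopicDoc> userDocs, long maxGapMillis) {
        List<List<TopicDoc>> sessions = new ArrayList<>();
        List<TopicDoc> current = new ArrayList<>();
        for (TopicDoc doc : userDocs) {
            if (!current.isEmpty()
                    && doc.timestampMillis() - current.get(current.size() - 1).timestampMillis() > maxGapMillis) {
                sessions.add(current);
                current = new ArrayList<>();
            }
            current.add(doc);
        }
        if (!current.isEmpty()) sessions.add(current);
        return sessions;
    }

    public static void main(String[] args) {
        List<TopicDoc> docs = List.of(
                new TopicDoc(0, 1), new TopicDoc(60_000, 2),             // first session
                new TopicDoc(7_200_000, 3), new TopicDoc(7_260_000, 1)); // second session
        System.out.println(identifySessions(docs, 3_600_000));           // 1-hour gap threshold
    }
}

Each resulting session corresponds to one ellipse in the sketch map described above, and the list of a user's sessions forms the document subsequence on which STP candidates are grown.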
JAVA/DOT NET PROJECTS ABSTRACT 2016-2017
PUBLICLY VERIFIABLE INNER PRODUCT EVALUATION OVER OUTSOURCED DATA STREAMS UNDER MULTIPLE KEYS

ABSTRACT:
Uploading data streams to a resource-rich cloud server for inner product evaluation, an essential building block in many popular stream applications (e.g., statistical monitoring), is appealing to many companies and individuals. On the other hand, verifying the result of the remote computation plays a crucial role in addressing the issue of trust. Since the outsourced data collection likely comes from multiple data sources, it is desirable for the system to be able to pinpoint the originator of errors by allotting each data source a unique secret key, which requires the inner product verification to be performed under any two parties' different keys. However, present solutions either depend on a single-key assumption or on powerful yet practically inefficient fully homomorphic cryptosystems. In this paper, we focus on the more challenging multi-key scenario where data streams are uploaded by multiple data sources with distinct keys. We first present a novel homomorphic verifiable tag technique to publicly verify the outsourced inner product computation on dynamic data streams, and then extend it to support the verification of matrix product computation. We prove the security of our scheme in the random oracle model. Moreover, the experimental results also show the practicability of our design.

SYSTEM ANALYSIS

Existing System:
Compared with existing works under the single-key setting, our scheme aims at the more challenging multi-key scenario, i.e., it allows multiple data sources with different secret keys to upload their endless data streams and delegate the corresponding computations to a third-party server, while traceability can still be provided on demand. Furthermore, any keyless client is able to publicly verify the validity of the returned computation result. Security analysis shows that our scheme is provably secure under the CDH assumption in the random oracle model.

Proposed System:
A realization of homomorphic signatures for bounded constant-degree polynomials based on hard problems on ideal lattices has been proposed. Although not all of the above schemes are explicitly presented in the context of streaming data, they can be applied there under a single-key setting. In this scenario, the data source continually generates and outsources authenticated data values to a third-party server; however, the outsourced data have to be fixed a priori. Another interesting line of work considers a different setting for verifiable computation, in which clients are only allowed to query the server for the summation of grouped data specified by the data source. A scheme for outsourced computations including group-by sum, inner product, and matrix product with private verifiability has also been considered.

SYSTEM SPECIFICATION

Hardware Requirements:
• System : Pentium IV 2.4 GHz
• Hard Disk : 40 GB
• Floppy Drive : 1.44 MB
• Monitor : 14" Colour Monitor
• Mouse : Optical Mouse
• RAM : 512 MB

Software Requirements:
• Operating System : Windows 7 Ultimate
• Coding Language : ASP.Net with C#
• Front-End : Visual Studio 2010 Professional
• Database : SQL Server 2008

Conclusion:
In this paper, we introduce a novel homomorphic verifiable tag technique and design an efficient, publicly verifiable inner product computation scheme for dynamic outsourced data streams under multiple keys. We also extend the inner product scheme to support matrix products.
Compared with existing works under the single-key setting, our scheme aims at the more challenging multi-key scenario, i.e., it allows multiple data sources with different secret keys to upload their endless data streams and delegate the corresponding computations to a third-party server, while traceability can still be provided on demand. Furthermore, any keyless client is able to publicly verify the validity of the returned computation result. Security analysis shows that our scheme is provably secure under the CDH assumption in the random oracle model. Experimental results demonstrate that our protocol is practically efficient in terms of both communication and computation cost.
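To make the data flow concrete, the following sketch only shows the roles involved: data sources attach tags to their stream values, the server computes the requested inner product, and a keyless verifier checks the claimed result. It does not implement the paper's homomorphic verifiable tags or the CDH-based security argument; the Tag type, the publiclyVerify stub, and all values are placeholders.

// Data-flow sketch only: the scheme's homomorphic verifiable tags and the
// actual public verification are NOT implemented here; Tag is a placeholder.
import java.util.List;

public class InnerProductOutsourcingSketch {

    // Placeholder for the per-source authentication tag attached to each value.
    record Tag(byte[] bytes) {}

    // A stream element as uploaded by one data source: value plus its tag.
    record AuthenticatedValue(long value, Tag tag) {}

    // Server side: compute the requested inner product of two uploaded streams.
    static long innerProduct(List<AuthenticatedValue> a, List<AuthenticatedValue> b) {
        long sum = 0;
        for (int i = 0; i < a.size(); i++) {
            sum += a.get(i).value() * b.get(i).value();
        }
        return sum;
    }

    // Verifier side: in the real scheme a keyless client checks the result
    // against an aggregate of the two sources' tags; here it is only a stub
    // that recomputes directly (which the actual scheme avoids).
    static boolean publiclyVerify(long claimedResult, List<AuthenticatedValue> a, List<AuthenticatedValue> b) {
        return claimedResult == innerProduct(a, b);
    }

    public static void main(String[] args) {
        Tag dummy = new Tag(new byte[0]);   // placeholder tag
        List<AuthenticatedValue> a = List.of(new AuthenticatedValue(3, dummy), new AuthenticatedValue(5, dummy));
        List<AuthenticatedValue> b = List.of(new AuthenticatedValue(2, dummy), new AuthenticatedValue(4, dummy));
        long result = innerProduct(a, b);   // server computes 3*2 + 5*4 = 26
        System.out.println("verified: " + publiclyVerify(result, a, b));
    }
}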
IEEE 2016-2017 BIG DATA / ANDROID / DOTNET / JAVA TITLES

BIG DATA
1. FiDoop: Parallel Mining of Frequent Itemsets Using MapReduce
2. Self-Healing in Mobile Networks with Big Data

ANDROID
1. An Exploration of Geographic Authentication Schemes
2. Intelligent Hands Free Speech based SMS System on Android
3. PassBYOP: Bring Your Own Picture for Securing Graphical Passwords
4. Privacy-Preserving Location Sharing Services for Social Networks
5. SBVLC: Secure Barcode-based Visible Light Communication for Smartphones
6. A Shoulder Surfing Resistant Graphical Authentication System
7. A Cloud-Based Smart-Parking System Based on Internet-of-Things Technologies
8. STAMP: Enabling Privacy-Preserving Location Proofs for Mobile Users
9. Understanding Smartphone Sensor and App Data for Enhancing the Security of Secret Questions

.NET
1. Attribute-based Access Control with Constant-size Ciphertext in Cloud Computing
2. Attribute-Based Data Sharing Scheme Revisited in Cloud Computing
3. Catch You if You Misbehave: Ranked Keyword Search Results Verification in Cloud Computing
4. CDStore: Toward Reliable, Secure, and Cost-Efficient Cloud Storage via Convergent Dispersal
5. Cloud Workflow Scheduling with Deadlines and Time Slot Availability
6. Dynamic and Public Auditing with Fair Arbitration for Cloud Data
7. Dynamic Proofs of Retrievability for Coded Cloud Storage Systems
8. Enabling Cloud Storage Auditing with Verifiable Outsourcing of Key Updates
9. Identity-Based Encryption with Cloud Revocation Authority and Its Applications
10. Identity-Based Proxy-Oriented Data Uploading and Remote Data Integrity Checking in Public Cloud
11. MMBcloud-tree: Authenticated Index for Verifiable Cloud Service Selection
12. Prioritization of Overflow Tasks to Improve Performance of Mobile Cloud
13. Providing User Security Guarantees in Public Infrastructure Clouds
14. Publicly Verifiable Inner Product Evaluation over Outsourced Data Streams under Multiple Keys
15. Reversible Data Hiding in Encrypted Images by Reversible Image Transformation
16. Searchable Attribute-Based Mechanism with Efficient Data Sharing for Secure Cloud Storage
17. Secure Data Sharing in Cloud Computing Using Revocable-Storage Identity-Based Encryption
18. Service Usage Classification with Encrypted Internet Traffic in Mobile Messaging Apps
19. Shadow Attacks based on Password Reuses: A Quantitative Empirical Analysis
20. A Performance Evaluation of Machine Learning-Based Streaming Spam Tweets Detection

JAVA
1. A Locality Sensitive Low-Rank Model for Image Tag Completion
2. A Shoulder Surfing Resistant Graphical Authentication System
3. DeyPoS: Deduplicatable Dynamic Proof of Storage for Multi-User Environments
4. Inverted Linear Quadtree: Efficient Top K Spatial Keyword Search
5. KSF-OABE: Outsourced Attribute-Based Encryption with Keyword Search Function for Cloud Storage
6. Mining User-Aware Rare Sequential Topic Patterns in Document Streams
7. Mitigating Cross-Site Scripting Attacks with a Content Security Policy
8. Practical Approximate k Nearest Neighbor Queries with Location and Query Privacy
9. Quality-Aware Subgraph Matching Over Inconsistent Probabilistic Graph Databases
10. SecRBAC: Secure Data in the Clouds
11. Tag Based Image Search by Social Re-ranking

CLOUD COMPUTING
1. Cost Minimization for Rule Caching in Software Defined Networking
2. Performance Enhancement of High-Availability Seamless Redundancy (HSR) Networks Using OpenFlow
3. Data Plane and Control Architectures for 5G Transport Networks
4. HBD: Towards Efficient Reactive Rule Dispatching in Software-Defined Networks
5. SDN-based Application Framework for Wireless Sensor and Actor Networks
6. Geo-Social Distance-based Data Dissemination for Socially Aware Networking
7. An Open-Source Wireless Mesh Networking Module for Environmental Monitoring
8. Hybrid IP/SDN Networking: Open Implementation and Experiment Management Tools
9. Software-Defined Networking (SDN) and Distributed Denial of Service (DDoS) Attacks in Cloud Computing Environments: A Survey, Some Research Issues, and Challenges
10. Cloud Computing-Based Forensic Analysis for Collaborative Network Security Management System

NETWORK SECURITY
1. Collaborative Network Security in Multi-Tenant Data Center for Cloud Computing

DATA MINING
1. Systematic Determination of Discrepancies Across Transient Stability Software Packages
2. Identification of Type 2 Diabetes Risk Factors Using Phenotypes Consisting of Anthropometry and Triglycerides based on Machine Learning
3. Teaching Network Security With IP Darkspace Data
4. A Survey of Data Mining and Machine Learning Methods for Cyber Security Intrusion Detection
5. Mining High Utility Patterns in One Phase without Generating Candidates
6. An Improved String-Searching Algorithm and Its Application in Component Security Testing
ANDROID PROJECT ABSTRACT 2016-2017
SBVLC: SECURE BARCODE-BASED VISIBLE LIGHT COMMUNICATION FOR SMARTPHONES

ABSTRACT:
2D barcodes have enjoyed a significant penetration rate in mobile applications. This is largely due to the extremely low barrier to adoption: almost every camera-enabled smartphone can scan 2D barcodes. As an alternative to NFC technology, 2D barcodes have been increasingly used for security-sensitive mobile applications, including mobile payments and personal identification. However, the security of barcode-based communication in mobile applications has not been systematically studied. Due to their visual nature, 2D barcodes are subject to eavesdropping when they are displayed on smartphone screens. On the other hand, the fundamental design principles of 2D barcodes make it difficult to add security features. In this paper, we propose SBVLC, a secure system for barcode-based visible light communication (VLC) between smartphones. We formally analyze the security of SBVLC based on geometric models and propose physical security enhancement mechanisms for barcode communication by manipulating screen view angles and leveraging user-induced motions. We then develop three secure data exchange schemes that encode information in barcode streams. These schemes are useful in many security-sensitive mobile applications, including private information sharing, secure device pairing, and contactless payment. SBVLC is evaluated through extensive experiments on both Android and iOS smartphones.
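As a simple illustration of encoding information in a barcode stream, the sketch below merely splits a payload into fixed-size chunks, each of which could be rendered as one 2D barcode frame shown in sequence on the screen. It implements none of SBVLC's security mechanisms (view-angle manipulation, user-induced motion, or the three secure exchange schemes), and the chunk size and example payload are arbitrary.

// Illustration only: chunking a payload into frames for a barcode stream.
// SBVLC's actual security mechanisms are not implemented here.
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BarcodeStreamSketch {

    // Split a payload into chunks; each chunk would become one barcode frame.
    static List<byte[]> toFrames(byte[] payload, int chunkSize) {
        List<byte[]> frames = new ArrayList<>();
        for (int off = 0; off < payload.length; off += chunkSize) {
            frames.add(Arrays.copyOfRange(payload, off, Math.min(off + chunkSize, payload.length)));
        }
        return frames;
    }

    public static void main(String[] args) {
        byte[] secret = "example payment token".getBytes(StandardCharsets.UTF_8);  // placeholder payload
        List<byte[]> frames = toFrames(secret, 8);   // 8-byte frames, arbitrary size
        System.out.println("frames to display: " + frames.size());
    }
}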
VLSI PROJECT ABSTRACT 2016-2017
A DYNAMICALLY RECONFIGURABLE MULTI-ASIP ARCHITECTURE FOR MULTISTANDARD AND MULTIMODE TURBO DECODING

ABSTRACT:
The multiplication of wireless communication standards is introducing the need for flexible and reconfigurable multistandard baseband receivers. In this context, multiprocessor turbo decoders have recently been developed in order to support the increasing flexibility and throughput requirements of emerging applications. However, these solutions do not sufficiently address reconfiguration performance issues, which can become a limiting factor in the future. This brief presents the design of a reconfigurable multiprocessor architecture for turbo decoding that achieves very fast reconfiguration without compromising decoding performance. The proposed architecture is analyzed for logic size, area, and power consumption using Xilinx 14.2.

Existing System:
The FlexiTreP ASIP supports both SBTC and DBTC for various standards and is configured through an interleaver memory, a program memory, and the dynamically reconfigurable channel code control. A reconfigurable multiprocessor approach for decoding multiple data streams in parallel was proposed; however, the configuration process of that platform is not described. A mixed XML/SystemC simulation model of the platform has been implemented, reaching a maximum throughput of 86 Mb/s, which does not satisfy the throughput requirements of recent communication standards. Furthermore, the latency aspect and the scalability of the configuration process for a higher number of processing elements (PEs) are not discussed. In fact, previous works provide an efficient way to reach the high-performance requirements of emerging standards, but the dynamic reconfiguration aspect of these platforms is only superficially addressed. Among the few works that consider this issue, we can cite a recent architecture in which solutions were proposed for the reconfiguration management of a NoC-based multiprocessor turbo/low-density parity-check (LDPC) decoder. Up to 35 PEs and up to 8 configuration buses have been implemented. However, this solution does not guarantee that the configuration process can be masked by the current decoding task; stopping the current processing to load the new configuration is then unavoidable and leads to a loss of decoding quality in terms of BER. To address these issues, this brief presents a novel dynamically reconfigurable turbo decoder providing an efficient and high-speed configuration process.

Proposed System:
The proposed dynamically reconfigurable UDec turbo decoder architecture is shown in Fig. 1. It consists of two rows of RDecASIPs interconnected via two butterfly-topology networks on chip (NoCs). Each row corresponds to a component decoder. In the example of Fig. 1, four ASIPs are organized into two component decoders, each built with two ASIPs. Within each component decoder, the ASIPs are connected by two 44-bit buses for boundary state metric exchange (not shown in Fig. 1). The RDecASIP implements the Max-Log-MAP algorithm. It supports both single and double binary convolutional turbo codes. Moreover, the sliding window technique is used: large frames are processed by dividing the frame into N windows, each with a maximum size of 64 symbols. Each ASIP can manage a maximum of 12 windows and can be configured through a 26 × 12 configuration memory.
The configuration memory contains all the parameters required to initialize the ASIP. Since the RDecASIP is designed to work in a multi-ASIP architecture as described, it requires several parameters to handle a subblock of the data frame and several parameters to configure the ASIP mode.

Advantages:
• High performance

Disadvantages:
• Performance is low

Software implementation:
• Modelsim
• Xilinx ISE
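Using only the figures quoted above (windows of at most 64 symbols and at most 12 windows per RDecASIP), the small sketch below estimates how many sliding windows and how many ASIPs a given frame would require. The frame length in the example is an illustrative value, not a figure taken from the brief.

// Back-of-envelope sketch using only the window and per-ASIP limits quoted above.
public class WindowPartitionSketch {

    static final int MAX_WINDOW_SYMBOLS = 64;    // max sliding-window size (from the text)
    static final int MAX_WINDOWS_PER_ASIP = 12;  // windows one RDecASIP can manage (from the text)

    // Number of windows needed to cover a frame of the given length.
    static int windowsNeeded(int frameSymbols) {
        return (frameSymbols + MAX_WINDOW_SYMBOLS - 1) / MAX_WINDOW_SYMBOLS;
    }

    // Number of ASIPs needed so that no ASIP handles more than 12 windows.
    static int asipsNeeded(int frameSymbols) {
        int windows = windowsNeeded(frameSymbols);
        return (windows + MAX_WINDOWS_PER_ASIP - 1) / MAX_WINDOWS_PER_ASIP;
    }

    public static void main(String[] args) {
        int frame = 6144;   // illustrative frame length in symbols, not from the brief
        System.out.println("windows: " + windowsNeeded(frame)
                + ", ASIPs needed: " + asipsNeeded(frame));
    }
}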