http://WWW.FINALYEARPROJECTS.NET
ANDROID PROJECT ABSTRACT 2016-2017

UNDERSTANDING SMARTPHONE SENSOR AND APP DATA FOR ENHANCING THE SECURITY OF SECRET QUESTIONS

ABSTRACT: Many web applications provide secondary authentication methods, i.e., secret questions (or password recovery questions), to reset the account password when a user's login fails. However, the answers to many such secret questions can be easily guessed by an acquaintance or exposed to a stranger who has access to public online tools (e.g., online social networks); moreover, a user may forget her/his answers long after creating the secret questions. Today's prevalence of smartphones gives us new opportunities to observe and understand how the personal data collected by smartphone sensors and apps can help create personalized secret questions without violating users' privacy. In this paper, we present a secret-question based authentication system, called "Secret-QA", that creates a set of secret questions on the basis of people's smartphone usage. We develop a prototype on Android smartphones and evaluate the security of the secret questions by asking the acquaintances/strangers who participate in our user study to guess the answers, with and without the help of online tools; meanwhile, we measure the questions' reliability by asking participants to answer their own questions. Our experimental results reveal that the secret questions related to motion sensors, calendar, app installation, and part of the legacy app usage history (e.g., phone calls) have the best memorability for users as well as the highest robustness to attacks.
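The paper does not publish Secret-QA's source, so the sketch below is hypothetical: it only illustrates how one call-log-derived question of the kind the abstract describes might be generated on Android. The class name, question wording, and plaintext answer field are illustrative assumptions; a real implementation would store only a salted hash of the answer, and the READ_CALL_LOG permission must be declared and granted.

```java
// Hypothetical sketch: Secret-QA's real question set and code are not
// published, so the template below is illustrative only.
import android.content.Context;
import android.database.Cursor;
import android.provider.CallLog;

public class CallLogQuestion {

    /** Question text plus expected answer (store only a salted hash in practice). */
    public final String text;
    public final String answer;

    private CallLogQuestion(String text, String answer) {
        this.text = text;
        this.answer = answer;
    }

    /** "Whom did you call most recently?" derived from the device call log. */
    public static CallLogQuestion mostRecentOutgoingCall(Context ctx) {
        Cursor c = ctx.getContentResolver().query(
                CallLog.Calls.CONTENT_URI,
                new String[] { CallLog.Calls.CACHED_NAME, CallLog.Calls.NUMBER },
                CallLog.Calls.TYPE + " = " + CallLog.Calls.OUTGOING_TYPE,
                null,
                CallLog.Calls.DATE + " DESC"); // newest call first
        try {
            if (c == null || !c.moveToFirst()) return null; // no usable history
            String name = c.getString(0);   // contact name, if known
            String number = c.getString(1); // fallback when no contact name
            String expected = (name != null && !name.isEmpty()) ? name : number;
            return new CallLogQuestion("Whom did you call most recently?", expected);
        } finally {
            if (c != null) c.close();
        }
    }
}
```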
JAVA PROJECTS ABSTRACT 2016-2017

A LOCALITY SENSITIVE LOW-RANK MODEL FOR IMAGE TAG COMPLETION

ABSTRACT: Tag-based image retrieval is often used to improve the performance of retrieving images with the help of search engines, relying on user-provided image tags on photo-sharing websites. A requirement for effective searching and retrieval of images in rapidly growing online image databases is that each image has accurate and useful annotation. Many visual applications have benefited from the outburst of web images, yet the tags arbitrarily provided by users are often imprecise and incomplete. In this paper, we propose a novel locality sensitive low-rank model for image tag completion, which approximates the global nonlinear model with a collection of local linear models. To effectively infuse the idea of locality sensitivity, a simple and effective pre-processing module is designed to learn a suitable representation for data partition. This work uses the BIRCH algorithm for that partitioning step. BIRCH (balanced iterative reducing and clustering using hierarchies) is an unsupervised data mining algorithm used to perform hierarchical clustering over particularly large data sets. An advantage of BIRCH is its ability to incrementally and dynamically cluster incoming, multi-dimensional metric data points in an attempt to produce the best quality clustering for a given set of resources (memory and time constraints). In most cases, BIRCH requires only a single scan of the database; a minimal sketch of its core data structure follows the conclusion below.

EXISTING SYSTEM: User-labeled visual data, such as the images uploaded and shared on Flickr, are usually associated with imprecise and incomplete tags. This poses threats to the retrieval or indexing of these images, making them difficult for users to access. Unfortunately, missing labels are inevitable in the manual labeling phase, since it is infeasible for users to label every related word and avoid all possible confusions, due to the existence of synonyms and user preference. Therefore, image tag completion or refinement has emerged as a hot issue in the multimedia community. Many visual applications have benefited from the outburst of web images, yet the imprecise and incomplete tags arbitrarily provided by users, as the thorn of the rose, may hamper the performance of retrieval or indexing systems relying on such data.

PROPOSED SYSTEM: To effectively infuse the idea of locality sensitivity, a simple and effective pre-processing module is designed to learn a suitable representation for data partition, and a global consensus regularizer is introduced to mitigate the risk of overfitting. Meanwhile, low-rank matrix factorization is employed for the local models, where the local geometry structures are preserved in the low-dimensional representation of both tags and samples. Extensive empirical evaluations conducted on three datasets demonstrate the effectiveness and efficiency of the proposed method, which outperforms previous ones by a large margin.

ADVANTAGES:
• We propose a locality sensitive low-rank model for image tag completion, which approximates the global nonlinear model with a collection of local linear models, by which complex correlation structures can be captured.
• Several adaptations are introduced to enable the fusion of locality sensitivity and low-rank factorization, including a simple and effective pre-processing module and a global consensus regularizer to mitigate the risk of overfitting.

DISADVANTAGES (OF EXISTING METHODS):
• Image tag completion or refinement remains an open issue in the multimedia community.
• The existing completion methods are usually founded on linear assumptions, hence the obtained models are limited due to their incapability to capture complex correlation patterns.

SYSTEM REQUIREMENTS:
H/W System Configuration:
Processor - Pentium III
Speed - 1.1 GHz
RAM - 256 MB (min)
Hard Disk - 20 GB
Key Board - Standard Windows Keyboard
Mouse - Two or Three Button Mouse
Monitor - SVGA

S/W System Configuration:
Operating System : Windows 95/98/2000/XP
Application Server : Tomcat 5.0/6.X
Front End : HTML, Java, JSP
Scripts : JavaScript
Server-side Script : Java Server Pages
Database Connectivity : MySQL

CONCLUSION: In this paper we propose a locality sensitive low-rank model for image tag completion. The proposed method can capture complex correlations by approximating a nonlinear model with a collection of local linear models. To effectively integrate locality sensitivity and low-rank factorization, several adaptations are introduced, including the design of a pre-processing module and a global consensus regularizer. Our method achieves superior results on three datasets and outperforms previous methods by a large margin.
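Since the pre-processing module relies on BIRCH, the following minimal Java sketch illustrates BIRCH's core data structure, the clustering feature CF = (N, LS, SS), and why it permits the incremental, single-scan behavior described above. This is an illustration of the CF idea only, not the paper's pipeline or a full CF-tree.

```java
// Minimal sketch of BIRCH's clustering feature (CF): the triple
// (N, LS, SS) summarizing a cluster. Illustration only, not a CF-tree.
public final class ClusteringFeature {
    private long n;            // number of points absorbed so far
    private final double[] ls; // per-dimension linear sum of the points
    private final double[] ss; // per-dimension sum of squares

    public ClusteringFeature(int dim) {
        ls = new double[dim];
        ss = new double[dim];
    }

    /** Absorb one point incrementally: the key BIRCH property is that
     *  the CF updates in O(dim) without revisiting earlier points. */
    public void add(double[] x) {
        n++;
        for (int i = 0; i < x.length; i++) {
            ls[i] += x[i];
            ss[i] += x[i] * x[i];
        }
    }

    /** Centroid of the points summarized by this CF. */
    public double[] centroid() {
        double[] c = new double[ls.length];
        for (int i = 0; i < ls.length; i++) c[i] = ls[i] / n;
        return c;
    }

    /** Average radius sqrt(SS/N - ||LS/N||^2), compared against an
     *  absorption threshold when deciding whether a point fits here. */
    public double radius() {
        double sum = 0;
        for (int i = 0; i < ls.length; i++) {
            double mean = ls[i] / n;
            sum += ss[i] / n - mean * mean;
        }
        return Math.sqrt(Math.max(0, sum));
    }
}
```

Because N, LS, and SS are additive, absorbing a point or merging two CFs never requires revisiting earlier points, which is what makes a single scan of the database sufficient in most cases.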
JAVA PROJECTS ABSTRACT 2016-2017

ENABLING CLOUD STORAGE AUDITING WITH VERIFIABLE OUTSOURCING OF KEY UPDATES

ABSTRACT: Key-exposure resistance has always been an important issue for in-depth cyber defence in many security applications. Recently, how to deal with the key exposure problem in the setting of cloud storage auditing has been proposed and studied. To address the challenge, existing solutions all require the client to update his secret keys in every time period, which may inevitably bring new local burdens to the client, especially one with limited computation resources such as a mobile phone. In this paper, we focus on how to make the key updates as transparent as possible for the client and propose a new paradigm called cloud storage auditing with verifiable outsourcing of key updates. In this paradigm, key updates can be safely outsourced to some authorized party, and thus the key-update burden on the client is kept minimal. Specifically, we leverage the third-party auditor (TPA) present in many existing public auditing designs, let it play the role of the authorized party in our case, and make it in charge of both the storage auditing and the secure key updates for key-exposure resistance. In our design, the TPA only needs to hold an encrypted version of the client's secret key while doing all these burdensome tasks on behalf of the client. The client only needs to download the encrypted secret key from the TPA when uploading new files to the cloud. Besides, our design also equips the client with the capability to further verify the validity of the encrypted secret keys provided by the TPA. All these salient features are carefully designed to make the whole auditing procedure with key-exposure resistance as transparent as possible for the client. We formalize the definition and the security model of this paradigm. The security proof and the performance simulation show that our detailed design instantiations are secure and efficient.

EXISTING SYSTEM: Existing solutions all require the client to update his secret keys in every time period, which may inevitably bring new local burdens to the client, especially one with limited computation resources such as a mobile phone. The third-party auditor (TPA) already appears in many existing public auditing designs; our approach lets it play the role of the authorized party and puts it in charge of both the storage auditing and the secure key updates for key-exposure resistance.

PROPOSED SYSTEM: Many protocols have been proposed to deal with this problem. These protocols focus on different aspects of cloud storage auditing, such as high efficiency, privacy protection of data, privacy protection of identities, dynamic data operations, and data sharing. The notion of wallet databases with observers was proposed first, in which hardware was used to help the client perform some expensive computations. The first outsourcing algorithm for modular exponentiations followed, based on the methods of precomputation and server-aided computation. Later work gave a secure outsourcing algorithm for sequence comparisons and an outsourcing algorithm for attribute-based signature computations. Auditing protocols supporting dynamic data operations were also proposed, including a protocol supporting both the dynamic property and the privacy-preserving property. Privacy preservation of the user's identity for shared-data auditing has been considered, as has the problem of user revocation in shared-data auditing, along with a public auditing protocol for data sharing with multi-user modification. The cloud storage auditing protocol with outsourcing of key updates proposed here is verifiable.

SYSTEM SPECIFICATION:
Hardware Requirements:
• System : Pentium IV 2.4 GHz
• Hard Disk : 40 GB
• Floppy Drive : 1.44 MB
• Monitor : 14" Colour Monitor
• Mouse : Optical Mouse
• RAM : 512 MB

Software Requirements:
• Operating System : Windows 7 Ultimate
• Coding Language : ASP.Net with C#
• Front-End : Visual Studio 2010 Professional
• Database : SQL Server 2008

CONCLUSION: The aim of this paper is to provide an integrity auditing scheme with public verifiability, efficient data dynamics, and fair dispute arbitration. To eliminate the limitation of index usage in tag computation and efficiently support data dynamics, we differentiate between block indices and tag indices and devise an index switcher to maintain the block-tag index mapping, avoiding the tag re-computation caused by block update operations; as shown in our performance evaluation, this incurs limited additional overhead. Meanwhile, since both clients and the CSP may misbehave during auditing and data updates, we extend the threat model in current research to provide fair arbitration for resolving disputes between clients and the CSP, which is of vital significance for the deployment and promotion of auditing schemes in the cloud environment. We achieve this by designing arbitration protocols based on the idea of exchanging metadata signatures upon each update operation. Our experiments demonstrate the efficiency of our proposed scheme, whose overhead for dynamic updates and dispute arbitration is reasonable.
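The scheme's actual cryptographic constructions (and their security proof) are not reproduced here. As a loose structural sketch only, the following Java fragment shows the client-side workflow the paradigm implies: fetch the encrypted secret key for the current period from the TPA, verify a validity tag before trusting it, and only then proceed. The HMAC-based tag check and all names are illustrative assumptions, not the paper's verification algorithm.

```java
// Structural sketch only: the paper's verifiable key-update scheme uses
// specific cryptographic constructions not reproduced here. This shows just
// the client-side shape of the workflow: accept an encrypted key from the
// TPA only after a validity check. The HMAC tag is a stand-in assumption.
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class KeyUpdateClient {

    private final byte[] verificationKey; // held only by the client (assumption)

    public KeyUpdateClient(byte[] verificationKey) {
        this.verificationKey = verificationKey;
    }

    /** Rejects an encrypted key whose validity tag does not check out. */
    public byte[] acceptEncryptedKey(int period, byte[] encryptedKey, byte[] tag)
            throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(verificationKey, "HmacSHA256"));
        mac.update(("period:" + period).getBytes(StandardCharsets.UTF_8));
        byte[] expected = mac.doFinal(encryptedKey);
        // Constant-time comparison to avoid leaking tag bytes via timing.
        if (!MessageDigest.isEqual(expected, tag)) {
            throw new SecurityException("TPA returned an invalid key for period " + period);
        }
        return encryptedKey; // caller decrypts with its local decryption key
    }
}
```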
JAVA /DOT NET PROJECT ABSTRACT 2016-2017

MITIGATING CROSS-SITE SCRIPTING ATTACKS WITH A CONTENT SECURITY POLICY

ABSTRACT: A content security policy (CSP) can help Web application developers and server administrators better control website content and avoid vulnerabilities to cross-site scripting (XSS). In experiments with a prototype website, the authors' CSP implementation successfully mitigated all XSS attack types in four popular browsers. Among the many attacks on Web applications, cross-site scripting is one of the most common. An XSS attack involves injecting malicious script into a trusted website; the script executes on a visitor's browser without the visitor's knowledge, thereby enabling the attacker to access sensitive user data such as session tokens and cookies stored in the browser. With this data, attackers can carry out several malicious acts, including identity theft, key logging, phishing, user impersonation, and webcam activation. A Content Security Policy is an added layer of security that helps to detect and mitigate certain types of attacks, including XSS and data injection attacks, which are used for everything from data theft to site defacement or distribution of malware. CSP is designed to be fully backward compatible: browsers that don't support it still work with servers that implement it, and vice versa. Browsers that don't support CSP simply ignore it, functioning as usual and defaulting to the standard same-origin policy for web content. If the site doesn't offer the CSP header, browsers likewise use the standard same-origin policy. Enabling CSP is as easy as configuring your web server to return the Content-Security-Policy HTTP header. (Prior to Firefox 23, the X-Content-Security-Policy header was used.) See Using Content Security Policy for details on how to configure and enable CSP.

INTRODUCTION: A primary goal of CSP is to mitigate and report XSS attacks. XSS attacks exploit the browser's trust in the content received from the server. Malicious scripts are executed by the victim's browser because the browser trusts the source of the content, even when it's not coming from where it seems to be coming from. CSP makes it possible for server administrators to reduce or eliminate the vectors by which XSS can occur by specifying the domains that the browser should consider to be valid sources of executable scripts. A CSP-compatible browser will then only execute scripts loaded in source files received from those whitelisted domains, ignoring all other scripts (including inline scripts and event-handling HTML attributes).

PROPOSED SYSTEM: A client-side tool that acts as a Web proxy disallows requests that do not belong to the website and thus thwarts stored XSS attacks. Browser-enforced embedded policies (BEEPs) let the Web application developer embed a policy in the website by specifying which scripts are allowed to run. With a BEEP, the developer can put genuine source scripts on a whitelist and disable source scripts in certain website regions. Document Structure Integrity (DSI) is a client-server architecture that restricts the interpretation of untrusted content. DSI uses parser-level isolation to isolate inline untrusted data and separates dynamic content from static content. However, this approach requires both servers and clients to cooperatively upgrade to enable protection.

System Configuration:
H/W System Configuration:
Processor - Pentium III
Speed - 1.1 GHz
RAM - 256 MB (min)
Hard Disk - 20 GB
Floppy Drive - 1.44 MB
Key Board - Standard Windows Keyboard
Mouse - Two or Three Button Mouse
Monitor - SVGA

S/W System Configuration:
Operating System : Windows 95/98/2000/XP
Application Server : Tomcat 5.0/6.X
Front End : HTML, Java, JSP
Scripts : JavaScript
Server-side Script : Java Server Pages
Database : MySQL
Database Connectivity : JDBC

CONCLUSION: Although our CSP has many benefits, it is not intended as a primary defense mechanism against XSS attacks. Rather, it serves best as a defense-in-depth mitigation mechanism. A primary defense involves tailored security schemes that validate user inputs and encode user outputs. Cross-site scripting has been a major threat to web applications and their users for the past few years. A lot of work has been done to handle XSS attacks, including:
• Client-side approaches
• Server-side approaches
• Testing-based approaches
• Static and dynamic analysis based approaches
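As a concrete illustration of "configuring your web server to return the Content-Security-Policy HTTP header" on the Tomcat/JSP stack listed above, a servlet filter can set the header on every response. The policy string below is an illustrative example, not the authors' exact policy.

```java
// Minimal sketch: emit the Content-Security-Policy header from a servlet
// filter on a Tomcat/JSP stack. The policy value is an example only.
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

public class CspHeaderFilter implements Filter {

    // Allow scripts only from the site itself; under CSP, inline scripts
    // and event-handler attributes are blocked by default.
    private static final String POLICY =
            "default-src 'self'; script-src 'self'; object-src 'none'";

    @Override public void init(FilterConfig cfg) {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        ((HttpServletResponse) res).setHeader("Content-Security-Policy", POLICY);
        chain.doFilter(req, res); // continue to the JSP/servlet as usual
    }

    @Override public void destroy() {}
}
```

Registered in web.xml with a filter-mapping whose url-pattern is /*, this applies the policy to every page the application serves.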
JAVA/ DOT NET PROJECT ABSTRACT 2016-2017

QUALITY-AWARE SUBGRAPH MATCHING OVER INCONSISTENT PROBABILISTIC GRAPH DATABASES

ABSTRACT: The Resource Description Framework (RDF) is a general framework for describing any Internet resource, such as a Web site and its content. An RDF description (such descriptions are often referred to as metadata, or "data about data") can include the authors of the resource, date of creation or updating, the organization of the pages on a site (the sitemap), information that describes content in terms of audience or content rating, key words for search engine data collection, subject categories, and so forth. RDF has been widely used in the Semantic Web to describe resources and their relationships, and the RDF graph is one of the most commonly used representations for RDF data. However, in many real applications such as data extraction/integration, RDF graphs integrated from different data sources may often contain uncertain and inconsistent information (e.g., uncertain labels, or labels that violate facts/rules), due to the unreliability of data sources. In this paper, we formalize the RDF data as inconsistent probabilistic RDF graphs, which contain both inconsistencies and uncertainty. With such a probabilistic graph model, we focus on an important problem, quality-aware subgraph matching over inconsistent probabilistic RDF graphs (QA-gMatch), which retrieves subgraphs from inconsistent probabilistic RDF graphs that are isomorphic to a given query graph and have high quality scores (considering both consistency and uncertainty). In order to efficiently answer QA-gMatch queries, we provide two effective pruning methods, namely adaptive label pruning and quality score pruning, which can greatly filter out false alarms of subgraphs. We also design an effective index to facilitate our proposed pruning methods and propose an efficient approach for processing QA-gMatch queries. Finally, we demonstrate the efficiency and effectiveness of our proposed approaches through extensive experiments.

EXISTING SYSTEM: Probabilistic graphs are often obtained from real-world applications such as data extraction/integration in the Semantic Web. Due to the unreliability of data sources or inaccurate extraction/integration techniques, probabilistic graph data often contain inconsistencies, violating some rules or facts. Here, rules or facts can be specified by a knowledge base or inferred by data mining techniques. RDF graphs integrated from different data sources may therefore contain both uncertain and inconsistent information.

PROPOSED SYSTEM: In this paper, we propose the quality-aware subgraph matching problem (namely, QA-gMatch) in a novel context of inconsistent probabilistic graphs G with quality guarantees. Specifically, given a query graph q, a QA-gMatch query retrieves subgraphs g of probabilistic graph G that match q and have high quality scores. The QA-gMatch problem has many practical applications, such as the Semantic Web. For example, we can answer standard queries (SPARQL queries) over inconsistent probabilistic RDF graphs by issuing QA-gMatch queries. We propose effective pruning methods, namely adaptive label pruning (based on a cost model) and quality score pruning, to reduce the QA-gMatch search space and improve query efficiency; a generic sketch of the label-pruning idea follows this abstract.

ADVANTAGES:
• We propose the QA-gMatch problem in inconsistent probabilistic graphs, which, to the best of our knowledge, no prior work has studied.
• We carefully design effective pruning methods, adaptive label pruning and quality score pruning, specific to the inconsistent and probabilistic features of RDF graphs.
• We build a tree index over pre-computed data of inconsistent probabilistic graphs and illustrate an efficient QA-gMatch query procedure that traverses the index.

SYSTEM REQUIREMENTS:
H/W System Configuration:
Processor - Pentium III
Speed - 1.1 GHz
RAM - 256 MB (min)
Hard Disk - 20 GB
Key Board - Standard Windows Keyboard
Mouse - Two or Three Button Mouse
Monitor - SVGA

S/W System Configuration:
Operating System : Windows 95/98/2000/XP
Application Server : Tomcat 5.0/6.X
Front End : HTML, Java, JSP
Scripts : JavaScript
Server-side Script : Java Server Pages
Database Connectivity : MySQL
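The paper's adaptive label pruning is driven by a cost model and an index that are not reproduced here. As a generic stand-in, the sketch below shows only the basic filter-and-refine idea such pruning builds on: before any isomorphism test, discard data vertices whose label or neighbor labels cannot match the query vertex. All names are illustrative.

```java
// Generic illustration of label-based candidate filtering for subgraph
// matching; QA-gMatch's adaptive, cost-model-driven pruning and its
// quality-score pruning are not reproduced here.
import java.util.*;

public class LabelPruning {

    /** Candidate set for one query vertex: data vertices with a matching
     *  label whose neighbor labels cover the query vertex's neighbor labels. */
    public static Set<Integer> candidates(
            String queryLabel,
            Set<String> queryNeighborLabels,
            Map<Integer, String> dataLabels,              // vertex id -> label
            Map<Integer, Set<String>> dataNeighborLabels  // vertex id -> neighbor labels
    ) {
        Set<Integer> result = new HashSet<>();
        for (Map.Entry<Integer, String> e : dataLabels.entrySet()) {
            if (!e.getValue().equals(queryLabel)) continue;          // label filter
            Set<String> nbr = dataNeighborLabels.getOrDefault(
                    e.getKey(), Collections.emptySet());
            if (nbr.containsAll(queryNeighborLabels)) {              // neighborhood filter
                result.add(e.getKey());
            }
        }
        return result; // survivors still require a full isomorphism check
    }
}
```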
JAVA/ DOT NET PROJECT ABSTRACT 2016-2017

SBVLC: SECURE BARCODE-BASED VISIBLE LIGHT COMMUNICATION FOR SMART PHONES

ABSTRACT: As an alternative to NFC technology, 2D barcodes have been increasingly used for security-sensitive applications, including payments and personal identification. However, the security of barcode-based communication in mobile applications has not been systematically studied. In this paper, we propose SBVLC, a secure system for barcode-based visible light communication (VLC) between smartphones. We formally analyze the security of SBVLC based on geometric models and propose physical security enhancement mechanisms for barcode communication by manipulating screen view angles and leveraging user-induced motions. We then develop two secure data exchange schemes. These schemes are useful in many security-sensitive mobile applications, including private information sharing, secure device pairing, and mobile payment. SBVLC is evaluated through extensive experiments on both Android and iOS smartphones.

EXISTING SYSTEM: Short-range communication technologies, including near field communication (NFC) and 2D barcodes, have enabled many popular smartphone applications such as contactless payments, mobile advertisements, and device pairing. Evolved from RFID technology, NFC can enable reliable low-power communication between RF tags and readers. However, NFC requires additional hardware and has been supported by only a few smartphone platforms on the market. Recent studies have shown that NFC is subject to security vulnerabilities such as eavesdropping and jamming. Moreover, most existing barcode applications are based on a single barcode exchange, which is insufficient for establishing a secure communication channel. Whenever a user types her password into a bank's sign-in box, a key logger can intercept the password. The threat of such key loggers is pervasive, in both personal computers and public kiosks; there are always cases where it is necessary to perform financial transactions using a public computer, and the biggest concern is that a user's password is likely to be stolen on these computers. Even worse, key loggers, often rootkitted, are hard to detect, since they do not show up in the task manager's process list.

PROPOSED SYSTEM: Compared with NFC, 2D barcodes have enjoyed a significantly higher penetration rate in mobile applications. This is largely due to the extremely low barrier to adoption: almost every camera-enabled smartphone can read and process 2D barcodes. As an alternative to NFC, 2D barcodes have been increasingly used for security-sensitive applications, including mobile payments and personal identification. For instance, PayPal recently rolled out a barcode-based payment service for retail customers, and as one of the most anticipated new features of the iPhone 5, the Passbook app stores tickets, coupons, and gift/loyalty cards using barcodes. Prior work proposes an iterative increment constrained least squares filter method for certain 2D matrix barcodes under Gaussian blurring; in particular, it uses the L-shaped finder pattern of the codes to estimate the standard deviation of the Gaussian PSF and then restores the image by successively enforcing a bi-level constraint. Our approach to solving the problem is to introduce an intermediate device that bridges a human user and a terminal. Then, instead of the user directly invoking the regular authentication protocol, she invokes a more sophisticated but user-friendly protocol via the intermediate helping device. Every interaction between the user and the intermediate helping device is visualized using a Quick Response (QR) code. The goal is to keep the user experience as close to legacy authentication methods as possible, while preventing key-logging attacks.

ADVANTAGES:
• Compared with NFC, 2D barcodes have enjoyed a significantly higher penetration rate in mobile applications.
• As an alternative to NFC, 2D barcodes have been increasingly used for security-sensitive applications, including mobile payments and personal identification.
• Every interaction between the user and an intermediate helping device is visualized using a Quick Response (QR) code.
• Key-logging attacks are prevented.

HARDWARE REQUIREMENTS:
System : Pentium IV 2.4 GHz
Hard Disk : 40 GB
Floppy Drive : 1.44 MB
Monitor : 14" Colour Monitor
Mouse : Optical Mouse
RAM : 512 MB

SOFTWARE REQUIREMENTS:
Operating System : Windows 7 Ultimate
Coding Language : Java
Front-End : Eclipse
Data Base : SQLite Manager

CONCLUSION: As an alternative to NFC, 2D barcodes have been increasingly used for security-sensitive applications, including mobile payments and personal identification, and have enjoyed a significantly higher penetration rate in mobile applications. Every interaction between the user and the intermediate helping device is visualized using a Quick Response (QR) code, preventing key-logging attacks. Thus, in our project, password hacking, key logging, and eavesdropping issues are overcome.
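SBVLC's security mechanisms (screen view-angle control, user-induced motion, the multi-frame exchange protocol) are not reproduced here. The sketch below shows only the basic building block both phones rely on, rendering one data frame as a QR code, and assumes the open-source ZXing library (core and javase artifacts) is on the classpath; the payload string is a placeholder, since a real frame would carry ciphertext.

```java
// Building-block sketch only: renders one (already encrypted) data frame
// as a QR code using ZXing. SBVLC's full protocol is not reproduced here.
import com.google.zxing.BarcodeFormat;
import com.google.zxing.WriterException;
import com.google.zxing.client.j2se.MatrixToImageWriter;
import com.google.zxing.common.BitMatrix;
import com.google.zxing.qrcode.QRCodeWriter;

import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class QrFrameEncoder {

    /** Encodes one data frame as a 400x400 QR code image on disk. */
    public static void writeFrame(String frameData, Path out)
            throws WriterException, IOException {
        BitMatrix matrix = new QRCodeWriter()
                .encode(frameData, BarcodeFormat.QR_CODE, 400, 400);
        MatrixToImageWriter.writeToPath(matrix, "PNG", out);
    }

    public static void main(String[] args) throws Exception {
        // In SBVLC the payload would be a ciphertext frame of the key
        // exchange, not plaintext; this string is only a placeholder.
        writeFrame("frame-0:demo-payload", Paths.get("frame0.png"));
    }
}
```

On a phone, the image would be drawn to the screen for the peer's camera to capture rather than written to a file; this file-based variant just keeps the sketch self-contained.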