Stronger Password Authentication Using Browser Extensions

Keystream monitor

A natural idea for anyone who is trying to implement web password hashing is a keystream monitor that detects unsafe user behavior. This defense would consist of a recording component and a monitor component. The recording component records all passwords that the user types while the extension is in password mode and stores a one-way hash of these passwords on disk. The monitor component monitors the entire keyboard key stream for a consecutive sequence of keystrokes that matches one of the user’s passwords. If such a sequence is keyed while the extension is not in password mode, the user is alerted.
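As a rough illustration (not part of PwdHash itself), the sketch below shows one way the two components described above could fit together: a recorder that stores salted one-way hashes of passwords typed in password mode, and a monitor that hashes a sliding window of recent keystrokes and alerts on a match. All names and parameters here are hypothetical.

```python
import hashlib
from collections import deque

class KeystreamMonitor:
    """Toy sketch of the recording + monitoring idea (hypothetical API)."""

    def __init__(self, max_password_len=32):
        self.salt = b"per-user-random-salt"   # assumption: stored locally with the hashes
        self.password_hashes = set()          # one-way hashes of recorded passwords
        self.recent_keys = deque(maxlen=max_password_len)
        self.in_password_mode = False

    def _digest(self, text):
        return hashlib.sha256(self.salt + text.encode()).hexdigest()

    def record_password(self, password):
        """Recording component: called when a password is typed in password mode."""
        self.password_hashes.add(self._digest(password))

    def on_keystroke(self, ch):
        """Monitor component: check every suffix of the recent keystream."""
        self.recent_keys.append(ch)
        if self.in_password_mode:
            return  # typing a password where it is expected; nothing to flag
        buffered = "".join(self.recent_keys)
        for start in range(len(buffered)):
            if self._digest(buffered[start:]) in self.password_hashes:
                print("WARNING: a recorded password was typed outside password mode")
                break
```

Note that the warning can only fire once the final character of the password has been typed, which is exactly the limitation discussed below.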

We do not use a keystream monitor in PwdHash, but this feature might be useful for an extension that automatically enables password mode when a password field is focused, rather than relying on the user to press the password-key or password-prefix. However, this approach suffers from several limitations. The most severe is that the keystream monitor does not defend against an online mock password field. By the time the monitor detects that a password has been entered, it is too late; the phisher has already obtained all but the last character of the user's password. Another problem is that storing hashes of user passwords on disk facilitates an offline dictionary attack if the user's machine is infiltrated; however, the same is true of the browser's auto-complete password database. Finally, novice users tend to choose poor passwords that might occur naturally in the keystream when the extension is not in password mode. Although the threat of constant warnings might encourage the user to choose unique and unusual passwords, excessive false alarms could also cause the user to disregard monitor warnings.


Transaction Management in the R* Distributed Database Management System

This paper concentrates primarily on the description of the R* commit protocols, Presumed Abort (PA) and Presumed Commit (PC). PA and PC are extensions of the well-known two-phase (2P) commit protocol. PA is optimized for read-only transactions and a class of multisite update transactions, and PC is optimized for other classes of multisite update transactions. The optimizations result in reduced intersite message traffic and log writes and, consequently, a better response time. R* is an experimental, distributed database management system (DDBMS). When a transaction execution starts, its actions and operands are not constrained: conditional execution and ad hoc SQL statements are available to the application program, so the whole transaction need not be fully specified and made known to the system in advance. A distributed transaction commit protocol is required in order to ensure either that all the effects of the transaction persist or that none of the effects persist, despite intermittent site or communication link failures. In other words, a commit protocol is needed to guarantee the uniform commitment of distributed transaction executions.
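For orientation, here is a minimal, hypothetical sketch of the basic two-phase commit (2P) exchange from the coordinator's side; it omits logging, timeouts, and recovery, which are exactly the aspects that PA and PC optimize. The participant interface and names are assumptions, not the R* implementation.

```python
class Participant:
    """Hypothetical participant stub; a real one would force-write log records."""
    def prepare(self) -> bool: ...   # vote YES (prepared) or NO (must abort)
    def commit(self) -> None: ...
    def abort(self) -> None: ...

def two_phase_commit(participants):
    # Phase 1: ask every participant to prepare and collect votes.
    votes = [p.prepare() for p in participants]

    # Phase 2: commit only if every vote was YES; otherwise abort everywhere.
    if all(votes):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.abort()
    return "aborted"
```

Presumed Abort and Presumed Commit reduce the log forces and acknowledgement messages around this exchange by letting sites infer the outcome of missing protocol state (as abort or commit, respectively) after a failure.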

Guaranteeing uniformity requires that certain facilities exist in the distributed database system. We assume that each process of a transaction is able to provisionally perform the actions of the transaction in such a way that they can be undone if the transaction is or needs to be aborted. Also, each database of the distributed database system has a log that is used to recoverably record the state changes of the transaction during the execution of the commit protocol and the transaction’s changes to the database (the UNDO/REDO log). The log records are carefully written sequentially in a file that is kept in stable (nonvolatile) storage.

A log record can be written to stable storage either synchronously (a force-write) or asynchronously. In the synchronous case, the transaction writing the log record is not allowed to continue execution until the operation is completed. This means that, if the site crashes (assuming that a crash results in the loss of the contents of the virtual memory) after the force-write has completed, then the forced record and the ones preceding it will have survived the crash and will be available from the stable storage when the site recovers. It is important to be able to “batch” force-writes for high performance; R* does rudimentary batching of force-writes. In the asynchronous case, on the other hand, the record gets written to a virtual memory buffer and is allowed to migrate to stable storage later on (due to a subsequent force, or when a log page buffer fills up). The transaction writing the record is allowed to continue execution before the migration takes place. This means that, if the site crashes after the log write, then the record may not be available for reading when the site recovers. An important point to note is that a synchronous write increases the response time of the transaction compared to an asynchronous write. Hereafter, we refer to the latter simply as a write and to the former as a force-write.
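The distinction between a write and a force-write can be pictured with a small, hypothetical buffer-manager sketch: a write only appends to a volatile buffer, while a force-write flushes the buffer to stable storage and blocks the transaction until the flush completes. This is an illustration, not the R* code.

```python
import os

class Log:
    """Toy log manager illustrating write vs. force-write."""

    def __init__(self, path):
        self.file = open(path, "ab")
        self.buffer = []                 # records still in volatile memory

    def write(self, record: bytes):
        # Asynchronous: the record may be lost if the site crashes before a flush.
        self.buffer.append(record)

    def force_write(self, record: bytes):
        # Synchronous: this record and all earlier ones reach stable storage
        # before the caller is allowed to continue.
        self.buffer.append(record)
        for r in self.buffer:
            self.file.write(r + b"\n")
        self.buffer.clear()
        self.file.flush()
        os.fsync(self.file.fileno())     # durability point
```

Batching several force-writes into one flush, as R* does in rudimentary form, amortizes this cost across transactions.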

Some of the desirable characteristics in a commit protocol are (1) guaranteed transaction atomicity always, (2) the ability to “forget” the outcome of commit processing after a short amount of time, (3) minimal overhead in terms of log writes and message traffic, (4) optimized performance in the no-failure case, (5) exploitation of completely or partially read-only transactions, and (6) maximizing the ability to perform unilateral aborts. R*, an evolution of the centralized DBMS System R, like its predecessor supports transaction serializability and uses the two-phase locking (2PL) protocol as the concurrency control mechanism. The use of 2PL introduces the possibility of deadlocks. R*, instead of preventing deadlocks, allows them (even distributed ones) to occur and then resolves them by deadlock detection and victim transaction abort.

Some of the desirable characteristics in a distributed deadlock detection protocol are (1) all deadlocks are resolved in spite of site and link failures, (2) each deadlock is detected only once, (3) the overhead in terms of messages exchanged is small, and (4) once a distributed deadlock is detected, the time taken to resolve it (by choosing a victim and aborting it) is small. Here we concentrate on the specific implementation of that distributed algorithm in R* and the solution adopted for the global deadlock victim selection problem. In general, as far as global deadlock management is concerned, we suggest that if distributed detection of global deadlocks is to be performed then, in the event of a global deadlock, it makes sense to choose as the victim a transaction that is local to the site of detection of that deadlock (in preference to, say, the “youngest” transaction, which may be a nonlocal transaction), assuming that such a local transaction exists.
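The victim-selection heuristic suggested above can be stated compactly. The sketch below is an illustrative assumption about data structures, not the R* code: among the transactions on a detected global deadlock cycle, prefer one that is local to the detecting site, and only otherwise fall back to another rule such as picking the youngest transaction.

```python
def choose_victim(cycle, local_site_id):
    """cycle: transactions on the deadlock cycle (hypothetical objects
    with .site_id and .start_time attributes)."""
    local = [t for t in cycle if t.site_id == local_site_id]
    if local:
        # Prefer a local transaction: it can be aborted without extra messages.
        return max(local, key=lambda t: t.start_time)   # e.g. youngest local one
    # Fallback rule: youngest transaction on the cycle, local or not.
    return max(cycle, key=lambda t: t.start_time)
```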

First, we give a careful presentation of 2P. Next, we derive from 2P in a stepwise fashion the two new protocols, namely, PA and PC. We then present performance comparisons, optimizations, and extensions of PA and PC. Next, we present the R* approach to global deadlock detection and resolution. We then conclude by outlining the current status of R*.


Gaps in CBMIR using Different Methods

Advances in information technology, along with a digital imaging revolution in the medical domain, facilitate the generation and storage of large collections of images by hospitals and clinics. Searching these large image collections effectively and efficiently poses significant technical challenges, and it raises the necessity of constructing intelligent retrieval systems. Content-based Medical Image Retrieval (CBMIR) consists of retrieving the most visually similar images to a given query image from a database of images. Medical CBIR (content-based image retrieval) applications pose unique challenges but at the same time offer many new opportunities. While one can easily understand news or sports videos, a medical image is often completely incomprehensible to untrained eyes.

BRIDGING THE GAP

The difficulty faced by CBIR methods in making inroads into medical applications can be attributed to a combination of several factors.

a. The Content Gap

It is important to consider image content in light of the context of the medical application for which a CBIR system has been optimized. Too often, we find a generic image retrieval model where the goal is to find medical images that are similar in overall appearance. The critical factor in medical images, however, is the pathology – the primary reason for which the image was taken. This pathology may be expressed in details within the image (e.g., the shape of a vertebra or the texture and color of a lesion) rather than the entire image (e.g., a spine x-ray or cervicographic image).

b. The Feature Gap

Extracted features are used to define the image content. As such, decisions on the types of features, the scale(s) at which they are extracted, and their use individually or in combination determine the extent to which the system “knows” the image and, to a large extent, the system's capability. It is necessary for the system to support as many types of features as possible and also to capture them at several scales.
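As a hedged illustration of multi-scale feature extraction (not a method proposed in the text), the following sketch computes intensity histograms of an image at several downsampled scales and concatenates them into one feature vector; only numpy is assumed, and a random array stands in for a medical image.

```python
import numpy as np

def multiscale_histogram(image: np.ndarray, scales=(1, 2, 4), bins=16):
    """image: 2-D grayscale array with values in [0, 1].
    Returns one feature vector built from histograms at several scales."""
    features = []
    for s in scales:
        # Crude downsampling by striding; a real system would smooth first.
        coarse = image[::s, ::s]
        hist, _ = np.histogram(coarse, bins=bins, range=(0.0, 1.0), density=True)
        features.append(hist)
    return np.concatenate(features)

# Example: a random "image" stands in for a medical image here.
vec = multiscale_histogram(np.random.rand(256, 256))
```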

c. The Performance Gap

The benefits of medical imaging to science and healthcare have led to an explosive growth in the volume (and rate) of acquired medical images. Additionally, clinical protocols determine the acquisition of these images. There is a need for the system response to be meaningful, timely and sensitive to the image acquisition process. These requirements make linear searches of image feature data, very often presented in the literature, impractical and a significant hurdle to the inclusion of CBIR in medical applications.
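To make the point about linear search concrete, here is a small, hypothetical comparison between a brute-force scan over feature vectors and a spatial index; it assumes numpy and scipy's cKDTree, with synthetic data in place of real image features.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
features = rng.random((100_000, 32))     # pretend: one feature vector per image
query = rng.random(32)

# Linear scan: O(N) distance computations for every query.
linear_best = np.argmin(np.linalg.norm(features - query, axis=1))

# Index-based search: build the index once, then each query is much cheaper.
tree = cKDTree(features)
_, indexed_best = tree.query(query, k=1)

assert linear_best == indexed_best
```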

d. The Usability Gap

This gap is rarely addressed during the design and development of CBIR systems. However, it is the one of most concern to the end user of the system and therefore has the greatest potential for affecting the acceptance of a new technology. An idealized system can be designed to overcome all the above gaps, but still fall short of being accepted into the medical community for lack of (i) useful and clear querying capability; (ii) meaningful and easily understandable responses; and (iii) provision to adapt to user feedback.


Privacy and accountability in database systems

Databases that preserve a historical record of activities and data offer the important benefit of system accountability: past events can be analyzed to detect breaches and maintain data quality. But the retention of history can also pose a threat to privacy. System designers need to carefully balance the need for privacy and accountability by controlling how and when data is retained by the system and who will be able to recover and analyze it. This work describes the technical challenges faced in enhancing database systems so that they can securely manage history. These include, first, assessing the unintended retention of data in existing database systems that can threaten privacy; second, redesigning system components to avoid this unintended retention; and third, developing new system features to support accountability when it is desired.

Computer forensics is an emerging field which has studied the recovery of data from file systems, and the unintended retention of data by applications like web browsers and document files. Forensic tools like the Sleuth Kit and EnCase Forensic are commonly used by investigators to recover data from computer systems. These tools are sometimes able to interpret common file types but, to our knowledge, none provide support for the analysis of database files. Forensic analysts typically have unrestricted access to storage on disk. We consider as our threat model adversaries with such access, as this models the capabilities of system administrators, a hacker who has gained privileges on the system, or an adversary who has breached physical security. We also note that database storage is increasingly embedded into a wide range of common applications for persistence. For example, embedded database libraries like BerkeleyDB and SQLite are used as the underlying storage mechanisms for email clients, web browsers, LDAP implementations, and Google Desktop. Apple Mail.app, for instance, uses an SQLite database file to support searches on subject, recipient, and sender (stored as ~/Library/Mail/EnvelopeIndex). Recently, Mozilla has adopted SQLite as a unified storage model for all its applications. In Firefox 2.0, remote sites can store data that persists across sessions in an SQLite database as a sophisticated replacement of cookies. The forensic analysis of such embedded storage is particularly interesting because it impacts everyday users of desktop applications, and because embedded database storage is harder to protect from such investigation.

Forensic analysis can be applied to various components of a database system, and it reveals not only data currently active in the database, but also previously deleted data and historical information about operations performed on the system. Record storage in databases contains data that has been logically deleted, but not destroyed. Indexes also contain such deleted values, and in addition may reveal, through their structure, clues about the history of operations that led to their current state. Naturally, the transaction log contains a wealth of forensic information since it often includes before- and after-images of each database update. Other sources of forensic information include temporary relations (often written to disk for large sort operations), the database catalog, and even hidden tuple identifiers that may reveal the order of creation of tuples in the database. The goal here is to understand the magnitude of data retention, and to measure (or bound) the expected lifetime of data.
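As a toy, hedged illustration of unintended retention in embedded database storage (not a tool described in the text), the sketch below deletes a row from a SQLite database and then looks for the deleted value in the raw file bytes. Whether it is still found depends on page reuse, the secure_delete pragma, and the SQLite version.

```python
import sqlite3

con = sqlite3.connect("demo.db")
con.execute("CREATE TABLE IF NOT EXISTS mail(id INTEGER PRIMARY KEY, subject TEXT)")
con.execute("INSERT INTO mail(subject) VALUES ('confidential merger memo')")
con.commit()

con.execute("DELETE FROM mail WHERE subject LIKE 'confidential%'")
con.commit()          # logically deleted, but pages are not necessarily scrubbed
con.close()

# A "forensic" pass over the raw file, ignoring the logical schema entirely.
raw = open("demo.db", "rb").read()
print(b"confidential merger memo" in raw)   # often True unless secure_delete or VACUUM ran
```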

As a practical matter, encrypted storage is not widely used. In databases, encryption often introduces unacceptable performance costs. In addition, forensic investigators or adversaries may recover cryptographic keys because they are shared by many employees, easily subpoenaed, or stored on disk and recovered.


Metadata for digital libraries: state of the art and future directions

At a time when digitization technology has become well established in library operations, the need for a degree of standardization of metadata practices has become more acute, in order to ensure that digital libraries achieve the degree of interoperability long established in traditional libraries. The complex metadata requirements of digital objects, which include descriptive, administrative and structural metadata, have so far militated against the emergence of a single standard. However, a set of already existing standards, all based on XML architectures, can be combined to produce a coherent, integrated metadata strategy.

An overall framework for a digital object’s metadata can be provided by either METS or DIDL, although the wider acceptance of the former within the library community makes it the preferred choice. Descriptive metadata can be handled by either Dublin Core or the more sophisticated MODS standard. Technical metadata, which is contingent on the type of files that make up a digital object, is covered by such standards as MIX (still images), AUDIOMD (audio files), VIDEOMD or PBCORE (video) and TEI Headers (texts). Rights management may be handled by the METS Rights schema or by more complex schemes such as XrML or ODRL. Preservation metadata is best handled by the four schemas that make up the PREMIS standard. Integrating these standards using the XML namespace mechanism is technically straightforward, although some problems can arise with namespaces that are defined with different URIs, or as a result of duplication and consequent redundancies between schemas: these are best resolved by best practice guidelines, several of which are currently under construction.
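For illustration only, the sketch below uses Python's standard library to assemble a skeletal METS document that wraps a MODS descriptive section via XML namespaces. The namespace URIs are the commonly published ones for METS and MODS, but the element choices here are simplified assumptions rather than a validated profile.

```python
import xml.etree.ElementTree as ET

# Commonly published namespace URIs (verify against the current schemas).
METS = "http://www.loc.gov/METS/"
MODS = "http://www.loc.gov/mods/v3"
ET.register_namespace("mets", METS)
ET.register_namespace("mods", MODS)

mets = ET.Element(f"{{{METS}}}mets")
dmd = ET.SubElement(mets, f"{{{METS}}}dmdSec", attrib={"ID": "dmd1"})
wrap = ET.SubElement(dmd, f"{{{METS}}}mdWrap", attrib={"MDTYPE": "MODS"})
xml_data = ET.SubElement(wrap, f"{{{METS}}}xmlData")

mods = ET.SubElement(xml_data, f"{{{MODS}}}mods")
title_info = ET.SubElement(mods, f"{{{MODS}}}titleInfo")
ET.SubElement(title_info, f"{{{MODS}}}title").text = "Sample digitized volume"

print(ET.tostring(mets, encoding="unicode"))
```

Structural, technical and rights sections would be added alongside the descriptive section in the same way, each under its own namespace.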

The next ten years are likely to see further degrees of metadata integration, probably with the consolidation of these multiple standards into a single schema. The digital library community will also work towards firmer standards for metadata content (analogous to AACR2), and software developers will increasingly adopt these standards. The digital library user will benefit from developments in enhanced federated searching and consolidated digital collections. The same developments are likely to take place in the archives and museums sectors, although the different metadata traditions that apply here are likely to make the form they take somewhat different.

The combined benefits of the shared XML platform and the fact that they have already proved themselves in major projects make these standards the best strategic choices for digital libraries. Although their adoption in integrated environments is still at a relatively early stage, particularly amongst software developers, increasing community-wide use of these standards will render the production of digital collections easier by freeing resources from metadata creation to object creation, and will facilitate the adoption of service-oriented approaches to core infrastructures. The adoption of integrated metadata strategies should be pressed for at the highest managerial levels.


Fuzzy Logic Based Intelligent Negotiation Agent (FINA) in E-Commerce

This work presents a fuzzy logic based intelligent negotiation agent, which is able to interact autonomously and consequently save human labor in negotiations. The aim of modeling a negotiation agent is to reach mutual agreement efficiently and intelligently. The negotiation agent is able to negotiate with other such agents, over various sets of issues, on behalf of the real-world parties they represent, i.e. it can handle multi-issue negotiation.

The reasoning model of the negotiation agent has been partially implemented using C#, based on Microsoft .NET. The reliability and the flexibility of the reasoning model are finally evaluated. The results show that the performance of the proposed agent model is acceptable for negotiation parties to achieve mutual benefits.

Software agent technology is widely used in agent-based e-Commerce. These software agents have a certain degree of intelligence, i.e. they can make their own decisions. The agents interact with other agents to achieve certain goals. However, software agents cannot directly control other agents because every agent is an independent decision maker, so negotiation becomes the necessary method for achieving mutual agreement between agents. This work focuses on modeling multi-issue, one-to-one negotiation agents for a third-party-driven virtual marketplace. We consider one-to-one negotiation because it is characteristic of individual negotiations and because it allows cooperative negotiation, which is not suitable for many-to-many auction-based negotiations.

When building autonomous negotiation agents which are capable of flexible and sophisticated negotiation, three broad areas need to be considered:

Negotiation protocols – the set of rules which govern the interaction

Negotiation issues – the range of issues over which agreement must be reached

Agent reasoning models – the models that agents employ to act in line with the negotiation protocol in order to achieve their negotiation objectives.

This reasoning model aims at the negotiation process. The process of matching and handshaking in a pre-negotiation phase has been solved in several papers. We assume that the buyer agent and vendor agent have roughly matched their similarity and start a negotiation on the issues on which they have not reached agreement. In a given round of negotiation, the negotiation agent can pre-prepare a counter offer for the next round. The counter offer is generated by the new offer generation engine. Both the incoming offer from the opponent negotiation agent and the counter offer are sent to the offer evaluation block, which analyzes the offers and calculates the degree of satisfaction (acceptance of the agent) for the incoming offer and the counter offer. The result is scaled over the range from 0 to 100. Finally, the decision making block makes the decision: it could be acceptance of the current incoming offer, rejection of the current incoming offer, or a counter offer.
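The evaluation-and-decision step can be pictured with a small, hypothetical sketch: each issue gets a triangular fuzzy satisfaction function, per-issue satisfactions are combined with weights into a 0-100 score, and the decision block compares the incoming offer against the prepared counter offer. None of the thresholds, weights or function shapes below come from the paper.

```python
def triangular(x, low, peak, high):
    """Triangular membership: 0 outside [low, high], 1 at the peak."""
    if x <= low or x >= high:
        return 0.0
    return (x - low) / (peak - low) if x < peak else (high - x) / (high - peak)

def satisfaction(offer, prefs):
    """offer: {issue: value}; prefs: {issue: (low, peak, high, weight)}.
    Returns a degree of satisfaction scaled to 0..100."""
    total_weight = sum(w for *_, w in prefs.values())
    score = sum(w * triangular(offer[i], lo, pk, hi)
                for i, (lo, pk, hi, w) in prefs.items())
    return 100.0 * score / total_weight

def decide(incoming, counter, prefs, accept_at=70.0):
    s_in, s_cnt = satisfaction(incoming, prefs), satisfaction(counter, prefs)
    if s_in >= accept_at or s_in >= s_cnt:
        return "accept"
    return "counter-offer" if s_cnt > 0 else "reject"

# Buyer's view: price and delivery time, with price weighted more heavily.
prefs = {"price": (50, 80, 120, 0.7), "delivery_days": (1, 3, 10, 0.3)}
print(decide({"price": 110, "delivery_days": 7},
             {"price": 90, "delivery_days": 4}, prefs))
```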


Sedna : A Native XML DBMS

Sedna is an XML database system. It implements XQuery and its data model exploiting techniques developed specially for this language.

Sedna is designed with two main goals in mind. First, it should be a full-featured database system. That requires support for all traditional database services such as external memory management, query and update facilities, concurrency control, query optimization, etc. Second, it should provide a run-time environment for XML-data-intensive applications. That involves tight integration of database management functionality with that of a programming language. In developing Sedna, the decision was made not to adopt any existing database system: instead of building a superstructure upon an existing database system, a native system was built from scratch. This took more time and effort, but gave us more freedom in making design decisions and allowed avoiding undesirable run-time overheads resulting from interfacing with the data model of the underlying database system.

We take the XQuery 1.0 language and its data model as the basis for our implementation. In order to support updates, we extend XQuery with an update language named XUpdate. Sedna is written in Scheme and C++. Static query analysis and optimization are written in Scheme; the parser, executor, memory and transaction management are written in C++. The implementation platform is Windows.


Privacy challenges of Cloud computing

In the Cloud computing environment, Cloud providers, being by definition third parties, can host or store important data, files and records of Cloud users. In certain forms of Cloud computing, the use of the service per se entails that personally identifiable information or content related to an individual's privacy sphere is communicated through the platform to, sometimes, an unrestricted number of users (see the social networking paradigm). Given the volume or location of the Cloud computing providers, it is difficult for companies and private users to keep the information or data they entrust to Cloud suppliers under their control at all times. Some key privacy or data protection challenges that can be characterised as particular to the Cloud-computing context are, in our view, the following:

Sensitivity of entrusted information – It appears that any type of information can be hosted on, or managed by, the Cloud. No doubt all or some of this information may be business sensitive (e.g. bank account records) or legally sensitive (e.g. health records), highly confidential or extremely valuable as a company asset (e.g. business secrets). Entrusting this information to a Cloud increases the risk of uncontrolled dissemination of that information to competitors (who may share the same Cloud platform), to the individuals concerned by this information, or to any other third party with an interest in this information.

Localisation of information and applicable law – The relation of certain data to a geographic location has never been more blurred than with the advent of Cloud computing. In the EU, as in other jurisdictions, the physical “location” plays a key role in determining which privacy rules apply. Thus, data collected and “located” within the European territory can benefit from the protection of the European privacy rules.

Users' access rights to information – Given that the users of the same Cloud share the premises of data processing and the data storage facilities, they are by nature exposed to the risk of information leakage and accidental or intentional disclosure of information.

Data transfers – If the data used by, or hosted on, the Cloud may change location regularly or may reside in multiple locations at the same time, it becomes complicated to watch over the data flows and, consequently, to determine the conditions that would legitimize such data transfers. While data movements are geographically unlimited under some local laws, in Europe data transfers to third countries often require contractual or other arrangements to be in place (e.g. EU “model contracts” or US Safe Harbor registration for data transfers to the US). It may become complicated to fulfill these arrangements if data locations are not stable.

Externalization of privacy – Companies engaging in Cloud computing expect that the privacy commitments they have made towards their customers, employees or other third parties will continue to be honoured by the Cloud computing provider. This becomes particularly relevant if the Cloud provider operates in many jurisdictions in which the exercise of individual rights may be subject to different conditions.

Contractual rules with privacy implications – It is common for a Cloud provider to offer his facilities to users without individual contracts. Yet it can be that certain Cloud providers suggest negotiating their agreements with clients and thus offer the possibility of tailored contracts. Whatever the contractual model opted for, certain contractual clauses can have direct implications for the privacy and protection of the entrusted information (e.g. defining who actually “controls” the data and who only “processes” the data).


Middleware Layers and R&D Efforts

Just as networking protocol stacks can be decomposed into multiple layers, middleware can also be decomposed into multiple layers:

1. Host Infrastructure Middleware

2. Distribution Middleware

3. Common Middleware Services

4. Domain-specific Middleware Services

Each of these middleware layers is described below, along with a summary of key R&D efforts at each layer that are helping to evolve the ability of middleware to meet the stringent QoS demands of DRE (Distributed real-time and embedded) systems.

Host infrastructure middleware encapsulates and enhances native OS communication and concurrency mechanisms to create portable and reusable network programming components, such as reactors, acceptor-connectors, monitor objects, active objects, and component configurators. These components abstract away the accidental incompatibilities of individual operating systems, and help eliminate many tedious, error-prone, and non-portable aspects of developing and maintaining networked applications via low-level OS programming APIs, such as Sockets or POSIX Pthreads. The OVM virtual machine is written entirely in Java and its architecture emphasizes customizability and pluggable components. Its implementation strives to maintain a balance between performance and flexibility, allowing users to customize the implementation of operations such as message dispatch, synchronization, field access, and speed.
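To make the reactor idea concrete, here is a minimal, hypothetical sketch of a reactor-style event loop using Python's standard selectors module. It demultiplexes readiness events and dispatches them to registered handlers, which is the essence of what host infrastructure components such as a C++ reactor provide; it is not ACE code.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept_handler(server_sock):
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo_handler)

def echo_handler(conn):
    data = conn.recv(1024)
    if data:
        conn.sendall(data)          # trivial application logic: echo
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 9999))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept_handler)

# The reactor loop: wait for events, then dispatch to the registered handler.
while True:
    for key, _ in sel.select():
        key.data(key.fileobj)
```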

Distribution middleware defines higher-level distributed programming models whose reusable APIs and mechanisms automate and extend the native OS network programming capabilities encapsulated by host infrastructure middleware. Distribution middleware enables developers to program distributed applications much like stand-alone applications, i.e., by invoking operations on target objects without hard coding dependencies on their location, programming language, OS platform, communication protocols and interconnects, and hardware characteristics. At the heart of distribution middleware are QoS-enabled object request brokers (ORBs), such as CORBA, COM+, and Java RMI. These ORBs allow objects to interoperate across networks regardless of the language in which they were written or the OS platform on which they are deployed.

Such QoS-enabled ORBs allow DRE applications to reserve and manage the following (a small illustrative sketch follows this list):

Processor resources via thread pools, priority mechanisms, intra-process mutexes, and a global scheduling service for real-time systems with fixed priorities

Communication resources via protocol properties and explicit bindings to server objects using priority bands and private connections

Memory resources via buffering requests in queues and bounding the size of thread pools.
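The sketch below is a hypothetical illustration of the last two ideas only: a bounded queue to cap the memory used by buffered requests and a fixed-size thread pool to cap processor resources. It is plain Python, not a Real-time CORBA API.

```python
import queue
import threading

request_queue = queue.Queue(maxsize=64)   # bounded: caps memory used for buffering
POOL_SIZE = 4                             # bounded: caps worker threads (CPU usage)

def worker():
    while True:
        job = request_queue.get()
        try:
            job()                         # run the buffered request
        finally:
            request_queue.task_done()

pool = [threading.Thread(target=worker, daemon=True) for _ in range(POOL_SIZE)]
for t in pool:
    t.start()

# Submitting work blocks once the queue is full, providing back-pressure.
for i in range(10):
    request_queue.put(lambda i=i: print(f"handled request {i}"))
request_queue.join()
```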

Common Middleware Services augment distribution middleware by defining higher-level domain independent components that allow application developers to concentrate on programming application logic, without the need to write the “plumbing” code needed to develop distributed applications by using lower level middleware features directly. Whereas distribution middleware focuses largely on managing end-system resources in support of an object-oriented distributed programming model, common middleware services focus on allocating, scheduling, and coordinating various end-to-end resources throughout a distributed system using a component programming and scripting model. Developers can reuse these services to manage global resources and perform recurring distribution tasks, such as event notification, logging, persistence, real-time scheduling, fault tolerance, and transactions, that would otherwise be implemented in an ad hoc manner by each application or integrator.
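Event notification is one of the recurring services mentioned above. The following is a deliberately simplified, hypothetical in-process publish/subscribe sketch rather than any standard service such as the CORBA Event Service.

```python
from collections import defaultdict
from typing import Any, Callable

class EventChannel:
    """Tiny publish/subscribe channel (illustrative only)."""

    def __init__(self):
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: Any) -> None:
        # Push the event to every consumer registered for this topic.
        for handler in self._subscribers[topic]:
            handler(event)

channel = EventChannel()
channel.subscribe("sensor/temperature", lambda e: print("logged:", e))
channel.publish("sensor/temperature", {"celsius": 21.5})
```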

The QuO architecture decouples DRE middleware and applications along the following two dimensions:

a) Functional paths, which are flows of information between client and remote server applications. In distributed systems, middleware ensures that this information is exchanged efficiently, predictably, scalably, dependably, and securely between remote peers. The information itself is largely application-specific and determined by the functionality being provided (hence the term “functional path”).

b) QoS paths, which are responsible for determining how well the functional interactions behave end-to-end with respect to key DRE system QoS properties, such as:

1. How and when resources are committed to client/server interactions at multiple levels of DRE systems

2. The proper application and system behavior if available resources do not satisfy the expected resources

3. The failure detection and recovery strategies necessary to meet end-to-end dependability requirements.

The QuO middleware is responsible for collecting, organizing, and disseminating QoS-related meta-information needed to monitor and manage how well the functional interactions occur at multiple levels of DRE systems. It also enables the adaptive and reflective decision-making needed to support non-functional QoS properties robustly in the face of rapidly changing application requirements and environmental conditions, such as local failures, transient overloads, and dynamic functional or QoS reconfigurations.

Domain-specific middleware services are tailored to the requirements of particular DRE system domains, such as avionics mission computing, radar processing, online financial trading, or distributed process control. Unlike the previous three middleware layers, which provide broadly reusable “horizontal” mechanisms and services, domain-specific middleware services are targeted at vertical markets. From both a COTS and R&D perspective, domain-specific services are the least mature of the middleware layers, due in part to the historical lack of distribution middleware and common middleware service standards needed to provide a stable base upon which to create domain-specific middleware services. Since they embody knowledge of a domain, however, domain-specific middleware services have the most potential to increase the quality and decrease the cycle-time and effort that integrators require to develop particular classes of DRE systems.

The domain-specific middleware services in Bold Stroke are layered upon COTS processors (PowerPC), network interconnects (VME), operating systems (VxWorks), infrastructure middleware (ACE), distribution middleware (TAO), and common middleware services (QuO and the CORBA Event Service).


The use of the random oracle model in cryptography

Possibly the most controversial issue in provable security research is the use of the random oracle model (Bellare & Rogaway 1993). The random oracle model involves modelling certain parts of cryptosystems, called hash functions, as totally random functions about whose internal workings the attacker has no information. This theoretical model vastly simplifies the analysis of cryptosystems and allows many schemes to be ‘proven’ secure that would otherwise be too complicated to be proven secure.

A hash function is a keyless algorithm that takes arbitrary-length inputs and outputs a fixed-length hash value or hash. There are several properties that one would expect a hash function to exhibit, including pre-image resistance (given a random element of the output set, it should be computationally infeasible to find a pre-image of that element) and collision resistance (it should be computationally infeasible to find two elements that have the same hash value). However, there are many more properties that we might require of a hash function depending on the circumstances. For example, it might be hoped that if the hash function is evaluated on two related inputs, then the outputs will appear unrelated.
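As a small, hedged illustration of the last point (using SHA-256 purely as an example, not as a claim about its provable properties), the snippet below hashes two closely related inputs and shows that the outputs share no obvious relationship, then brute-forces a pre-image only over a deliberately tiny input space to make the cost asymmetry visible.

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Related inputs, unrelated-looking outputs.
print(h(b"transfer $100"))
print(h(b"transfer $101"))

# Pre-image search is only feasible here because the input space is tiny.
target = h(b"42")
found = next(m for m in (str(i).encode() for i in range(1000)) if h(m) == target)
print(found)   # b'42'
```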

From a provable security point of view, hash functions present a difficult problem. They are usually developed using symmetric techniques, either as standalone algorithms or based on the use of a block cipher. Thus it is difficult to apply the reductionist theory of provable security to them, because there are no natural candidate problems to which we may reduce their security. There are constructions of hash functions from block ciphers for which it can be proven that the hash function has certain properties (such as pre-image and collision resistance) as long as the underlying block cipher is indistinguishable from a random permutation; however, it is impossible for any publicly known function to produce outputs that appear independent when evaluated on two known inputs.

The random oracle model attempts to overcome our inability to make strong statements about the security of hash functions by modelling them as completely random functions about which an attacker has no information. The attacker (and all other parties in the security model) may evaluate such a random hash function by querying an oracle. The original interpretation of this simplification was that it heuristically demonstrated that a cryptosystem was secure up to attacks against the system that may be introduced via the use of a specific hash function. Equivalently, it was thought that a proof of security in the random oracle model meant that, with overwhelming probability, the cryptosystem was secure when instantiated with a randomly chosen hash function. This interpretation of the random oracle model is correct up to a point. It is possible to construct families of efficient hash functions for which it is computationally infeasible to differentiate between access to an oracle which computes a randomly selected hash function from the family and access to an oracle which computes a random function. If such a hash function is used in place of the random oracle, then we can be sure that the scheme is secure against attackers whose only interaction with the hash function is to directly compute the output of the hash function on certain inputs.
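In security proofs the oracle is usually realized by lazy sampling. The following hypothetical sketch shows the standard idea: answer each fresh query with fresh randomness and repeat previous answers for repeated queries, so the simulated function is consistent yet carries no exploitable structure.

```python
import os

class RandomOracle:
    """Lazily sampled random oracle with a fixed output length (sketch)."""

    def __init__(self, out_len: int = 32):
        self.out_len = out_len
        self.table: dict[bytes, bytes] = {}

    def query(self, x: bytes) -> bytes:
        # First query on x: sample a uniformly random answer and remember it.
        if x not in self.table:
            self.table[x] = os.urandom(self.out_len)
        # Repeated queries must return the same answer for consistency.
        return self.table[x]

H = RandomOracle()
assert H.query(b"hello") == H.query(b"hello")
assert H.query(b"hello") != H.query(b"hello!")   # fails only with negligible probability
```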

The one key difference between the random oracle model and the use of a hash function selected at random from a random-looking function family is that in the latter case the attacker is given access to a description of a Turing machine that can compute the hash function, whereas in the former the attacker is not given such a description. This led to the cataclysmic result of Canetti et al. (2004), who demonstrated that it was possible to have a scheme that was provably secure in the random oracle model, and yet insecure when the random oracle was replaced with any hash function. The trick Canetti et al. employ is to use knowledge of the Turing machine that computes the hash function like a password that forces the cryptosystem to release sensitive information.
