
Identification of Web Sites and Certification Authorities

Currently, browsers identify the provider of a web page by showing its Uniform Resource Locator (URL) in the location bar. This usually allows knowledgeable web users to identify the owner of the site, since the URL includes the domain name (which an authorized domain name registrar allocates to a specific organization; registrars are expected to deny potentially misleading domain names). However, the identity of the provider is not necessarily included (fully) in the URL, and the URL contains mostly irrelevant information such as protocol, file, and computer details. Furthermore, the URL is presented textually, which means the user must make a conscious decision to validate it. All this implies that this mechanism may allow a knowledgeable web user, when alert and on guard, to validate the owner of the site; but novice, naïve or off-guard users may not notice an incorrect domain, just as they may not notice whether the site is secure, as discussed in the previous subsection.

Furthermore, popular browsers are pre-configured with a long list of certification authorities, and the liabilities of certificate authorities are not well defined; also, the identity of the CA is displayed only if the user explicitly asks for it (which very few users do regularly, even for sensitive sites). As a result, relying on the URL or on the identity in the SSL certificate may not be very secure. We therefore prefer a more direct and secure means of identifying the provider of the web page and, if relevant, the CA, rather than simply presenting the URL from the SSL certificate in the TrustBar.

TrustBar identifies, by default, both the site and the certificate authority (CA) that identified it, allowing users to decide whether they trust the identification by that authority. The identification is based on SSL server authentication, confirming that the site possesses the private key corresponding to a public key in a certificate signed by the given certificate authority, which currently must be one of the CAs whose keys are pre-programmed into the browser.

Figure 1.1: Screen-shots of secure sites with logo in TrustBar

Preferably, TrustBar identifies the site and authority by logo (or some other image selected by the user, e.g. a ‘my banks’ icon). However, since current certificates do not contain a logo, TrustBar can also identify the site and authority by name. See Figure 1.1 for identifications by logo (in (b) and (c)) and by name (in (a)). TrustBar supports certificate-derived and user-customized identifiers for sites, by logo or name:

  1. Certificate-derived identification: Names are taken from the `organization name` field of the existing X.509 SSL certificates. Such names are presented together with the text `Identified by` and the name or logo of the Certificate Authority (CA) which identified this site. The site may provide the logo in an appropriate (public key or attribute) certificate extension. This may be the same as the certificate used for the SSL connection, or another certificate (e.g. identified by a <META> tag in the page). The logo may be signed by entities that focus on validating logos, e.g. national and international trademark agencies, or by a certificate authority trusted by the user.
  2. User-customized identification: The user can assign a logo to a site, e.g. by right-clicking on an image of the logo (which usually appears on the same page). Users can also select a textual site identifier (a ‘pet name’), presented by TrustBar to identify the site. Whenever it opens a page with the same public key, TrustBar automatically presents this logo or pet name for the site.

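The user-customized scheme above can be sketched as a simple mapping keyed by the site's public key; the class and method names below are invented for illustration and are not TrustBar's actual code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of user-customized identification: pet names are
// keyed by the site's public-key fingerprint, so a spoofing site with a
// different key never triggers the familiar name or logo.
public class PetNameStore {
    private final Map<String, String> petNames = new HashMap<>();

    // Called when the user assigns a pet name (e.g. via right-click).
    public void assign(String publicKeyFingerprint, String petName) {
        petNames.put(publicKeyFingerprint, petName);
    }

    // Called on each page load; an empty result means fall back to the
    // certificate-derived name plus the "Identified by" CA indicator.
    public Optional<String> lookup(String publicKeyFingerprint) {
        return Optional.ofNullable(petNames.get(publicKeyFingerprint));
    }
}
```

The key point of the design is that the lookup key is the public key, not the URL, so a visually similar domain with a different key shows no pet name.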
By displaying the logo or name of the certifying authority (e.g. EquiFax or Verisign in Figure 1.1), we use and reinforce its brand at the same time. Furthermore, this creates an important linkage between the brand of the CA and the validity of the site: if a CA fails and issues a certificate for a spoofing web site, the failure is highly visible, and the CA faces loss of credibility as well as potential legal liability.

Notice that most organizational web sites already use logos in their web pages, to ensure branding and to allow users to identify the organization. However, browsers display logos mostly in the main browser window, as part of the content of the web page; this allows a rogue, spoofing site to present false logos and impersonate another site. One exception is the FavIcon, a small icon of the web site, displayed at the beginning of the location bar in most (newer) browsers. Many browsers, e.g. [Mozilla], simply display any FavIcon identified in the web page. Other browsers, including Internet Explorer, display the FavIcon only for web pages included in the user’s list of ‘Favorite’ web pages, possibly to provide some level of validation. However, since browsers display the FavIcon even on unprotected pages, and come with huge lists of predefined favorite links, this security is quite weak. We believe that the logo or icon presented in the FavIcon area should be considered part of the TrustBar and protected in the same manner.


Transport Layer Security Protocol

TLS (Transport Layer Security) was released in response to the Internet community’s demands for a standardized protocol. The IETF provided a venue for the new protocol to be openly discussed, and encouraged developers to provide their input to the protocol.

The TLS protocol was released in January 1999 to create a standard for private communications. The protocol “allows client/server applications to communicate in a way that is designed to prevent eavesdropping, tampering or message forgery.” According to the protocol’s creators, the goals of the TLS protocol are cryptographic security, interoperability, extensibility, and relative efficiency. These goals are achieved through implementation of the TLS protocol on two levels: the TLS Record protocol and the TLS Handshake protocol.

TLS Record Protocol

The TLS Record protocol provides a private, reliable connection between the client and the server. Although the Record protocol can be used without encryption, it normally uses symmetric cryptographic keys to keep the connection private. The integrity of each message is protected with a keyed hash computed using a Message Authentication Code (MAC).
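As a rough illustration of the MAC-based integrity check (not the exact TLS record-layer construction, which also covers sequence numbers and record headers), a keyed HMAC over each fragment might look like this:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Illustrative sketch: the record layer authenticates each fragment with
// an HMAC keyed by a secret negotiated during the handshake, so tampering
// in transit is detectable by the receiver.
public class RecordMac {
    public static byte[] mac(byte[] key, byte[] fragment) throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(key, "HmacSHA256"));
        return hmac.doFinal(fragment);
    }

    public static boolean verify(byte[] key, byte[] fragment, byte[] tag) throws Exception {
        // Constant-time comparison to avoid leaking tag information.
        return java.security.MessageDigest.isEqual(mac(key, fragment), tag);
    }
}
```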

TLS Handshake Protocol

The TLS Handshake protocol allows authenticated communication to commence between the server and client. It lets the client and server speak the same language, agreeing on an encryption algorithm and encryption keys before the selected application protocol begins to send data. Using the same handshake procedure as SSL, TLS provides for authentication of the server and, optionally, of the client.
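As a small illustration using the standard Java `javax.net.ssl` API: the values below are what an endpoint can offer during negotiation; the handshake then selects a protocol version and cipher suite that both sides support.

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

// Lists what this JVM's default SSL/TLS provider can offer in a handshake.
public class HandshakeOffer {
    public static String[] protocols() throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        SSLParameters p = ctx.getSupportedSSLParameters();
        return p.getProtocols();   // e.g. "TLSv1.2", "TLSv1.3"
    }

    public static String[] cipherSuites() throws Exception {
        return SSLContext.getDefault().getSupportedSSLParameters().getCipherSuites();
    }
}
```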


SAML Threat Model

The general Internet threat model described in the IETF guidelines for security considerations is the basis for the SAML threat model. We assume here that the two or more endpoints of a SAML (Security Assertion Markup Language) transaction are uncompromised, but that the attacker has complete control over the communications channel. Additionally, due to the nature of SAML as a multi-party authentication and authorization statement protocol, cases must be considered where one or more of the parties in a legitimate SAML transaction—who operate legitimately within their role for that transaction—attempt to use information gained from a previous transaction maliciously in a subsequent transaction.

The following scenarios describe possible attacks:

1. Collusion: The secret cooperation between two or more system entities to launch an attack.

2. Denial-of-Service Attacks: The prevention of authorized access to a system resource or the delaying of system operations and functions.

3. Man-in-the-Middle Attacks: A form of active wiretapping attack in which the attacker intercepts and selectively modifies communicated data to masquerade as one or more of the entities involved in a communication association.

4. Replay Attacks: An attack in which a valid data transmission is maliciously or fraudulently repeated, either by the originator or by an adversary who intercepts the data and retransmits it, possibly as part of a masquerade attack.

5. Session Hijacking: A form of active wiretapping in which the attacker seizes control of a previously established communication association.
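As an illustration of one common defence against the replay threat above, a relying party can cache assertion IDs and honour expiry times. The class below is a hypothetical sketch (the field names mirror SAML's `ID` and `NotOnOrAfter` conditions, but the code is not part of the SAML specification):

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

// Hypothetical relying-party replay cache: reject an assertion whose ID
// has been seen before (replay) or whose expiry time has passed.
public class AssertionReplayCache {
    private final Map<String, Instant> seen = new HashMap<>();

    public boolean accept(String assertionId, Instant notOnOrAfter, Instant now) {
        if (!now.isBefore(notOnOrAfter)) return false;   // expired assertion
        if (seen.containsKey(assertionId)) return false; // replayed assertion
        seen.put(assertionId, notOnOrAfter);             // remember until expiry
        return true;
    }
}
```

A production cache would also evict expired entries and be shared safely across threads; those details are omitted here.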

In all cases, the local mechanisms that systems will use to decide whether or not to generate assertions are out of scope. Thus, threats arising from the details of the original login at an authentication authority, for example, are out of scope as well. Likewise, if an authority issues a false assertion, the threats arising from the consumption of that assertion by downstream systems are explicitly out of scope.

The direct consequence of such scoping is that the security of a system based on assertions as inputs is only as good as the security of the system used to generate those assertions, and as the correctness of the data and processing on which the generated assertions are based. When determining which issuers to trust, particularly where assertions will be used as inputs to authentication or authorization decisions, the risk of security compromises arising from the consumption of false but validly issued assertions is a large one. Trust policies between asserting and relying parties should always be written to include significant consideration of liability, and implementations should provide an appropriate audit trail.


Abstract Architecture for Service-oriented Automated Negotiation System

The service-oriented automated negotiation system includes the following four participants: the service registration centre, the negotiation service requester, the negotiation service provider, and the protocol.

The service registration centre is a database of service providers’ information. The negotiation service provider follows a standard service API so that customers can use negotiation services from different service providers. The service registration centre supports all customers and services in the open system, of which negotiation is just one. In addition to negotiation service registration, it provides customers with other services such as security and auditing.

The negotiation service requester discovers and calls the negotiation service. It provides business solutions to enterprises and individuals using the negotiation service. In general, the service requester does not interact with the service registration centre directly, but through an application portal system. The benefit of doing so is that the application portal can provide users with access to a wide range of services.

The negotiation service provider, managed by the negotiation software vendors, advertises its negotiation service to the service registration centre, including the registration of its functions and access interfaces. It responds to the service request. At the same time, it must ensure that any modifications to the service will not affect the requester.

The protocol is an agreement between negotiation service requesters and providers. It standardizes service request and response and ensures mutual communication.

Figure 1: Abstract architecture for automated negotiation system

Figure 1 illustrates a simple negotiation service interaction cycle, which begins with a negotiation service ADVERTISING itself through a well-known service registration centre. A negotiation requester, who may or may not run as a separate service, queries the service registration centre to DISCOVER a service that meets its need for negotiation. The service registration centre returns a (possibly empty) list of suitable services, and the service requester selects one and passes a request message to it, using any mutually recognized protocol. In this example, the negotiation service responds either with the result of the requested operation or with a fault message. Then they INTERACT with each other continuously.
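The ADVERTISE and DISCOVER steps of this cycle can be sketched in a few lines; the registry API below is invented for illustration and is not tied to any standard registry interface:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the ADVERTISE / DISCOVER steps in Figure 1.
public class ServiceRegistry {
    private final List<String> services = new ArrayList<>();

    // A provider registers (advertises) its negotiation service.
    public void advertise(String serviceName) {
        services.add(serviceName);
    }

    // A requester queries; the returned list may be empty, as in the text.
    public List<String> discover(String keyword) {
        List<String> matches = new ArrayList<>();
        for (String s : services) {
            if (s.contains(keyword)) matches.add(s);
        }
        return matches;
    }
}
```

The requester would then pick one entry from the returned list and INTERACT with it over the agreed protocol.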


A protocol answering the usability and scalability issues

A major observation that substantially improves the protocol is that each user typically uses a limited set of computers, and that it is unlikely that a dictionary attack would be mounted against this user’s account from one of these computers. This observation can be used to address the scalability and usability issues, while reducing security by only a negligible amount. Another helpful observation is that it would be hard for the adversary to attack accounts even if it is required to answer RTTs for only a fraction of its login attempts (we show below how to do this without running into the problem pointed out in the comment above). This enables a great improvement in the scalability of the system.

The protocol assumes that the server has a reliable way of identifying computers. For example, in a web-based system the server can identify a web browser using cookies. Other identification measures could be based, for example, on network addresses or on special client programs used by the user. To simplify the exposition we assume that the login process uses a web browser, and that cookies are enabled. Our protocol can handle cookie theft, as we describe below.

User login protocol

Initialization: Once the user has successfully logged into an account, the server places in the user’s computer a cookie that contains an authenticated record of the username, and possibly an expiration date. (“Authenticated” means that no party except the server is able to change the cookie data without being detected by the server. This can be ensured, for example, by adding a MAC that is computed using a key known only to the server.) Cookies of this type can be stored in several computers, as long as each of them was used by the user.
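A minimal sketch of such an authenticated cookie, using an HMAC under a server-only key (the class name and the `username|expiry|mac` payload format are assumptions for illustration, not a prescribed encoding):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.Base64;

// Sketch of the initialization step's cookie: the server appends an HMAC
// (keyed by a server-only secret) to the username and expiry, so any
// client-side tampering is detected on the next login attempt.
public class AuthCookie {
    public static String issue(byte[] serverKey, String username, long expires) throws Exception {
        String payload = username + "|" + expires;
        return payload + "|" + Base64.getEncoder().encodeToString(hmac(serverKey, payload));
    }

    public static boolean verify(byte[] serverKey, String cookie) throws Exception {
        int i = cookie.lastIndexOf('|');
        if (i < 0) return false;
        byte[] expected = hmac(serverKey, cookie.substring(0, i));
        byte[] given = Base64.getDecoder().decode(cookie.substring(i + 1));
        return java.security.MessageDigest.isEqual(expected, given);
    }

    private static byte[] hmac(byte[] key, String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(data.getBytes("UTF-8"));
    }
}
```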


Login: The user enters a username and a password, and the server checks whether the pair is valid. The server also examines any cookie (of the type described above) sent with the request.

If the username/password pair is correct:

  1. If the cookie is correctly authenticated and has not yet expired, and the user identification record stored in the cookie agrees with the entered username, then the user is granted access to the server.
  2. Otherwise (there is no cookie, or the cookie is not authenticated, or the user identification in the cookie does not agree with the entered username), the server generates an RTT and sends it to the user. The user is granted access to the server only if he answers the RTT correctly.

If the username/password pair is incorrect:

  1. With probability p (where 0 < p <= 1 is a system parameter, say p = 0.05), the user is asked to answer an RTT. When his answer is received he is denied access to the server, regardless of whether it is correct or not.
  2. With probability 1 - p, the user is immediately denied access to the server.

This is the login protocol.
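The decision logic of this protocol can be sketched as a pure function; the password check, cookie authentication, and the RTT itself are abstracted into inputs, and all names below are invented for illustration:

```java
// Sketch of the login decision logic. "coin" is a uniform random draw in
// [0,1) made by the server for failed attempts; "p" is the system
// parameter from the protocol (e.g. 0.05).
public class RttLogin {
    public enum Outcome { GRANT, GRANT_IF_RTT, DENY_AFTER_RTT, DENY }

    public static Outcome decide(boolean passwordOk, boolean cookieOk,
                                 double coin, double p) {
        if (passwordOk) {
            // Valid cookie: grant directly; otherwise require an RTT first.
            return cookieOk ? Outcome.GRANT : Outcome.GRANT_IF_RTT;
        }
        // Wrong password: with probability p serve an RTT and deny anyway,
        // otherwise deny immediately. The attacker cannot distinguish the
        // branches, so it must be prepared to answer RTTs.
        return (coin < p) ? Outcome.DENY_AFTER_RTT : Outcome.DENY;
    }
}
```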


The user experiences almost the same interaction as in the original login protocol (which uses no RTTs). The only difference is that he is required to pass an RTT in two cases: (1) when he tries to log in from a new computer for the first time, and (2) when he enters a wrong password (with probability p). We assume that most users are likely to use a small set of computers to access their accounts, and use other computers very infrequently (say, while traveling without a laptop). We also have evidence, from the use of RTTs in Yahoo!, AltaVista and PayPal, that users are willing to answer RTTs if they are not asked to do so very frequently. Based on these observations we argue that the suggested protocol could be implemented without substantially downgrading the usability of the login system.


The main difference in the operation of the server, compared to a login protocol that does not use RTTs, is that it has to generate RTTs whenever the protocol requires them. To simplify the analysis, assume that most users use a small set of computers, and very seldom use a computer that they haven’t used before. We can therefore estimate that the server is required to generate RTTs only for a fraction p of the unsuccessful login attempts. This is a great improvement compared to the basic protocol, which requires the server to generate an RTT for every login attempt. Note that the overhead decreases as p is set to a smaller value. As for the cost of adding the new protocol to an existing system, note that no changes need to be made to the existing procedure, where the authentication module receives a username/password pair and decides whether they correspond to a legitimate user. The new protocol can use this procedure as a subroutine, together with additional code that decides whether to require the user to pass an RTT, and whether to accept the user as legitimate.


Scope and Applicability of Randomized Hashing

Randomized hashing is designed for situations where one party, the message preparer, generates all or part of a message to be signed by a second party, the message signer. If the message preparer is able to find cryptographic hash function collisions (i.e., two messages producing the same hash value), then she might prepare meaningful versions of the message that would produce the same hash value and digital signature, but with different results (e.g., transferring $1,000,000 to an account, rather than $10). Cryptographic hash functions have been designed with collision resistance as a major goal, but the current concentration on attacking cryptographic hash functions may result in a given cryptographic hash function providing less collision resistance than expected. Randomized hashing offers the signer additional protection by reducing the likelihood that a preparer can generate two or more messages that ultimately yield the same hash value during the digital signature generation process – even if it is practical to find collisions for the hash function. However, the use of randomized hashing may reduce the amount of security provided by a digital signature when all portions of the message are prepared by the signer.

In randomized hashing, a quantity called a “random value,” or rv, that the preparer cannot predict, is used by the signer to modify the message. This modification occurs after the preparer commits to the message (i.e., passes the message to the signer), but before the signer computes the hash value. The technique specified in this Recommendation does not require knowledge of the specific cryptographic hash function; the same randomization process is used regardless of the cryptographic hash function used in the digital signature application. Protocol and application designers should select cryptographic hash functions believed to be collision resistant, and then consider the use of randomized hashing in the design of their protocol or application whenever one party prepares a full or partial message for signature by another party.

The randomization method specified in this Recommendation is an approved method for randomizing messages prior to hashing. The method will enhance the security provided by the approved cryptographic hash functions in certain digital signature applications.
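As a deliberately simplified illustration of the idea (not the randomization method this Recommendation actually specifies), the signer can mix a fresh random value into the hash input after the preparer has committed to the message:

```java
import java.security.MessageDigest;
import java.security.SecureRandom;

// Simplified illustration only: the signer draws rv after receiving the
// committed message and mixes it into the hash input, so any collision the
// preparer found in advance is unlikely to survive the unpredictable rv.
public class RandomizedHash {
    public static byte[] hash(byte[] rv, byte[] message) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(rv);        // signer-chosen random value first
        md.update(message);   // then the preparer's committed message
        return md.digest();
    }

    public static byte[] freshRv(int len) {
        byte[] rv = new byte[len];
        new SecureRandom().nextBytes(rv);   // unpredictable to the preparer
        return rv;
    }
}
```

The rv must accompany the signature so the verifier can recompute the same randomized hash.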


Exceptions and ‘finally’

Although modeled after the C++ mechanism, Java avoids some of C++’s more severe problems by using the ‘finally’ clause. In C++, when an exception leaves the scope of a function, all objects that are allocated on the stack are reclaimed, and their destructors are called. Thus, if you want to free a resource or otherwise clean something up when an exception passes by, you must put that code in the destructor of an object allocated on the stack. For example:

template <class T>
class Deallocator {
public:
    Deallocator(T* o) : itsObject(o) {}
    ~Deallocator() { delete itsObject; }

private:
    T* itsObject;
};

void f() throw (int)
{
    Deallocator<Clock> dc = new Clock;
    // ... things happen, exceptions may be thrown.
}


In this example, the Deallocator<Clock> object is responsible for deleting the instance of Clock that was allocated on the heap. Whenever ‘dc’ goes out of scope, either because ‘f’ returns or because an exception is thrown, the destructor for ‘dc’ will be called and the Clock instance will be returned to the heap.

This is artificial, error prone, and inconvenient. Moreover, there are some really nasty issues having to do with throwing exceptions from constructors and destructors that make exceptions in C++ a difficult feature to use well; the traps and pitfalls of C++ exceptions are discussed in more detail elsewhere.

Every try block can have a ‘finally‘ clause. Any time such a block is exited, regardless of the reason for that exit (e.g. execution could proceed out of the block, or an exception could pass through it) the code in the finally clause is executed.

public class Thing {
    public void Exclusive() {
        itsSemaphore.acquire();
        try {
            // Code that executes while semaphore is acquired.
            // Exceptions may be thrown.
        } finally {
            itsSemaphore.release();  // runs on normal exit and on exception
        }
    }

    private Semaphore itsSemaphore;
}


The above Java snippet shows an example of the finally clause. The method Exclusive acquires a semaphore and then continues to execute. The finally clause frees the semaphore both when the try block exits normally and when an exception is thrown.

In many ways, this scheme seems superior to the C++ mechanism. Cleanup code can be specified directly in the finally clause rather than artificially put into some destructor. Also, the cleanup code can be kept in the same scope as the variables being cleaned up. In my opinion, this often makes Java exceptions easier to use than C++ exceptions. The main downside of the Java approach is that it forces the application programmer (1) to know the release protocol for every resource allocated in a block, and (2) to explicitly handle all cleanup operations in the finally block. Nevertheless, I think the C++ community ought to take a good hard look at the Java solution.
