Key management procedures in Cloud computing

Cloud computing infrastructures require the management and storage of many different kinds of keys; examples include session keys to protect data in transit (e.g., SSL keys), file encryption keys, key pairs identifying cloud providers, key pairs identifying customers, authorization tokens and revocation certificates. Because virtual machines do not have a fixed hardware infrastructure and cloud-based content tends to be geographically distributed, it is more difficult to apply standard controls, such as hardware security module (HSM) storage, to keys in cloud infrastructures. For example:

  1. HSMs are by necessity strongly physically protected (from theft, eavesdropping and tampering). This makes it very difficult for them to be distributed across the multiple locations used in cloud architectures (i.e., geographically distributed and highly replicated). Key management standards such as PKCS#10 and associated standards such as PKCS#11 do not provide standardized wrappers for interfacing with distributed systems (see the key-hierarchy sketch after this list).
  2. Key management interfaces that are accessible via the public Internet (even if only indirectly) are more vulnerable, because security depends on the communication channel between the user and the cloud key store and on the mutual remote authentication mechanisms used.
  3. New virtual machines needing to authenticate themselves must be instantiated with some form of secret. The distribution of such secrets may present problems of scalability. The rapid scaling of certification authorities issuing key pairs is easily achieved if resources are determined in advance, but dynamic, unplanned scaling of hierarchical trust authorities is difficult to achieve because of the resource overhead of creating new authorities (registration or certification), authenticating new components and distributing new credentials.
  4. Revocation of keys within a distributed architecture is also expensive. Effective revocation essentially implies that applications check the status of the key (usually a certificate) within a known time constraint, which determines the window of risk. Although distributed mechanisms exist for achieving this, it is a challenge to ensure that different parts of the cloud receive an equivalent level of service, so that they are not exposed to different levels of risk. Centralized solutions such as OCSP are expensive and do not necessarily reduce the risk unless the CA and the CRL are tightly bound.
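One common mitigation for the HSM-distribution problem in item 1 is a key hierarchy (envelope encryption): bulk data keys are wrapped under a small number of master keys, so only the master keys need HSM- or KMS-grade protection at a central location. The sketch below is a minimal illustration of the idea, not any provider's actual API; it assumes the third-party Python `cryptography` package and uses an in-memory key where a real deployment would use an HSM or cloud KMS.

```python
# Minimal envelope-encryption sketch: only the key-encryption key (KEK)
# would need HSM/KMS protection; data keys are stored wrapped alongside data.
from cryptography.fernet import Fernet

# KEK: in a real deployment this would live inside an HSM or a cloud KMS,
# never in application memory. Here it is generated locally for illustration.
kek = Fernet.generate_key()
kek_cipher = Fernet(kek)

def encrypt_object(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt one object with a fresh data key; return (wrapped_key, ciphertext)."""
    data_key = Fernet.generate_key()            # per-object data-encryption key
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped_key = kek_cipher.encrypt(data_key)  # wrap the data key under the KEK
    return wrapped_key, ciphertext

def decrypt_object(wrapped_key: bytes, ciphertext: bytes) -> bytes:
    data_key = kek_cipher.decrypt(wrapped_key)  # unwrap inside the trusted boundary
    return Fernet(data_key).decrypt(ciphertext)

if __name__ == "__main__":
    wrapped, blob = encrypt_object(b"customer record 42")
    assert decrypt_object(wrapped, blob) == b"customer record 42"
```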


Identification of Web Sites and Certification Authorities

Currently, browsers identify the provider of a web page by indicating the Uniform Resource Locator (URL) of the page in the location bar of the browser. This usually allows knowledgeable web users to identify the owner of the site, since the URL includes the domain name (which an authorized domain name registrar allocates to a specific organization; registrars are expected to deny potentially misleading domain names). However, the identity of the provider is not necessarily included (fully) in the URL, and the URL contains mostly irrelevant information such as protocol, file, and computer details. Furthermore, the URL is presented textually, which means the user must make a conscious decision to validate it. All this implies that this mechanism may allow a knowledgeable web user, when alert and on guard, to validate the owner of the site; but novice, naïve or off-guard users may not notice an incorrect domain, just as they may fail to notice whether the site is secure, as discussed in the previous subsection.

Furthermore, popular browsers are pre-configured with a list of many certification authorities, and the liabilities of certificate authorities are not well defined; also, the identity of the CA is displayed only if the user explicitly asks for it (which very few users do regularly, even for sensitive sites). As a result, it may not be very secure to rely on the URL or identity from the SSL certificate. Therefore, we prefer a more direct and secure means of identifying the provider of the web page, and – if relevant – of the CA, rather than simply presenting the URL from the SSL certificate in the TrustBar.

TrustBar identifies, by default, both the site and the certificate authority (CA) which identified the site, allowing users to decide whether they trust the identification by that authority. The identification is based on SSL server authentication, confirming that the site possesses the private key corresponding to a public key in a certificate signed by the given certificate authority, which currently must be one of the certificate authorities whose keys are pre-programmed into the browser.

Figure 1.1: Screen-shots of secure sites with logo in TrustBar

Preferably, TrustBar identifies the site and authority by logo (or some other image selected by the user, e.g. a ‘my banks’ icon). However, since current certificates do not contain a logo, TrustBar can also identify the site and authority by name. See Figure 1.1 for identifications by logo (in (b) and (c)) and by name (in (a)). TrustBar supports certificate-derived and user-customized identifiers for sites, by logo or name:

  1. Certificate-derived identification: Names are taken from the `organization name` field of the existing X.509 SSL certificates. Such names are presented together with the text `Identified by` and the name or logo of the Certificate Authority (CA) which identified this site. The site may provide the logo in an appropriate (public key or attribute) certificate extension. This may be the same as the certificate used for the SSL connection, or another certificate (e.g. identified by a <META> tag in the page). The logo may be signed by entities that focus on validating logos, e.g. national and international trademark agencies, or by a certificate authority trusted by the user (a minimal sketch of reading these certificate fields follows this list).
  2. User-customized identification: The user can identify a logo for a site, e.g. by `right-click` on an image of the logo (which usually appears on the same page). Users can also select a textual site identifier (a `pet name`), presented by TrustBar to identify the site. Whenever opening a page with the same public key, TrustBar automatically presents this logo or pet name for the site.
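As an illustration of where the certificate-derived name and issuer come from, the following sketch pulls the `organizationName` of the subject and of the issuer from a live SSL/TLS certificate using only Python's standard library. The hostname is a placeholder; a TrustBar-style extension would of course read the certificate from the browser's own connection rather than opening a new one.

```python
# Sketch: read the subject and issuer organization names from a site's
# TLS certificate, the fields a TrustBar-style indicator would display.
import socket
import ssl

def cert_identity(hostname: str, port: int = 443) -> tuple[str, str]:
    context = ssl.create_default_context()    # uses the pre-installed root CA list
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()           # parsed certificate as a dict
    subject = dict(item for rdn in cert["subject"] for item in rdn)
    issuer = dict(item for rdn in cert["issuer"] for item in rdn)
    return (subject.get("organizationName", subject.get("commonName", "?")),
            issuer.get("organizationName", issuer.get("commonName", "?")))

if __name__ == "__main__":
    site_org, ca_org = cert_identity("www.example.com")  # placeholder hostname
    print(f"{site_org} -- identified by {ca_org}")
```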

By displaying the logo or name of the Certifying Authority (e.g. EquiFax or Verisign in Figure 1.1), we make use of and reinforce its brand at the same time. Furthermore, this creates an important linkage between the brand of the CA and the validity of the site; namely, if a CA failed and issued a certificate for a spoofing web site, the fact that it failed would be very visible and it would face loss of credibility as well as potential legal liability.

Notice that most organizational web sites already use logos in their web pages, to ensure branding and to allow users to identify the organization. However, browsers display logos mostly in the main browser window, as part of the content of the web page; this allows a rogue, spoofing site to present false logos and impersonate another site. One exception is the FavIcon, a small icon of the web site, displayed at the beginning of the location bar in most (newer) browsers. Many browsers, e.g. [Mozilla], simply display any FavIcon identified in the web page. Other browsers, including Internet Explorer, display the FavIcon only for web pages included in the user’s list of ‘Favorite’ web pages, possibly to provide some level of validation. However, since browsers display the FavIcon also on unprotected pages, and come with huge lists of predefined favorite links, this security is quite weak. We believe that the logo or icon presented in the FavIcon area should be considered a part of the TrustBar and protected in the same manner.


Virtual Reality enhanced stroke rehabilitation system

Rehabilitation following stroke must address both the underlying deficits (range of motion, strength, and coordination) and the skilled use of the arm for the performance of ADL. Ideas gleaned from motor learning research suggest that rehabilitation should include a large amount of practice that contains not only repetition of an activity but performance of that activity in a way that promotes solving new and novel motor problems. In this sense, using VR technology may assist the rehabilitation process by allowing the systematic presentation of practice trials of a given task to a degree not fully possible in traditional therapy. The potential advantages of using VR technology in rehabilitation are (1) interactivity to motivate stroke patients, including video and auditory feedback, and (2) manipulability, allowing the therapist to tailor treatment sessions to the deficits specific to an individual and to increase task complexity as appropriate. In addition, trials in a VR-based stroke rehabilitation system (VRSRS) can be presented in such a way as to require both repetition and problem solving for the promotion of motor learning, without boredom, thanks to its game features. Research to date has found that the use of VR in motor rehabilitation for individuals post-stroke is feasible for addressing deficits in reaching, hand function, and walking. Nonetheless, important issues such as usability in designing applications of VRSRS have often been neglected, or at least not firmly established, because using VRSRS as a therapeutic intervention is still in its infancy. In the following sections, we will describe the concept of human factors design and how we have applied the concept to one of our applications of VRSRS, the Reaching Task.


Impact of Interface Characteristics on Digital Libraries Usage

The fundamental reason for building digital libraries is the belief that they will provide better delivery of information than was possible in the past. The major advantages of digital libraries over traditional libraries include:

  1. Digital libraries bring the libraries closer to the users: Information is brought to users, whether at home or at work, making it more accessible and increasing its usage. This is very different from traditional libraries, where users have to go to the library physically.
  2. Computer technology is used for searching and browsing: Computer systems are better than manual methods for finding information. They are especially useful for reference work that involves repeated leaps from one source of information to another.
  3. Information can be shared: Placing digital information on a network makes it available to everyone. Many digital libraries are maintained at a single central site. This is a vast improvement over expensive physical duplication of little-used material, or the inconvenience of unique material that is inaccessible without traveling to the location where it is stored.
  4. Information is always available: The digital library’s doors never close; its collections can be used at hours when library buildings are closed. Materials are never checked out, mis-shelved, or stolen. Compared with traditional libraries, information is therefore much more likely to be available when and where the user wants it.
  5. New forms of information become possible: Conventional library collections are printed on paper, yet print is not always the best way to record and disseminate information; a database, for example, may be a better medium for some kinds of information.

Digital libraries would definitely facilitate research work, and this benefit is appreciated mainly by those involved in the field of research. However, recent studies have shown that people still prefer to read from paper despite the progress in technology. Today, with many people searching for new knowledge and information, the Internet is expected to take on the role of the human intermediary. There is also an expectation that people are digitally literate. On the other hand, some end-users do not have the literacy to search the Internet effectively for information. The problem is compounded by the fact that the Internet as a whole is not well organized, and information retrieval is inevitably a difficult and time-consuming process.


What are the projected risks of Quantum Computing?

Although there are many proposed benefits anticipated from quantum computing, there are also potential risks. Among these are the following:

  1. While advancements in security will be welcome within the IT community, there is a possibility of an uneven distribution of adoption of the new technology. If some firms adopt quantum computing and others do not, those without these systems will be vulnerable to security threats.
  2. Conceptually, it is believed that with quantum technology we will be able to build microscopic machines such as a nanoassembler, a virtually universal constructor that will not just take materials apart and rebuild them atom by atom but also replicate itself. The good news about such self-replicating machines is that these nanomachines would cost nothing to build and could eventually make any product we might desire at zero cost. The bad news is that these HAL-like computing brains, with capabilities exceeding those of humans, could redesign and replicate themselves at no cost, other than the loss of human dominance.
  3. Quantum computing will instigate rapid changes in computing and corresponding modifications to human life, at a time known as the point of Singularity. When that day arrives, some futurists fear that quantum computing will cause things to change so fast that it will be impossible to predict what will happen next. Or, there will be “a developmental discontinuity, an ultimate event horizon beyond which predictability breaks down totally.” It sounds as terrifying as the scenarios in a science fiction film; theoretically, nevertheless, it is a risk to which quantum computing might eventually lead us.


Approaches to minimizing user-related faults in IS security

Recent research on minimizing user-related faults in information systems (IS) security can be roughly summarized as follows. First, since ancient times, punishment has been used to discourage ‘wrongdoing’. It has been debated whether punishment as a deterrent is relevant in the context of contemporary IS security. Results that support the economic theories of punishment have been published. However, scholars of the behavioural community have presented much evidence of the negative long-run consequences of using punishment, for instance loss of productivity, increased dissatisfaction, and aggression.

Second, the importance of ease of safe use and the related transparency principle have been presented. Similarly, an approach named User Centered Security (UCS) has been put forward. However, some argue that ‘ease of safe use’ has not been properly defined. Moreover, some elements of the mentioned approaches are argued to teach users to take security for granted, which may lead to neglecting or misusing forthcoming security mechanisms. Furthermore, the aforementioned approaches are criticized for not presenting guidelines for modelling, let alone resolving, conflicting requirements.

Third, the Organizational psychology and incident analysis (OPIA) approach has argued that human errors can only be overcome by understanding human behaviour. However, according to Siponen, the six theses that constitute OPIA do not stand up to closer psychological scrutiny. For instance, the effects of weakness of will and lack of commitment are not taken into account.

Fourth, the importance of awareness has been underlined, since it is perceived as instrumental to the effort of reducing ‘human error’. The topic has been approached systematically, and program frameworks have been developed. Extending the analysis, Siponen has presented a conceptual foundation for organizational information security awareness that differentiates between the framework (‘hard’, structural) and content (informal, interdisciplinary) aspects.


Web Spoofing: Threat Models, Attacks and Current Defenses

The initial design of Web protocols and the Internet assumed a benign environment, in which servers, clients and routers cooperate and follow the standard protocols, except for unintentional errors. However, as the amount and sensitivity of usage increased, concerns about security, fraud and attacks became important. In particular, since Internet access is now widely (and often freely) available, it is very easy for attackers to obtain many client and even host connections and addresses, and to use them to launch different attacks on the network itself (routers and network services such as DNS) and on other hosts and clients. In particular, with the proliferation of commercial domain name registrars allowing automated, low-cost registration in most top-level domains, it is currently very easy for attackers to acquire essentially any unallocated domain name and place malicious hosts and clients there. We call this the unallocated domain adversary: an adversary who is able to issue and receive messages using many addresses in any domain name, excluding the finite list of already allocated domain names. This is probably the most basic and common type of adversary.

Unfortunately, we believe, as explained below, that currently most web users are vulnerable even against unallocated domain adversaries. This claim may be surprising, as sensitive web sites are usually protected using the SSL or TLS protocols, which, as we explain in the following subsection, securely authenticate web pages even in the presence of intercepting adversaries (often referred to as Man In The Middle (MITM) attackers). Intercepting adversaries are able to send and intercept (receive, eavesdrop on) messages to and from all domains. Indeed, even without SSL/TLS, the HTTP protocol securely authenticates web pages against spoofing adversaries, which are able to send messages from all domains but receive only messages sent to unallocated (adversary-controlled) domains. However, the security provided by SSL/TLS (against an intercepting adversary; or by HTTP against a spoofing adversary) is only with respect to the address (URL) and security mechanism (HTTPS, using SSL/TLS, or ‘plain’ HTTP) requested by the application (usually a browser). In a phishing attack (and most other spoofing attacks), the application specifies, in its request, the URL of the spoofed site. Namely, web spoofing attacks exploit the gap between the intentions and expectations of the user and the address and security mechanism specified by the browser to the transport layer.
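To make that gap concrete, the following standard-library Python sketch (with a placeholder notion of the user's intended site) checks whether a URL the browser is about to request actually uses HTTPS and actually belongs to the domain the user believes they are visiting. A phishing link typically fails one or both checks, even though the transport layer will faithfully secure the connection to the attacker's own domain.

```python
# Sketch: the transport layer secures whatever URL it is given; whether that
# URL matches the user's *intended* site is a separate check, illustrated here.
from urllib.parse import urlparse

def matches_intended_site(url: str, intended_domain: str) -> bool:
    """Return True only if url uses HTTPS and its host is intended_domain
    or a subdomain of it (e.g. 'login.example-bank.com')."""
    parts = urlparse(url)
    host = (parts.hostname or "").lower()
    secure = parts.scheme == "https"
    same_site = host == intended_domain or host.endswith("." + intended_domain)
    return secure and same_site

if __name__ == "__main__":
    intended = "example-bank.com"   # placeholder for the site the user trusts
    print(matches_intended_site("https://login.example-bank.com/auth", intended))        # True
    print(matches_intended_site("https://example-bank.com.evil.example/auth", intended)) # False
    print(matches_intended_site("http://example-bank.com/auth", intended))               # False: no SSL/TLS
```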


Process for avoiding SPAM

While the SPAM-blocking capabilities of Web mail providers are good, they will never be perfect, and spammers can be expected to evolve their tactics in an attempt to circumvent SPAM filters. To conduct our research, we tried to do everything wrong in an attempt to attract SPAM. What follows are some guidelines on what users can do to minimize the amount of SPAM they receive:

Recognize suspicious sites

In our experience, it’s an invitation for SPAM (or identity theft) to submit your email address and other information to sites that:

  1. Request your email address on their home page.
  2. Claim to be free but request your credit card information “for verification purposes.”
  3. Make any claims that seem too good to be true.
  4. Make it hard to leave by popping up “are you sure” types of notifications.
  5. Open popup windows as soon as you visit them.
  6. Promise something valuable for very little work (“get a free iPad just for filling out a survey”).
  7. Claim you are a randomly selected winner.
  8. Claim there’s limited time to act on an offer.

If you are interested in what a site offers but it appears suspicious, you can often find out by doing a search for the Web site to see if it’s a scam. For example, search for “theremovelist scam.”

Recognize SPAM

Spam is often identifiable in your inbox, based on certain characteristics:

What to do with SPAM

Do:

  1. Delete the email.
  2. Use your Web mail provider’s ability to mark it as junk. However, do not mark an email as SPAM if you have intentionally subscribed to it and no longer wish to receive it.

Don’t:

  1. Display the images in the email. Doing so signals to the spammer that they have a working email address (see the sketch after this list).
  2. Unsubscribe. If it’s a legitimate email you can unsubscribe, but if it’s truly unsolicited, unsubscribing only tells the spammer they have a real email address.
  3. Click on links. This also sends a signal to the spammer.
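The remote-image point above works against you because fetching an image from the spammer's server reveals that the address is live. A minimal sketch of the corresponding defence, roughly what a mail client does when it "blocks images", is shown below in standard-library Python; the regex-based stripping is an illustrative simplification, not a complete HTML sanitizer.

```python
# Sketch: remove remote <img> references (tracking pixels) from an HTML email
# body before displaying it, so no request is ever sent to the spammer's server.
import re

REMOTE_IMG = re.compile(
    r'<img\b[^>]*\bsrc\s*=\s*["\']https?://[^"\']*["\'][^>]*>',
    re.IGNORECASE,
)

def strip_remote_images(html_body: str) -> str:
    """Replace every remote image tag with a harmless placeholder."""
    return REMOTE_IMG.sub("[remote image blocked]", html_body)

if __name__ == "__main__":
    body = '<p>You won!</p><img src="http://spammer.example/pixel.gif?id=42">'
    print(strip_remote_images(body))
    # -> <p>You won!</p>[remote image blocked]
```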


Honeypots or decoy email addresses

Honeypots (decoy email addresses) are used for collecting large amounts of spam. These decoy email addresses do not belong to actual end users, but are made public to attract spammers who will think the address is legitimate. Once the spam is collected, identification techniques, such as hashing systems or fingerprinting, are used to process the spam and create a database of known spam. Let’s take a closer look at hashing systems and fingerprinting.

HASHING SYSTEMS: With hashing systems, each spam email receives an identification number, or “hash,” that corresponds to the contents of the spam. A list of known spam emails and their corresponding hashes is then created. All incoming email is compared to this list of known spam. If the hashing system determines that an incoming email matches an email in the spam list, then the email is rejected. This technique works as long as spammers send the same or nearly the same email repeatedly. One of the original implementations of this technique was called Razor.
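A toy version of such a hashing system might look like the Python sketch below; the normalization step and the in-memory set standing in for the shared spam database are illustrative assumptions (Razor and similar systems use more robust, fuzzier signatures).

```python
# Toy hashing system: normalize a message body, hash it, and compare the hash
# against a set of hashes collected from honeypot addresses.
import hashlib

known_spam_hashes: set[str] = set()   # stands in for the shared spam database

def spam_hash(body: str) -> str:
    """Very naive normalization + SHA-256; real systems use fuzzier signatures."""
    normalized = " ".join(body.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def learn_spam(body: str) -> None:
    known_spam_hashes.add(spam_hash(body))        # message arrived at a honeypot

def is_known_spam(body: str) -> bool:
    return spam_hash(body) in known_spam_hashes   # reject on exact match

if __name__ == "__main__":
    learn_spam("BUY   cheap meds NOW!!!")
    print(is_known_spam("buy cheap meds now!!!"))   # True: same message, reformatted
    print(is_known_spam("Meeting moved to 3pm"))    # False
```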

FINGERPRINTING: Fingerprinting techniques examine the characteristics, or fingerprint, of emails previously identified as spam and use this information to identify the same or similar email each time one is intercepted. These real-time fingerprint checks are continuously updated and provide a method of identifying spam with nearly zero false positives. Fingerprinting techniques can also look specifically at the URLs contained in a message and compare them against URLs previously identified as belonging to spam propagators.
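The URL-based variant mentioned above is particularly simple to sketch. In the following Python illustration, the blocklist of spam-propagating domains is a made-up in-memory set rather than a real, continuously updated feed:

```python
# Sketch of URL fingerprinting: extract URLs from a message and flag it if any
# of them point at a domain already known to propagate spam.
import re
from urllib.parse import urlparse

URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+", re.IGNORECASE)
spam_domains = {"spammer.example", "cheap-meds.example"}  # illustrative blocklist

def looks_like_spam(body: str) -> bool:
    for url in URL_PATTERN.findall(body):
        host = (urlparse(url).hostname or "").lower()
        if host in spam_domains:
            return True
    return False

if __name__ == "__main__":
    print(looks_like_spam("Great deals at http://cheap-meds.example/buy"))     # True
    print(looks_like_spam("Minutes attached, see https://intranet.example"))   # False
```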

Honeypots with hashing or fingerprinting can be effective provided similar spam emails are widely sent. If each spam is made unique, these techniques can run into difficulties and fail.


Domain Name System

The DNS (Domain Name System) is used to translate hostnames and service names (e.g. www.simplexu.com) to numeric IP addresses (e.g. 69.65.42.236), which is a more suitable format for computers. In short, this is done by a lookup in the DNS server’s database; if the name is not found there, the server will contact other DNS servers to get the correct IP address for the requested lookup (Figure 1).

IP addresses of other DNS servers and hosts are cached in the local DNS server performing a lookup. How long an address remains valid in the cache is decided by the TTL value of the record, which is set when the address is added to its authoritative DNS server’s database. As a result of this caching, commonly used addresses and domains are often found in the cache, which shortens lookups and produces less data traffic.
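As a small illustration of the records and TTL values involved, the sketch below resolves an A record with the third-party dnspython package (`pip install dnspython`, version 2.x assumed). The hostname is simply the example used above, and the TTL printed is whatever the answering server reports; if the answer came from a cache it will already have been counted down.

```python
# Sketch: resolve an A record and show the TTL that controls how long a
# resolver may cache the answer (requires the third-party dnspython package).
import dns.resolver

def lookup_a_record(name: str) -> None:
    answer = dns.resolver.resolve(name, "A")   # ask the local/default resolver
    for record in answer:
        print(f"{name} -> {record.address} (TTL {answer.rrset.ttl}s)")

if __name__ == "__main__":
    lookup_a_record("www.simplexu.com")  # hostname taken from the example above
```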

In a full DNS lookup without any cached information, the local DNS server asks a root name server for the IP address of the correct top-level domain DNS server (e.g. .se, .com). The top-level server, which knows the IP addresses of the lower-level DNS servers, refers the local DNS server further down the hierarchy. This continues between the local DNS server and other DNS servers until the IP address of the requested host name is known or the lookup results in an error. The local DNS server then returns the correct address, or an error, to the client that requested the DNS lookup.

The DNS server contacted last in a lookup is responsible for a portion of the name space delegated to its organisation, called a DNS zone. This authoritative DNS server has the address stored in its database along with the TTL value. An example of a DNS lookup in which the top-level domain server is already known by the local DNS server is illustrated in Figure 1.

Figure 1: Standard DNS lookup

  1. The client asks for the IP address of a certain name, for example that of a web server.
  2. The local DNS server asks a top-level domain server for the address of a lower-level DNS server.
  3. The top-level domain server answers with the IP address of an authoritative DNS server.
  4. The local DNS server asks that DNS server for the address of the web server.
  5. This server is the authoritative DNS server, so it answers with the correct IP address.
  6. The IP address is forwarded to the client.
  7. The client can now request the web page from the web server, since it has the exact address.
  8. Data exchange between the client and the web server starts (the client-side view of these steps is sketched below).
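Steps 1, 6, 7 and 8 are the only parts visible to an ordinary client program. A minimal standard-library Python sketch of that client-side view (resolve a name, then fetch a page from the resulting server) is shown below, again using the example hostname from the text above.

```python
# Sketch of the client's side of Figure 1: resolve the name (steps 1 and 6),
# then contact the web server at the returned address (steps 7 and 8).
import socket
import http.client

hostname = "www.simplexu.com"                 # example hostname from the text above

ip_address = socket.gethostbyname(hostname)   # triggers the DNS lookup
print(f"{hostname} resolved to {ip_address}")

conn = http.client.HTTPConnection(hostname, 80, timeout=10)
conn.request("GET", "/")                      # data exchange with the web server
response = conn.getresponse()
print(f"HTTP {response.status} {response.reason}")
conn.close()
```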
