The initial design of the Web protocols and the Internet assumed a benign environment, in which servers, clients and routers cooperate and follow the standard protocols, except for unintentional errors. However, as the amount and sensitivity of usage increased, security, fraud and attacks became important concerns. In particular, since Internet access is now widely (and often freely) available, it is very easy for attackers to obtain many client and even host connections and addresses, and to use them to launch different attacks on the network itself (routers and network services such as DNS) and on other hosts and clients. Moreover, with the proliferation of commercial domain-name registrars allowing automated, low-cost registration in most top-level domains, it is currently very easy for attackers to acquire essentially any unallocated domain name and place malicious hosts and clients there. We call this the unallocated domain adversary: an adversary who is able to issue and receive messages using many addresses in any domain name, excluding the finite list of already-allocated domain names. This is probably the most basic and common type of adversary.
Unfortunately, we believe that, as explained below, most web users are currently vulnerable even to unallocated domain adversaries. This claim may be surprising, since sensitive web sites are usually protected using the SSL or TLS protocols, which, as we explain in the following subsection, securely authenticate web pages even in the presence of intercepting adversaries (often referred to as man-in-the-middle (MITM) attackers). Intercepting adversaries are able to send and intercept (receive, eavesdrop on) messages to and from all domains. Indeed, even without SSL/TLS, the HTTP protocol securely authenticates web pages against spoofing adversaries, which are able to send messages from all domains but receive only messages sent to unallocated (adversary-controlled) domains. However, the security provided by SSL/TLS (against an intercepting adversary) or by HTTP (against a spoofing adversary) holds only with respect to the address (URL) and security mechanism (HTTPS, using SSL/TLS, or ‘plain’ HTTP) specified by the application (usually a browser). In a phishing attack (and most other spoofing attacks), the application itself specifies, in its request, the URL of the spoofed site. In other words, web spoofing attacks exploit the gap between the intentions and expectations of the user and the address and security mechanism the browser specifies to the transport layer.
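To make this gap concrete, the following sketch (hypothetical helper name; not from the original text) flags a link whose visible text names one domain while the href the browser will actually request, and TLS will actually authenticate, names another. TLS verifies the requested URL; it cannot verify the user's expectation.

```python
from urllib.parse import urlparse

def link_is_suspicious(display_text: str, href: str) -> bool:
    """Flag a link whose visible text names one domain while the
    underlying href points at a different one -- the gap that
    phishing attacks exploit. (Illustrative sketch only.)"""
    shown = urlparse(display_text if "://" in display_text
                     else "https://" + display_text).hostname
    actual = urlparse(href).hostname
    if shown is None or actual is None:
        return True  # unparsable -> treat as suspicious
    return shown.lower() != actual.lower()

# The href is what TLS authenticates; the display text is not.
print(link_is_suspicious("www.mybank.com", "https://www.mybank-secure.com/login"))  # True
print(link_is_suspicious("www.mybank.com", "https://www.mybank.com/login"))         # False
```

An attacker holding a valid certificate for an unallocated lookalike domain passes every TLS check; only the mismatch with the user's expectation gives the attack away.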
The SOAP binding requires that conformant applications support HTTP over TLS/SSL with a number of different bilateral authentication methods, such as Basic authentication over server-side SSL and certificate-backed authentication over server-side SSL. These methods are therefore always available to mitigate threats in cases where other lower-level mechanisms are not available and the attacks listed above are considered significant threats.
This does not mean that use of HTTP over TLS with some form of bilateral authentication is mandatory. If an acceptable level of protection from the various risks can be arrived at through other means (for example, by an IPsec tunnel), full TLS with certificates is not required. However, in the majority of cases for SOAP over HTTP, using HTTP over TLS with bilateral authentication will be the appropriate choice.
The HTTP Authentication RFC describes possible attacks in the HTTP environment when basic or message-digest authentication schemes are used. Note, however, that the use of transport-level security (such as the SSL or TLS protocols under HTTP) only provides confidentiality and/or integrity and/or authentication for one “hop”. For models where there may be intermediaries, or where the assertions in question need to live over more than one hop, the use of HTTP with TLS/SSL does not provide adequate security.
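One common remedy for the multi-hop case is message-level protection that travels with the payload rather than with the connection. A minimal sketch, using an HMAC over the message body under an assumed pre-shared end-to-end key (key distribution is out of scope here):

```python
import hmac
import hashlib

# Hop-by-hop TLS protects each link, but an intermediary sits *inside*
# that protection. A message-level MAC lets the final recipient verify
# the payload end to end, regardless of how many hops it crossed.
def sign_message(key: bytes, payload: bytes) -> bytes:
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_message(key: bytes, payload: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign_message(key, payload), tag)

key = b"shared-end-to-end-key"               # assumed out-of-band key
body = b"<m:Order><m:Item>widget</m:Item></m:Order>"
tag = sign_message(key, body)

print(verify_message(key, body, tag))                        # True
tampered = body.replace(b"widget", b"gadget")                # intermediary edit
print(verify_message(key, tampered, tag))                    # False
```

Standards such as WS-Security apply the same end-to-end principle to SOAP with XML Signature rather than a raw HMAC; the sketch only illustrates why transport security alone is insufficient across intermediaries.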
Why rely on XML to solve the “what you see is what you sign” problem? Our ideas can be summarized in two points:
- If a document to be signed is either not well-formed in the sense of XML, or not valid in the sense of its accompanying schema, or both, then it must strictly be assumed that the document has been manipulated. Consequently, it has to be dropped, and the user has to be notified.
- A smart card application can extract certain content items from a structured and formally described document for display on the smart card reader. The extraction and display operations are fully controlled by the tamper-proof smart card, which is the same environment that generates the digital signature.
The fundamental property of XML documents is well-formedness. According to the XML specification, every XML processing entity has to check and assert this property. For digital signing, well-formedness is important because it ensures a unique interpretation of the XML document. Well-formedness also ensures the use of proper Unicode characters and the specification of their encoding. This, too, is very important for digital signatures, since character-set manipulation can be used to mount “what you see is what you sign” attacks.
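As a rough illustration of this first gate, the following Python sketch stands in for the on-card logic (which would be far more resource-constrained): the candidate document is parsed, and any well-formedness error aborts signing.

```python
import xml.etree.ElementTree as ET

def refuse_to_sign_if_malformed(document: bytes) -> None:
    """Sketch of the card's first check: parse the candidate
    document; any well-formedness error aborts the signing
    operation instead of producing a signature."""
    try:
        ET.fromstring(document)
    except ET.ParseError as err:
        raise ValueError(f"refusing to sign: not well-formed ({err})")

refuse_to_sign_if_malformed(b"<order><amount>42</amount></order>")  # accepted
try:
    refuse_to_sign_if_malformed(b"<order><amount>42</order>")       # mismatched tags
except ValueError as err:
    print(err)
```

A real implementation would also reject documents whose declared encoding does not match the actual byte stream, closing the character-set manipulation channel mentioned above.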
Validity is a much more restrictive property of XML documents compared to well-formedness. A smart card which checks the validity of XML documents with respect to a given schema before signing ensures, thanks to the tamper resistance of the card, that only certain types of XML documents are signed. Consider, for example, a smart card which contains your private key but only signs XML documents that are valid with respect to a purchase-order schema. You could give this card to your secretary and be sure that nothing other than purchase orders is signed with your signature. Using additional constraints in the schema, e.g. restricting the maximum amount to 100 Euro, eliminates the last chance of misuse.
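A minimal sketch of this idea follows, using a hypothetical purchase-order element layout and the 100 Euro limit from the example. A real card would validate against the full schema; the hand-written checks here merely stand in for that validation.

```python
import xml.etree.ElementTree as ET

MAX_AMOUNT = 100  # Euro, mirroring the schema restriction in the text

def card_would_sign(document: bytes) -> bool:
    """Stand-in for on-card schema validation: only well-formed
    purchase orders within the spending limit get signed.
    (Element names are hypothetical.)"""
    try:
        root = ET.fromstring(document)
    except ET.ParseError:
        return False                       # not well-formed
    if root.tag != "purchaseOrder":
        return False                       # wrong document type
    amount = root.findtext("amount")
    try:
        return float(amount) <= MAX_AMOUNT
    except (TypeError, ValueError):
        return False                       # missing or non-numeric amount

print(card_would_sign(b"<purchaseOrder><amount>99</amount></purchaseOrder>"))   # True
print(card_would_sign(b"<purchaseOrder><amount>250</amount></purchaseOrder>"))  # False
print(card_would_sign(b"<loanContract><amount>99</amount></loanContract>"))     # False
```

Because these checks run inside the tamper-resistant card, a compromised host PC cannot bypass them, even though it controls everything the card is asked to sign.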
When operated in a class 3 card reader (i.e. a card reader including a display and a keypad), the card can display selected content and request user confirmation. This finally solves the “what you see is what you sign” problem. Obviously, XML processing is not an easy task to perform on resource-constrained smart cards. The following table therefore summarizes the challenging XML properties and the resulting opportunities for improving the signing process.
Software in embedded systems is a major source of security vulnerabilities. Three factors, which we call the Trinity of Trouble — complexity, extensibility and connectivity — conspire to make managing security risks in software a major challenge.
1. Complexity: Software is complicated and will become even more complicated in the near future. More lines of code increase the likelihood of bugs and security vulnerabilities. As embedded systems converge with the Internet and more code is added, embedded system software is clearly becoming more complex. The complexity problem is exacerbated by the use of unsafe programming languages (e.g., C or C++) that do not protect against simple kinds of attacks, such as buffer overflows. For reasons of efficiency, C and C++ are very popular languages for embedded systems. In theory, we could analyze and prove that a small program is free of problems, but this task is impossible for programs of realistic complexity today.
2. Extensibility: Modern software systems, such as Java and .NET, are built to be extended. An extensible host accepts updates or extensions (mobile code) to incrementally evolve system functionality. Today’s operating systems support extensibility through dynamically loadable device drivers and modules. Advanced embedded systems are designed to be extensible (e.g., J2ME, Java Card). Unfortunately, the very nature of extensible systems makes it hard to prevent software vulnerabilities from slipping in as an unwanted extension.
3. Connectivity: More and more embedded systems are being connected to the Internet. The high degree of connectivity makes it possible for small failures to propagate and cause massive security breaches. Embedded systems with Internet connectivity will only make this problem grow. An attacker no longer needs physical access to a system to launch automated attacks to exploit vulnerable software. The ubiquity of networking means that there are more attacks, more embedded software systems to attack, and greater risks from poor software security practices.
Fault injection attacks rely on varying the external parameters and environmental conditions of a system such as the supply voltage, clock, temperature, radiation, etc., to induce faults in its components. The injected faults can be transient or permanent, and can compromise the security of a system in several ways:
- Availability attacks: Faults can be injected to disrupt the normal functioning of the system. For example, the bus in an embedded system-on-chip can be made unavailable for inter-component communication through permanent faults that set the bus lines to a constant value.
- Integrity attacks: These attacks can be used to corrupt the secure or non-secure code or data stored in components such as memories.
- Privacy attacks: An interesting example of the use of fault injection to reveal cryptographic keys involves RSA implementations that use the Chinese Remainder Theorem (CRT) optimization. The optimization, intended to improve the performance of the modular exponentiation operation in RSA, in fact increases its vulnerability to fault injection attacks. It has been shown that the RSA modulus can be factored very easily if faults can be introduced that affect the output of one of the sub-exponentiations being performed.
- Precursor attacks: Fault injection techniques are also useful as a precursor to software attacks. For example, it has been shown that simple memory faults induced by heat can be exploited by an untrusted program running on a processor to assume complete control of its execution environment.
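The CRT attack mentioned under privacy attacks can be demonstrated with toy numbers. In the two-signature variant sketched below, a correct CRT signature and one whose mod-q half was faulted agree modulo p but differ modulo q, so the gcd of their difference with the modulus reveals a secret factor (the primes and exponent here are deliberately tiny and illustrative only):

```python
from math import gcd

# Toy demonstration of the CRT fault attack on RSA, with tiny primes.
p, q = 10007, 10009
N = p * q
d = 3          # stand-in private exponent; illustrative only
m = 123456     # message representative

s_p = pow(m, d, p)           # sub-exponentiation mod p (correct)
s_q = pow(m, d, q)           # sub-exponentiation mod q (correct)
s_q_faulty = (s_q + 1) % q   # an injected fault flips the mod-q half

def crt_combine(a_p: int, a_q: int) -> int:
    """Garner-style recombination of the two half-signatures."""
    q_inv = pow(q, -1, p)
    h = (q_inv * (a_p - a_q)) % p
    return a_q + q * h

s_good = crt_combine(s_p, s_q)
s_bad = crt_combine(s_p, s_q_faulty)

# Both signatures agree mod p, differ mod q, so the difference is a
# nonzero multiple of p only -- and gcd with N recovers the factor.
print(gcd(abs(s_good - s_bad), N) == p)  # True
```

This is why protected implementations verify the signature (or recompute it) before releasing it: a single faulty sub-exponentiation output, if emitted, factors the modulus.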