Efficiencies in optical computing

Optical computing is an inherently multidisciplinary subject whose study routinely draws on a spectrum of expertise spanning optical physics, materials science, optical engineering, electrical engineering, computer architecture, computer programming, and computer theory. Applying ideas from theoretical computer science, such as analysis of algorithms and computational complexity, lets us place optical computing in a framework where we can try to answer a number of important questions. For example, which problems are optical computers suited to solving? And how does resource usage on optical computers compare with more standard (e.g. digital electronic) architectures? The physical principles behind some efficiencies in optical computing are outlined here.

1. Fan-in efficiency

Kirchhoff’s current law is well understood in analog electronics as a natural, constant-time means of summing the currents at the junction of an arbitrary number of wires. In optics, the same thing is possible by directing several light beams towards a point detector with a linear response to incident light. Such an optical arrangement sums n non-negative integers in O(1) addition steps. On a sequential digital electronic computer this would require n − 1 addition operations, and even many typical (bounded fan-in) parallel models, with n or more processors, take O(log n) time steps. Tasks that rely on scalar summation (such as matrix multiplication) would therefore benefit greatly from an optical implementation of the scalar sum operation. Similarly, O(1) multiplication and O(1) convolution operations can be realised optically. Very recently, an optics-based digital signal processing platform has been marketed that claims digital processing speeds of tera (10^12) operations per second.
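To make the operation counts concrete, here is a minimal C sketch of the counting argument only (it models no optical hardware): summing n values sequentially costs n − 1 additions, a bounded fan-in pairwise reduction needs about log2 n parallel time steps, while the optical fan-in described above performs the whole sum in a single detection step.

```c
/* Sketch: counting addition steps for summing n values.
 * A sequential machine needs n-1 additions; a bounded fan-in
 * parallel reduction needs ceil(log2 n) rounds; optical fan-in
 * performs the whole sum in a single detection step. */
#include <stdio.h>
#include <math.h>

int main(void) {
    int values[] = {3, 1, 4, 1, 5, 9, 2, 6};
    int n = sizeof values / sizeof values[0];

    /* Sequential model: one running accumulator, n-1 additions. */
    long sum = values[0];
    int seq_steps = 0;
    for (int i = 1; i < n; i++) {
        sum += values[i];
        seq_steps++;
    }

    /* Bounded fan-in parallel model: pairwise tree reduction,
     * ceil(log2 n) time steps with about n/2 processors. */
    int par_steps = (int)ceil(log2((double)n));

    printf("sum = %ld\n", sum);
    printf("sequential additions : %d (n-1)\n", seq_steps);
    printf("parallel time steps  : %d (log2 n)\n", par_steps);
    printf("optical fan-in steps : 1 (O(1))\n");
    return 0;
}
```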

2. Efficiency in interconnection complexity

As optical pathways can cross in free space without any measurable effect on the information in either channel, high interconnection densities are possible with optics. Architectures with highly parallel many-to-many interconnections between parallel surfaces have already been proposed for common tasks such as sorting. Currently, intra-chip, inter-chip, and inter-board connections are being investigated for manufacturing feasibility.

3. Energy efficiency

Electrical wires suffer from induced noise and heat, which increase dramatically whenever wires are made thinner or placed closer together, or whenever the data throughput is increased. As a direct consequence of their resistance-free pathways and noise-reduced environments, optical systems have the potential to generate less waste heat, and so consume less energy per computation step, than electronic systems. This has been demonstrated experimentally with general-purpose digital optical processors.


Memory errors in C: Terminology

Using terminology from SafeC, memory errors in C programs can be classified into two types:

  1. Spatial memory errors and
  2. Temporal memory errors.

Spatial memory errors in C programs include array bounds violations (i.e., buffer overruns), uninitialized pointer dereferences (causing an access to an invalid address), invalid type conversion errors, format string errors, etc. Temporal memory errors include uses of pointers to freed heap memory and uses of pointers to an activation record after the function invocation completes.

Here we focus on detecting uses of pointers to freed heap memory. In previous work, we have described techniques for detecting spatial errors with very low overhead, which also exploit Automatic Pool Allocation to reduce run-time overhead. Those techniques (and other approaches that detect spatial errors) are complementary to our approach here, because our approach uses no metadata on individual pointers or objects and does not preclude adding such metadata. For dangling pointer accesses to stack objects, some combination of compile-time escape analysis, run-time checks, or conversion of possibly escaping stack allocations to heap allocations can be used. By dangling pointer errors we mean uses of pointers to freed heap memory, where a use of a pointer is a read, write, or free operation on that pointer.
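For illustration, the following minimal C program contains one error of each class; it compiles, but the out-of-bounds write, the read after free, and the double free are all undefined behaviour and are exactly the kinds of errors the detection techniques above target.

```c
/* Toy examples of the two error classes discussed above. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    /* Spatial error: array bounds violation (buffer overrun).
     * The write at index 8 is one element past the end of buf. */
    int buf[8];
    memset(buf, 0, sizeof buf);
    buf[8] = 42;                 /* out-of-bounds write */

    /* Temporal error: dangling pointer to freed heap memory.
     * p still holds the old address after free(), so both the
     * read and the second free() operate on dead storage. */
    int *p = malloc(sizeof *p);
    if (!p) return 1;
    *p = 7;
    free(p);
    printf("%d\n", *p);          /* use after free (read)  */
    free(p);                     /* double free            */

    return 0;
}
```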


Limitations of Modern Cryptosystems

Before exploring quantum key distribution, it is important to understand the state of modern cryptography and how quantum cryptography may address the limitations of current digital cryptography. Since public key algorithms involve complex calculations and are relatively slow, they are employed to exchange keys rather than to encrypt voluminous amounts of data. For example, widely deployed solutions such as RSA and the Diffie-Hellman key negotiation scheme are typically used to distribute symmetric keys among remote parties. Because asymmetric encryption is significantly slower than symmetric encryption, many institutions prefer this hybrid approach: a public key system secures the initial exchange of the symmetric key, and the shared key system then handles the bulk encryption. Thus, the approach exploits the speed and performance of a symmetric key system while leveraging the scalability of a public key infrastructure.
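As an illustration of the hybrid pattern, the following toy C sketch runs a Diffie-Hellman style exchange with deliberately tiny, insecure parameters; the resulting shared value is the kind of secret that would then key a fast symmetric cipher for the bulk data. The numbers are purely illustrative, not anything resembling a real deployment.

```c
/* Toy Diffie-Hellman key negotiation with tiny, insecure parameters,
 * only to illustrate the hybrid pattern: a (slow) public-key exchange
 * establishes a shared secret, which would then key a fast symmetric
 * cipher for the bulk data. Real deployments use large moduli and a
 * vetted cryptographic library. */
#include <stdio.h>

/* modular exponentiation: (base^exp) mod m */
static unsigned long modpow(unsigned long base, unsigned long exp,
                            unsigned long m) {
    unsigned long result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main(void) {
    const unsigned long p = 23, g = 5;   /* toy public parameters  */
    const unsigned long a = 6, b = 15;   /* private values         */

    unsigned long A = modpow(g, a, p);   /* Alice -> Bob           */
    unsigned long B = modpow(g, b, p);   /* Bob   -> Alice         */

    unsigned long k_alice = modpow(B, a, p);
    unsigned long k_bob   = modpow(A, b, p);

    /* Both sides now hold the same secret, which would seed the
     * symmetric session key for the bulk encryption. */
    printf("shared secret: %lu == %lu\n", k_alice, k_bob);
    return 0;
}
```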

However, public key cryptosystems such as RSA and Diffie-Hellman are not based on concrete mathematical proofs. Rather, these algorithms are considered reasonably secure based on years of public scrutiny of the underlying problem of factoring large integers into their primes, which is held to be “intractable”. In other words, by the time the encryption algorithm could be defeated, the information being protected would have already lost all of its value. Thus, the strength of these algorithms rests on the fact that there is no known mathematical method for quickly factoring very large numbers given today’s computer processing power.

Secondly, it is uncertain whether a method for factoring large numbers into their primes in a timely manner might be developed in the future, or is perhaps already available. At present, there is no proof that it is impossible to develop such a factoring method. Public key systems are thus exposed to the uncertainty surrounding the future discovery of such a method, which would have a significant effect on the assumption that the algorithm is mathematically intractable. This uncertainty poses a potential risk to areas such as national security and intellectual property which require perfect security.

In sum, modern cryptography is vulnerable both to technological progress in computing power and to advances in mathematics that could quickly reverse one-way functions such as the factoring of large integers. If a factoring method were publicized, or computing became powerful enough to defeat public key cryptography, then businesses, governments, militaries and other affected institutions would have to spend significant resources to assess the risk of damage and potentially deploy a new and costly cryptography system quickly.


Dynamic Coordination of Information Management Services for Processing Dynamic Web Content

Dynamic Web content provides us with time-sensitive and continuously changing data. To glean up-to-date information, users need to regularly browse, collect and analyze this Web content. Without proper tool support this information management task is tedious, time-consuming and error prone, especially when the quantity of dynamic Web content is large, when many information management services are needed to analyze it, and when the underlying services and network are not completely reliable. This article describes a multi-level, lifecycle (design-time and run-time) coordination mechanism that enables rapid, efficient development and execution of information management applications that are especially useful for processing dynamic Web content. Such a coordination mechanism brings dynamism to coordinating independent, distributed information management services: dynamic parallelism spawns and merges multiple execution service branches based on available data, and dynamic run-time reconfiguration coordinates service execution to overcome faulty services and bottlenecks. These features enable information management applications to handle content and format changes in Web resources more efficiently, and enable the applications to be evolved and adapted to process dynamic Web content.
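The following minimal C sketch illustrates the dynamic parallelism idea in isolation: one worker branch is spawned per source that happens to be available at run time, and the branches are merged on join. The source names and the fetch step are placeholders, not part of any actual GeoWorlds interface.

```c
/* Minimal sketch of dynamic parallelism: spawn one worker branch per
 * currently available source, then merge the branches on join. The
 * source list and fetch step are placeholders for illustration only. */
#include <pthread.h>
#include <stdio.h>

#define MAX_SOURCES 8

typedef struct {
    const char *source;   /* hypothetical Web source name   */
    int items_found;      /* result produced by the branch  */
} branch_t;

static void *fetch_and_analyze(void *arg) {
    branch_t *b = (branch_t *)arg;
    /* Placeholder for fetching and analyzing one source. */
    b->items_found = 10;  /* pretend we extracted 10 articles */
    printf("branch done: %s\n", b->source);
    return NULL;
}

int main(void) {
    /* The set of sources is only known at run time; it is fixed here
     * for the sake of the sketch. */
    const char *available[] = {"news-site-a", "news-site-b", "news-site-c"};
    int n = 3;

    pthread_t tid[MAX_SOURCES];
    branch_t branch[MAX_SOURCES];

    for (int i = 0; i < n; i++) {           /* spawn branches */
        branch[i].source = available[i];
        pthread_create(&tid[i], NULL, fetch_and_analyze, &branch[i]);
    }

    int total = 0;
    for (int i = 0; i < n; i++) {           /* merge branches */
        pthread_join(tid[i], NULL);
        total += branch[i].items_found;
    }
    printf("merged %d items from %d sources\n", total, n);
    return 0;
}
```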

The coverage of individual Web sites that provide such dynamic content is often incomplete, since individually they are limited by time and resource constraints. To obtain a complete picture of time-sensitive or wide-ranging topics, people tend to access multiple Web sites. For example, during the terror attacks, because the situation was so time-critical, no single news site could provide enough information to understand what was happening. People needed to browse different news sites to access different coverage and different opinions, then compile and assemble the information to understand the full situation. In addition, to understand reactions from different parts of the world, news sources from various countries needed to be accessed. If a Web site is unresponsive due to congestion, people tend to switch to another Web site and come back later. People exhibit other forms of dynamism as well. For example, they will select a different set of information sources based on their topic area and geographic region of interest, and they will mentally filter and analyze news articles based on the articles’ content, structure and format.

Any information management tool that supports this process of gleaning information from dynamic Web content should help alleviate the tedious and repetitive aspects, yet be flexible enough to allow users to incorporate the dynamic aspects of information analysis. This article describes a dynamic service coordination mechanism that brings such dynamism to information management systems for processing dynamic Web content. The coordination mechanism allows users to incrementally develop information management applications at different abstraction levels through the design-time/run-time lifecycle, which is essential for processing dynamic Web content efficiently and correctly. The mechanism has been adopted by USC ISI’s GeoWorlds system, where it has proven practically effective for developing information management applications that process dynamic Web content.

The characteristics of this class of information management tasks are:

  1. The information is time-sensitive and continuously changing
  2. The information needs to be joined together from multiple sources
  3. Multiple complex analysis steps are needed to jointly process the information
  4. The analysis steps need to be reconfigured to adapt to specific tasks
  5. The tasks are repetitive. They need to be performed periodically


Security as a New Dimension in Embedded System Design

Embedded systems, which will be ubiquitously used to capture, store, manipulate, and access data of a sensitive nature, pose several unique and interesting security challenges. Security has been the subject of intensive research in the areas of cryptography, computing, and networking. However, security is often misconstrued by embedded system designers as the addition of features, such as specific cryptographic algorithms and security protocols, to the system. In reality, it is an entirely new metric that designers should consider throughout the design process, along with other metrics such as cost, performance, and power. Security in one form or another is a requirement for an increasing number of embedded systems, ranging from low-end systems such as PDAs, wireless handsets, networked sensors, and smart cards, to high-end systems such as routers, gateways, firewalls, storage servers, and web servers. The technological advances that have spurred the development of these electronic systems have also ushered in seemingly parallel trends in the sophistication of security attacks. It has been observed that the cost of insecurity in electronic systems can be very high. For example, it was estimated that the “I Love You” virus caused nearly one billion dollars in lost revenues worldwide.
With the increasing proliferation of such attacks, it is not surprising that a large number of users in the mobile commerce world (nearly 52% of cell phone users and 47% of PDA users, according to a survey by Forrester Research) feel that security is the single largest concern preventing the successful deployment of next-generation mobile services. With the evolution of the Internet, information and communications security has gained significant attention. For example, various security protocols and standards, such as IPSec, SSL, WEP, and WTLS, are used for secure communications. While security protocols and the cryptographic algorithms they contain address security from a functional perspective, many embedded systems are constrained by the environments they operate in and by the resources they possess. For such systems, several factors are moving security considerations from a function-centric perspective into a system architecture (hardware/software) design issue.


Write Optimized Tree Indexing

Due to the poor random write performance of flash SSDs (Solid State Drives), write-optimized tree indexes have been proposed to improve update performance. BFTL was proposed to balance the inferior random write performance and fast random read performance of flash-memory-based sensor nodes and embedded systems. It allows the index entries of one logical B-tree node to span multiple physical pages, and maintains an in-memory table to map each B-tree node to its physical pages. Newly inserted entries are packed and then written together to new blocks, and the table entries of the corresponding B-tree nodes are updated, thus reducing the number of random writes. However, BFTL entails a high search cost, since it accesses multiple disk pages to search a single tree node. Furthermore, even though the in-memory mapping table is compact, the memory consumption is still high. FlashDB was proposed to implement a self-tuning scheme between a standard B+-tree and BFTL, depending on the workload and the type of flash device. Since our proposed index mostly outperforms both the B+-tree and BFTL under various workloads on different flash SSDs, we do not compare our index with this self-tuning index in this paper. More recently, the LA-tree was proposed for flash memory devices by adding adaptive buffers between tree nodes. The LA-tree focuses on raw, small-capacity, byte-addressable flash memory devices, such as sensor nodes, whereas our work targets off-the-shelf, large flash SSDs, which provide only a block-based access interface. The different target devices of these two indexes lead to their differences in design.
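The following simplified C sketch illustrates the kind of node-to-pages bookkeeping described for BFTL (the real structures differ): because fresh entries are appended to new pages rather than rewritten in place, a single logical node comes to span several physical pages, and every lookup of that node must read them all.

```c
/* Simplified sketch of BFTL-style in-memory mapping: one logical
 * B-tree node maps to several physical flash pages, because new
 * entries are packed and appended to fresh pages instead of being
 * rewritten in place. Reading a node then costs one page read per
 * listed page. */
#include <stdio.h>

#define MAX_PAGES_PER_NODE 8

typedef struct {
    int logical_node;                     /* B-tree node id            */
    int pages[MAX_PAGES_PER_NODE];        /* physical pages holding it */
    int page_count;
} node_translation_t;

/* Appending a batch of entries only records the new page; no random write. */
static void append_entries(node_translation_t *t, int new_page) {
    if (t->page_count < MAX_PAGES_PER_NODE)
        t->pages[t->page_count++] = new_page;
}

/* Searching one node must visit every page the node spans. */
static int read_cost(const node_translation_t *t) {
    return t->page_count;
}

int main(void) {
    node_translation_t node = { .logical_node = 42, .page_count = 0 };
    append_entries(&node, 100);   /* batch 1 written to page 100 */
    append_entries(&node, 137);   /* batch 2 written to page 137 */
    append_entries(&node, 205);   /* batch 3 written to page 205 */

    printf("node %d spans %d pages -> %d page reads per lookup\n",
           node.logical_node, node.page_count, read_cost(&node));
    return 0;
}
```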

For hard disks, many disk-based indexes optimized for write operations have also been proposed. Graefe proposed a write-optimized B-tree by applying the idea of the log-structured file system to the B-tree index. The Y-tree supports high-volume insertions for data warehouses, following the idea of the buffer tree. Logarithmic structures have been widely applied to optimize write performance. O’Neil et al. proposed the LSM-tree and its variant LHAM for multi-version databases. Jagadish et al. used a similar idea to design a stepped tree index and a hash index for data warehouses. Our FD-tree follows the idea of the logarithmic method. The major difference is that we propose a novel method based on the fractional cascading technique to improve search performance on the logarithmic structure.


Quantum Computers

Around 2030, computers might not have any transistors or chips. Think of a computer that is much faster than a common classical silicon computer. This might be a quantum computer. Theoretically it could run without energy consumption and a billion times faster than today’s Pentium III computers. Scientists already think of the quantum computer as the next generation of classical computers.

Gershenfeld says that if transistors continue to shrink at the same rate as in past years, then by the year 2020 the width of a wire in a computer chip will be no more than the size of a single atom. These are sizes at which the rules of classical physics no longer apply. Computers designed on today’s chip technology will then not continue to get cheaper and better. Because of its great power, the quantum computer is an attractive next step in computer technology.

The technology of quantum computers is also very different. For its operation, a quantum computer uses quantum bits (qubits), which have a richer nature than classical bits. The laws of quantum mechanics are completely different from the laws of classical physics: a qubit can exist not only in the states corresponding to the logical values 0 or 1, as a classical bit can, but also in a superposition state.

A qubit is a bit of information that can be both zero and one simultaneously (a superposition state). Thus, a computer working on qubits rather than standard bits can make calculations using both values simultaneously. A qubyte, made up of eight qubits, can hold all values from zero to 255 simultaneously. “Multi-qubyte systems have a power beyond anything possible with classical computers.”
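A purely classical bookkeeping sketch in C makes the size of that state space explicit: an n-qubit register is described by 2^n complex amplitudes, so eight qubits in an equal superposition carry nonzero amplitude on all 256 basis states at once. Nothing quantum is simulated here; the sketch only illustrates the counting behind the claim above.

```c
/* Classical bookkeeping sketch: the state of an n-qubit register is a
 * vector of 2^n complex amplitudes. Eight qubits in an equal
 * superposition therefore assign nonzero amplitude to all 256 basis
 * states 0..255 at once. */
#include <stdio.h>
#include <math.h>
#include <complex.h>

#define N_QUBITS 8
#define DIM (1 << N_QUBITS)      /* 2^8 = 256 basis states */

int main(void) {
    double complex state[DIM];
    double amp = 1.0 / sqrt((double)DIM);   /* equal superposition */

    for (int i = 0; i < DIM; i++)
        state[i] = amp;

    /* Probabilities sum to 1 over all 256 simultaneous values. */
    double total = 0.0;
    for (int i = 0; i < DIM; i++)
        total += creal(state[i] * conj(state[i]));

    printf("basis states: %d, total probability: %.6f\n", DIM, total);
    return 0;
}
```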

Forty qubits could have the same power as modern supercomputers. According to Chuang, a supercomputer needs about a month to find a phone number in a database consisting of the world’s phone books, whereas a quantum computer is able to solve this task in 27 minutes.


Replica Creation and Replica Selection in Data Grid Service

Replica selection is interesting because it does not build on top of the core services, but rather relies on the functions provided by the replica management component described in the preceding section. Replica selection is the process of choosing a replica that will provide an application with data access characteristics that optimize a desired performance criterion, such as absolute performance (i.e. speed), cost, or security. The selected file instance may be local or accessed remotely. Alternatively, the selection process may initiate the creation of a new replica whose performance will be superior to the existing ones.

Where replicas are to be selected based on access time, Grid information services can provide information about network performance, and perhaps the ability to reserve network bandwidth, while the metadata repository can provide information about the size of the file. Based on this, the selector can rank all of the existing replicas to determine which one will yield the fastest data access time. Alternatively, the selector can consult the same information sources to determine whether there is a storage system that would give better performance if a new replica were created on it.
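A minimal C sketch of this ranking step follows. The bandwidth, latency, and size figures stand in for what the Grid information service and metadata repository would supply, and the field names are illustrative rather than any actual Globus or Data Grid interface.

```c
/* Sketch of the ranking step: estimate access time for each existing
 * replica from the file size (metadata repository) and the observed
 * bandwidth/latency to its storage system (Grid information service),
 * then choose the replica with the smallest estimate. Fields and
 * numbers are illustrative only. */
#include <stdio.h>

typedef struct {
    const char *location;     /* storage system holding the replica */
    double bandwidth_mbps;    /* estimated throughput to the client */
    double latency_s;         /* estimated start-up latency         */
} replica_t;

static double estimated_time(const replica_t *r, double file_mb) {
    return r->latency_s + (file_mb * 8.0) / r->bandwidth_mbps;
}

int main(void) {
    double file_mb = 512.0;   /* file size from the metadata repository */
    replica_t replicas[] = {
        { "storage-a", 100.0, 0.05 },
        { "storage-b", 400.0, 0.20 },
        { "storage-c",  40.0, 0.01 },
    };
    int n = sizeof replicas / sizeof replicas[0], best = 0;

    for (int i = 1; i < n; i++)
        if (estimated_time(&replicas[i], file_mb) <
            estimated_time(&replicas[best], file_mb))
            best = i;

    printf("selected replica: %s (%.1f s estimated)\n",
           replicas[best].location,
           estimated_time(&replicas[best], file_mb));
    return 0;
}
```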

A more general selection service may consider access to subsets of a file instance. Scientific experiments often produce large files containing data for many variables, time steps, or events, and some application processing may require only a subset of this data. In this case, the selection function may provide an application with a file instance that contains only the needed subset of the data found in the original file instance. This can obviously reduce the amount of data that must be accessed or moved.

This type of replica management has been implemented in other data-management systems. For example, STACS is often capable of satisfying requests from High Energy Physics applications by extracting a subset of data from a file instance. It does this using a complex indexing scheme that represents application metadata for the events contained within the file. Other mechanisms for providing similar function may be built on application metadata obtainable from self-describing file formats such as NetCDF or HDF.

Providing this capability requires the ability to invoke filtering or extraction programs that understand the structure of the file and produce the required subset of data. This subset becomes a file instance with its own metadata and physical characteristics, which are provided to the replica manager. Replication policies determine whether this subset is recognized as a new logical file (with an entry in the metadata repository and a file instance recorded in the replica catalog), or whether the file should be known only locally, to the selection manager.

Data selection with subsetting may exploit Grid-enabled servers, whose capabilities involve common operations such as reformatting data, extracting a subset, converting data for storage in a different type of system, or transferring data directly to another storage system in the Grid. The utility of this approach has been demonstrated as part of the Active Data Repository. The subsetting function could also exploit the more general capabilities of a computational Grid such as that provided by Globus. This offers the ability to support arbitrary extraction and processing operations on files as part of a data management activity.


Telepresence and Micropresence

Telepresence, micropresence, and telerobotics hold promise for allowing expert physicians to assist clinicians and surgeons at remote locations. Telepresence allows a physician to have access to a distant location through static or full-perspective views of a patient. High-fidelity telepresence allows a remote location or scene to be inspected from different perspectives by moving distant cameras in concert with the head and gaze positions of a local observer. Telerobotics allows the telepresent clinician to interact with a distant patient. Simple forms of telepresence and telerobotics have already become popular in pathology diagnosis. In telepathology, a surgical pathologist has instant access to a microscope and a slide of a patient’s biopsy at a distant site. Telepathology systems that communicate via satellite and over telephone lines have been developed. Several groups, including teams at NASA and within the SRI International bioengineering group, have developed interesting demonstration technologies that show the effectiveness of telerobotics for exploring and manipulating objects at a distance. Some telepresence projects demonstrate strides in developing force-feedback techniques, which allow a user to feel the texture, elasticity, or weight of distant objects and structures.

We have been investigating a derivative of telepresence, called micropresence, at the Palo Alto Laboratory. Micropresence can enable physicians to explore and to perform procedures on compact, complex regions of a patient’s anatomy, while minimizing the extent of surgical incisions. Micropresence involves the positioning of one or more small CCD television cameras and associated camera-control systems in hard-to-reach or compact areas of a patient’s anatomy. Such cameras have the ability to image and enlarge complex anatomic regions of interest, as well as to identify the exact position of teleoperated microsurgery tools. Micropresence could allow surgeons to explore small or hidden areas from different perspectives, and to perform surgical procedures in these areas as if the regions of interest were expanded greatly, or even made to surround the physician. In essence, a surgeon would be endowed with the ability to explore complex regions and maneuver tele-operated tools as if he or she were reduced in size and could step into these regions.

We focused, in particular, on technologies that can help physicians make more effective use of computers that offer assistance with decision making. There is great opportunity to enhance future healthcare delivery by integrating medical-informatics software with evolving human-computer interaction technologies.


Checking and Signing XML Documents on Java Smart Cards

Smart card assistance for generating digital signatures is current state of the art and best practice. This is mainly due to the fact that smart cards nowadays have enough processing power to produce digital signatures for documents using on-card resources (processor and memory) only. This way, the owner’s private signing key never has to leave the smart card: the signing key is, and remains, permanently stored in a tamper-proof environment. A closer look at the signing process, however, reveals a remaining major security problem, known as the “what you see is what you sign” problem. Before signing a document, the signer usually wants to check the document’s syntactic and semantic correctness.

When compared to the traditional process of signing a paper document with a handwritten signature, the difference is easily identified: in the traditional case, it is relatively easy for the user to assert correctness, because syntactic and semantic document checking and signature generation happen in immediate context. Digitally signing an electronic document is completely different, because checking and signature generation are executed in two different environments with fundamentally different characteristics, with respect to security on the one hand and processor, memory, and display resources on the other.

Traditionally, the signing application computes the document’s digest using a one-way hash function and sends the result to the smart card. The card encrypts the digest with an asymmetric cipher using the signing key stored on the card. The resulting value is the digital signature of the document, and it is sent back to the signing application. The user can check neither the syntactic correctness of the document (in the case of XML documents: well-formedness, validity) nor its semantic correctness. What really is signed is beyond the user’s control; it might, for instance, be the digest of a manipulated document. Even if the smart card can be regarded as tamper-proof, the terminal (e.g. a PC) and the programs running on it are vulnerable to viruses and Trojan horses. Such malware may also affect signing applications and let them produce valid signatures for documents that are invalid from the user’s perspective. Such incidents invalidate the signing process as a whole.
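The following toy C sketch mirrors that flow with a stand-in hash function and textbook RSA numbers (purely illustrative; nothing like a real card applet or a real hash): the "card" signs whatever digest it receives, so a document manipulated by malware on the terminal still yields a signature that verifies correctly against the manipulated content.

```c
/* Toy sketch of the traditional flow: the terminal hashes the document
 * and sends only the digest to the card; the card signs the digest with
 * its private key. The "hash" and the textbook RSA numbers below are
 * purely illustrative (n = 61*53 = 3233, e = 17, d = 2753), but they
 * show the point: the card never sees the document, only a number. */
#include <stdio.h>
#include <string.h>

static unsigned long toy_digest(const char *doc) {      /* terminal side */
    unsigned long h = 0;
    for (size_t i = 0; i < strlen(doc); i++)
        h = (h * 31 + (unsigned char)doc[i]) % 3233;
    return h;
}

static unsigned long modpow(unsigned long b, unsigned long e,
                            unsigned long m) {
    unsigned long r = 1;
    b %= m;
    while (e > 0) {
        if (e & 1) r = (r * b) % m;
        b = (b * b) % m;
        e >>= 1;
    }
    return r;
}

/* Card side: signs whatever digest it is handed. */
static unsigned long card_sign(unsigned long digest) {
    return modpow(digest, 2753, 3233);          /* digest^d mod n */
}

int main(void) {
    const char *shown = "Pay 10 EUR to Alice";        /* on screen  */
    const char *sent  = "Pay 10000 EUR to Mallory";   /* by malware */

    unsigned long d_shown = toy_digest(shown);
    unsigned long d_sent  = toy_digest(sent);
    unsigned long sig     = card_sign(d_sent);  /* card signs blindly */

    printf("digest shown to user : %lu\n", d_shown);
    printf("digest actually sent : %lu\n", d_sent);
    printf("signature verifies against sent digest: %s\n",
           modpow(sig, 17, 3233) == d_sent ? "yes" : "no");
    return 0;
}
```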

We propose an enhanced architecture, called the JXCS architecture, which performs checking and signing of XML documents on Java smart cards. The basic idea of JXCS is to shift syntactic validation and hash value generation from the vulnerable PC to the trusted smart card. Syntactic validation imposes the following challenge and opportunity: the challenge is the need to process XML documents on resource-constrained Java smart cards; the opportunity is the possibility of performing syntactic and even semantic checks on the XML document in a tamper-proof environment, which improves the security of the signing process.
We propose three major checks on the XML documents to be signed: well-formedness, validity, and content acknowledgement using a class 3 card reader. Taken together, these three checks can defeat “what you see is what you sign” attacks.
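As a rough indication of what an on-card syntactic check might involve, the following C sketch verifies only that element start and end tags nest properly. Real XML well-formedness checking (attributes, comments, entities, encodings) is far more involved, and a card implementation would of course target the Java Card platform rather than this standalone C program.

```c
/* Greatly simplified sketch of an on-card syntactic check: verify that
 * element start/end tags nest properly before the document digest is
 * formed and signed. Only the nesting rule is checked; everything else
 * about XML well-formedness is ignored here. */
#include <stdio.h>
#include <string.h>

#define MAX_DEPTH 16
#define MAX_NAME  32

static int tags_balanced(const char *xml) {
    char stack[MAX_DEPTH][MAX_NAME];
    int depth = 0;

    for (const char *p = xml; (p = strchr(p, '<')) != NULL; p++) {
        int closing = (p[1] == '/');
        const char *name = p + 1 + closing;
        size_t len = strcspn(name, " >/");
        if (len == 0 || len >= MAX_NAME) return 0;

        if (!closing) {
            if (name[len] == '/') continue;        /* self-closing tag */
            if (depth == MAX_DEPTH) return 0;
            memcpy(stack[depth], name, len);
            stack[depth][len] = '\0';
            depth++;
        } else {
            if (depth == 0) return 0;
            depth--;
            if (strlen(stack[depth]) != len ||
                strncmp(stack[depth], name, len) != 0) return 0;
        }
    }
    return depth == 0;
}

int main(void) {
    printf("%d\n", tags_balanced("<doc><a>text</a><b/></doc>"));  /* 1 */
    printf("%d\n", tags_balanced("<doc><a>text</doc></a>"));      /* 0 */
    return 0;
}
```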
