Transactional Memory Coherence and Consistency

Transactional Memory Coherence and Consistency (TCC) provides a model in which atomic transactions are always the basic unit of parallel work, communication, memory coherence, and memory reference consistency. TCC greatly simplifies parallel software by eliminating the need for synchronization using conventional locks and semaphores, along with their complexities.

TCC hardware must combine all writes from each transaction region in a program into a single packet and broadcast this packet to the permanent shared memory state atomically as a large block. This simplifies the coherence hardware because it reduces the need for small, low-latency messages and completely eliminates the need for conventional snoopy cache coherence protocols, as multiple speculatively written versions of a cache line may safely coexist within the system. Meanwhile, automatic, hardware-controlled rollback of speculative transactions resolves any correctness violations that may occur when several processors attempt to read and write the same data simultaneously. The cost of this simplified scheme is higher interprocessor bandwidth.

Processors in the TCC model continually execute speculative transactions. A transaction is a sequence of instructions that is guaranteed to execute and complete only as an atomic unit. Each transaction produces a block of writes, called the write state, which is committed to shared memory only as an atomic unit after the transaction completes execution. Once the transaction is complete, hardware must arbitrate system-wide for permission to commit its writes. After this permission is granted, the processor can take advantage of high system interconnect bandwidths to simply broadcast all writes for the entire transaction out as one large packet to the rest of the system. The broadcast can be over an unordered interconnect, with individual stores separated and reordered, as long as stores from different commits are not reordered or overlapped. Snooping by other processors on these store packets maintains coherence in the system and allows them to detect when they have used data that has subsequently been modified by another transaction and must roll back (a dependence violation). Combining all writes from the entire transaction together minimizes the latency sensitivity of this scheme, because fewer interprocessor messages and arbitrations are required and because flushing out the write state is a one-way operation. At the same time, since we only need to control the sequencing between entire transactions, instead of individual loads and stores, we can leverage the commit operation to provide inherent synchronization and a greatly simplified consistency protocol. This continual cycle of speculative buffering, broadcast, and potential violation allows us to replace conventional coherence and consistency protocols simultaneously:
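
As a concrete illustration, here is a minimal software sketch (in Python, with invented names; real TCC is a hardware mechanism) of the buffer-then-commit cycle described above: reads and writes during a transaction stay local in a read set and a write state, and the whole write state is flushed to shared memory as one atomic packet only after system-wide arbitration, modeled here by a single global commit lock.

```python
import threading

class SharedMemory:
    """Committed global state plus a lock that stands in for
    system-wide arbitration for commit permission."""
    def __init__(self):
        self.data = {}                     # address -> committed value
        self.commit_lock = threading.Lock()

class Transaction:
    """Speculative execution context: writes are buffered locally
    until the transaction commits as one atomic unit."""
    def __init__(self, shared):
        self.shared = shared
        self.write_state = {}              # buffered writes (the "write state")
        self.read_set = set()              # addresses read speculatively

    def read(self, addr):
        self.read_set.add(addr)
        if addr in self.write_state:       # see our own speculative writes first
            return self.write_state[addr]
        return self.shared.data.get(addr)

    def write(self, addr, value):
        self.write_state[addr] = value     # nothing leaves the node yet

    def commit(self):
        # Arbitrate, then flush the entire write state as one packet.
        with self.shared.commit_lock:
            packet = dict(self.write_state)
            self.shared.data.update(packet)
        return packet                      # broadcast to other nodes for snooping
```

Because the commit lock admits one commit at a time, commits are globally serialized, which is exactly the transaction-level ordering that replaces per-access consistency rules in the discussion below.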

Consistency:

Rather than imposing ordering rules between individual memory reference instructions, as most consistency models do, TCC simply imposes a sequential ordering between transaction commits. This can drastically reduce the number of latency-sensitive arbitration and synchronization events required by low-level protocols in a typical multiprocessor system. As far as the global memory state and software are concerned, all memory references from a processor that commits earlier happened “before” all memory references from a processor that commits afterwards, even if the references actually executed in an interleaved fashion. A processor that reads data that is subsequently updated by another processorʼs commit, before it can commit itself, is forced to violate and roll back in order to enforce this model. Interleaving between processorsʼ memory references is only allowed at transaction boundaries, greatly simplifying the process of writing programs that make fine-grained accesses to shared variables. In fact, by imposing the original sequential programʼs transaction order on the transaction commits, a TCC system can effectively provide the illusion of uniprocessor execution to the sequence of memory references generated by parallel software.

Coherence:

Stores are buffered and kept within the processor node for the duration of the transaction in order to maintain the atomicity of the transaction. No conventional, MESI-style cache protocols are used to maintain lines in “shared” or “exclusive” states at any point in the system, so it is legal for many processor nodes to hold the same line simultaneously in either an unmodified or a speculatively modified form. At the end of each transaction, the broadcast notifies all other processors about what state has changed during the completing transaction. During this process, they perform conventional invalidation (if the commit packet only contains addresses) or update (if it contains addresses and data) to keep their cache state coherent. Simultaneously, they must determine whether they may have used shared data too early. If they have read any data modified by the committing transaction during their currently executing transaction, they are forced to restart and reload the correct data. This hardware mechanism protects against true data dependencies automatically, without requiring programmers to insert locks or related constructs. At the same time, data antidependencies are handled simply by the fact that later processors will eventually get their own turn to flush out data to memory. Until that point, their “later” results are not seen by transactions that commit earlier (avoiding WAR dependencies), and they are able to freely overwrite previously modified data in a clearly sequenced manner (handling WAW dependencies in a legal way). Effectively, the simple, sequentialized consistency model allows the coherence model to be greatly simplified as well.
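
The violation-detection side of this coherence scheme can be sketched in the same style as the earlier example: when a commit packet arrives, a node invalidates any cached copies of the committed lines and restarts its current transaction if that transaction has already read any of them. The function below is an illustrative companion to the Transaction sketch above, not a description of the actual hardware.

```python
def snoop_commit(packet_addrs, local_cache, current_txn):
    """Handle a broadcast commit packet on a remote node (illustrative).

    packet_addrs : addresses written by the committing transaction
    local_cache  : this node's cached lines (addr -> value)
    current_txn  : the Transaction currently executing on this node
    Returns True if the local transaction must roll back (dependence violation).
    """
    # Invalidation-style coherence: drop any cached copies of committed lines.
    for addr in packet_addrs:
        local_cache.pop(addr, None)

    # Violation check: did the running transaction already read any of them?
    if current_txn.read_set & set(packet_addrs):
        current_txn.write_state.clear()    # discard speculative writes
        current_txn.read_set.clear()
        return True                        # caller re-executes the transaction
    return False
```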


Open and standardized network protocols fueled Internet innovation

At the foundation of the technology that has enabled these developments is a novel philosophy of communication network design. Prior to the emergence of the Internet, communication networks were designed and operated by telephone companies. The telephone network, operating under the control of AT&T and other telephone monopolies, was designed to place computer intelligence “inside” the network, out of the reach of end-users. The telephone network was operated in a manner that limited the end-user’s ability to attach innovative devices to the network, or otherwise take advantage of network technology in ways not designed (and sold) by the telephone company. The telephone company was the seller of network services, and end-users were the buyers of network services—end of story.

The Internet turned the telephone-company model “inside out.” Any device that abided by the standardized and open Internet protocols could be attached to the network, and any innovator who utilized these publicly available Internet protocols could develop new content, applications, and services which would be provided over the Internet. Devices (mainly computers) attached to the edge of the network thus became the most important component of the Internet. The computers at the “network edge” could either supply network applications, content, or services, or could be used to consume network applications, content, or services. Further innovations led to the blending of computer functions at the network edge, such as those associated with file sharing technologies, where those at the network edge simultaneously produce and consume Internet content and applications.

The foundation of the innovations associated with the Internet—e-mail, web browsing, search engines, online auctions, e-commerce, streaming media, file sharing—is open and standardized network protocols. No firm has the ability to act as a gatekeeper controlling access to the protocols, and thus to determine which applications, content, or services should be allowed to use the Internet. Innovation associated with the Internet has been fueled by the high level of deference to the network edge, and by the equal opportunity to utilize network resources enabled by Internet protocols and pro-competitive policies.

During the early development of the Internet, those involved were determined that the network not “step on the toes” of the developers of the technologies which would ultimately use the network. Those who designed the initial Internet protocols could not anticipate what direction future innovation might take. As a result of this insight, open and neutral protocols underlie how the Internet operates today. The greatest potential for innovation associated with the development of new network applications occurs when the underlying network does not introduce artificial or arbitrary constraints on how those at the network edge innovate.


Machine Learning Techniques for the Analysis of Functional Data in Computational Biology

The amount of data in typical computational biology (bioinformatics) applications tends to be quite large but remains on a manageable scale. In contrast, astrophysical applications have huge amounts of data, while medical research often has only a rather limited number of samples. The challenges in bioinformatics seem to be:

• Diversity and inconsistency of biological data,

• Unresolved functional relationships within the data,

• Variability of different underlying biological applications/problems.

As in many other areas, this requires the use of adaptive and implicit methods, as provided by machine learning.

Protein function, interaction, and localization is definitely one of the key research areas in bioinformatics where machine learning techniques can be applied beneficially. Protein localization data, whether at the tissue, cell, or even subcellular level, are essential for understanding specific functions and regulation mechanisms in a quantitative manner. The data can be obtained, for example, by fluorescence measurements of appropriately labelled proteins. The challenge is then to recognize different proteins, and classes of proteins, which usually leads to either an unsupervised clustering problem or, in case available a-priori information is to be considered, a supervised classification task. Here a number of different neural networks have been used. Due to the underlying measurement technique, artifacts are often observed and have to be eliminated. Since the definition of these artifacts is not straightforward, trainable methods are used here too. In this context, support vector machines have also been applied successfully to the separation of artifact data from all other data.
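
As a hedged illustration of the artifact-versus-rest separation mentioned above, the sketch below trains a standard RBF-kernel support vector machine with scikit-learn. The feature matrix and labels are synthetic placeholders standing in for features extracted from labelled fluorescence measurements; nothing here reproduces a specific published pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 samples x 10 features,
# label 1 = measurement artifact, label 0 = everything else.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # standard RBF-kernel SVM
clf.fit(X_train, y_train)

print("artifact-detection accuracy:",
      accuracy_score(y_test, clf.predict(X_test)))
```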

Spectral Data in Bioinformatics

Frequently used measurement techniques providing such data are mass spectrometry (MS) and nuclear magnetic resonance spectroscopy (NMR). Typical fields where such techniques are applied in biochemistry and medicine are the analysis of small molecules, e.g., metabolite studies, and studies of medium-sized or larger molecules, e.g., peptides and small proteins in the case of mass spectrometry. One major objective is the search for potential biomarkers in complex body fluids like serum, plasma, urine, saliva, or cerebrospinal fluid in the case of MS, or the search for characteristic metabolites produced by cell metabolism in the case of NMR.

Spectral data in this field have in common that the raw functional data vectors representing the spectra are very high-dimensional, usually containing many thousands of dimensions depending on the resolution of the measurement instruments and/or the specific task. Moreover, the raw spectra are usually contaminated with high-frequency noise and systematic baseline disturbances. Thus, before any data analysis can be done, advanced pre-processing has to be applied; application-specific knowledge can be incorporated here. Machine learning methods, including neural networks, offer alternatives to traditional methods like averaging or the discrete wavelet transform.
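
The following sketch shows one possible form of such pre-processing, under simple assumptions: Savitzky-Golay smoothing against high-frequency noise and a rolling-minimum estimate of the baseline. The filter choices and window sizes are illustrative stand-ins for an application-specific pipeline.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.ndimage import minimum_filter1d

def preprocess_spectrum(raw, smooth_window=21, baseline_window=301):
    """Return a denoised, baseline-corrected copy of a raw 1-D spectrum."""
    smoothed = savgol_filter(raw, window_length=smooth_window, polyorder=3)
    baseline = minimum_filter1d(smoothed, size=baseline_window)
    corrected = smoothed - baseline
    return np.clip(corrected, 0, None)          # keep intensities non-negative

# Synthetic example: two narrow peaks on a slow baseline plus noise.
x = np.linspace(0.0, 1.0, 5000)
raw = (np.exp(-((x - 0.3) / 0.005) ** 2)
       + 0.5 * np.exp(-((x - 0.7) / 0.005) ** 2)
       + 0.2 * x
       + 0.02 * np.random.default_rng(0).normal(size=x.size))
spectrum = preprocess_spectrum(raw)
```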

Preprocessed spectra often still remain high-dimensional. For further complexity reduction, peak lists of the spectra are usually generated and then taken as the objects of analysis. These peak lists can be considered a compressed, information-preserving encoding of the originally measured spectra. The peak-picking procedure has to locate the positions and quantify the shape/height of peaks within the spectrum. The peaks are identified by scanning all local maxima and the associated peak endpoints, followed by S/N thresholding, so that one obtains the desired peak list. This method is usually applied to the average spectrum generated from the set of spectra to be investigated. The approach works well if the spectra belong to a common set, or to two groups of similar size with similar content to be analyzed.
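
A minimal version of this peak-picking step, again with illustrative parameter choices, can be written on top of the pre-processing sketch above: peaks are located on the average spectrum and kept only if they exceed a signal-to-noise threshold derived from a robust noise estimate.

```python
import numpy as np
from scipy.signal import find_peaks

def pick_peaks(spectra, snr_threshold=3.0):
    """Return (positions, heights) of peaks in the average of a set of spectra."""
    avg = np.mean(spectra, axis=0)
    noise = np.median(np.abs(avg - np.median(avg)))   # MAD as a robust noise level
    peaks, props = find_peaks(avg, height=snr_threshold * noise)
    return peaks, props["peak_heights"]
```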


Performance Analysis of Cache Policies for Web Servers

Many existing Web servers, e.g., NCSA and Apache, rely on the underlying file system buffer of the operating system to cache recently accessed documents. When a new request arrives, the Web server asks the operating system to open the file containing the requested document and starts reading it into a temporary memory buffer. After the file has been read, it needs to be closed.
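
In outline, the per-request behaviour amounts to the open/read/close pattern sketched below; the server itself keeps nothing, so any caching happens only in the operating system's buffer cache. The function and its response callback are placeholders for illustration.

```python
def serve_document(path, send_response):
    """Serve one document, relying entirely on the OS file system cache."""
    with open(path, "rb") as f:      # the OS may satisfy this from its buffer cache
        body = f.read()              # read the whole document into a temporary buffer
    send_response(body)              # the file is already closed at this point
```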

Web Server Caching vs. File System and Database Caching

Traditional file system caches do not perform well for the WWW load [A+95, M96]. The following three differences between traditional and Web caching account for this:

1. Web data items have a different granularity. File system and database buffers deal with fixed-size blocks of data, whereas Web servers always read and cache entire files. Additionally, data items of non-fixed size complicate memory management in Web server caches.

2. Caching is not obligatory in Web servers, i.e., some documents may not be admitted to the cache. A file system/database buffer manager always places requested data blocks in the cache (the cache serves as an interface between the storage subsystem and the application). On the contrary, a Web server cache manager may choose not to buffer a document if this can increase cache performance (hit rate). This option of not caching some documents, combined with the different granularity of data items, significantly broadens the variety of cache management policies that can be used in a Web server.

3. There are no correlated re-reads or sequential scans in Web workloads. Unlike database systems, Web servers never experience sequential scans of a large number of data items. Nor do they have correlated data re-reads; that is, the same user never reads a document twice in a short time interval. This becomes obvious if we take the Web browser cache into account: all document re-reads are handled there, and the Web server never knows about them. (Communication failures, dynamically generated documents, and browser cache malfunctions may result in correlated re-reads on the Web server.)

One of the reasons why correlated re-reads are common in traditional caches is the possibility of storing several logical data items, e.g., database records, in the same physical block. As a result, even accesses to different logical records may touch the same buffer page, e.g., during a sequential scan, resulting in artificial page re-reads.

The impact of the absence of correlated re-reads is two-fold. Firstly, a caching algorithm for Web workloads does not need to “factor out locality” [RV90], i.e., eliminate the negative impact of correlated re-reads on cache performance. Repeated (correlated) accesses to a data item from a single application make the cache manager consider the item popular even though this popularity is artificial: all accesses occur in a short time interval and the item is otherwise rarely used. In Web servers, on the contrary, multiple accesses in a short time interval do indicate high popularity of a document, since the accesses come from different sources.

Secondly, a single access to a document should not be a reason for the cache manager to put the document in the cache, since there will be no following, correlated accesses. In other words, the traditional LRU (Least Recently Used) policy is not suitable for Web document caching.

Additionally, Web servers deal with fewer data items than file systems and databases do. The number of documents stored on a typical Web server rarely exceeds 100,000, whereas file systems and databases have to deal with millions of blocks. As a result, a Web server can afford to keep access statistics for all stored documents, while traditional systems cannot.

Keeping the foregoing observations in mind, one can expect that a dedicated document cache would perform better than a file system cache. Such a document cache should use a cache management policy that is more suitable for the WWW load than that of the operating system.
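
One way to act on these observations is sketched below: a dedicated document cache that keeps access counts for every document (feasible, as argued above, because the document population is small), admits a document only after repeated accesses rather than on first touch, and evicts the least frequently used entry when space runs out. The thresholds and the eviction rule are illustrative choices, not a specific published policy.

```python
from collections import defaultdict

class DocumentCache:
    def __init__(self, capacity_bytes, admit_after=2):
        self.capacity = capacity_bytes
        self.admit_after = admit_after        # accesses required before admission
        self.used = 0
        self.cache = {}                       # name -> document body
        self.hits = defaultdict(int)          # access counts for ALL documents

    def get(self, name, load_from_disk):
        self.hits[name] += 1
        if name in self.cache:
            return self.cache[name]           # cache hit
        body = load_from_disk(name)           # miss: read via the file system
        if self.hits[name] >= self.admit_after:
            self._admit(name, body)           # no admission on a single access
        return body

    def _admit(self, name, body):
        # Evict least-frequently-used documents until the new one fits.
        while self.used + len(body) > self.capacity and self.cache:
            victim = min(self.cache, key=lambda d: self.hits[d])
            self.used -= len(self.cache.pop(victim))
        if self.used + len(body) <= self.capacity:
            self.cache[name] = body
            self.used += len(body)
```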

 


Transaction Isolation in XML Database Management Systems

System Architecture and XML Data Processing (XDP) Interfaces

The XML Transaction Coordinator (XTC) database engine (XTCserver) adheres to the widely used five-layer DBMS architecture.

The file-services layer operates on the bit patterns stored on external, non-volatile storage devices. In collaboration with the OS file system, the I/O managers store the physical data in extensible container files; their uniform block length is configurable to the characteristics of the XML documents to be stored. A buffer manager per container file handles fixing and unfixing of pages in main memory and provides a replacement algorithm for them, which can be optimized for the anticipated reference locality inherent in the respective XDP applications. Using pages as basic storage units, the record, index, and catalog managers form the access services. The record manager maintains the tree-connected nodes of XML documents as physically adjacent records in a set of pages. Each record is addressed by a unique lifetime ID managed within a B-tree by the index manager. This is essential to allow for fine-grained concurrency control, which requires lock acquisition on uniquely identifiable nodes. The catalog manager provides the database metadata. The node manager, implementing the navigational access layer, transforms the records from their internal physical representation into an external one, thereby managing the lock acquisition needed to isolate concurrent transactions. The XML-services layer contains the XML manager, which is responsible for declarative document access, e.g., evaluation of XPath queries or Extensible Stylesheet Language Transformations (XSLT).
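
The role of the unique lifetime node IDs in fine-grained concurrency control can be illustrated with a toy lock manager (an illustrative sketch, not XTC's actual lock protocol): shared and exclusive locks are keyed on node IDs, so two transactions touching different nodes of the same document never block each other.

```python
import threading
from collections import defaultdict

class NodeLockManager:
    """Toy node-level lock table keyed by unique node IDs."""
    def __init__(self):
        self._guard = threading.Lock()
        self._locks = defaultdict(dict)       # node_id -> {txn_id: "S" or "X"}

    def acquire(self, txn_id, node_id, mode):
        """Try to take a shared ("S") or exclusive ("X") lock; True if granted."""
        with self._guard:
            holders = self._locks[node_id]
            others = {t: m for t, m in holders.items() if t != txn_id}
            if mode == "S" and all(m == "S" for m in others.values()):
                holders[txn_id] = "S"
                return True
            if mode == "X" and not others:
                holders[txn_id] = "X"          # also covers an S -> X upgrade
                return True
            return False                       # caller must wait or abort

    def release_all(self, txn_id):
        """Release every lock held by a committing or aborting transaction."""
        with self._guard:
            for holders in self._locks.values():
                holders.pop(txn_id, None)
```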

The agents of the interface layer make the functionality of the XML and node services available to common Internet browsers, FTP clients, and the XTCdriver, thereby providing declarative/set-oriented as well as navigational/node-oriented interfaces. The XTCdriver, linked to client-side applications, provides methods to execute XPath-like queries and to manipulate documents via the SAX (Simple API for XML) or DOM API. Each API accesses the stored documents within a transaction started by the XTCdriver. Transactions can be processed in the well-known isolation levels uncommitted, committed, repeatable, and serializable.


HCI Research in Applied Contexts

In addition to being multidisciplinary and theoretically grounded, HCI in MIS is also a strong practical and application-oriented area. Applications requiring interactions with human users can be found everywhere in our surroundings, and are therefore of significant concern to both researchers and practitioners in a wide variety of disciplines. Long-term efforts are under way to pull these researchers and practitioners under a single metaphorical umbrella where duplication of effort can be avoided and synergies can be exploited.

Researchers and practitioners alike can benefit from the application of theory. Researchers can develop and apply theory to generalize to other situations; they develop and test models that are either derived from applications of theory or that lead to new theory. Practitioners can use theory to solve problems, often in the evaluation of new software or hardware.

Many applications of theory can be found in the literature. These areas are diverse, interesting, and important, and have either direct or indirect relevance to researchers and practitioners alike. This section mentions several specific areas with representative articles. These topics have evolved over an extended time or over an extended set of studies. The application areas include electronic commerce, team collaboration, culture and globalization, user learning and training, system development, and health care. Many of these areas have built a distinctive literature and can be further developed.

Privacy Protection in the Networked World

The Internet has now become a ubiquitous channel for information sharing and dissemination. This has created a whole new set of research challenges, while giving a new spin to some existing ones.

Anonymization

In many scenarios, information exchange can prove socially beneficial. For example, if medical researchers have access to databases containing the medical histories of various individuals, they can discover associations between certain lifestyle factors and higher risk of certain diseases; geographical occurrence data on communicable diseases can enable detection of the outbreak of epidemics at an early stage, thereby preventing their spread to larger populations. With the goal of enabling such applications, it is quite desirable that hospitals make their records available to medical scientists. At the same time, such personal data has great potential for misuse; for example, a health insurance company could exploit such data to selectively raise the health insurance premiums of certain individuals.

A possible solution is that, instead of releasing the entire database, the database owner answers aggregate queries posed by medical researchers after ensuring that the answers to the queries do not reveal sensitive information. This approach is called query auditing [KPR03, KMN05, DN04a]. It requires the researchers to formulate their queries without access to any data. In this case, one can also use techniques from secure multiparty computation [Yao86, GMW87, LP02, AMP04, FNP04]. However, many data mining tasks are inherently ad hoc, and data mining researchers need to examine the data in order to discover data aggregation queries of interest. In such cases, query auditing and secure function evaluation techniques do not provide an adequate solution, and we need to release an anonymized view of the database that enables the computation of non-sensitive query aggregates, perhaps with some error or uncertainty.
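
The query-auditing idea can be made concrete with a deliberately simplified sketch: an aggregate over the raw records is answered only when the group of matching individuals is large enough that no single person's value dominates the answer. The threshold, the query form, and the sample data are illustrative only; real auditing systems also track the full history of previously answered queries.

```python
MIN_GROUP_SIZE = 5   # illustrative threshold

def audited_average(records, predicate, field):
    """Answer AVG(field) over records matching predicate, or refuse (None)."""
    matching = [r[field] for r in records if predicate(r)]
    if len(matching) < MIN_GROUP_SIZE:
        return None                          # refused: the group is too small
    return sum(matching) / len(matching)

# Example: average age of patients with a given diagnosis.
patients = [{"age": 34, "dx": "flu"},    {"age": 51, "dx": "flu"},
            {"age": 47, "dx": "flu"},    {"age": 29, "dx": "flu"},
            {"age": 62, "dx": "flu"},    {"age": 40, "dx": "asthma"}]
print(audited_average(patients, lambda r: r["dx"] == "flu", "age"))     # answered
print(audited_average(patients, lambda r: r["dx"] == "asthma", "age"))  # refused
```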

Enterprise Web Technologies

The Enterprise Web Technologies practice offers comprehensive capability across the entire web technology spectrum. We are recognized thought leaders in Web technologies, developing leading-edge solutions that drive operational efficiency, increase ROI, and create a platform for continuous business innovation.