EX8208 Ethernet Switch

The EX8208 modular Ethernet switch is a flexible, powerful platform that delivers the performance, scalability, and high availability required for today’s high-density data center, campus aggregation, and core switching environments. With a total capacity of up to 6.2 Tbps, the EX8208 system provides a complete, end-to-end solution for the high-performance networks of today and into the future.

The EX8208 switch features eight dedicated line-card slots that can accommodate a variety of EX8200 Ethernet line cards. Options include the following:

• EX8200-48T: a 48-port 10/100/1000BASE-T RJ-45 unshielded twisted pair (UTP) line card

• EX8200-48T-ES: a 48-port 10/100/1000BASE-T RJ-45 unshielded twisted pair (UTP) extra scale line card

• EX8200-48F: a 48-port 100BASE-FX/1000BASE-X SFP fiber line card

• EX8200-48F-ES: a 48-port 100BASE-FX/1000BASE-X SFP extra scale fiber line card

• EX8200-8XS: an eight-port 10GBASE-X SFP+ fiber line card

• EX8200-8XS-ES: an eight-port 10GBASE-X SFP+ fiber extra scale line card

• EX8200-40XS: a 40-port 10GBASE-X SFP+ / 1000BASE-X SFP line card

Fully configured, a single EX8208 chassis can support up to 384 Gigabit Ethernet or 64 10-Gigabit Ethernet ports at wire speed, or 320 10-Gigabit Ethernet ports in shared bandwidth applications, delivering one of the industry’s highest 10-Gigabit Ethernet port densities.

At 14 rack-units (RUs) high, three EX8208 Ethernet Switches can fit in a standard 42 RU rack, enabling up to 1,152 Gigabit Ethernet or 960 10-Gigabit Ethernet ports in a single rack. At just 21 inches deep, the EX8208 is sufficiently compact to fit into typical wiring closets, making it ideal for campus deployments where space is at a premium. The EX8208 features a switch fabric capable of delivering 320 Gbps (full duplex) per slot, enabling scalable wire-rate performance on all ports for any packet size. The passive backplane design supports a future capacity of up to 6.2 Tbps, providing a built-in migration path to next-generation deployments.
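The port-density figures quoted above follow directly from the eight-slot chassis and the per-card port counts in the line-card list; a quick sketch of the arithmetic:

```python
# Port-density arithmetic implied by the datasheet figures above.
# Card capacities come from the line-card list; everything else follows
# from 8 line-card slots per chassis and 3 chassis per 42 RU rack.

SLOTS_PER_CHASSIS = 8
CHASSIS_PER_RACK = 3

gige_per_chassis = SLOTS_PER_CHASSIS * 48   # a 48-port GbE card in every slot
tengig_wirespeed = SLOTS_PER_CHASSIS * 8    # EX8200-8XS (wire speed) in every slot
tengig_shared = SLOTS_PER_CHASSIS * 40      # EX8200-40XS (shared bandwidth)

print(gige_per_chassis)                      # 384 GbE ports per chassis
print(tengig_wirespeed)                      # 64 wire-speed 10GbE ports
print(tengig_shared)                         # 320 shared-bandwidth 10GbE ports
print(CHASSIS_PER_RACK * gige_per_chassis)   # 1152 GbE ports per rack
print(CHASSIS_PER_RACK * tengig_shared)      # 960 10GbE ports per rack
```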

The base-configuration EX8208 Ethernet Switch includes a side-mounted hot-swappable fan tray with variable-speed fans, one Switch Fabric and Routing Engine (SRE) module, and one dedicated Switch Fabric module. Base EX8208 switches also ship with either two 2000-watt or two 3000-watt power supplies, although six power supply bays allow users to provision the chassis to provide the power and redundancy required for any application. Redundant EX8208 configurations include a second SRE module for hot-standby resiliency, while AC or DC power options provide complete redundancy, reliability, and availability. All components are accessible from the front, simplifying repairs and upgrades.

A front-panel chassis-level LCD displays Routing Engine status and chassis component alarm information, enabling rapid problem identification and resolution and simplifying overall operations. The LCD also provides a flexible, user-friendly interface for performing device initialization and configuration rollbacks, reporting system status and alarm notifications, and restoring the switch to its default settings.


Ontology Driven Information Systems for Search, Integration and Analysis on Semantic (Web) Technology

Some reservations among DB researchers about the Semantic Web:

As a constituent technology, ontology work of this sort is defensible. As the basis for programmatic research and implementation, it is a speculative and immature technology of uncertain promise.

“Users will be able to use programs that can understand semantics of the data to help them answer complex questions.” This sort of hyperbole is characteristic of much of the genre of Semantic Web conjectures, papers, and proposals thus far. It is reminiscent of the AI hype of a decade ago, and practical systems based on these ideas are no more in evidence now than they were then.

Such research is fashionable at the moment, due in part to support from defense agencies, in part because the Web offers the first distributed environment that makes even the dream seem tractable.

It (proposed research in the Semantic Web) presupposes the availability of semantic information extracted from the base documents, an unsolved problem of many years…

Google has shown that huge improvements in search technology can be made without understanding semantics. Perhaps after a certain point, semantics are needed for further improvements, but a better argument is needed.

These reservations and this skepticism likely stem from a variety of reasons. Specifically, database researchers may have reservations stemming from the overwhelming role of description logic in the W3C’s Semantic Web Activity and related standards. The vision of the Semantic Web proposed in several articles may seem, to many readers, like a proposed solution to long-standing AI problems. Lastly, one major source of skepticism is the legitimate concern about the scalability of the three core capabilities required for the Semantic Web to be successful, namely the scalability of (a) ontology creation and maintenance of large ontologies, (b) semantic annotation, and (c) inference mechanisms or other computing approaches involving large, realistic ontologies, metadata, and heterogeneous data sets.

Despite these reservations, some of them well justified, we believe semantic technology is beginning to mature and will play a significant role in the development of future information systems. We believe that database research will greatly benefit from playing critical roles in the development of both Semantic Technology and the Semantic Web. In addition, we feel that the database community is very well equipped to play its part in realizing this vision. Below, we:

• Identify some prevalent myths about the Semantic Web

• Identify instances of Semantic (Web) Technology in action and show how the database community can make invaluable contributions to the same.

By Semantic Technology, we mean the application of techniques that support and exploit the semantics of information (as opposed to syntax and structure/schematic issues) to enhance existing information systems. In contrast, Semantic Web technology (more specifically, its vision) is best defined as follows: “The Semantic Web is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation.” Currently, in more practical terms, Semantic Web technology also implies the use of standards such as RDF/RDFS and, for some, OWL. It is, however, important to note that while description logic is a centerpiece for many Semantic Web researchers, it is not a necessary component for many applications that exploit semantics. For Semantic Technology as the term is used here, complex query processing involving both metadata and ontology takes center stage, and this is where database technology continues to play a critical role. This becomes especially relevant when an ontology is populated by many persons or by extracting and integrating knowledge from multiple sources.
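As a toy illustration of query processing that combines metadata with an ontology (a Python sketch, not a real RDF/OWL stack; every class and resource name here is made up), the ontology's subclass relation can widen a metadata query so that asking for a general class also finds resources annotated with more specific classes:

```python
# Hypothetical ontology: transitive subclass-of edges.
subclass_of = {
    "ConferencePaper": "Publication",
    "JournalArticle": "Publication",
}

# Hypothetical metadata: resource -> asserted class.
metadata = {
    "doc1": "ConferencePaper",
    "doc2": "JournalArticle",
    "doc3": "Dataset",
}

def subsumed_by(cls, target):
    """True if cls equals target or is a (transitive) subclass of it."""
    while cls is not None:
        if cls == target:
            return True
        cls = subclass_of.get(cls)
    return False

def query(target_class):
    """All resources whose asserted class is subsumed by target_class."""
    return sorted(r for r, c in metadata.items() if subsumed_by(c, target_class))

print(query("Publication"))  # ['doc1', 'doc2'] -- doc3 is excluded
```

A schema-only lookup of the literal class "Publication" would return nothing here; the ontology is what makes the query semantically complete.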



Post-Disaster Image Processing for Damage Analysis Using GENESI-DR, WPS and Grid Computing

The goal of the Ground European Network for Earth Science Interoperations – Digital Repositories (GENESI-DR) project was to build an open and seamless access service to Earth science digital repositories for European and worldwide science users. To showcase GENESI-DR, one of the technology demonstrators developed focused on fast search, discovery, and access to remotely sensed imagery in the context of post-disaster building damage assessment. This article describes the scenario and implementation details of the technology demonstrator, which was developed to support post-disaster damage assessment analyst activities. Once a disaster alert has been issued, response time is critical to providing relevant damage information to analysts and/or stakeholders. The presented technology demonstrator validates the GENESI-DR project data search, discovery, and security infrastructure and integrates the rapid urban area mapping and near-real-time orthorectification web processing services to support a post-disaster damage needs assessment analysis scenario. It also demonstrates how the GENESI-DR SOA can be linked to web processing services that access grid computing resources for fast image processing and use secure communication to ensure confidentiality of information.

Primary analysis is based on remotely sensed imagery, using both pre- and post-disaster images, whereby analysts identify damage extent and severity. Soon after the team has been alerted to a disaster, the following scenario is generally played out as quickly as humanly possible:

1. Disaster alert identifying location and disaster type is received (via online alerting system or government representative);

2. Search and download pre-disaster imagery if available;

3. Order/Acquire and download post-disaster imagery as soon as possible;

4. Analyse pre- and post-disaster imagery to identify affected populated areas;

5. Produce and disseminate maps and reports to stakeholders based on the most current available information.

a. GENESI-DR: The need for open access to online Earth Science repositories is at the heart of the GENESI-DR project. Among the many objectives of this project, the authors of this paper were most interested in the distributed image search, discovery, and access capabilities for operational repositories of remotely sensed imagery, and in geographical web processing chaining. One such operational repository is maintained by the Community Image Data (CID) portal action of the JRC, which also provides orthorectified imagery resulting from a semiautomatic orthorectification application based on area-matching algorithms.

b. Web Processing Service: A Web Processing Service (WPS) is a standard developed by the Open Geospatial Consortium (OGC) that provides rules for geospatial processing service requests and responses. Specifically, it aims to provide access to GIS functionality over the Internet; it was chosen for this project because of its extensibility and wide use for geospatial web services.
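A WPS exposes operations such as GetCapabilities and DescribeProcess via simple key-value-pair HTTP requests. A minimal sketch of assembling such requests (the endpoint URL and the process identifier are hypothetical; the parameter names follow the OGC WPS specification):

```python
from urllib.parse import urlencode

endpoint = "https://example.org/wps"  # hypothetical service endpoint

def wps_url(request, **extra):
    """Build a WPS 1.0.0 key-value-pair request URL."""
    params = {"service": "WPS", "version": "1.0.0", "request": request}
    params.update(extra)  # e.g. identifier=<process> for DescribeProcess
    return endpoint + "?" + urlencode(params)

print(wps_url("GetCapabilities"))
print(wps_url("DescribeProcess", identifier="orthorectify"))
```

Execute requests, which carry input data, are normally sent as XML POST bodies instead, but the KVP form above is the simplest way to interrogate a service.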

c. Computing Power: Automated image analysis in many cases requires significant computing resources, which may not be available at the location where the image is stored, either because the repository does not offer such a service or because the algorithm and/or methodology to be applied is not available. The demo required that the images be processed by a specialised methodology in a secure setting because of the sensitive nature of the information to be analysed. A grid computing solution was chosen because it addressed our security and licensing concerns and was made available by one of our project partners, the European Space Agency (ESA). The advantages of using cloud or grid computing in a post-disaster image processing scenario are scalability and parallelisation. Depending on the image processing requirements, the number of jobs submitted can easily be scaled (i.e., the infrastructure can handle a single processing job or a thousand jobs), and depending on the resources available within the computing infrastructure, the number of jobs running in parallel can also be increased. Both of these advantages ensure that the required computing power is available to the disaster analysts on demand.

d. Trust: Post-disaster damage information needs are a sensitive matter, and the demonstration had to provide end-to-end security to keep out parties that should not have access to the data and information. This level of trust was achieved through the use of security certificates (X.509), certificate proxies (VOMS proxies), and the adoption of Virtual Organisations (VOs).

The security infrastructure of the demonstration was based on X.509 certificates, a standard for a Public Key Infrastructure (PKI). X.509 specifies, amongst other things, standard formats for public key certificates, certificate revocation lists, attribute certificates, and a certification path validation algorithm. In X.509, both authentication and authorization processes take place. In the authentication phase, the need to interact with a trusted relaying party is addressed through interaction with globally trusted third parties, i.e., Certification Authorities (CAs). A certification authority (CA) is an entity that issues digital certificates. A digital certificate certifies the ownership of a public key by the named entity of the certificate. This allows others (i.e., entities who rely on this trust) to be confident that signatures or assertions made with the private key correspond to the certified public key. In such a trust model, a CA is a third party trusted both by the owner of the certificate and by the entity relying on the certificate. CAs are characteristic of many PKI schemes. The user is identified with identity certificates signed by CAs, and the system allows users to delegate their identity temporarily to other users and/or systems. This process generates proxy certificates, allowing the WPS service to run with the identified user’s privileges.


General J2ME architecture

J2ME is “a highly optimized Java run-time environment targeting a wide range of consumer products, including pagers, cellular phones, screen-phones, digital set-top boxes and car navigation systems.”

J2ME uses configurations and profiles to customize the Java Runtime Environment (JRE). As a complete JRE, J2ME comprises a configuration, which determines the JVM used, and a profile, which defines the application by adding domain-specific classes. The configuration defines the basic run-time environment as a set of core classes and a specific JVM that run on specific types of devices. The profile defines the application; specifically, it adds domain-specific classes to the J2ME configuration to define certain uses for devices. The following graphic depicts the relationship between the different virtual machines, configurations, and profiles. It also draws a parallel with the J2SE API and its Java virtual machine. While the J2SE virtual machine is generally referred to as a JVM, the J2ME virtual machines, KVM and CVM, are subsets of the JVM. Both KVM and CVM can be thought of as a kind of Java virtual machine — it’s just that they are shrunken versions of the J2SE JVM and are specific to J2ME.

Currently, two configurations exist for J2ME, though others may be defined in the future:

Configurations overview

Connected Limited Device Configuration (CLDC) is used specifically with the KVM for 16-bit or 32-bit devices with limited amounts of memory. This is the configuration (and the virtual machine) used for developing small J2ME applications. Its size limitations make CLDC more interesting and challenging (from a development point of view) than CDC. CLDC is also the configuration that we will use for developing our drawing tool application. An example of a small wireless device running small applications is a Palm hand-held computer.

Connected Device Configuration (CDC) is used with the C virtual machine (CVM) on 32-bit architectures requiring more than 2 MB of memory. An example of such a device is a Net TV box.

Profiles overview

The profile defines the type of devices supported by your application. Specifically, it adds domain-specific classes to the J2ME configuration to define certain uses for devices. Profiles are built on top of configurations. Two profiles have been defined for J2ME and are built on CLDC: KJava and the Mobile Information Device Profile (MIDP). These profiles are geared toward smaller devices. A skeleton profile on which you can create your own profile, the Foundation Profile, is available for CDC.


EIB/KNX software development and deployment

Obviously, writing all BCU (Bus Coupling Unit) application programs from scratch is typically not a feasible approach to building an EIB (European Installation Bus)/KNX (Konnex Association) system. Therefore, manufacturers provide ready-made applications matching their hardware. The behaviour of these applications can be customized by the project engineer by modifying manufacturer-defined parameters. The final configuration is then downloaded to the BCU. The workflow of creating an EIB/KNX system is thus divided into three steps:


Application development:

A software developer writes a BCU application program for a particular hardware configuration. The developer documents its behaviour and defines the parameters available to influence it. The application is brought into a format suitable for distribution to EIB/KNX project engineers. This format also includes the necessary meta-information to allow a software tool to display the application parameters. Moreover, it provides this tool with the necessary knowledge of how to apply these changes to the program code.

Project planning:

A project engineer selects appropriate EIB/KNX devices to fulfil the requirements of a particular project. Using a (typically PC-based) integration tool, the engineer makes the necessary adjustments to the application parameters of the chosen devices and sets up their communication relationships. While the software developer defines the behaviour (or set of possible behaviours) of one single node, the project engineer thus defines the behaviour of the entire system. This step is entirely off-line, i.e., no target devices are required yet.

Installation and download:

The BCUs are combined with the appropriate application modules (if not already delivered in a common housing by the manufacturer) and installed at their final locations. This step is often carried out by a site technician. Before or after installation, the configuration is downloaded to the BCUs. This can be done via the network. Targets are identified via their individual address (a configurable identifier which is unique within the installation) or via a special button on the BCU if they have not yet been assigned such an address.

The second and third steps are together referred to as system integration, and the software tool which assists the work of the project engineer (and possibly the site technician) as the integration tool. For EIB/KNX systems, only one single integration tool is necessary. This tool, called ETS (EIBA s.c.r.l., n.d.), handles every certified EIB/KNX device, no matter from which manufacturer. This significantly lowers the effort involved in setting up a multi-vendor system.


Transformation Recipes for Code Generation and Auto-Tuning

Transformation recipes provide a high-level interface to the code transformation and code generation capabilities of a compiler. These recipes can be generated by compiler decision algorithms or by savvy software developers. This interface is part of an auto-tuning framework that explores a set of different implementations of the same computation and automatically selects the implementation that best meets a set of optimization criteria. Along with the original computation, a transformation recipe specifies a range of implementations of the computation resulting from composing a set of high-level code transformations. In our system, an underlying polyhedral framework coupled with transformation algorithms takes this set of transformations, composes them, and automatically generates correct code. We first describe an abstract interface for transformation recipes, which we propose to facilitate interoperability with other transformation frameworks. We then focus on the specific transformation recipe interface used in our compiler and present performance results on its application to kernel and library tuning and to the tuning of key computations in high-end applications. We also show how this framework can be used to generate and auto-tune parallel OpenMP or CUDA (NVIDIA’s parallel computing architecture) code from a high-level specification.

A well-recognized challenge to the effectiveness of compiler optimization targeting architectural features is making complex trade-offs between different optimizations, or identifying optimal values of optimization parameters such as unroll factors or loop tile sizes. Without sufficient knowledge of the execution environment, which is extremely difficult to model statically, compilers often make suboptimal choices, sometimes even degrading performance. To address this limitation, a recent body of work on auto-tuning uses empirical techniques to execute code segments in representative execution environments to determine the best-performing optimization sequence and parameter values. A compiler that supports auto-tuning must have a different structure than a traditional compiler, as it must expose a set of different variants of a computation, with parameterized optimization variables. These parameterized variants are then evaluated empirically by an experiments engine that identifies the best implementation of the computation. The framework also supports collaborative auto-tuning, so that application developers can access and guide the auto-tuning process.
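The empirical loop at the heart of auto-tuning can be sketched in a few lines: time several parameterized variants of the same computation and keep the best one. This is an illustrative Python sketch, not the paper's system; the kernel (blocked summation, with block size standing in for a tile size) and the candidate values are made up:

```python
import timeit

data = list(range(100_000))

def blocked_sum(block):
    """One parameterized variant: sum the data in chunks of size `block`."""
    total = 0
    for start in range(0, len(data), block):
        total += sum(data[start:start + block])
    return total

def autotune(candidates, repeat=3):
    """Empirically time each variant and return the best parameter value."""
    best, best_time = None, float("inf")
    for block in candidates:
        t = min(timeit.repeat(lambda: blocked_sum(block),
                              number=5, repeat=repeat))
        if t < best_time:
            best, best_time = block, t
    return best

best_block = autotune([64, 256, 1024, 4096])
print("best block size:", best_block)
```

A real experiments engine would search a much larger space (and prune it intelligently), but the structure — generate parameterized variants, measure, select — is the same.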

The focal point of the paper is the high-level interface for describing transformation recipes, as this interface is the mechanism by which the compiler organization meets the three driving principles: supporting auto-tuning, serving as an application-developer interface (in addition to a compiler interface), and providing a common interface for interoperability between compilers. Although other compiler interfaces exist with overlapping goals, our approach is unique in trying to bring together all of these elements.

Specification of Parameterized Variants for an Auto-tuning Environment: Transformation recipes describe a range of implementations, parameterized by certain optimization variables. This range of implementations can be evaluated by an external experiments engine that efficiently explores the resulting optimization search space.

Application and Library Developer Interface: Transformation recipes allow application and library developers to interact directly with the compiler to transform their code, including parallelization. Through the system organization, the compiler manages the details of carrying out transformations correctly and generates code for a range of implementations that can be compared automatically using auto-tuning technology.

Common API for Compiler Transformation Frameworks: Transformation recipes can also serve as a common interface for different compiler transformation frameworks. Numerous compiler transformation frameworks are capable of specific optimizations such as loop unrolling or tiling, so with the appropriate interface, the same recipe could be used by multiple compilers and their results compared. The same application could be tuned using different compiler transformation frameworks, either successively or on independent pieces of code.

The transformation recipe interface is part of a working compiler system that uses an underlying polyhedral transformation and code generation framework to support robust code generation within its domain of applicability. This system has been used in a variety of ways: for kernel tuning, for library tuning and generation, for tuning of key computations from scientific applications, and for guiding parallel code generation for OpenMP and CUDA. The paper describes the working compiler framework and the transformation recipes it supports, as well as a current broad activity across a number of compiler and auto-tuning research groups to develop an infrastructure-independent common transformation API. The authors invite the LCPC (Languages and Compilers for Parallel Computing) community to participate so that this representation can potentially interoperate with a large number of compiler infrastructures, thus moving the entire community in the direction of repeatable experimental research and easier adoption of new ideas.


Representing XML in the HDM

The HDM (Hypergraph-based Data Model) can represent a number of higher-level structured modelling languages such as the ER, relational, and UML data models. It is possible to transform the constructs of one modelling language into those of another during the process of integrating multiple heterogeneous schemas into a single global schema. By extending our work to specify how XML can be represented in the HDM, we are adding XML to the set of modelling languages whose schemas can be transformed into each other and integrated using our framework.

Structured data models typically have a set-based semantics, i.e., there is no ordering on the extents of the types and relationships comprising the database schema, and no duplicate occurrences. XML’s semistructured nature, and the fact that it is presentation-oriented, mean that lists need to be representable in the HDM, as opposed to just sets, which were sufficient for our previous work on transforming and integrating structured data models. In particular, lists are needed because the order in which elements appear within an XML document may be significant to applications, and this information should not be lost when transforming and integrating XML documents.

We thus extend the notions of nodes and edges in HDM schemas, which respectively correspond to types and relationships in higher-level modelling languages, so that the extent of a node or edge may be either a set or a list.
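The difference between set-valued and list-valued extents is easy to see with the standard library's XML parser (a small illustrative sketch; the document is made up): document order and duplicate children survive in a list but not in a set.

```python
import xml.etree.ElementTree as ET

# A tiny document where child order and the duplicate <para> matter.
doc = ET.fromstring("<chapter><title/><para/><para/><note/></chapter>")

# List-valued extent: preserves document order and duplicates.
as_list = [child.tag for child in doc]
print(as_list)          # ['title', 'para', 'para', 'note']

# Set-valued extent: both the ordering and the second <para> are lost.
as_set = set(as_list)
print(sorted(as_set))   # ['note', 'para', 'title']
```

Any transformation pipeline that round-trips XML through a set-based model would emit the second form, which is why list extents are needed in the HDM.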


Profiling the TCP switch implementation in userspace

We measured the performance of the TCP switch (user space) and observed that, on average, forwarding is fast, but there is high fluctuation in forwarding-time performance. Also, connection setup and teardown are slow. The fluctuation in performance happens when the garbage collector runs to reclaim unused resources. We profiled each part of the system to figure out the reasons for this behavior. First, we measured the time to perform a function to insert or delete an entry in the classification table of iptables, and to insert or delete a queue and a filter for a particular connection in the output scheduler. Each function call takes about 300 us, partly due to the communication overhead between kernel and user space. To set up and tear down a connection, we need to do the transaction twice for iptables and twice for traffic control. This gives us motivation to implement the TCP switch inside the Linux kernel.

The more important issue is to make the forwarding performance more predictable (less variable). To find the reason for the fluctuation, we measured thread context switching time and the insertion, deletion, and lookup times of iptables in the kernel. One factor in the fluctuation may be the conflict between lookups of forwarding data and the operation of the admission controller and the garbage collector. The thread context switching time measurement program creates a given number of threads, and each thread continuously yields. We then counted the number of yields in a given time. The test machine was a 1 GHz Pentium III PC. The process context switching time is almost the same as the thread context switching time.

On average, it is smaller than 1 us, although there is some variability in the number of context switches across threads. From this result, we conclude that thread context switching time is not the major factor in the fluctuation of forwarding performance. Next, we examined the locking conflict between code that reads iptables and code that writes iptables. iptables uses a read-write lock, a spin lock with multiple-reader and single-writer support. When no reader holds the lock, a writer can acquire it; when a writer holds the lock, no reader can acquire it. The measured lookup (read) time of iptables is 1~2 us, but the insertion or deletion of an entry in iptables takes about 100 us. When an entry is inserted or deleted, a new table is allocated by vmalloc, the old table is copied into it by memcpy, the modification is applied, and the old table is freed. This seems to be inefficient table management, but it is a good design considering the typical usage of iptables: in general, iptables entries are static (i.e., they do not change often), so copying the whole table is not a problem, and by creating a new table and deleting the old one, the memory allocated stays compact. However, this is not suitable for TCP switch operation. The TCP switch inserts and deletes entries dynamically, and this latency in the modification of iptables delays the forwarding of packets. When the garbage collector runs, it tries to delete inactive entries; because this deletion is slow, packet forwarding is delayed during this period. The fluctuation in forwarding time can be explained by this slow modification of iptables. This gives us the motivation to change iptables to a data structure suitable for the TCP switch.
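The cost model behind this conclusion can be illustrated in Python (this mimics the allocate-copy-free scheme conceptually; it is not kernel code, and the table contents are made up): with copy-on-update, every insert or delete pays for the whole table, so N dynamic updates cost O(N × table size), whereas an in-place structure such as a hash table pays O(1) per update.

```python
def copy_on_update_insert(table, key, value):
    """Mimic the vmalloc-and-memcpy scheme: rebuild the table per change."""
    new_table = dict(table)   # copy the entire old table
    new_table[key] = value    # then apply the single modification
    return new_table          # old table is discarded ("freed")

# Copy-on-update: each of the 100 inserts copies everything inserted so far.
table = {}
for i in range(100):
    table = copy_on_update_insert(table, i, f"rule-{i}")
print(len(table))             # 100 entries, built with ~100 full-table copies

# In-place alternative, suited to a switch that churns entries dynamically.
inplace = {}
for i in range(100):
    inplace[i] = f"rule-{i}"  # amortized O(1), no full copy
print(table == inplace)       # same contents, very different update cost
```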



Applying the Lessons of eXtreme Programming


This article describes the benefits available from adopting XP-style unit testing on a project, and then moves on to identify useful lessons that can be learned from other XP (eXtreme Programming) practices. It concludes by asking questions about the nature of process improvement in software development and how we can make our software serve society.

Applying JUnit

JUnit is a deceptively simple testing framework that can create a major shift in your personal development process and the enjoyment of programming. Prior to using JUnit, developers typically have resistance to making changes late in a project “because something might break.” With JUnit, the worry associated with breaking something goes away. Yes, it might still break, but the tests will detect it. Because every method has a set of tests, it is easy to see which methods have been broken by the changes, and hence to make the necessary fix. So changes can be made late in a project with confidence, since the tests provide a safety net.

Lesson: Automated tests pay off by improving developer confidence in their code.

In order to create XP style Unit Tests however, developers need to make sure that their classes and methods are well designed. This means that there is minimum coupling between the class being tested and the rest of the classes in the system. Since the test case subclass needs to be able to create an object to test it, the discipline of testing all methods forces developers to create classes with well-defined responsibilities.

Lesson: Requiring all methods to have unit tests forces developers to create better designs.

It is interesting to compare classes that have unit tests with those that do not. By following the discipline of unit tests, methods tend to be smaller but more numerous. The implementations tend to be simpler, especially when the XP practice of “Write the Unit Tests first” is followed. The big difference is that the really long methods full of nested and twisted conditional code don’t exist in the unit-tested code. That kind of code is impossible to write unit tests for, so it ends up being refactored into a better design.

Lesson: Design for testability is easier if you design and implement the tests first.

Adopting XP-style unit testing also drastically alters the minute-by-minute development process. We write a test, then compile and run (after adding just enough implementation to make it run) so that the test will fail. As has been noted, “This may seem funny – don’t we want the tests to pass? Yes, we do. But by seeing them fail first, we get some assurance that the test is valid.” Now that we have a failing test, we can implement the body of the method and run the test again. This time it will probably pass, so now it’s time to run all the other unit tests to see if the latest changes broke anything else. Now that we know it works, we can tidy up the code, refactor as needed, and possibly optimize this correct implementation. Once we have done this, we are ready to restart the cycle by writing the next unit test.

Lesson: Make it Run, Make it Right, Make it Fast (but only if you need to).
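The cycle above can be sketched in plain Java. The SimpleStack class is a hypothetical example of code under test, and plain assertions stand in for the JUnit library so the sketch runs standalone:

```java
import java.util.ArrayList;
import java.util.List;

class SimpleStack {
    private final List<Integer> items = new ArrayList<>();
    void push(int x) { items.add(x); }
    int pop() { return items.remove(items.size() - 1); }
    boolean isEmpty() { return items.isEmpty(); }
}

public class StackTest {
    // Step 1: write this test first. With only an empty SimpleStack stub
    // in place, it compiles, runs, and fails, proving the test is valid.
    static void testPushThenPop() {
        SimpleStack s = new SimpleStack();
        s.push(42);
        if (s.pop() != 42) throw new AssertionError("pop should return last push");
        if (!s.isEmpty()) throw new AssertionError("stack should be empty after pop");
    }

    public static void main(String[] args) {
        // Step 2: fill in the method bodies until the test passes,
        // then rerun every other test to catch regressions.
        testPushThenPop();
        // Step 3: with a green test in hand, refactor and optimize safely.
        System.out.println("all tests passed");
    }
}
```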

Early successes with JUnit encouraged me to experiment with other XP practices. Just having the unit tests in place made programming fun again, and if that practice was valuable, maybe the others were as well.


The Open Digital Library

Technology Platform Requirements

By its nature, digital collection development requires extensive use of technological resources. In the early days of digital library development, when collections were typically small and experimental, a wide variety of hardware and software was used. Today, the leading digital library developers are putting substantial collections online. Some of these collections include millions of digital objects, and collections are being planned that will require storage measured in petabytes, the equivalent of more than 50,000 desktop computers with 20-gigabyte hard drives. As digital libraries scale in size and functionality, it is critical for the underlying technology platform to deliver the required performance and reliability. Many digital libraries are considered "mission critical" to the overall institution. In addition, patrons expect high service levels, which means that downtime and poor response times are not tolerable. Moreover, because cost is a foremost concern, scalability and efficiency with a low total cost of ownership are also key requirements. This type of digital library implementation requires a scalable, enterprise-level technology solution with built-in reliability, availability, and serviceability (RAS) features.

Storage capacity also must be scalable to adapt to rapid growth in demand, and must be adapted to the mix of media types that may be stored in a digital library, such as:

• Text, which is relatively compact.

• Graphics, which can be data-intensive.

• Audio, which is highly dynamic.

• Video, which is highly dynamic and data-intensive.

Storage capacity should be expandable in economical increments and should not require re-engineering of the system design as requirements grow. An open systems architecture provides both a robust platform and the best selection of digital media management solutions and development tools. The inherent reliability and scalability of open platforms have made them the most popular choice of IT professionals for Internet computing. This computing model features an architecture oriented entirely around Internet protocols, and it stresses the role of Web sites in delivering a vast and diverse array of services that follow a utility model.

Evolution to Web Services

The digital library of the future will deliver "smart media services"; that is, Web services that can match "media content" to user "context" in a way that provides a customized, personalized experience. Media content is digital content that includes elements of interactivity. Context includes such information as the identity and location of the user.

Several key technologies must interact to allow Web services to work in this way. Extensible Markup Language (XML) and Standard Generalized Markup Language (SGML) are important standards influencing our ability to create broadly interoperable Web-based applications. SGML is an international standard for text markup systems; it is very large and complex, describing thousands of different document types in many fields of human activity. XML, itself defined as a subset of SGML, is a standard for describing other languages. XML allows the design of customized markup languages for limitless types of documents, providing a very flexible and simple way to write Web-based applications. This differs from HTML, which is a single, predefined markup language and can be considered one application of SGML. The primary standards powering Web services today are XML-based. These include:

• Simple Object Access Protocol (SOAP).

• Universal Description, Discovery and Integration (UDDI).

• Web Services Description Language (WSDL).

• Electronic Business XML (ebXML).

These standards are emerging as the basis for the new Web services model. While not all are fully defined standards, they are maturing quickly with broad industry support.
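The idea of a customized markup language can be sketched with a short Java program that parses a small XML record using the JDK's built-in DOM parser. The element names here are a hypothetical digital-library vocabulary, not part of any real standard:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class CustomMarkupDemo {
    public static void main(String[] args) throws Exception {
        // A made-up markup language for describing one digital-library
        // media object; XML lets each library define its own vocabulary.
        String record =
            "<mediaObject type=\"audio\">" +
            "  <title>Field Recording 17</title>" +
            "  <sizeMB>48</sizeMB>" +
            "</mediaObject>";

        // Parse the record with the standard JAXP DOM parser.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(
                        record.getBytes(StandardCharsets.UTF_8)));

        String type  = doc.getDocumentElement().getAttribute("type");
        String title = doc.getElementsByTagName("title").item(0).getTextContent();
        System.out.println(type + ": " + title);
    }
}
```

Because any XML-aware tool can read such a record, two institutions that agree on a vocabulary can exchange collection data without sharing any application code, which is the interoperability the Web services standards above build on.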

