UML for Modeling Complex Real-Time Systems

Embedded real-time software systems encountered in applications such as telecommunications, aerospace, and defense tend to be large and extremely complex. It is crucial in such systems that the software is designed with a sound architecture. A good architecture not only simplifies construction of the initial system but, even more importantly, readily accommodates the changes forced by a steady stream of new requirements. In this paper, we describe a set of constructs that facilitate the design of software architectures in this domain. The constructs, derived from field-proven concepts originally defined in the ROOM modeling language, are specified using the Unified Modeling Language (UML) standard.

Modelling Structure

The structure of a system identifies the entities that are to be modeled and the relationships between them (e.g., communication relationships, containment relationships). UML provides two fundamental, complementary diagram types for capturing the logical structure of systems: class diagrams and collaboration diagrams. Class diagrams capture universal relationships among classes: those relationships that exist among instances of the classes in all contexts. Collaboration diagrams capture relationships that exist only within a particular context: a pattern of usage for a particular purpose that is not inherent in the class itself. Collaboration diagrams therefore include a distinction between the usage of different instances of the same class, a distinction captured in the concept of role. In the modeling approach described here, there is a strong emphasis on using UML collaboration diagrams to explicitly represent the interconnections between architectural entities. Typically, the complete specification of the structure of a complex real-time system is obtained through a combination of class and collaboration diagrams.

Specifically, three principal constructs are used for modeling structure: capsules, ports, and connectors.


Transport Layer Security Protocol

TLS (Transport Layer Security) was released in response to the Internet community’s demands for a standardized protocol. The IETF provided a venue for the new protocol to be openly discussed, and encouraged developers to provide their input to the protocol.

The TLS protocol was released in January 1999 to create a standard for private communications. The protocol “allows client/server applications to communicate in a way that is designed to prevent eavesdropping, tampering or message forgery.” According to the protocol’s creators, the goals of the TLS protocol are cryptographic security, interoperability, extensibility, and relative efficiency. These goals are achieved through implementation of the TLS protocol on two levels: the TLS Record protocol and the TLS Handshake protocol.

TLS Record Protocol

The TLS Record protocol provides a private, reliable connection between the client and the server. Although the Record protocol can be used without encryption, it normally uses symmetric cryptographic keys to ensure a private connection. The integrity of the connection is protected by a Message Authentication Code (MAC) computed with secure hash functions.
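As a rough sketch of that record-level integrity mechanism, a MAC is computed over each record with a shared symmetric key before encryption. The Python sketch below is a simplified illustration of a TLS 1.0-style record MAC; the key and field values are placeholders, not taken from the text:

    import hmac
    import hashlib

    def record_mac(mac_secret: bytes, seq_num: bytes, header: bytes, fragment: bytes) -> bytes:
        # Simplified TLS 1.0-style record MAC: HMAC over the implicit
        # sequence number, the record header, and the plaintext fragment.
        return hmac.new(mac_secret, seq_num + header + fragment, hashlib.sha1).digest()

    # Hypothetical usage: type 0x17 (application data), version 3.1, length 5.
    tag = record_mac(b"negotiated-mac-secret", b"\x00" * 8,
                     b"\x17\x03\x01\x00\x05", b"hello")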

TLS Handshake Protocol

The TLS Handshake protocol allows authenticated communication to commence between the server and client. It lets the client and server speak the same language, agreeing upon an encryption algorithm and encryption keys before the selected application protocol begins to send data. Using the same handshake procedure as SSL, TLS provides for authentication of the server and, optionally, the client.
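For a concrete feel for the handshake, the sketch below uses Python's standard ssl module, which negotiates the protocol version and cipher suite and authenticates the server during wrap_socket; example.com is a placeholder host:

    import socket
    import ssl

    context = ssl.create_default_context()          # enables certificate verification
    with socket.create_connection(("example.com", 443)) as sock:
        # wrap_socket performs the handshake: algorithm negotiation,
        # server certificate authentication, and key agreement.
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version(), tls.cipher())      # negotiated version and suite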


Common Problems Associated with Spam Traps and Their Prevention

Spam traps are email addresses activated for the sole purpose of catching illegitimate email and identifying senders with poor data quality practices. Internet Service Providers (ISPs) and anti-spam organizations create and manage spam trap networks and use the resulting trap hits to measure and penalize senders' mailing practices.

Common Problems Associated with Spam Traps

  1. Return Path studies have shown that a single spam trap can reduce your Sender Score by more than 20 points and can decrease your inbox placement rate to 81% or lower.
  2. ISPs will lower your sending reputation for too many spam trap hits.
  3. Mailing IPs and/or domains may become blacklisted.
  4. Membership in the Return Path Certification Program may be suspended for exceeding the acceptable thresholds defined within the compliance standards.

Preventing Spam Traps

  1. Reject requests for malformed addresses (e.g., me@hotmai.lcom); see the validation sketch after this list.
  2. Reject abuse@ and postmaster@ addresses.
  3. Reject role accounts (e.g., sales@company.com, customerservice@company.com).
  4. Send Welcome/Confirmation email messages and use a confirmed or double opt-in process to validate newly acquired email addresses before adding them to your file. It is best practice to use a separate IP space and monitor spam trap rates.
  5. Having multiple pages or a CAPTCHA during the subscription process aids in preventing list poisoning.
  6. Provide a change of email address option in all emails, in a preference center and at the point of unsubscribe.
  7. Do not purchase, rent or lease email addresses from third parties or perform email appends on your files.
  8. Isolate and monitor “Import Address Book” and “Forward to a Friend” mail streams on separate IPs and sub-domains to identify spam traps and protect your other email programs. These types of features commonly collect old email addresses that have likely been converted into spam traps.
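As referenced in item 1, the Python sketch below illustrates the rejection checks at subscription time. The regex, the typo-domain list, and the role-account list are illustrative assumptions, not a standard:

    import re

    ADDRESS_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$")
    TYPO_DOMAINS = {"hotmai.lcom", "gmial.com"}                  # assumed examples
    ROLE_ACCOUNTS = {"abuse", "postmaster", "sales", "customerservice"}

    def accept_address(address: str) -> bool:
        # Reject structurally malformed addresses.
        if not ADDRESS_RE.match(address):
            return False
        local, _, domain = address.lower().partition("@")
        # Reject plausible-looking typo domains (e.g. me@hotmai.lcom)
        # and role accounts.
        return domain not in TYPO_DOMAINS and local not in ROLE_ACCOUNTS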


Phishing Email

Phishing emails are crafted to look as if they’ve been sent from a legitimate organization. These emails attempt to fool you into visiting a bogus web site to either download malware (viruses and other software intended to compromise your computer) or reveal sensitive personal information. The perpetrators of phishing scams carefully craft the bogus web site to look like the real thing.

For instance, an email can be crafted to look like it is from a major bank. It might have an alarming subject line, such as “Problem with Your Account.” The body of the message will claim there is a problem with your bank account and that, in order to validate your account, you must click a link included in the email and complete an online form.

The email is sent as spam to tens of thousands of recipients. Some, perhaps many, recipients are customers of the institution. Believing the email to be real, some of these recipients will click the link in the email without noticing that it takes them to a web address that only resembles the address of the real institution. If the email is sent and viewed as HTML, the visible link may be the URL of the institution, but the actual link information coded in the HTML will take the user to the bogus site. For example:

visible link: http://www.yourbank.com/accounts/

actual link to bogus site: http://itcare.co.kr/data/yourbank/index.html
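In HTML, the deception can be a single anchor element whose visible text is the legitimate URL (a hypothetical example built from the two addresses above):

    <a href="http://itcare.co.kr/data/yourbank/index.html">http://www.yourbank.com/accounts/</a>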

The bogus site will look astonishingly like the real thing, and will present an online form asking for information like your account number, your address, your online banking username and password—all the information an attacker needs to steal your identity and raid your bank account.

What to Look For

Bogus communications purporting to be from banks, credit card companies, and other financial institutions have been widely employed in phishing scams, as have emails from online auction and retail services. Carefully examine any email from your banks and other financial institutions. Most have instituted policies against asking for personal or account information in emails, so you should regard any email making such a request with extreme skepticism.

Phishing emails have also been disguised in a number of other ways. Some of the most common phishing emails include the following:

  1. fake communications from online payment and auction services, or from internet service providers – These emails claim there is a “problem” with your account and request that you access a (bogus) web page to provide personal and account information.
  2. fake accusation of violating the Patriot Act – This email purports to be from the Federal Deposit Insurance Corporation (FDIC). It says that the FDIC is refusing to insure your account because of “suspected violations of the USA Patriot Act.” It requests that you provide information through an online form to “verify your identity.” It’s really an attempt to steal your identity.
  3. fake communications from an IT Department – These emails attempt to ferret out passwords and other information phishers can use to penetrate your organization’s networks and computers.
  4. low-tech versions of any of the above asking you to fax back information on a printed form you can download from a (bogus) web site.


Integrating metadata into the data model

Mathematical models define infinite-precision real numbers and functions with infinite domains, whereas computer data objects contain finite amounts of information and must therefore be approximations to the mathematical objects that they represent. Several forms of scientific metadata serve to specify how computer data objects approximate mathematical objects, and these are integrated into our data model. For example, missing data codes (used for fallible sensor systems) may be viewed as approximations that carry no information. Any value or sub-object in a VIS-AD data object may be set to the missing value. Scientists often use arrays for finite samplings of continuous functions; for example, satellite image arrays are finite samplings of continuous radiance fields. Sampling metadata, such as those that assign Earth locations to pixels and those that assign real radiances to coded (e.g., 8-bit) pixel values, quantify how arrays approximate functions and are integrated with VIS-AD array data objects.

The integration of metadata into our data model has practical consequences for the semantics of computation and display. For example, we define a data type goes_image as an array of ir radiances indexed by lat_lon values. Arrays of this data type are indexed by pairs of real numbers rather than by integers. If goes_west is a data object of type goes_image and loc is a data object of type lat_lon then the expression goes_west[loc] is evaluated by picking the sample of goes_west nearest to loc. If loc falls outside the region of the Earth covered by goes_west pixels then goes_west[loc] evaluates to the missing value. If goes_east is another data object of type goes_image, generated by a satellite with a different Earth perspective, then the expression goes_west - goes_east is evaluated by resampling goes_east to the samples of goes_west (i.e., by warping the goes_east image) before subtracting radiances. In Earth regions where the goes_west and goes_east images do not overlap, their difference is set to missing values. Thus metadata about map projections and missing data contribute to the semantics of computations.
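The evaluation rule for goes_west[loc] can be sketched as a nearest-sample lookup that returns a missing value outside the sampled region. The Python sketch below assumes a regular latitude/longitude sampling; the real VIS-AD sampling metadata are more general:

    import numpy as np

    MISSING = np.nan   # stands in for VIS-AD's missing value

    def sample(image, lats, lons, loc):
        # Pick the sample nearest to loc = (lat, lon); return MISSING when
        # loc falls outside the region covered by the image's pixels.
        lat, lon = loc
        if not (lats.min() <= lat <= lats.max() and lons.min() <= lon <= lons.max()):
            return MISSING
        i = int(np.abs(lats - lat).argmin())
        j = int(np.abs(lons - lon).argmin())
        return image[i, j]

Under the same rule, goes_west - goes_east would be computed by evaluating goes_east at each of goes_west's sample locations before subtracting.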

Metadata similarly contribute to display semantics. If both goes_east and goes_west are selected for display, the system uses the sampling of their indices to co-register these two images in a common Earth frame of reference. The samplings of 2-D and 3-D array indices need not be Cartesian. For example, the sampling of lat_lon may define virtually any map projection. Thus data may be displayed in non-Cartesian coordinate systems.


Content Adaptive Stereo Video Coding

There are different theories about the effects of unequal bit allocation between the left and right video sequences, such as the fusion theory and the suppression theory. According to the fusion theory, the stereo bitrate (hence distortion) needs to be equally allocated between the views for the best human perception. In contrast, according to the suppression theory, the highest-quality view in a stereo video determines the overall perceptual quality. Therefore, the target (right) sequence can be compressed as much as possible to save bits for the reference (left) sequence, so that the overall perceived distortion is lowest. The proposed content-adaptive stereo encoder (CA-SC) is motivated by the suppression theory and reduces the frame (temporal) rate and spatial resolution of the target (right) sequence adaptively according to its content-based features.

Figure 1.0: Stereoscopic encoder

The principle behind content adaptive video coding is to parse video into temporal segments. Each temporal segment can be encoded at different spatial, temporal and SNR resolution (hence at a different target bitrate) depending on its low and/or high-level content-based features. Even though this approach has been used for monoscopic video encoding, there are no such studies in the literature for content-adaptive stereoscopic coding. The proposed CA-SC codec is an extension of the stereo codec (SC), which is based on AVC/H.264. We note that CA-SC can also be developed as an extension of the recently standardized MVC codec. The codec structure is shown in Figure 1.0.

In stereoscopic coding, in the compatible mode, any standard H.264/AVC decoder can decode the sequence as a monoscopic sequence, since the left channel is coded independently of the right channel. In order to improve the coding efficiency without significant perceptual quality loss, we added three modes to the encoder for down-sampling the right view only: the spatial, temporal, and content-adaptive scaling modes.
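One plausible reading of the per-segment mode decision is sketched below; the motion-activity feature and the thresholds are assumptions for illustration, not the published CA-SC decision rule:

    HIGH_MOTION, LOW_MOTION = 0.7, 0.3      # assumed thresholds

    def choose_right_view_scaling(motion_activity: float) -> str:
        # High motion: fine spatial detail is less perceptible, so reduce resolution.
        if motion_activity > HIGH_MOTION:
            return "spatial"
        # Low motion: dropped frames are less perceptible, so reduce frame rate.
        if motion_activity < LOW_MOTION:
            return "temporal"
        # Otherwise let the content-adaptive mode trade off both.
        return "content_adaptive"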


Can a fingerprint image be reconstructed from the template?

Given that minutiae information is personal, sufficient to identify an individual, and interoperable among different databases, this question becomes less important. However, since many proponents of biometric systems claim that a fingerprint image cannot be reconstructed from a minutiae template, we will address this issue.

Until recently, the view of non-reconstruction was dominant in the biometrics community. However, over the last few years, several scientific works have been published showing that a fingerprint can, in fact, be reconstructed from a minutiae template. The most advanced work was published in 2007 by Cappelli et al. The authors analyzed templates compatible with the ISO/IEC 19794-2 minutiae standard. In one test, they used basic minutiae information only (i.e., x positions, y positions, and directions). In another test, they also used optional information: minutiae types, Core and Delta data, and proprietary data (the ridge orientation field in this case). In all the tests, the authors were able to reconstruct a fingerprint image from the minutiae template. Very often, the reconstructed image bore a striking resemblance to the original image. Even though the reconstruction was only approximate, the reconstructed image was sufficient to obtain a positive match in more than 90% of cases for most minutiae matchers.

The potential repercussions of this work for the security and privacy of fingerprint minutiae systems are as follows:

  1. The fingerprint image reconstructed from the minutiae template, known as a “masquerade” image since it is not an exact copy of the original image, will likely fool the system if it is submitted.
  2. A masquerade image can be submitted to the system by injecting it in a digital form after the fingerprint sensor.
  3. A malicious agent could also create a fake fingerprint and physically submit it to the sensor. The techniques of creating a fake fingerprint are inexpensive and well-known from the literature.
  4. The ability to create a masquerade image will increase the level of interoperability for the minutiae template. The masquerade image can be submitted to any other fingerprint system that requires an image (rather than a minutiae template) as an input. No format conversion of the minutiae template would be required. Moreover, the minutiae template can be made compatible even with a non-minutiae fingerprint system (these systems are rare, however).


Fiber deployment by incumbents will make additional broadband overbuilds less likely

Fiber optic cable deployment by incumbent telephone and cable companies will have a significant impact on the prospects for last-mile broadband competition. Once a customer is served by fiber cable, all non-mobile communications services could be provided over the single fiber pathway: voice, super-high-speed data, and HDTV-quality video. Once fiber is put in place by one provider, the business case for additional high-speed last-mile facilities weakens. This fact is readily discernible from the efforts of incumbents to block fiber-to-the-home projects pursued by municipalities: both incumbent telephone companies and incumbent cable operators have taken steps to derail municipal fiber deployments. Thus, fiber optic cable, either connected directly to the household or terminated near the home (using existing metallic cable distribution to bridge the last few hundred feet), will provide a virtually unlimited supply of bandwidth to any end-user.
 
Once fiber is deployed, its vast capacity will undermine the attractiveness of other technologies which are not capable of delivering the extremely high bandwidth (e.g., 100 Mbps) which fiber is capable of delivering to end users. It is simply not reasonable to believe that capital markets will support numerous last-mile overbuilds, using fiber optics, wireless, or broadband over power line technology, especially if incumbent telephone and cable companies are well on their way to deploying fiber to, or close to, the home. Alternative technologies have deployment or operational problems. For example, broadband over power line (BPL) technology, which has the potential to share existing electric company power distribution networks, is currently in the trial phase, but problems have emerged with this technology, especially due to its generation of external interference which affects radio transmissions of both public safety agencies and ham radio operators. The generation of radio interference has been an unresolved issue in several BPL trials, and led to the termination of at least one trial. BPL may offer some promise as an alternative last-mile facility if the interference problems can be overcome. However, expected transmission speeds from BPL (2 Mbps to 6 Mbps) are much lower than those available from fiber optics. Furthermore, BPL will face a market where incumbents have already gained first-mover advantage by deploying fiber. As was recently noted by one analyst: “By the time it (BPL) really arrives in the market, terrestrial broadband will be almost fully saturated”.
 
Fixed wireless services, such as WiMax service, may be deployed with lower levels of investment and sunk costs than fiber, but they suffer from other limitations, including the requirement that high-frequency radio waves be utilized to provide the service. Higher-frequency radio waves are more likely to require a direct line of sight between points of transmission. Constructing line-of-sight wireless networks may be useful for network transport, but such networks are much more costly to install as last-mile facilities. The very high frequencies in which WiMax operates, ranging between 2 GHz and 11 GHz for the non-line-of-sight service, and up to 66 GHz for the highest-speed line-of-sight transmission, indicate that the spectrum is not optimal for last-mile facilities. Finally, it is notable that due to the pending merger of AT&T and BellSouth, the resulting company will control a significant number of WiMax licenses. Regulators may require the divestiture of these licenses as a merger condition; if they do not, however, it is difficult to imagine that the licenses will be used by the merged company to compete against its own fiber-based broadband offering.


Booleanization of XPath Scalar Queries

We’ll now describe how a query whose return type is string or number can be replaced with a sequence of XPath queries whose return type is Boolean. Let us assume that Q is a numeric XPath query. Let us further assume that we handle 32-bit signed integers (which is sufficient in a 32-bit architecture). We first extract the sign bit of Q:

(Q>=0)

A true value indicates that Q is positive (or zero); a false value indicates that Q is negative. We then use -Q (instead of Q) if it is negative, and proceed. Assuming that Q is positive, we extract its 31 magnitude bits one at a time. Suppose we already know the most significant N bits (of the 31). Let K be the number formed by the N known high bits, with the (N+1)th bit set to 1 and the remaining 30-N bits set to 0. Then the query

((Q-K)>=0)

yields true if the (N+1)th bit (from the left) is 1, and false if that bit is 0.

Thus, we can reconstruct a positive Q with 31 Boolean queries. We start with N=0 and iteratively extract the next bit until we get to N=30, inclusive.
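A minimal Python sketch of this loop, assuming ask(expr) submits one Boolean XPath query (with Q standing for the numeric query) and returns its truth value:

    def extract_number(ask) -> int:
        # Sign bit first: one query.
        negative = not ask("(Q>=0)")
        q = "(-Q)" if negative else "Q"       # work with -Q when Q is negative
        value = 0
        for n in range(31):                   # 31 magnitude bits, MSB first
            k = value | (1 << (30 - n))       # known high bits, next bit set to 1
            if ask("((%s-%d)>=0)" % (q, k)):  # true -> the (N+1)th bit is 1
                value = k
        return -value if negative else value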

A string query S is first factored into bytes (or, more accurately, Unicode symbols), as follows:

First, we query the string length using the XPath string-length function, which is a numeric query:

(string-length(S))

Then we can iterate over the symbols, reducing the query into a series of one byte (symbol) queries:

(substring(S,N,1))

Now, a single byte/symbol query B is in turn reduced into Boolean queries as follows. Let us assume that the list of possible symbols (excluding the double quote mark) in the document is known (denote it by C), and that the list's length is L. L is hopefully small; e.g., if it is known that the XML document is comprised of printable ASCII characters, including CR, LF and HT but excluding double quotes, then L is 97. We index each possible symbol, starting from 0 and going to (L-1). Let K = ceiling(log2(L)); this is the number of bits required to determine a symbol. Now, we prepare K strings of length L, where the Nth string lists bit N of each symbol's index. Let us designate the Nth string as CN.

First, we ensure that the byte is not a double quote, using an expression of the form:

(B='"')

If the expression returns true, then the byte is simply the double quote mark. If the expression is false, we proceed as follows. The Nth bit is extracted with:

(number(translate(B,"C","CN"))=1)

If this yields true, then the Nth bit is 1; if it yields false, the Nth bit is 0. Note that we must exclude the double quote mark from C, or else the XPath syntax will be broken. Thus we are able to resolve string queries using only Boolean queries.
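Putting the pieces together for one symbol, a minimal Python sketch, assuming ask(expr) evaluates a Boolean XPath query and symbols is the assumed list of possible characters (excluding the double quote):

    import math

    def extract_symbol(ask, b_expr: str, symbols: list) -> str:
        # Special-case the double quote, which cannot appear inside C.
        if ask("(%s='\"')" % b_expr):
            return '"'
        L = len(symbols)
        K = math.ceil(math.log2(L))           # bits needed per symbol
        c = "".join(symbols)
        index = 0
        for n in range(K):
            # Nth bit-plane CN: character i holds bit n of symbol i's index.
            cn = "".join(str((i >> (K - 1 - n)) & 1) for i in range(L))
            bit = ask('(number(translate(%s,"%s","%s"))=1)' % (b_expr, c, cn))
            index = (index << 1) | int(bit)
        return symbols[index]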


System Topology Enumeration Using CPUID Extended Topology Leaf

The algorithm of system topology enumeration can be summarized as three phases of operation:

  1. Derive the “mask width” constants that will be used to extract each Sub ID.
  2. Gather the unique APIC IDs of each logical processor in the system, and extract/decompose each APIC ID into three sets of Sub IDs.
  3. Analyze the relationships of the hierarchical Sub IDs to establish mapping tables between the OS's thread management services and the three hierarchical levels of the processor topology.

Table 1 Modular Structure of Deriving System Topology Enumeration Information

Table 1 shows an example of the basic structure of the three phases of system-wide topology enumeration as applied to processor topology and cache topology. Figure 1 outlines the procedure of querying CPUID leaf 11 for the x2APIC ID and extracting the Sub IDs corresponding to the “SMT”, “Core”, and “physical package” levels of the hierarchy.

Figure 1 Procedures to Extract Sub IDs from the x2APIC ID of Each Logical Processor
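The Sub ID extraction in Figure 1 amounts to simple shift-and-mask arithmetic once the mask widths are known. A Python sketch, assuming smt_shift and core_shift are the shift values reported in EAX[4:0] of CPUID leaf 11, sub-leaves 0 (SMT level) and 1 (Core level):

    def decompose_x2apic_id(x2apic_id: int, smt_shift: int, core_shift: int):
        # SMT_ID: the low smt_shift bits of the x2APIC ID.
        smt_id = x2apic_id & ((1 << smt_shift) - 1)
        # Core_ID: the bits between smt_shift and core_shift.
        core_id = (x2apic_id >> smt_shift) & ((1 << (core_shift - smt_shift)) - 1)
        # Package_ID: the remaining high bits.
        pkg_id = x2apic_id >> core_shift
        return smt_id, core_id, pkg_id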

System topology enumeration at the application level using CPUID involves executing the CPUID instruction on each logical processor in the system. This implies context switching using services provided by an OS. On-demand context switching by user code generally relies on a thread affinity management API provided by the OS. The capabilities and limitations of the thread affinity APIs vary across OSes. For example, in some OSes the thread affinity API is limited to 32 or 64 logical processors. It is expected that enhancements to thread affinity APIs to manage larger numbers of logical processors will be available in future versions.
