Posts Tagged ‘Accounting’
One aspect that has been overlooked in mobile research is link layer access. Most mobility solutions assume that link layer configuration will be automatic and base their trigger mechanisms on the presence of network layer connectivity. We believe there is a need for a link layer access framework that standardizes the operating system interface, creating a unified API to report the presence of access points in the vicinity of the mobile and to perform AAA (Authentication, Authorization and Accounting). A multiplexing transport protocol has to be aware of new link layers that become available, and of link layers that can no longer be used, so that it can add and remove these interfaces from protocol processing. To this end, a link-layer aware transport protocol needs the following support:
Link layer management: a management entity can use direct information (by probing or listening to the link layer for the presence of access points) or indirect information (by using an existing connection to query the infrastructure for the existence of additional access points) to find new access points. This is called link layer discovery. Management also encompasses measuring signal strength, and possibly using location hints, to rule that a link layer is no longer usable. This is called link layer disconnection.
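The discovery and disconnection roles described above can be sketched as a small management entity. This is a minimal illustration, not any standard API: the class, method names, and the signal-strength threshold are all invented for the example, and real direct information would come from the driver or a wireless subsystem rather than plain method calls.

```python
# Hypothetical sketch of a link layer management entity: direct discovery
# (probe/beacon reports) and disconnection ruled by signal strength.
# All names and thresholds are illustrative, not from any real interface.

SIGNAL_THRESHOLD = -85  # dBm below which a link is ruled unusable (illustrative)

class LinkManager:
    def __init__(self):
        self.active_links = {}  # interface name -> last known signal (dBm)

    def on_probe_result(self, iface, signal_dbm):
        """Direct information: a probe or beacon reported an access point."""
        if iface not in self.active_links:
            print(f"discovered: {iface}")
        self.active_links[iface] = signal_dbm

    def check_disconnections(self):
        """Rule out links whose signal dropped below the threshold."""
        for iface, signal in list(self.active_links.items()):
            if signal < SIGNAL_THRESHOLD:
                del self.active_links[iface]
                print(f"disconnected: {iface}")

mgr = LinkManager()
mgr.on_probe_result("wlan0", -60)
mgr.on_probe_result("wlan1", -90)
mgr.check_disconnections()          # wlan1 is ruled unusable
print(sorted(mgr.active_links))     # only wlan0 remains
```

Indirect information would feed the same `on_probe_result` entry point, just sourced from an infrastructure query over an existing connection instead of local probing.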
Network layer management: before using a link layer, the mobile has to acquire an IP address for that interface. The most common protocol for acquiring a network address on broadcast media is DHCP (Dynamic Host Configuration Protocol). For point-to-point links, such as infrared, acquiring a network address also entails creating a point-to-point link. In this case, the link will only be created on demand, as creating the link precludes other mobiles from using the same access point.
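The broadcast versus point-to-point distinction can be made concrete with a short sketch. Both helpers here are hypothetical placeholders: a real implementation would run an actual DHCP exchange and bring up a real link, and the address shown is a documentation-range example.

```python
# Illustrative only: placeholder helpers standing in for real protocol steps.

def dhcp_request(iface):
    # Placeholder for a real DHCP exchange (DISCOVER/OFFER/REQUEST/ACK).
    return "192.0.2.10"  # example address from the documentation range

def create_ptp_link(iface):
    # Placeholder: bring up the point-to-point link (e.g. infrared) on demand.
    # Holding the link open would preclude other mobiles from this access
    # point, so it is created only when an address is actually needed.
    print(f"point-to-point link created on {iface}")

def acquire_address(iface, medium):
    """Acquire a network-layer address for a newly discovered link."""
    if medium == "point-to-point":
        create_ptp_link(iface)   # on demand only, for the reason above
    return dhcp_request(iface)

print(acquire_address("wlan0", "broadcast"))
```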
Transport layer notification: the transport layer has to be notified of new access points (in the form of a new IP address it can use) and of the loss of an active access point (an IP address that can no longer be used). The transport protocols can also notify a management entity about the available bandwidth of each link. Because this bandwidth is closely tied to the available bandwidth of the last hop, the management entity can enforce usage policies for cooperating protocols by controlling the maximum bandwidth each protocol instance may use.
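The notification interface between the management entity and a transport protocol instance might look like the following sketch. Everything here is an assumption for illustration: the class name, the address-up/down events, and the 1 Mb/s policy cap are invented, not part of any defined protocol.

```python
# Hypothetical notification interface between the management entity and a
# multiplexing transport protocol instance. Names and the policy cap are
# illustrative assumptions.

class TransportNotifier:
    def __init__(self):
        self.usable_addrs = set()   # IP addresses the transport may use
        self.bandwidth_caps = {}    # addr -> max bytes/s allowed by policy

    def address_up(self, addr):
        """A new access point became available under this address."""
        self.usable_addrs.add(addr)

    def address_down(self, addr):
        """An active access point was lost; stop using this address."""
        self.usable_addrs.discard(addr)
        self.bandwidth_caps.pop(addr, None)

    def report_bandwidth(self, addr, measured_bps):
        """Transport reports measured last-hop bandwidth; the management
        entity answers with a cap that enforces the usage policy."""
        cap = min(measured_bps, 1_000_000)  # illustrative 1 Mb/s policy
        self.bandwidth_caps[addr] = cap
        return cap
```

A transport instance would call `report_bandwidth` periodically and shape its sending rate to the returned cap, which is how the management entity arbitrates the shared last hop among cooperating protocols.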
As consumers rely on Cloud providers to supply all their computing needs, they will require specific QoS to be maintained by their providers in order to meet their objectives and sustain their operations. Cloud providers will need to consider and meet the different QoS parameters of each individual consumer as negotiated in specific SLAs. To achieve this, Cloud providers can no longer continue to deploy traditional system-centric resource management architectures that provide no incentive for them to share their resources and that regard all service requests as equally important. Instead, market-oriented resource management is necessary to regulate the supply and demand of Cloud resources at market equilibrium, provide feedback in terms of economic incentives for both Cloud consumers and providers, and promote QoS-based resource allocation mechanisms that differentiate service requests based on their utility.
Figure 1 shows the high-level architecture for supporting market-oriented resource allocation in Data Centers and Clouds. There are four main entities involved:
Figure 1: High-level market-oriented cloud architecture.
- Users/Brokers: Users or brokers acting on their behalf submit service requests from anywhere in the world to the Data Center and Cloud to be processed.
- SLA Resource Allocator: The SLA Resource Allocator acts as the interface between the Data Center/Cloud service provider and external users/brokers. It requires the interaction of the following mechanisms to support SLA-oriented resource management:
- Service Request Examiner and Admission Control: When a service request is first submitted, the Service Request Examiner and Admission Control mechanism interprets the submitted request for QoS requirements before determining whether to accept or reject it. It thus ensures that resources are not overloaded to the point where many service requests cannot be fulfilled successfully due to the limited resources available. It also needs the latest status information regarding resource availability (from the VM Monitor mechanism) and workload processing (from the Service Request Monitor mechanism) in order to make resource allocation decisions effectively. It then assigns requests to VMs and determines resource entitlements for the allocated VMs.
- Pricing: The Pricing mechanism decides how service requests are charged. For instance, requests can be charged based on submission time (peak/off-peak), pricing rates (fixed/changing), or availability of resources (supply/demand). Pricing serves as a basis for managing the supply and demand of computing resources within the Data Center and facilitates prioritizing resource allocations effectively.
- Accounting: The Accounting mechanism maintains the actual usage of resources by requests so that the final cost can be computed and charged to the users. In addition, the maintained historical usage information can be utilized by the Service Request Examiner and Admission Control mechanism to improve resource allocation decisions.
- VM Monitor: The VM Monitor mechanism keeps track of the availability of VMs and their resource entitlements.
- Dispatcher: The Dispatcher mechanism starts the execution of accepted service requests on allocated VMs.
- Service Request Monitor: The Service Request Monitor mechanism keeps track of the execution progress of service requests.
- VMs: Multiple VMs can be started and stopped dynamically on a single physical machine to meet accepted service requests, providing maximum flexibility to configure various partitions of resources on the same physical machine to the specific requirements of different service requests. In addition, multiple VMs can concurrently run applications based on different operating system environments on a single physical machine, since the VMs are completely isolated from one another.
- Physical Machines: The Data Center comprises multiple computing servers that provide resources to meet service demands.
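The interplay of the allocator mechanisms listed above can be sketched as a single control flow: examine and admit a request only if VM capacity allows, price it by submission time, dispatch it, and account for its usage. This is a toy illustration under invented assumptions; the class, the rates, and the CPU-hour model are not from the architecture itself.

```python
# Illustrative sketch of the SLA Resource Allocator control flow: admission
# control, pricing (peak/off-peak), dispatch, and accounting. All names,
# rates, and the CPU-hour model are invented for the example.

PEAK_RATE, OFF_PEAK_RATE = 0.50, 0.20   # $ per CPU-hour, illustrative

class Allocator:
    def __init__(self, vm_capacity):
        self.free_cpus = vm_capacity    # VM Monitor: available entitlements
        self.ledger = []                # Accounting: historical usage records

    def submit(self, request_id, cpus, hours, peak):
        # Service Request Examiner and Admission Control: reject rather
        # than overload the limited resources available.
        if cpus > self.free_cpus:
            return None
        self.free_cpus -= cpus          # Dispatcher: start on allocated VMs
        rate = PEAK_RATE if peak else OFF_PEAK_RATE   # Pricing
        cost = cpus * hours * rate
        self.ledger.append((request_id, cost))        # Accounting
        return cost

alloc = Allocator(vm_capacity=8)
print(alloc.submit("r1", cpus=4, hours=2, peak=True))    # 4.0
print(alloc.submit("r2", cpus=8, hours=1, peak=False))   # None: rejected
```

In the real architecture the admission decision would also consult the Service Request Monitor's progress data, and the ledger would feed back into future allocation decisions, as described above.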
In the case of a Cloud as a commercial offering enabling crucial business operations of companies, there are critical QoS parameters to consider in a service request, such as time, cost, reliability, and trust/security. In particular, QoS requirements cannot be static and need to be dynamically updated over time due to continuing changes in business operations and operating environments. In short, greater importance should be placed on customers, since they pay for accessing services in Clouds. In addition, the state-of-the-art in Cloud computing has no or limited support for dynamic negotiation of SLAs between participants and for mechanisms that automatically allocate resources to multiple competing requests. Recently, we have developed negotiation mechanisms based on the alternate offers protocol for establishing SLAs. These have high potential for adoption in Cloud computing systems built using VMs.
Commercial offerings of market-oriented Clouds must be able to:
- support customer-driven service management based on customer profiles and requested service requirements,
- define computational risk management tactics to identify, assess, and manage risks involved in the execution of applications with regards to service requirements and customer needs,
- derive appropriate market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation,
- incorporate autonomic resource management models that effectively self-manage changes in service requirements to satisfy both new service demands and existing service obligations, and
- leverage VM technology to dynamically assign resource shares according to service requirements.