Monday, February 11, 2008

Pictures

Unfortunately, I was not able to copy the pictures that appear in the papers here. Please follow the links on the left side of the blog to find the related pictures...

Cancer Nanotechnology

Abstract

At present there is a wide variety of technologies being used to analyze biological cells, diagnose diseases, and develop methods of treatment. One such technology is nanotechnology.
A nanometer is a billionth of a meter. It's difficult to imagine anything so small, but think of something only 1/80,000 the width of a human hair. Ten hydrogen atoms could be laid side-by-side in a single nanometer. Nanotechnology is the creation of useful materials, devices, and systems through the manipulation of matter on this minuscule scale. The emerging field of nanotechnology involves scientists from many different disciplines, including physicists, chemists, engineers, and biologists.
“Nanotechnology will change the very foundations of cancer diagnosis, treatment, and prevention.”
Nanoscale devices used for the treatment of cancer are based on the ongoing study of cancer cells and of nanotechnology. Nanoscale devices smaller than 50 nanometers can easily enter most cells, while those smaller than 20 nanometers can move out of blood vessels as they circulate through the body.
Because of their small size, nanoscale devices can readily interact with biomolecules on both the surface of cells and inside of cells. By gaining access to so many areas of the body, they have the potential to detect disease and deliver treatment in ways unimagined before now. Since biological processes that lead to cancer occur at the nanoscale at and inside cells, nanotechnology offers a wealth of tools with new and innovative ways to diagnose and treat cancer.
In our paper we design a device containing sensors, transceivers, motors, and a processor, all made of biodegradable compounds. With this approach, healthy cells are no longer destroyed by the harmful toxins and radiation generated through chemotherapy and radiation therapy.

INTRODUCTION:
The paper deals with the eradication of cancer cells: it provides an efficient method of destroying the cancer so that healthy cells are not affected in any manner. The approach also aims to ensure that the patient is not affected by cancer again. RF signals are used precisely so that normal cells are spared.

NANOTECHNOLOGY IN THIS CONTEXT

Nanotechnology refers to the interactions of cellular and molecular components and engineered materials at the most elemental level of biology. This paper emphasizes the effective utilization of nanotechnology in the treatment of cancer.

WHAT IS CANCER?

Cancer cells differ from healthy cells in that they divide more rapidly. When cells divide at this accelerated rate, they form a mass of tissue called a tumor. The excess of cancerous cells causes many problems in the patient's body.
In general, the most common methods used for the cancer treatment are
- Chemotherapy, a treatment with powerful medicines
- Radiation therapy, a treatment given through external high-energy rays.

PROBLEM
Both of the treatments mentioned above are harmful: healthy cells are destroyed in the process. This leaves the patient very weak and unable to recover quickly from the treatment. It has been claimed that an individual with cancer can survive on chemotherapy for a maximum of about five years, and after that it is anybody's guess.



PROPOSED SOLUTION
Nanodevices can be programmed to destroy the affected cells and only those cells, eliminating the problem of destroying normally functioning cells that are essential to one's well-being. A treatment using nanotechnology can therefore return the patient to normal health.

"Noninvasive access to the interior of a living cell affords the opportunity for unprecedented gains on both clinical and basic research frontiers."

NANOTECHNOLOGY AND DIAGNOSTICS
Nanodevices can provide rapid and sensitive detection of cancer-related molecules by enabling scientists to detect molecular changes even when they occur only in a small percentage of cells.
CANTILEVERS
Nanoscale cantilevers - microscopic, flexible beams resembling a row of diving boards - are built using semiconductor lithographic techniques. These can be coated with molecules capable of binding specific substrates (DNA complementary to a specific gene sequence, for example). Such micron-sized devices, comprising many nanometer-sized cantilevers, can detect single molecules of DNA or protein.





As a cancer cell secretes its molecular products, the antibodies coated on the cantilever fingers selectively bind to these secreted proteins. These antibodies have been designed to pick up one or more different, specific molecular expressions from a cancer cell. The physical properties of the cantilevers change as a result of the binding event. Monitored in real time, this change can provide information not only about the presence or absence but also about the concentration of different molecular expressions. Nanoscale cantilevers can thus provide rapid and sensitive detection of cancer-related molecules.
Nanotechnology and Cancer Therapy
Nanoscale devices have the potential to radically change cancer therapy for the better and to dramatically increase the number of highly effective therapeutic agents. Nanoscale constructs, for example, should serve as customizable, targeted drug delivery vehicles capable of ferrying large doses of chemotherapeutic agents or therapeutic genes into malignant cells while sparing healthy cells, which would greatly reduce or eliminate the often unpalatable side effects that accompany many current cancer therapies.
Nanoparticles
In this example, nanoparticles are targeted to cancer cells for use in the molecular imaging of a malignant lesion. Large numbers of nanoparticles are safely injected into the body and preferentially bind to the cancer cell, defining the anatomical contour of the lesion and making it visible.
These nanoparticles give us the ability to see cells and molecules that we otherwise cannot detect through conventional imaging. The ability to pick up what happens in the cell - to monitor therapeutic intervention and to see when a cancer cell is mortally wounded or is actually activated - is critical to the successful diagnosis and treatment of the disease.
Nanoparticulate technology can prove to be very useful in cancer therapy, allowing for effective and targeted drug delivery by overcoming the many biological, biophysical and biomedical barriers that the body mounts against a standard intervention such as the administration of drugs or contrast agents.

WORKING PROCEDURE:
The initial step of identifying the cancer and its location can be done by scanning. Once the location has been identified, the task is to position the nanodevice at that exact location; we focus on having the nanodevice position itself. The nanodevice may be placed in any part of the body, or it may be injected through a blood vessel. The positioning is done with the help of mathematical calculations, and external control signals can be used to avoid mishaps or other errors.


The nanodevice is loaded with a microchip, and its compounds are kept concealed so that it can be initiated externally through a computer. The nanodevice contains sensors, a motor, a gene reader, a processor, a transceiver, a camera, and a power supply. The location of the cancer cells is given as coordinates in three dimensions; this point is taken as the reference and referred to as (0, 0, 0).

POSITIONING

The nanodevice performs an internal calculation based on the difference between its current position and the reference. The computation is arranged so that only one axis is compared between the nanodevice and the reference at a time. The motor fan is oriented in a particular direction for each axis comparison. After one axis has been completed, the next axis is compared, followed by the third. The three-coordinate comparison thus allows any three-dimensional orientation of the nanodevice and results in exact positioning.

NAVIGATION

The output of the mathematical operation is given to a driver circuit for the motor. The driver helps the device navigate through the blood with the required speed and precise direction. The device must repeatedly sample its new position against the reference at a chosen sampling rate, which is set with the velocity of blood flow in mind so that the device cannot drift past its target between samples.

The cancer killer could thus determine that it was located in (say) the big toe. If the objective were to kill a colon cancer, the cancer killer in the big toe would move to the colon and destroy the cancer cells. Very precise control over location of the cancer killer's activities could thus be achieved. The cancer killer could readily be reprogrammed to attack different targets using acoustic signals while it was in the body.

ALGORITHM FOR NAVIGATION:
Step 1: Mark the coordinates.
Step 2: Initialize the start command.
Step 3: Feed the axis.
Step 4: Send the command to emit ultrasound.
Step 5: Wait for T seconds.
Step 6: If no signal is reflected back, or if the reflected signal is below the threshold value, activate the stepper motor to rotate through a certain distance. (Note: the distance corresponds to one unit along the current axis.)
Step 7: Decrement the axis value by one.
Step 8: Repeat steps 4 to 7 for the remaining coordinates.
Step 9: If the reflected signal is greater than the threshold value, de-activate the motor.
Step 10: Activate motor 2 (perpendicular to motor 1). Motor 2 moves through one step, causing motor 1 to change axis.
Step 11: Motor 1 is allowed to travel until the next change is required.
Step 12: Once the nanodevice reaches the required spot, the motor is deactivated through an external command.
Step 13: Receive RF radiation for T seconds, where T has been calculated in advance based on the intensity of the tumor.
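To make the control flow of the algorithm above concrete, here is a minimal Python sketch of the axis-by-axis navigation loop. The sensor and actuator functions (emit_ultrasound, reflected_signal, step_motor) are hypothetical placeholders for hardware the paper does not specify, and the threshold and timing constants are illustrative only.

    # Minimal sketch of the axis-by-axis navigation loop (Steps 1-13).
    # The sensor/actuator functions are hypothetical placeholders standing in
    # for the nanodevice's ultrasound transceiver and stepper motors.
    import time

    THRESHOLD = 0.8   # assumed reflection threshold meaning "target/obstacle ahead"
    T_WAIT = 0.01     # seconds to wait for the echo (Step 5)

    def emit_ultrasound():
        pass            # placeholder: trigger the ultrasound pulse (Step 4)

    def reflected_signal():
        return 0.0      # placeholder: normalized echo amplitude read back

    def step_motor(axis):
        pass            # placeholder: advance one step along the given axis (Step 6)

    def navigate(target):
        """Drive the device from (0, 0, 0) to the target coordinates,
        one axis at a time, as in the algorithm above."""
        for axis, remaining in enumerate(target):       # Steps 1-3: feed each axis
            while remaining > 0:
                emit_ultrasound()                       # Step 4
                time.sleep(T_WAIT)                      # Step 5
                if reflected_signal() >= THRESHOLD:     # Step 9: stop this motor
                    break
                step_motor(axis)                        # Step 6: one step forward
                remaining -= 1                          # Step 7: decrement axis count
            # Step 10: the perpendicular motor is activated for the next axis
        # Steps 12-13: stop and apply RF exposure under external control

    navigate((5, 3, 2))  # example target, expressed in motor steps per axis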
IMAGING
With the available technology, a camera is included to help us monitor the internal process. Whenever the blood vessel branches into multiple directions, the device is made to stop through an external control signal, and another signal is given to steer it in the right direction.
Current clinical ultrasound scanners form images by transmitting pulses of ultrasonic energy along various beam lines in a scanning plane and detecting and displaying the subsequent echo signals. Our imaging is based on the absolute scattering properties and the frequency dependence of scattering in tissues, which helps to differentiate between normal and abnormal cells.





IDENTIFICATION
The nanodevice identifies the cancer cells using a gene reader, a sensor containing ten to fifty DNA probes complementary to samples of cancer-cell DNA. The detection system generates an electronic signal whenever a DNA match occurs or when a cancer-causing virus is present. Whenever we get a signal indicating the presence of cancer cells, we proceed to the next stage. Once the device is in position, the next step is the destruction of the cancer cells.
DESTRUCTION:
We can remotely control the behavior of DNA using RF energy by creating an electronic interface to the biomolecule. An RF magnetic field is inductively coupled to a nanocrystal antenna covalently linked to a DNA molecule. The inductive coupling increases the local temperature of the bound DNA, allowing a change of state to take place while leaving the molecules surrounding the DNA relatively unaffected. The switching is fully reversible, as the dissolved molecules dissipate the heat quickly. Thus an RF signal generated outside the body can destroy the affected DNA.

RF HEATING
The treatment tip contains the essential technology components that transform RF into a volumetric tissue heating source. The heat delivery surface transmits RF energy to the cells. Tumors that have little or no oxygen content (i.e., hypoxic tumors) also have increased resistance to radio-frequency radiation. Because of this high resistance, the affected cells heat up and are destroyed. The RF carrier frequency is in the biomedical range (174-216 MHz). A pair of RF pulses is transmitted at a repetition rate of about 1-2 Hz.

HOW NANO DEVICE ESCAPES FROM IMMUNE SYSTEM?
Generally our immune system attacks any foreign particles entering the body. The problem is that such nanoparticles are similar in size to viruses and bacteria, and the body has developed very efficient mechanisms to deal with these invaders. It is known that bacteria with hydrophilic surfaces can avoid being destroyed by the immune system and remain circulating in the body for longer periods. To emulate this effect, our nanodevice can be coated with a polymer such as polyethylene glycol (PEG), an approach that research has shown to work.

CONCLUSION:
In line with our aim, we have proposed the use of nanotechnology and RF signals for the destruction of cancer cells. This method does not affect healthy cells, so the cancer-affected person is healthy after the treatment. The treatment does not involve critical operations, nor does it take as long as other treatments. With treatments other than the nanotechnology approach, the patient is likely to be affected by cancer again sooner or later. The same approach can also be applied to other dangerous diseases.

3G Technology

ABSTRACT

New technology and multimedia platforms are revolutionizing the way we watch television. It seems that the only factors likely to limit the take-up of new viewing technology are people's incomes and the amount of spare time they have. The numbers of people using on-demand, mobile-phone and broadband television are all increasing as viewers move away from traditional terrestrial television.

The present discussion deals with the evolution of cellular technologies across the various generations. It starts with that evolution, then introduces cellular technology and the use of the radio spectrum, continues with the access modes and the cellular standards of the various generations, and ends with a brief introduction to 3G technology.

Introduction - Evolution of the Mobile Market :
The first radiotelephone service was introduced in the US at the end of the 1940s, and was meant to connect mobile users in cars to the public fixed network. In the 1960s, a new system launched by Bell Systems, called the "Improved Mobile Telephone Service" (IMTS), brought many improvements such as direct dialling and higher bandwidth. The first analog cellular systems were based on IMTS and developed in the late 1960s and early 1970s. The systems were "cellular" because coverage areas were split into smaller areas or "cells", each of which was served by a low-power transmitter and receiver.
This first generation (1G) analog system for mobile communications saw two key improvements during the 1970s: the invention of the microprocessor and the digitization of the control link between the mobile phone and the cell site.
Second generation (2G) digital cellular systems were first developed at the end of the 1980s. These systems digitized not only the control link but also the voice signal. The new system provided better quality and higher capacity at lower cost to consumers.
Third generation (3G) systems promise faster communications services, including voice, fax and Internet, anytime and anywhere, with seamless global roaming. ITU's IMT-2000 global standard for 3G has opened the way to innovative applications and services (e.g. multimedia entertainment, infotainment and location-based services, among others). The first 3G network was deployed in Japan in 2001. 2.5G networks, such as GPRS (General Packet Radio Service), are already available in some parts of Europe.
Work has already begun on the development of fourth generation (4G) technologies in Japan.
It is to be noted that analog and digital systems, 1G and 2G, still co-exist in many areas.
The Basics of Cellular Technology and the Use of the Radio Spectrum :
Mobile operators use radio spectrum to provide their services. Spectrum is generally considered a scarce resource, and has been allocated as such. It has traditionally been shared by a number of industries, including broadcasting, mobile communications and the military. At the World Radio Conference (WRC) in 1993, spectrum allocations for 2G mobile were agreed based on expected demand growth at the time. At WRC 2000, the resolutions of the WRC expanded significantly the spectrum capacity to be used for 3G, by allowing the use of current 2G spectrum blocks for 3G technology and allocating 3G spectrum to an upper limit of 3GHz.
Before the advent of cellular technology, capacity was enhanced through a division of frequencies and the resulting addition of available channels. However, this reduced the total bandwidth available to each user, affecting the quality of service. Cellular technology allowed for the division of geographical areas, rather than frequencies, leading to a more efficient use of the radio spectrum. This geographical re-use of radio channels is known as "frequency reuse".
In a cellular network, cells are generally organized in groups of seven to form a cluster. There is a "cell site" or "base station" at the centre of each cell, which houses the transmitter/receiver antennae and switching equipment. The size of a cell depends on the density of subscribers in an area: for instance, in a densely populated area, the capacity of the network can be improved by reducing the size of a cell or by adding more overlapping cells. This increases the number of channels available without increasing the actual number of frequencies being used. All base stations of each cell are connected to a central point, called the Mobile Switching Office (MSO), either by fixed lines or microwave. The MSO is generally connected to the PSTN (Public Switched Telephone Network).
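To make the cluster idea concrete, the short Python sketch below computes how many channels each cell gets in a 7-cell cluster and the minimum distance at which the same channels can be reused, using the standard hexagonal-geometry relation D = R * sqrt(3N). The channel count and cell radius are made-up figures, not values from the text.

    # Illustrative calculation of frequency reuse for a 7-cell cluster,
    # using the standard hexagonal-geometry relation D = R * sqrt(3 * N)
    # (D = reuse distance, R = cell radius, N = cluster size).
    import math

    total_channels = 420   # hypothetical number of channels licensed to the operator
    cluster_size = 7       # cells per cluster, as described above
    cell_radius_km = 2.0   # hypothetical cell radius

    channels_per_cell = total_channels // cluster_size
    reuse_distance_km = cell_radius_km * math.sqrt(3 * cluster_size)

    print(f"Channels available in each cell: {channels_per_cell}")
    print(f"Distance before the same channels repeat: {reuse_distance_km:.1f} km")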
Cellular technology allows the "hand-off" of subscribers from one cell to another as they travel around. This is the key feature that allows the mobility of users. A computer constantly tracks the mobile units within a cell, and when a user reaches the border of a cell, the computer automatically hands off the call, which is assigned a new channel in a different cell. International roaming arrangements govern the subscriber's ability to make and receive calls outside the home network's coverage area.

Access Technologies (FDMA, TDMA, CDMA) :
FDMA: Frequency Division Multiple Access (FDMA) is the most common analog system. It is a technique whereby spectrum is divided up into frequencies and then assigned to users. With FDMA, only one subscriber at any given time is assigned to a channel. The channel therefore is closed to other conversations until the initial call is finished, or until it is handed-off to a different channel. A “full-duplex” FDMA transmission requires two channels, one for transmitting and the other for receiving. FDMA has been used for first generation analog systems.
TDMA: Time Division Multiple Access (TDMA) improves spectrum capacity by splitting each frequency into time slots. TDMA allows each user to access the entire radio frequency channel for the short period of a call. Other users share this same frequency channel at different time slots. The base station continually switches from user to user on the channel. TDMA is the dominant technology for the second generation mobile cellular networks.
CDMA: Code Division Multiple Access is based on "spread spectrum" technology. Since it is suitable for encrypted transmissions, it has long been used for military purposes. CDMA increases spectrum capacity by allowing all users to occupy all channels at the same time. Transmissions are spread over the whole radio band, and each voice or data call is assigned a unique code to differentiate it from the other calls carried over the same spectrum. CDMA allows for a "soft hand-off", which means that terminals can communicate with several base stations at the same time. The dominant radio interface for third-generation mobile, or IMT-2000, will be a wideband version of CDMA with three modes (IMT-DS, IMT-MC and IMT-TC).
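The following toy Python sketch illustrates the CDMA principle described above: two users transmit over the same band at the same time using orthogonal spreading codes, and each receiver recovers its own bits by correlating with its code. This is only an illustration of the principle; real CDMA systems use much longer codes, modulation and power control.

    # Toy illustration of CDMA: two users share the channel via orthogonal
    # (Walsh) spreading codes and are separated again by correlation.
    code_a = [ 1,  1,  1,  1]     # Walsh code for user A
    code_b = [ 1, -1,  1, -1]     # Walsh code for user B (orthogonal to A)

    def spread(bits, code):
        return [b * c for b in bits for c in code]   # each bit becomes len(code) chips

    def despread(signal, code):
        n = len(code)
        bits = []
        for i in range(0, len(signal), n):
            corr = sum(s * c for s, c in zip(signal[i:i + n], code))
            bits.append(1 if corr > 0 else -1)
        return bits

    bits_a = [1, -1, 1]
    bits_b = [-1, -1, 1]
    channel = [a + b for a, b in zip(spread(bits_a, code_a), spread(bits_b, code_b))]

    print(despread(channel, code_a))  # -> [1, -1, 1]  (user A recovered)
    print(despread(channel, code_b))  # -> [-1, -1, 1] (user B recovered)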

Cellular Standards for 1G and 2G:
Each generation of mobile communications has been based on a dominant technology, which has significantly improved spectrum capacity. Until the advent of IMT-2000, cellular networks had been developed under a number of proprietary, regional and national standards, creating a fragmented market.
First Generation:
1) Advanced Mobile Phone System (AMPS) was first launched in the US. It is an analog system based on FDMA (Frequency Division Multiple Access) technology. Today it is the most widely used analog system and the second-largest standard worldwide.
2) Nordic Mobile Telephone (NMT) was mainly developed in the Nordic countries. (4.5 million in 1998 in some 40 countries including Nordic countries, Asia, Russia, and other Eastern European Countries)
3) Total Access Communications System (TACS) was first used in the UK in 1985. It was based on the AMPS technology.
There were also a number of other proprietary systems, rarely sold outside the home country.
Second Generation:
1) Global System for Mobile Communications (GSM) was the first commercially operated digital cellular system. It was first developed in the 1980s through a pan-European initiative involving the European Commission, telecommunications operators and equipment manufacturers. The European Telecommunications Standards Institute was responsible for GSM standardization. GSM uses TDMA (Time Division Multiple Access) technology. It is used by all European countries and has been adopted on other continents. It is the dominant cellular standard today, with over 45% of the world's subscribers as of April 1999.
2) TDMA IS-136 is the digital enhancement of the analog AMPS technology. It was called D-AMPS when it was first introduced in late 1991, and its main objective was to protect the substantial investment that service providers had made in AMPS technology. Digital AMPS services have been launched in some 70 countries worldwide (by March 1999, there were almost 22 million TDMA handsets in circulation, the dominant markets being the Americas and parts of Asia).
3) CDMA IS-95 increases capacity by using the entire radio band, with each call using a unique code (CDMA, or Code Division Multiple Access). It is a family of digital communication techniques, and South Korea is the largest single CDMA IS-95 market in the world.
4) Personal Digital Cellular (PDC) is the second largest digital mobile standard although it is exclusively used in Japan where it was introduced in 1994. Like GSM, it is based on the TDMA access technology. In November 2001, there were some 66.39 million PDC users in Japan.
5) Personal Handyphone System (PHS) is a digital system used in Japan, first launched in 1995 as a cheaper alternative to cellular systems. It sits somewhere between a cellular and a cordless technology, with a smaller coverage area and limited usability in moving vehicles. In November 2001, Japan had 5.68 million PHS subscribers.

Cellular Standards for the Third Generation: The ITU's IMT-2000 family:
It was in the mid-1980s that the concept of IMT-2000, "International Mobile Telecommunications", was born at the ITU as the third-generation system for mobile communications. After over ten years of work under the leadership of the ITU, a historic decision was taken in the year 2000: unanimous approval of the technical specifications for third-generation systems under the brand IMT-2000. The spectrum between 400 MHz and 3 GHz is technically suitable for the third generation. The entire telecommunications industry, including both industry players and national and regional standards-setting bodies, made a concerted effort to avoid the fragmentation that had thus far characterized the mobile market. This approval meant that, for the first time, full interoperability and internetworking of mobile systems could be achieved. IMT-2000 is the result of collaboration between many entities inside the ITU (ITU-R and ITU-T) and outside it (3GPP, 3GPP2, UWCC and so on).
IMT-2000 offers the capability of providing value-added services and applications on the basis of a single standard. The system envisages a platform for distributing converged fixed, mobile, voice, data, Internet and multimedia services. One of its key visions is to provide seamless global roaming, enabling users to move across borders while using the same number and handset. IMT-2000 also aims to provide seamless delivery of services over a number of media (satellite, fixed, etc.). It is expected that IMT-2000 will provide higher transmission rates: a minimum speed of 2 Mbit/s for stationary or walking users, and 384 kbit/s in a moving vehicle. Second-generation systems only provide speeds ranging from 9.6 kbit/s to 28.8 kbit/s. In addition, IMT-2000 has the following key characteristics:
1. Flexibility: With the large number of mergers and consolidations occurring in the mobile industry, and the move into foreign markets, operators wanted to avoid having to support a wide range of different interfaces and technologies, which would surely have hindered the growth of 3G worldwide. The IMT-2000 standard addresses this problem by providing a highly flexible system capable of supporting a wide range of value-added services and applications worldwide on the basis of one single standard, accommodating five possible radio interfaces based on three different access technologies (FDMA, TDMA and CDMA).
2. Affordability: There was agreement across the industry that 3G systems had to be affordable in order to encourage their adoption by consumers and operators.
3. Compatibility with existing systems: IMT-2000 services have to be compatible with existing systems. 2G systems such as the GSM standard (prevalent in Europe and parts of Asia and Africa) will continue to exist for some time, and compatibility with these systems must be assured through effective and seamless migration paths.
4. Modular design: The vision for IMT-2000 systems is that they must be easily expandable in order to allow for growth in users, coverage areas, and new services, with minimum initial investment.

3G technology:
3G wireless technology represents the convergence of various 2G wireless telecommunications systems into a single uniform global system which includes terrestrial and satellite components in its functioning.
What is 3G wireless?
3G wireless networks are capable of transferring data at speeds of up to 384 Kbps. Average speeds for 3G networks will range between 64 Kbps and 384 Kbps, quite a jump compared with common wireless data speeds in the U.S., which are often slower than a 14.4 Kbps modem. 3G is considered high-speed or broadband mobile Internet access, and in the future 3G networks are expected to reach speeds of more than 2 Mbps.
3G technologies are turning phones and other devices into multimedia players, making it possible to download music and video clips. One new service is called Freedom of Mobile Multimedia Access (FOMA), and it uses wideband code division multiple access (W-CDMA) technology to transfer data over its networks. W-CDMA sends data in a digital format over a range of frequencies, which makes the data move faster but also uses more bandwidth than digital voice services. W-CDMA is not the only 3G technology; competing technologies such as cdmaOne differ technically but should provide similar services.
FOMA services are available within a 20-mile radius of the centre of Tokyo, and the operator plans to introduce them to other Japanese cities by the end of the year. However, services and phones are expensive, and uptake is expected to be slow.

Conclusion:
Technology is expanding at a rapid rate, and more and more applications are available to us in day-to-day life. Though it has its advantages, it is our duty to use it effectively so that the technology we develop is beneficial rather than a source of disasters in one way or another.

Automated Energy/Performance Macromodeling of Embedded Software

Abstract:

Efficient energy and performance estimation of embedded software is a critical part of any system-level design flow. Macromodeling-based estimation is an attempt to speed up estimation by exploiting the reuse that is inherent in the design process. Macromodeling involves pre-characterizing reusable software components to construct high-level models, which express the execution time or energy consumption of a sub-program as a function of suitable parameters. During simulation, macromodels can be used instead of detailed hardware models, resulting in orders of magnitude simulation speedup. However, in order to realize this potential, significant challenges need to be overcome in both the generation and use of macromodels, including how to identify the parameters to be used in the macromodel, how to define the template function to which the macromodel is fitted, etc. This paper presents an automatic methodology to perform characterization-based high-level software macromodeling, which addresses the aforementioned issues.
Given a sub-program to be macromodeled for execution time and/or energy consumption, the proposed methodology automates the steps of parameter identification, data collection through detailed simulation, macromodel template selection, and fitting. We propose a novel technique to identify potential macromodel parameters and perform data collection, which draws from the concept of data structure serialization used in distributed programming. We utilize symbolic regression techniques to concurrently filter out irrelevant macromodel parameters, construct a macromodel function, and derive the optimal coefficient values to minimize fitting error. Experiments with several realistic benchmarks suggest that the proposed methodology improves estimation accuracy and enables wide applicability of macromodeling to complex embedded software, while realizing its potential for estimation speedup. We describe a case study of how macromodeling can be used to rapidly explore algorithm-level energy tradeoffs, for the zlib data compression library.










Categories and Subject Descriptors

I.6.5 [Computing Methodologies]: Simulation and Modeling - Model development, Modeling methodologies; D.2.8 [Software]: Software Engineering - Metrics, Performance measures; C.4 [Computer Systems Organization]: Performance of Systems - Modeling Techniques
General Terms
Design, Measurement
Keywords
Data Serialization, Embedded Software, Genetic Programming,
Macromodeling, Symbolic Regression

1. INTRODUCTION

Efficient performance and energy estimation are fundamental concerns in the design of embedded software. Simulating the execution of embedded software on models of the underlying processor platform is the most widely used approach for performance and energy estimation. While simulation efficiency has been the subject of significant research effort, rapid growth in the complexity of embedded software (the number of lines of code in a typical embedded application is estimated to double every 10 to 12 months on average, i.e., even faster than Moore's law) implies that efficient performance/energy estimation for embedded software will remain a challenge for the foreseeable future.

Our work is based on the observation that large embedded software programs are rarely written from scratch: reliable design, subject to stringent design turnaround time and design cost constraints, mandates substantial reuse. An analysis of the dynamic execution traces of embedded programs reveals that a large fraction of the time consumption arises from reused software components (including embedded operating systems, middleware, run-time libraries, domain-specific algorithm libraries, etc.). As an example, our experiments with the popular compression utility gzip showed that, on average, 90% of gzip's execution time is spent in calls to the gzip library package, 8% in calls to the standard C library functions, and only 2% in code specific to the gzip program, or what is frequently known as "glue code." It is hence not surprising that reusable software modules account for a major fraction of simulation or estimation effort.

It is natural to wonder whether reuse, which saves significant design effort, can also be exploited to reduce estimation effort. Characterization-based macromodeling takes a step in this direction by enabling the extraction of fast, higher-level models of reusable software components, based on pre-characterization using more detailed, slower models. The effort expended in constructing macromodels for a software module is amortized over the large number of applications of the macromodel when the module is simulated in the context of all the programs that include it.

The rest of this paper is organized as follows. We describe the contributions of this paper in Section 1.1 and discuss related work in Section 1.2. In Section 2, we identify the major challenges involved in macromodeling. Section 3 describes in detail how the proposed macromodel generation methodology overcomes
the identified challenges. Our implementation and experimental results are presented in Section 4, and conclusions in Section 5.

1.1 Paper Contributions

The complexity of modern embedded software poses significant challenges for both the generation and use of macromodels. In this work, we identify key limitations of the state-of-the-art in software macromodeling. Notably, significant manual effort is required from the software designer to identify suitable parameters and a template function on which the macromodel is based. Addressing these problems while maintaining sufficient generality to handle a wide range of embedded software programs is quite challenging. We propose a methodology to automate the critical steps of parameter identification, data collection through accurate simulation or measurement, and construction of the macromodel function, while simultaneously optimizing the values of the macromodel coefficients for the best fit. Our work draws inspiration from concepts presented in the field of distributed programming (automatic data structure serialization) and from recent advances in statistical data analysis (symbolic regression). We also demonstrate the practical application of macromodeling to software libraries of significant complexity.
Figure 1: Function bg_compute_scc and associated data structures

1.2 Related Work

We discuss related work in the areas of macromodeling for hardware power simulation, efficient software performance and energy estimation, and fast instruction set simulation. Macromodels for register-transfer level (RTL) components can be constructed through characterization of their logic-level hardware models and have been used extensively in RTL power estimation. Techniques to speed up cycle-accurate instruction set simulation have received significant attention. Instruction-set simulation can be accelerated with little or no loss of accuracy using compiled simulation, by combining compiled and interpreted simulation, or by optimizing the implementation of different functions such as instruction fetch and decode in the instruction set simulator (ISS). Hybrid simulation techniques for energy estimation have also been proposed. Delay and energy caching (reusing the execution time and energy consumption results from previous simulations of a program segment) is used to speed up estimation.

An alternative approach to embedded software power analysis is to use cycle-accurate and structure-aware architecture simulators, which can identify the architectural blocks activated in each cycle during a program's execution and record the stream of inputs seen by them. Software macromodeling at the granularity of functions or sub-programs has been explored, demonstrating that orders-of-magnitude speedups in estimation time can be obtained while maintaining high estimation accuracy. Performance characterization of library functions using statistical and semantic properties of function arguments was recently presented. In summary, the importance of embedded software performance and energy estimation has fueled significant research effort, but macromodeling for software sub-programs of arbitrary complexity has remained a relatively unexplored area, and many important issues have not been addressed. To the best of our knowledge, this is the first work to automate all the key steps in macromodel generation and to demonstrate the applicability of fully automatic macromodeling to software programs of realistic complexity.

2. MOTIVATION

In this section, we describe the key challenges involved in macromodel generation for complex software programs, and illustrate them through the task of constructing an energy macromodel for the bg_compute_scc function taken from a commercial graph data structure library. The C prototype of bg_compute_scc is shown in Figure 1, along with a description of its input data structures. The bgraph structure contains various dynamically allocated fields, including an adjacency-list representation of the graph's connectivity, and fields to store the identified strongly connected components (SCCs). In addition to the software implementation of the graph data structure library and several testbenches that exercise its functions, we are given a cross-compilation tool chain for the target StrongARM-based embedded system, as well as a cycle-accurate ISS that reports energy consumption [14, 15]. Any automated approach to generating a macromodel needs to address the following key challenges:

- Selection of macromodel parameters: In general, macromodel parameters, which are the independent variables used in the macromodel, can include the size or value of any field nested arbitrarily deep within the input or output data structures. The number of candidate parameters can be very large even for simple software functions. However, an efficient and robust macromodel must include only relevant parameters that have an actual impact on energy consumption. For the bg_compute_scc function, if we consider the values of all nested fields of scalar data types and the sizes of all nested fields of complex types, we can identify 2n + e + s + 9 potential candidate macromodel parameters for a graph with n vertices, e edges, and s SCCs. The number of possible relevant subsets of parameters is therefore 2^(2n + e + s + 9). While in some cases human understanding and insight may reveal that only a small subset of parameters largely determines the execution time or energy consumption, an automatic tool processing the source code does not have the luxury of human insight.
- Data collection: Once a candidate set of macromodel parameters is identified, characterization experiments must be performed to obtain the values of the candidate macromodel parameters and the corresponding value of the dependent variable (energy or execution time) for numerous execution instances. Capturing the macromodel parameter values requires runtime tracing of the sizes of dynamically created data structures as well as the values of nested atomic fields. In practice, this is not a simple task: the number of levels of pointer traversal that have to be performed to access all scalar fields may vary dynamically, and conventional size computation utilities (such as sizeof in the C programming language) do not perform pointer traversal, i.e., they do not include the size of objects pointed to by fields in the given object.
- Macromodel function construction: Given the data gathered from characterization, determining a suitable function to approximate the collected data can be a daunting task. The search space of functions is highly intractable (infinite in the case of real-valued functions). Conventional approaches to macromodeling circumvent this problem by requiring the designer to manually specify a macromodel template. While various templates have been suggested, template identification is in practice an ad hoc and tedious process that demands a detailed understanding of the target function.

We present techniques using type analysis, data serialization concepts and symbolic regression to overcome these challenges, making it possible to significantly extend the applicability of macromodeling to complex software, while greatly simplifying macromodel construction and minimizing the need for human intervention.


Figure 2: Energy estimates from macromodeling and instruction-level simulation for bg_compute_scc
To illustrate the utility of our methodology, we used it to construct an energy consumption macromodel for the bg_compute_scc function shown in Figure 1. The resulting macromodel equation, which relates energy consumption to the size of the input argument bgr and the values of its member fields, is as follows:

Energy = (5.83E-6) * last * size(bgr) + (0.5934) * no_vertices - (0.576) * last + (3.625E-4) * size(bgr)

A model in terms of function arguments, like the one shown above, has the additional advantage of being well suited to automated macromodel application within a larger estimation framework, because the model parameters are readily available in any software simulator. As shown in Figure 2, the energy estimates obtained from the macromodel for various input instances are in close agreement with the instruction-level estimates, with an average estimation error of 0.7%.
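For illustration, the fitted equation above can be transcribed directly into a small evaluation routine. The field values in the example call below are made up, and size(bgr) stands for the serialized argument size described in Section 3.1.

    # Direct transcription of the fitted macromodel printed above, assuming
    # size_bgr is the serialized size of the bgr argument and 'last' and
    # 'no_vertices' are the corresponding scalar fields.
    def estimate_energy(last, size_bgr, no_vertices):
        return ((5.83e-6) * last * size_bgr
                + 0.5934 * no_vertices
                - 0.576 * last
                + (3.625e-4) * size_bgr)

    # Illustrative call with made-up field values (not taken from the paper):
    print(estimate_energy(last=100, size_bgr=4096, no_vertices=50))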

3. AUTOMATIC MACROMODELING METHODOLOGY

Figure 3 presents an overview of the proposed macromodeling methodology. Starting with the source code for the target function to be macromodeled, and a testbench that thoroughly exercises the target function over a wide range of input instances, the methodology consists of a sequence of steps that culminates in the generation of macromodels which approximate the energy consumption or execution time of the function. Two parallel compilation and execution flows are used to collect the data necessary to construct the macromodel. First, the source code is subjected to parsing and type analysis, based on which our tool automatically generates data structure traversal and serialization routines and instruments the source code to invoke them at appropriate locations. The instrumented source code, traversal and serialization routines, and testbench are compiled and executed (any functionally accurate execution environment suffices for this step). During execution, the traversal and serialization routines dynamically enumerate and collect the values of candidate macromodel parameters. Cross-compilation and instruction-level simulation of the uninstrumented target source code and testbench are used to obtain the energy consumption and execution time for each execution instance of the target function. The collected instance-by-instance profile database, which contains values for all the independent and dependent variables, is then fed to the symbolic regression engine to produce the macromodel.

The rest of this section describes the key steps of our methodology, which are highlighted as shaded rectangles in Figure 3.

Figure 3: Overview of the proposed automatic macromodeling methodology

3.1 Data Collection

Our data collection tool parses the input C files to collect information about data types and function arguments in the program, which is used to suitably instrument the input program.

We use argument sizes and values of the input and output data structures of the target function, as well as their nested fields, as model parameters.
We define argument size of a data structure as the number of bytes it would occupy if it were serialized. Serialization is the process of converting structured data to serial byte streams for the purpose
of storage or transmission, as in a remote procedure call.
We use type analysis to automatically generate code that computes argument sizes. In compiler theory, two kinds of data types are identified: basic types (e.g., int, char, float) and type constructors (e.g., pointers, arrays, and structures). Our object-size calculations use rules associated with each basic type and type constructor. The size of a basic type can be obtained directly using language facilities. The size of a structure is the sum of the sizes of all its nested fields. Pointers are recursively traversed using indirection until a non-pointer type is obtained, whose size is then taken as the size of the pointer. Array sizes can be calculated similarly but require knowledge of array bounds at runtime. While C implementations do not in general maintain array bounds, C functions that have array arguments usually also include other arguments specifying the array bounds.

Callee function argument sizes are computed dynamically by code inserted in the caller function, immediately before and after the call to the target function. The framework described above enables run-time calculation of object sizes and other useful information. For example, the size of a linked-list object is calculated as the sum of the sizes of all elements of the linked list. As a more complex example, consider the bgraph structure shown in Figure 1. Most macromodel templates for bg_compute_scc would require data about the number of vertices, n, and the number of edges, e, in the graph. From the value of the field no_vertices, n can be obtained directly. Calculating e requires recognizing that vlist (the graph's adjacency list) is actually an array of size no_vertices of LISTPTR objects. Hence, the size of the vlist field ends up serving as a good estimate of e.
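As an illustration of the size computation described above, the following Python sketch applies the same recursive rules (basic types, structures as sums of fields, pointer/reference traversal, arrays as sums of elements). The paper generates C traversal and serialization routines automatically; Python introspection merely stands in for that machinery here, and the per-type byte counts are assumptions.

    # Simplified analogue of the recursive argument-size computation.
    import struct

    def arg_size(obj):
        """Return the number of bytes obj would occupy if serialized,
        following references into nested objects."""
        if isinstance(obj, bool):
            return 1
        if isinstance(obj, int):
            return 4                                   # rule for a basic 'int'
        if isinstance(obj, float):
            return struct.calcsize('d')                # rule for a basic 'double'
        if isinstance(obj, str):
            return len(obj.encode('utf-8'))
        if isinstance(obj, (list, tuple)):             # array: sum of elements
            return sum(arg_size(x) for x in obj)
        if isinstance(obj, dict):                      # structure: sum of fields
            return sum(arg_size(v) for v in obj.values())
        if obj is None:
            return 0
        return sum(arg_size(v) for v in vars(obj).values())  # nested object

    # Example: a tiny adjacency-list "graph" similar in spirit to bgraph
    graph = {"no_vertices": 3, "vlist": [[1, 2], [0], [0, 1]]}
    print(arg_size(graph))   # 4 + (2+1+2)*4 = 24 bytes under these rules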

3.2 Macromodel Construction Using Symbolic Regression

The data collected through characterization experiments should be used to construct a macromodel relating the target function's energy or execution time to a subset of the potential macromodel parameters. We perform this critical step through the use of symbolic regression, which was first introduced by Koza as an application combining statistical data analysis and genetic programming (GP). GP is used to evolve formulae containing the identified model parameters and a chosen set of mathematical operators. Based on extensive experimentation, we found the operator set F = {+, -, ×, /, x^2, x^3, log(x)} to be quite adequate for our modeling needs.

We used an extended form of Koza’s symbolic regression technique, called Hybrid GP (HGP) [19], to increase the numerical robustness of symbolic regression. HGP extends Koza’s symbolic regression by introducing weights for all additive terms in the genetically derived regression formula. Classical linear regression is used to tune the weights.
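The sketch below illustrates only the HGP weight-tuning idea on synthetic data: candidate additive terms are built from the operator set F, and their weights are fitted by ordinary least squares (NumPy assumed). The genetic search over term structures, which the actual methodology relies on, is omitted here.

    # Sketch of the HGP weight-tuning step: candidate additive terms are built
    # from the operator set F and their weights are fitted by least squares.
    import numpy as np

    def candidate_terms(X):
        """X: (samples, params). Return a matrix of candidate basis terms."""
        cols = [np.ones(len(X))]                      # constant term
        for j in range(X.shape[1]):
            x = X[:, j]
            cols += [x, x**2, x**3, np.log(np.abs(x) + 1.0)]   # guarded log
        for j in range(X.shape[1]):                   # pairwise products,
            for k in range(j + 1, X.shape[1]):        # mirroring last*size(bgr)
                cols.append(X[:, j] * X[:, k])
        return np.column_stack(cols)

    def fit_macromodel(X, y):
        A = candidate_terms(X)
        w, *_ = np.linalg.lstsq(A, y, rcond=None)     # tune additive weights
        return w

    # Synthetic characterization data (purely illustrative):
    rng = np.random.default_rng(0)
    X = rng.uniform(1, 100, size=(200, 2))            # two candidate parameters
    y = 0.5 * X[:, 0] + 3e-3 * X[:, 0] * X[:, 1] + rng.normal(0, 0.1, 200)
    w = fit_macromodel(X, y)
    rms = np.sqrt(np.mean((candidate_terms(X) @ w - y) ** 2))
    print(f"RMS fitting error: {rms:.4f}")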

4. IMPLEMENTATION AND RESULTS

In this section, we discuss our implementation and present experimental
results demonstrating the benefits of the proposed methodology.

The instrumentation and data collection steps in our methodology were implemented using a YACC-based parser and several Perl scripts. Our implementation of symbolic regression is based on the GP kernel gpc++ and on libraries for symbolic and numerical manipulation.

We conducted several experiments using a variety of benchmark software programs to demonstrate the utility of our automatic macromodeling framework. Table 1 shows how the macromodels obtained using our framework perform compared to execution times and energy consumption values obtained through a combination of extensive simulations and measurements from real implementations.
The benchmarks have been given descriptive names to indicate their function. For each benchmark, a sample set of 500 input instances (data sets) was used to develop the macromodel. The error associated with a macromodel is defined as the root mean square (RMS) deviation from observed values (obtained through instruction-set simulation or measurement), taken over the entire sample set. The symbolic regression tool was programmed to terminate after fifty GP generations or when the error dropped to less than 1%, whichever occurred sooner.




Table 1: Macromodeling examples

We chose the SimIt-ARM-1.1 cycle-accurate ARM ISS as our measurement platform because of its high simulation speed. The execution time of a code segment was determined as the difference in execution times of two versions of the benchmark, one with the execution of the target function enabled and the other with it disabled. To compute energy consumption, we extended SimIt-ARM-1.1 to report processor and memory energy estimates.

4.1 Case Study: Energy Tradeoffs during Lossless Data Compression

In this section, we explore the use of macromodels in making algorithmic tradeoffs using the zlib [29] compression library. zlib can be embedded into any software application in order to perform lossless data compression. The compress2() function provided by zlib, whose interface is int compress2(Bytef *dest, uLongf *destLen, const Bytef *source, uLong sourceLen, int level), allows the user to vary the computational effort expended in compression through the level argument, which takes values from zero (no compression) to nine (maximum compression).
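Since Python's zlib module wraps the same library, the level-versus-compression-ratio tradeoff can be reproduced directly, as sketched below. Energy figures require the macromodel or an ISS; this sketch only measures the compressed size (and, roughly, the wall-clock time) at each level, on a stand-in payload.

    # Compression ratio achieved by zlib at each effort level, using Python's
    # zlib binding to the same library. Energy is not measured here.
    import time
    import zlib

    data = b"example payload " * 4096     # stand-in input; any file contents work

    for level in range(0, 10):            # 0 = no compression ... 9 = maximum
        start = time.perf_counter()
        compressed = zlib.compress(data, level)
        elapsed = time.perf_counter() - start
        ratio = len(data) / len(compressed)
        print(f"level {level}: ratio {ratio:6.2f}, time {elapsed * 1e3:6.2f} ms")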

We developed a macromodel for the energy consumed by the compress2 function using the proposed methodology, and used it to study the tradeoff between energy consumption and the compression ratio actually achieved, for various values of the level parameter, over 300 files of various types ranging in size from 1 byte to 1 MB. It can be seen from the results of this experiment in Figure 4 that the average energy consumption increases monotonically with level but the compression ratio does not, indicating that not all compression levels are Pareto-optimal in terms of these metrics. The figure also shows that the macromodel estimates are in close agreement with energy estimates obtained using SimIt-ARM. Furthermore, the macromodel-based estimates follow the same relative trend as the simulation-based estimates, which makes the macromodel suitable for high-level design space exploration. The advantage of macromodeling is evident from the fact that estimation using the macromodel for all the input samples required less than a minute, while the ISS took over a day to complete.

Figure 4: Using macromodeling to explore compression vs. energy tradeoffs


5. CONCLUSIONS

We presented a systematic methodology to automate the generation of energy and performance macromodels for embedded software. The proposed methodology radically simplifies macromodel construction, while expanding its applicability to complex embedded software. Furthermore, the use of properties of program data structures, including function arguments, as model parameters simplifies macromodel use, enabling usage in conjunction with any simulation environment. For example, macromodels could be integrated into an instruction-level simulation environment, so that some parts of the code are handled using macromodels, while glue code or parts that are difficult to macromodel are simulated using conventional techniques.

Fingerprint Recognition

ABSTRACT:

Fingerprint image analysis for automatic identification technology has been developed for use in a number of major applications. Important industries affected by this technology include network security and protection, smart money, ATM transactions, and biometric identifier systems for many major government sectors. In this paper we discuss the major components of the technology, including the live-scan fingerprint subsystem, the WSQ compression algorithm, and the recognition algorithm.

INTRODUCTION:

Fingerprints have been used as a means of identifying individuals for a long time, because fingerprints are unique and stay unchanged throughout an individual's lifetime. The chance of two people, even identical twins, having the same fingerprint is probably less than one in a billion. Fingerprint comparison is the most widely used method of biometric authentication and the most cost-effective. Currently there are about 200 million FBI cards (10 fingerprints per card) stored in cabinets that would fill an area of one acre. The manual effort of identifying and maintaining such a system is very cumbersome, time-consuming and expensive, as the number of fingerprint records grows at a rate of 30,000 to 50,000 cards per day [1]. With the advancement of computer technology, the problem of automatic fingerprint identification has attracted wide attention among researchers, resulting in the automatic fingerprint identification systems (AFIS) available today. Going hand in hand with the recognition problem is the problem of real-time matching against large fingerprint databases. Since the storage requirement for such a large amount of data can reach thousands of terabytes, data compression is another aspect of automatic identification using fingerprints. Currently the FBI data compression specification for fingerprints is the de facto national standard; it is based on wavelet scalar quantization (WSQ).




AFIS: AUTOMATIC FINGERPRINT IDENTIFICATION SYSTEM

The four main components of an AFIS system are the scanner, the recognition algorithm, the search and query algorithm for the database, and the data compression algorithm.

1. The Live Scanner:
The live scanner captures the fingerprint at a minimum resolution of 500 pixels per inch in both rows and columns, and each pixel is grey-level quantized to 8 bits. Regardless of the method and media used by the scanner, the electronic image must be of sufficient quality to provide conclusive fingerprint comparison, successful fingerprint classification and feature detection, and to support an AFIS search reliably. The major consideration for the scanner is whether or not it meets a number of test procedures that guarantee the image quality stated in the Minimum Image Quality Requirements, Electronically Produced Fingerprint Cards, and Appendix F - IAFIS Image Quality Specifications.

- Geometric Image Accuracy
  - 1% for distances between 0.07 and 1.5 inch
  - 0.0007 inch for distances less than or equal to 0.07 inch

- Modulation Transfer Function (MTF)
MTF is the point response of the image-capturing system. For each frequency the Image Modulation (IM) is computed as
IM = (Max - Min) / (Max + Min)
The MTF is then computed by dividing the Image Modulation by the Target Modulation.
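A minimal sketch of this computation, with made-up grey-level extrema and target modulation:

    # Image modulation and MTF for one spatial frequency of a scanned bar target.
    # The grey-level extrema and target modulation below are example values.
    def image_modulation(max_grey, min_grey):
        return (max_grey - min_grey) / (max_grey + min_grey)

    def mtf(max_grey, min_grey, target_modulation):
        return image_modulation(max_grey, min_grey) / target_modulation

    print(mtf(max_grey=200, min_grey=40, target_modulation=0.9))  # ~0.74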




- Signal-to-Noise Ratio (SNR)
For adequate image quality, the SNR must be at least 125 for both the white and the black measurements.
The SNR is computed as the difference between the average white value and the average black value, divided in turn by the white-noise standard deviation and the black-noise standard deviation.
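A minimal sketch of this computation, with made-up example values (both results exceed the 125 requirement):

    # White and black signal-to-noise ratios as described above.
    def snr_pair(avg_white, avg_black, std_white, std_black):
        signal = avg_white - avg_black
        return signal / std_white, signal / std_black

    snr_white, snr_black = snr_pair(avg_white=230.0, avg_black=20.0,
                                    std_white=1.5, std_black=1.2)
    print(round(snr_white, 1), round(snr_black, 1))  # 140.0 175.0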
- Grey-Scale Range of Image Data
At least 80% of the captured images should have a dynamic range of at least 200 grey levels, and at least 90% shall have a dynamic range of at least 128 grey levels.
- Grey-Scale Linearity and Grey-Level Uniformity
Linearity and uniformity of the grey levels must meet a standard that assures image quality suitable for AFIS. When scanning a uniform white (or black) reference: no two adjacent rows or columns of length 5 pixels or greater shall have average grey levels differing by more than a threshold value; each pixel's grey level must remain within a fixed deviation from the mean value of its local area; and the mean grey levels of adjacent quarter-inch areas shall not differ by more than a specified value.

2. Fingerprint Matching:

The fingerprint matching process can be represented by the following block diagram.
Matching block diagram



The aim of pre-processing is to improve the quality of the image. Pre-processing has two tasks:
¨ Ridge enhancement
¨ Restoration and segmentation of the fingerprint image
The pre-processing step produces a binary, segmented fingerprint ridge image from the input grey-scale image: the ridges have a value of '1' and the rest of the image has a value of '0'. The pre-processing steps involve the following (a small sketch of the first step is given after this list):
¨ Computation of the orientation field
¨ Foreground/background separation
¨ Ridge segmentation
¨ Directional smoothing of the ridges
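The following Python sketch illustrates the orientation-field step only, using the common gradient-based estimator over fixed-size blocks; it is a simplified stand-in rather than the algorithm prescribed by any particular AFIS, and the block size is an arbitrary assumption.

```python
import numpy as np
from scipy import ndimage

def orientation_field(img: np.ndarray, block: int = 16) -> np.ndarray:
    """Estimate the dominant ridge direction (radians) in each block of a
    grey-scale fingerprint image using gradient moments."""
    gx = ndimage.sobel(img.astype(float), axis=1)   # x-gradient
    gy = ndimage.sobel(img.astype(float), axis=0)   # y-gradient
    rows, cols = img.shape[0] // block, img.shape[1] // block
    theta = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            sx = gx[i*block:(i+1)*block, j*block:(j+1)*block]
            sy = gy[i*block:(i+1)*block, j*block:(j+1)*block]
            vx = 2.0 * np.sum(sx * sy)              # gradient cross moment
            vy = np.sum(sx * sx - sy * sy)          # gradient difference moment
            # Ridge orientation is perpendicular to the average gradient direction.
            theta[i, j] = 0.5 * np.arctan2(vx, vy) + np.pi / 2
    return theta
```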
Analysis of fingerprints shows that the ridges exhibit different anomalies referred to as ridge endings, ridge bifurcations, short ridges, ridge crossovers, and so on. Some eighteen different types of such features, called minutiae, have been enumerated. The most frequently used are the ridge ending and the ridge bifurcation.


(a): Ridge ending (b): Ridge bifurcation (c): Ridge direction

A typical good fingerprint has about 70 to 80 minutiae points. Other, more complex fingerprint features can be expressed as combinations of these two features. Each feature is normally recorded as a vector with three attributes: the x-coordinate, the y-coordinate, and the local ridge direction (θ).
Fingerprint matching is then the matching of these minutiae sets. This can be done with a number of techniques, including point-set matching, graph matching, and sub-graph isomorphism; a naive point-set sketch is given below.
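As a rough illustration only (not the matcher used by any particular AFIS), the sketch below scores two pre-aligned minutiae sets by counting pairs that agree in position and ridge direction within fixed tolerances. The tolerance values are arbitrary assumptions, and a real matcher would also estimate the rotation and translation between the two prints.

```python
import math

# A minutia is recorded as (x, y, theta) as described above.
Minutia = tuple[float, float, float]

def match_score(set_a: list[Minutia], set_b: list[Minutia],
                dist_tol: float = 10.0,
                angle_tol: float = math.radians(15)) -> float:
    """Fraction of minutiae that can be paired within the given tolerances."""
    matched, used = 0, set()
    for xa, ya, ta in set_a:
        for idx, (xb, yb, tb) in enumerate(set_b):
            if idx in used:
                continue
            close = math.hypot(xa - xb, ya - yb) <= dist_tol
            # Smallest angular difference, wrapped to [-pi, pi].
            d_angle = abs((ta - tb + math.pi) % (2 * math.pi) - math.pi)
            if close and d_angle <= angle_tol:
                matched += 1
                used.add(idx)
                break
    return matched / max(len(set_a), len(set_b), 1)
```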
3. Fingerprint classification:

Block Diagram of Classification Algorithm

Given that the fingerprint database is very large, matching should be done only on a subset of the database. To this end, fingerprints are classified into five main categories, high-level features that can be used to reduce the search space during a match: arch, tented arch, left loop, right loop, and whorl. The singular points commonly used for classification are the core and the delta. The core is the highest point on the innermost ridge; the delta is the point from which three ridges radiate. A toy illustration of class-based candidate filtering is given below.
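The sketch below is a deliberately simple illustration, not a real classification algorithm: it only shows how the class label narrows the search, so that the (expensive) minutiae matcher is run against same-class candidates alone. The function and variable names are invented for the example.

```python
from collections import defaultdict

CLASSES = ("arch", "tented arch", "left loop", "right loop", "whorl")

# class label -> list of (record_id, minutiae) pairs
database = defaultdict(list)

def enroll(record_id, fp_class, minutiae):
    """File a record under its fingerprint class."""
    assert fp_class in CLASSES
    database[fp_class].append((record_id, minutiae))

def search(query_class, query_minutiae, matcher):
    """Run the minutiae matcher only against same-class candidates."""
    scored = [(matcher(query_minutiae, m), rid) for rid, m in database[query_class]]
    return max(scored, default=(0.0, None))      # (best score, best record id)
```

Here `matcher` could be, for example, the `match_score` sketch shown earlier.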

4. The Wavelet Scalar Quantization (WSQ):

The US Federal Bureau of Investigation (FBI) has formulated a national standard for the digitization and compression of grey-scale fingerprint images. At a 15:1 compression ratio, WSQ is a lossy compression scheme but can produce archival-quality images. Compression and decompression are based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition.



WSQ Decoder
The encoding consists of three main processes:
¨ The discrete wavelet transform (DWT) decomposition,
¨ The scalar quantization, and
¨ The Huffman entropy coding.

In the DWT step, the digital image is split into 64 spatial-frequency subbands by a two-dimensional discrete wavelet transform, which is implemented as a multirate digital filter bank. The DWT coefficients, output in floating-point format, are truncated ("quantized") by the scalar quantization step. The integer indices output by the quantization encoder are then entropy-encoded by run-length coding of zeros followed by Huffman coding. The compressed image data, a table of wavelet-transform specifications, and tables for the scalar quantizers and the Huffman codes are concatenated into a single bit stream of compressed data.
In the WSQ, a two-dimensional symmetric wavelet transform is applied to the input image by transforming first the rows and then the columns, yielding a four-channel decomposition. The four subbands are then cascaded back through the two-dimensional analysis bank to produce a more refined sixteen-band decomposition. This process is repeated until a 64-band decomposition is achieved.
The WSQ decoder reverses the process above to reproduce the fingerprint image from the compressed data. The Huffman decoder first recovers the quantized DWT coefficients. The dequantizer then produces an approximation of the original floating-point DWT coefficients, which are fed to an inverse DWT to reconstruct the fingerprint image. A toy illustration of the quantize/dequantize round trip is given below.
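The following Python sketch (using the PyWavelets package) mimics only the DWT plus uniform scalar quantization round trip on a generic dyadic decomposition; it is not the WSQ standard itself, which prescribes a specific 64-subband 9/7 filter bank, per-subband bin widths and Huffman tables. The wavelet name, bin width and decomposition level below are assumptions chosen for illustration, and entropy coding is omitted.

```python
import numpy as np
import pywt  # PyWavelets

def toy_encode(img: np.ndarray, q: float = 8.0,
               wavelet: str = "bior4.4", level: int = 3):
    """2-D DWT followed by uniform scalar quantization (entropy coding omitted)."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    indices = np.round(arr / q).astype(np.int32)      # quantizer bin indices
    return indices, slices, q

def toy_decode(indices, slices, q, wavelet: str = "bior4.4") -> np.ndarray:
    """Dequantize the bin indices and run the inverse DWT."""
    arr = indices.astype(float) * q
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

# Round trip on a synthetic "image": reconstruction is close but not exact (lossy).
img = np.random.default_rng(0).integers(0, 256, (128, 128)).astype(float)
restored = toy_decode(*toy_encode(img))
print("mean absolute error:", float(np.mean(np.abs(img - restored[:128, :128]))))
```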


Application: The Conceptual Design of a Fingerprint based Identifier

¨ Verification of driver-license authenticity and license validity check

Verifying the match between the driver's fingerprint and the fingerprint features stored on the license assures that the driver is indeed the person to whom the license was issued. This task can be done on-site: the fingerprint features obtained from the driver by live scanning are compared with the features magnetically stored on the driver's license. Current "smart card" technology provides ample memory capacity to store the features on the card. A driver/license match means that the license indeed belongs to the driver; this, however, does not guarantee that the license itself is not falsified. To check the validity of the license, the police officer has the option to make an additional inquiry against the database, in which case a license validity check results as well.


CONCLUSION:

We have presented an overview of fingerprint technology, covering primarily the scanner, the classification of fingerprint images in the database, the matching algorithms, and the compression/decompression algorithm standardized by the FBI. Certain standards will likely be needed in this area before major commercial applications can be implemented. An application that forms part of a fingerprint-based biometric system for commercial driver licenses has been outlined. Once the standards and compliance procedures are in place, one can predict an explosion in the number of applications of fingerprint technology in important areas including network security and protection, smart money, ATM transactions, military installations, airports, and other secure facilities.

SCADA (Supervisory Control And Data Acquisition)

ABSTRACT
==========
SCADA is the acronym for Supervisory Control And Data Acquisition. The term refers to a large-scale, distributed measurement (and control) system. SCADA systems are used to monitor or to control chemical or transport processes, in municipal water supply systems, to control electric power generation, transmission and distribution, gas and oil pipelines, and other distributed processes.
An industrial SCADA system will be used for the development of the controls of the four LHC experiments.
So what is SCADA?
It is used to monitor and control plant or equipment. The control may be automatic, or initiated by operator commands. Data acquisition is accomplished first by the RTUs scanning the field inputs connected to the RTU (an RTU may also be a PLC, a programmable logic controller), usually at a fast rate. The central host then scans the RTUs, usually at a slower rate. The data is processed to detect alarm conditions, and if an alarm is present, it is displayed on special alarm lists.

Data can be of three main types. Analogue data (i.e. real numbers) is trended (i.e. placed in graphs). Digital data (on/off) may have alarms attached to one state or the other. Pulse data (e.g. counting revolutions of a meter) is normally accumulated or counted. The primary interface to the operator is a graphical display (mimic) which shows a representation of the plant or equipment in graphical form. Live data is shown as graphical shapes (foreground) over a static background; as the data changes in the field, the foreground is updated.
Example: a valve may be shown as open or closed. Analog data can be shown either as a number, or graphically. The system may have many such displays, and the operator can select from the relevant ones at any time.
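As a rough sketch of how a host might treat these three data types (illustrative only; the point names and structures are invented, not part of any SCADA product):

```python
import time

trend_log = {}   # analogue points: point name -> [(timestamp, value), ...]
alarms = {}      # digital points:  point name -> "NORMAL" / "ALARM"
counters = {}    # pulse points:    point name -> accumulated count

def process_sample(point, kind, value, timestamp=None):
    """File one incoming sample according to its data type."""
    ts = timestamp or time.time()
    if kind == "analogue":                       # trended (graphed over time)
        trend_log.setdefault(point, []).append((ts, value))
    elif kind == "digital":                      # alarm attached to one state
        alarms[point] = "ALARM" if value else "NORMAL"
    elif kind == "pulse":                        # accumulated / counted
        counters[point] = counters.get(point, 0) + value

process_sample("tank_level", "analogue", 7.3)
process_sample("pump_trip", "digital", True)
process_sample("flow_meter", "pulse", 12)
```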


Contents
========
1. Systems concepts
2. Human Machine Interface
3. Hardware solutions
4. System components
4.1 Remote Terminal Unit (RTU)
4.2 Master Station
5. Operational philosophy
6. Communication infrastructure and methods
7. Future trends in SCADA

Systems concepts
A SCADA system includes input/output signal hardware, controllers, HMI, networks, communication, database and software.
The term SCADA usually refers to a central system that monitors and controls a complete site or a system spread out over a long distance (kilometres/miles). The bulk of the site control is actually performed automatically by a Remote Terminal Unit (RTU) or by a Programmable Logic Controller (PLC). Host control functions are almost always restricted to basic site over-ride or supervisory level capability. For example, a PLC may control the flow of cooling water through part of an industrial process, but the SCADA system may allow an operator to change the control set point for the flow, and will allow any alarm conditions such as loss of flow or high temperature to be recorded and displayed. The feedback control loop is closed through the RTU or PLC; the SCADA system monitors the overall performance of that loop.

Data acquisition begins at the RTU or PLC level and includes meter readings and equipment statuses that are communicated to SCADA as required. Data is then compiled and formatted in such a way that a control room operator using the HMI can make appropriate supervisory decisions that may be required to adjust or over-ride normal RTU (PLC) controls. Data may also be collected into a Historian, often built on a commodity Database Management System, to allow trending and other analytical work.
SCADA systems typically implement a distributed database, commonly referred to as a tag database, which contains data elements called tags or points. A point represents a single input or output value monitored or controlled by the system. Points can be either "hard" or "soft". A hard point is representative of an actual input or output connected to the system, while a soft point represents the result of logic and math operations applied to other hard and soft points. Most implementations conceptually remove this distinction by making every property a "soft" point (expression) that can equal a single "hard" point in the simplest case. Point values are normally stored as value-timestamp combinations; the value and the timestamp when the value was recorded or calculated. A series of value-timestamp combinations is the history of that point. It's also common to store additional metadata with tags such as: path to field device and PLC register, design time comments, and even alarming information.
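A minimal sketch of such a tag with a value-timestamp history, plus a "soft" point derived from two "hard" points, is given below. The class, point names and register paths are invented for illustration; real tag databases carry considerably more metadata.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Tag:
    """A point with a value-timestamp history."""
    name: str
    source: str = ""                     # e.g. path to the field device / PLC register
    history: list = field(default_factory=list)   # [(timestamp, value), ...]

    def update(self, value, ts=None):
        self.history.append((ts or time.time(), value))

    @property
    def value(self):
        return self.history[-1][1] if self.history else None

# Two "hard" points fed from the field, and a "soft" point computed from them.
flow_in = Tag("flow_in", "plc1/hr40001")
flow_out = Tag("flow_out", "plc1/hr40002")
flow_in.update(12.5)
flow_out.update(11.0)
net_flow = flow_in.value - flow_out.value    # soft point = expression over hard points
print("net flow:", net_flow)
```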
It is possible to purchase a SCADA system, or Distributed Control System (DCS) from a single supplier. It is more common to assemble a SCADA system from hardware and software components like Allen-Bradley or GE PLCs, HMI packages from Wonderware, Rockwell Automation, Inductive Automation, Citect, or GE. Communication typically happens over ethernet.
Human Machine Interface
A Human-Machine Interface or HMI is the apparatus which presents process data to a human operator, and through which the human operator controls the process.
The HMI industry was essentially born out of a need for a standardized way to monitor and to control multiple remote controllers, PLCs and other control devices. While a PLC does provide automated, pre-programmed control over a process, they are usually distributed across a plant, making it difficult to gather data from them manually. Historically PLCs had no standardized way to present information to an operator. The SCADA system gathers information from the PLCs and other controllers via some form of network, and combines and formats the information. An HMI may also be linked to a database, to provide trending, diagnostic data, and management information such as scheduled maintenance procedures, logistic information, detailed schematics for a particular sensor or machine, and expert-system troubleshooting guides. Since about 1998, virtually all major PLC manufacturers have offered integrated HMI/SCADA systems, many of them using open and non-proprietary communications protocols. Numerous specialized third-party HMI/SCADA packages, offering built-in compatibility with most major PLCs, have also entered the market, allowing mechanical engineers, electrical engineers and technicians to configure HMIs themselves, without the need for a custom-made program written by a software developer.
SCADA is popular due to its compatibility and reliability. It is used in applications ranging from small ones, like controlling the temperature of a room, to large ones, such as the control of nuclear power plants.
Hardware solutions
SCADA solutions often have Distributed Control System (DCS) components. Use of "smart" RTUs or PLCs, which are capable of autonomously executing simple logic processes without involving the master computer, is increasing. A functional block programming language, IEC 61131-3, is frequently used to create programs which run on these RTUs and PLCs. Unlike a procedural language such as the C programming language or FORTRAN, IEC 61131-3 has minimal training requirements by virtue of resembling historic physical control arrays. This allows SCADA system engineers to perform both the design and implementation of a program to be executed on an RTU or PLC.
System components
The three components of a SCADA system are:
Multiple Remote Terminal Units (also known as RTUs or Outstations).
Master Station and HMI Computer(s).
Communication infrastructure


Remote Terminal Unit (RTU)
The RTU connects to physical equipment, and reads status data such as the open/closed status from a switch or a valve, reads measurements such as pressure, flow, voltage or current. By sending signals to equipment the RTU can control equipment, such as opening or closing a switch or a valve, or setting the speed of a pump.
The RTU can read digital status data or analogue measurement data, and send out digital commands or analogue setpoints.
An important part of most SCADA implementations is alarm handling. An alarm is a digital status point that has either the value NORMAL or ALARM. Alarms are defined so that they become active when their conditions are met. An example of an alarm is the "fuel tank empty" light in a car. The alarm draws the SCADA operator's attention to the part of the system requiring attention. Emails and text messages are often sent on alarm activation, alerting managers as well as the SCADA operator. A minimal sketch is given below.
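A minimal sketch of such a digital alarm point, loosely modelled on the fuel-tank example above; the limit value and the `notify` hook are stand-ins, not any vendor's API:

```python
def evaluate_alarm(tank_level: float, low_limit: float = 5.0,
                   notify=lambda msg: print("ALERT:", msg)) -> str:
    """Return NORMAL or ALARM and fire a notification when the alarm is active."""
    state = "ALARM" if tank_level < low_limit else "NORMAL"
    if state == "ALARM":
        notify(f"fuel tank low: {tank_level:.1f} L")   # e.g. email / SMS to operator
    return state

print(evaluate_alarm(3.2))    # -> ALARM (and a notification is sent)
print(evaluate_alarm(40.0))   # -> NORMAL
```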


Master Station
The term "Master Station" refers to the servers and software responsible for communicating with the field equipment (RTUs, PLCs, etc), and then to the HMI software running on workstations in the control room, or elsewhere. In smaller SCADA systems, the master station may be composed of a single PC. In larger SCADA systems, the master station may include multiple servers, distributed software applications, and disaster recovery sites.
The SCADA system usually presents the information to the operating personnel graphically, in the form of a mimic diagram. This means that the operator can see a schematic representation of the plant being controlled. For example, a picture of a pump connected to a pipe can show the operator that the pump is running and how much fluid it is pumping through the pipe at the moment. The operator can then switch the pump off. The HMI software will show the flow rate of the fluid in the pipe decrease in real time. Mimic diagrams may consist of line graphics and schematic symbols to represent process elements, or may consist of digital photographs of the process equipment overlain with animated symbols.
The HMI package for the SCADA system typically includes a drawing program that the operators or system maintenance personnel use to change the way these points are represented in the interface. These representations can be as simple as an on-screen traffic light, which represents the state of an actual traffic light in the field, or as complex as a multi-projector display representing the position of all of the elevators in a skyscraper or all of the trains on a railway. Initially, more "open" platforms such as Linux were not as widely used due to the highly dynamic development environment and because a SCADA customer that was able to afford the field hardware and devices to be controlled could usually also purchase UNIX or OpenVMS licenses. Today, all major operating systems are used for both master station servers and HMI workstations.
Operational philosophy
Instead of relying on operator intervention, or master station automation, RTUs may now be required to operate on their own to control tunnel fires or perform other safety-related tasks. The master station software is required to do more analysis of data before presenting it to operators including historical analysis and analysis associated with particular industry requirements. Safety requirements are now being applied to the system as a whole and even master station software must meet stringent safety standards for some markets.
For some installations, the costs that would result from the control system failing are extremely high; lives could even be lost. Hardware for SCADA systems is generally ruggedized to withstand temperature, vibration, and voltage extremes, but in these installations reliability is enhanced by having redundant hardware and communications channels. A failing part can be quickly identified and its functionality automatically taken over by backup hardware. A failed part can often be replaced without interrupting the process. The reliability of such systems can be calculated statistically and is stated as the mean time to failure, which is a variant of mean time between failures. The calculated mean time to failure of such high-reliability systems can be on the order of centuries, as the rough calculation below illustrates.
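As a back-of-the-envelope illustration (not a calculation from any real system), the classic Markov-model result for two identical, repairable units in parallel gives a system mean time to failure of (3λ + μ) / (2λ²), where λ is the per-unit failure rate and μ the repair rate. The per-unit MTTF and repair time below are assumed values.

```python
unit_mttf_hours = 50_000          # assumed mean time to failure of one unit (~5.7 years)
repair_hours = 24                 # assumed mean time to repair
lam, mu = 1 / unit_mttf_hours, 1 / repair_hours

# MTTF of a 1-out-of-2 redundant, repairable pair (two-unit Markov model).
system_mttf_hours = (3 * lam + mu) / (2 * lam ** 2)
print(f"system MTTF = {system_mttf_hours / 8760:,.0f} years")   # thousands of years
```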
Communication infrastructure and methods
SCADA systems have traditionally used combinations of radio and direct serial or modem connections to meet communication requirements, although Ethernet and IP over SONET are also frequently used at large sites such as railways and power stations.
This dedicated-network model has come under pressure, with some customers wanting SCADA data to travel over their pre-established corporate networks or to share the network with other applications. The legacy of the early low-bandwidth protocols remains, though. SCADA protocols are designed to be very compact, and many are designed to send information to the master station only when the master station polls the RTU. Typical legacy SCADA protocols include Modbus, RP-570 and Conitel. These communication protocols are all SCADA-vendor specific. Standard protocols are IEC 60870-5-101 or 104, Profibus and DNP3.
These communication protocols are standardised and recognised by all major SCADA vendors. Many of these protocols now contain extensions to operate over TCP/IP, although it is good security engineering practice to avoid connecting SCADA systems to the Internet so the attack surface is reduced.
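To make the polling model concrete, the sketch below hand-builds a Modbus TCP "read holding registers" request (function code 0x03) and sends it to an RTU or PLC. The host address, unit id, register address and count are placeholders, and response parsing and error handling are omitted.

```python
import socket
import struct

def read_holding_registers(host, start_addr, count, unit_id=1, port=502):
    """Poll a Modbus TCP device once and return the raw response bytes."""
    # PDU: function code 0x03, starting register address, number of registers.
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0), remaining length, unit id.
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)
    with socket.create_connection((host, port), timeout=2) as sock:
        sock.sendall(mbap + pdu)
        return sock.recv(256)

# Example (placeholder address): read_holding_registers("192.0.2.10", 0, 2)
```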
RTUs and other automatic controller devices were developed before the advent of industry-wide standards for interoperability. The result is that developers and their management created a multitude of control protocols. Among the larger vendors, there was also an incentive to create their own protocol to "lock in" their customer base. With the growing security demands on TCP/IP-based SCADA networks, industrial firewall and VPN solutions are also increasingly offered for them.
Future trends in SCADA
The trend is for PLC and HMI/SCADA software to be more "mix-and-match". In the mid 1990s, the typical DAQ I/O manufacturer offered their own proprietary communications protocols over a suitable-distance carrier like RS-485. Towards the late 1990s, the shift towards open communications continued with I/O manufacturers offering support for open message structures like Modicon MODBUS over RS-485, and by 2000 most I/O makers offered completely open interfacing such as Modicon MODBUS over TCP/IP. The primary barriers to Ethernet TCP/IP's entrance into industrial automation (determinism, synchronization, protocol selection, environment suitability) are still a concern to a few extremely specialized applications, but for the vast majority of HMI/SCADA markets these barriers have been broken.
Recently, however, the very existence of SCADA-based systems has come into question as they are increasingly seen as extremely vulnerable to cyberwarfare/cyberterrorism attacks. Given the mission-critical nature of a large number of SCADA systems, such attacks could, in a worst-case scenario, cause massive financial losses through loss of data or actual physical destruction, misuse or theft, and even loss of life, either directly or indirectly. Whether such concerns will cause a move away from the use of SCADA systems for mission-critical applications towards more secure architectures and configurations remains to be seen, given that at least some influential people in corporate and governmental circles believe that the benefits and lower initial costs of SCADA-based systems still outweigh the potential costs and risks.

CONCLUSION
Potential benefits of SCADA
The benefits one can expect from adopting a SCADA system for the control of experimental physics facilities can be summarised as follows:
Rich functionality and extensive development facilities: the amount of effort invested in a SCADA product amounts to 50 to 100 person-years.
The amount of specific development that needs to be performed by the end-user is limited, especially with suitable engineering.
Enhanced reliability and robustness.
Technical support and maintenance by the vendor.
For large collaborations, as for the CERN LHC experiments, using a SCADA system for their controls ensures a common framework not only for the development of the specific applications but also for operating the detectors: operators experience the same look and feel whatever part of the experiment they control.

Pictures

For the images which are missing in the paper, browse through the links provided on the left.