What Is the Importance of iPhones and Android in Contemporary Society?

The modes of communication changed with the invention of the cell phone many years ago. As the ways of communicating improved, cell phones also brought in a number of applications that had a profound impact on the lifestyle of the average person.

Today, Android phones and iPhones are all the rage, and everyone aspires to hold one in their hand. These two devices are considered "trend-setters" for the fourth generation of cell phone users. Many technology experts claim that iPhones are playing a major role in our lives and that those who own one have perhaps the best mobile broadband technology in their hands.
Android phones and iPhones ensure greater connectivity and go a step further with their advanced online options. iPhone users can constantly communicate with their friends and other contacts through social networks and email. iPhones are known to offer the highest level of connectivity compared to other communication devices.
The accessibility of the iPhone works exceptionally well: new information is processed and presented in the shortest span of time. It allows users to access significant facts and stay updated round the clock. This means that you, as an iPhone user, can stay updated on stock market movements, train delays, weather forecasts and everything else that matters to you.
The iPhone's benefits do not end here; there is a whole host of applications that users can purchase and download from the App Store. These applications are not only useful for people who need organizational tools or information, but have also driven businesses forward.
The App Store has enabled professionals to organize their calendars and tasks and to improve other forms of productivity. It has changed conventional means of management, as professionals can establish processes for monitoring sales or collect information on market movements.
On the other hand, Android has its own benefits too. Android features Google Voice, just like the iPhone. Many users report that the experience of using Google Voice is far better on Android than on the iPhone because it integrates directly with the OS. When a user selects a contact from Google Maps or a phone number in the browser, the call is placed through Google Voice rather than being connected to an incorrect contact.
Using the Android Market, or even third-party websites such as AppBrain, you can easily search for any application and download it right away on your phone. Moreover, Flash support is something you cannot give up: you will need it for Flash websites such as Kongregate, and with Flash installed on your tablet or cell phone you will be able to access far more content.
There are many third-party applications that offer several advanced features on your Android phone. The best aspect of the operating system is that users can tweak it and install their own personalized version rather than the one that comes with the phone. Whether it is the interface-focused MIUI ROM or CyanogenMod, there is hardly a limit to the tweaks you can apply to your Android. Moreover, Android also gives you the ability to remove, swap, and upgrade the SD card and memory.
Pros of iPhone:
  • Bigger, brighter screen
  • First class camera
  • Reliable user interface
  • Greater applications market
  • Easy upgrading options for apps
  • Retina display
  • Easy over-the-air (OTA) OS upgrades
  • Greater market for accessories
  • Aesthetically pleasing
  • Reliable battery life
Cons of iPhone:
  • Only offers touch screen
  • Controlled directly by Apple
  • Delicate
  • No SD slot
  • Pricey
  • Third party APIs cannot be accessed
  • Does not support Flash
Pros of Android:
  • Physical keyboard
  • Available on high- to low-end smartphones
  • SD card support across an assortment of phones
  • User interface customization
  • Large application market
  • Third party APIs can be easily tweaked
  • Usually, lower costs
  • Usually, more freedom
Cons of Android:
  • OS fragmentation (incompatible versions across devices)
  • Less user interface reliability
  • Variable battery life
  • Often weaker cameras
  • Variable processor speeds

Image Processing and Robotics





What is Image Processing?

In the future, the world will be full of robots that can do work for humankind without fail. Robots will do things like humans, for humans, so they need human-like elements such as eyes, ears, a nose, and hands. With these they can work with efficiency and accuracy. Now let's see how a robot works with its "eyes," gathering information about the objects in front of its camera.

The camera captures an image of the object; the robot then analyzes the image, extracts information from it, and performs its task based on that information. This field is called image processing in robotics, or image processing for robots. We will look at image processing first and then at its implementation in robotics.

In electrical engineering and computer science, image processing is any form of signal processing for which the input is an image, such as photographs or frames of video; the output of image processing can be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it. Image processing usually refers to digital image processing, but optical and analog image processing are also possible.
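To ground this definition, here is a minimal, hedged Java sketch of "image in, image out": it reads a photograph, treats it as a two-dimensional signal, and applies a 3x3 box blur. The file names are placeholders.

    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    public class BoxBlur {
        public static void main(String[] args) throws Exception {
            BufferedImage in = ImageIO.read(new File("input.jpg"));   // placeholder path
            BufferedImage out = new BufferedImage(in.getWidth(), in.getHeight(),
                    BufferedImage.TYPE_INT_RGB);
            // Treat the image as a 2D signal: each output sample is the mean
            // of the 3x3 neighbourhood of the corresponding input sample.
            for (int y = 1; y < in.getHeight() - 1; y++) {
                for (int x = 1; x < in.getWidth() - 1; x++) {
                    int r = 0, g = 0, b = 0;
                    for (int dy = -1; dy <= 1; dy++) {
                        for (int dx = -1; dx <= 1; dx++) {
                            int p = in.getRGB(x + dx, y + dy);
                            r += (p >> 16) & 0xFF;
                            g += (p >> 8) & 0xFF;
                            b += p & 0xFF;
                        }
                    }
                    out.setRGB(x, y, ((r / 9) << 16) | ((g / 9) << 8) | (b / 9));
                }
            }
            ImageIO.write(out, "png", new File("output.png"));        // image in, image out
        }
    }

Averaging the nine surrounding samples at each position is about the simplest standard signal-processing operation one can apply to an image viewed this way.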

In the field of industrial robotics, the interaction between man and machine typically consists of programming and maintaining the machine by the human operator. For safety reasons, a direct contact between the working robot and the human has to be prevented. As long as the robots act out pre-programmed behaviours only, a direct interaction between man and machine is not necessary anyway.

However, if the robot is to assist a human, e.g. in a complex assembly task, it is necessary to have means of exchanging information about the current scenario between man and machine in real time. For this purpose, the classical computer devices like keyboard, mouse and monitor are not the best choice, as they require an encoding and decoding of information: if, for instance, the human operator wants the robot to grasp an object, the object's position would first have to be encoded, e.g. typed in as coordinates. This way of transmitting information to the machine is not only unnatural but also error-prone.

If the robot is equipped with a camera system, it would be much more intuitive to just point to the object to grasp and let the robot detect its position visually.


Introduction

Modern digital technology has made it possible to manipulate multi-dimensional signals with systems that range from simple digital circuits to advanced parallel computers. The goal of this manipulation can be divided into three categories:

  • Image Processing: image in → image out
  • Image Analysis: image in → measurements out
  • Image Understanding: image in → high-level description out

We will focus on the fundamental concepts of image processing. Space does not permit us to make more than a few introductory remarks about image analysis. Image understanding requires an approach that differs fundamentally from the theme of this book. Further, we will restrict ourselves to two-dimensional (2D) image processing, although most of the concepts and techniques that are to be described can be extended easily to three or more dimensions.


We begin with certain basic definitions. An image defined in the "real world" is considered to be a function of two real variables, for example, a(x,y) with a as the amplitude (e.g. brightness) of the image at the real coordinate position (x,y). An image may be considered to contain sub-images, sometimes referred to as regions of interest (ROIs), or simply regions.

This concept reflects the fact that images frequently contain collections of objects, each of which can be the basis for a region. In a sophisticated image processing system it should be possible to apply specific image processing operations to selected regions. Thus one part of an image (region) might be processed to suppress motion blur while another part might be processed to improve color rendition.
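As a concrete illustration of region-based processing, the following Java sketch applies different amplitude adjustments to two regions of one image. The region coordinates and the operations are illustrative assumptions, not from the original text; BufferedImage.getSubimage returns a view that shares pixel storage with the parent, so edits to a region land in the full image.

    import java.awt.image.BufferedImage;

    public class RoiDemo {
        // Process two regions of one image differently. getSubimage returns a
        // view sharing pixel storage, so writing to a region edits the parent.
        static void processRegions(BufferedImage img) {
            int w = img.getWidth(), h = img.getHeight();
            BufferedImage top = img.getSubimage(0, 0, w, h / 2);          // illustrative region
            BufferedImage bottom = img.getSubimage(0, h / 2, w, h / 2);   // illustrative region
            shiftBrightness(top, 20);     // e.g. lift shadows in one region
            shiftBrightness(bottom, -20); // e.g. tame highlights in another
        }

        static void shiftBrightness(BufferedImage roi, int amount) {
            for (int y = 0; y < roi.getHeight(); y++) {
                for (int x = 0; x < roi.getWidth(); x++) {
                    int p = roi.getRGB(x, y);
                    int r = clamp(((p >> 16) & 0xFF) + amount);
                    int g = clamp(((p >> 8) & 0xFF) + amount);
                    int b = clamp((p & 0xFF) + amount);
                    roi.setRGB(x, y, (r << 16) | (g << 8) | b);
                }
            }
        }

        static int clamp(int v) { return Math.max(0, Math.min(255, v)); }
    }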


The amplitudes of a given image will almost always be either real numbers or integer numbers. The latter is usually a result of a quantization process that converts a continuous range (say, between 0 and 100%) to a discrete number of levels. In certain image-forming processes, however, the signal may involve photon counting which implies that the amplitude would be inherently quantized. In other image forming procedures, such as magnetic resonance imaging, the direct physical measurement yields a complex number in the form of a real magnitude and a real phase. For the remainder of this book we will consider amplitudes as reals or integers unless otherwise indicated.
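The quantization step described above is easy to state in code. A minimal sketch, assuming a uniform quantizer over a continuous range normalized to [0, 1] and 256 output levels (the 8-bit case):

    public class Quantize {
        // Map a continuous amplitude a in [0.0, 1.0] to one of `levels` integers.
        static int quantize(double a, int levels) {
            int q = (int) Math.floor(a * levels);  // uniform quantizer
            return Math.min(q, levels - 1);        // a == 1.0 maps to the top level
        }

        public static void main(String[] args) {
            System.out.println(quantize(0.0, 256));  // 0
            System.out.println(quantize(0.5, 256));  // 128
            System.out.println(quantize(1.0, 256));  // 255
        }
    }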

UDDI (Universal Description, Discovery and Integration)



What Is UDDI?

Universal Description, Discovery and Integration (UDDI) is a project to encourage interoperability and adoption of web services. It consists of a standards-based set of specifications which allow service description and discovery. The project was initiated by Ariba, IBM and Microsoft. It has now grown to a partnership of over 300 community members who collaborate to develop the specification. UDDI is not just a specification though. As part of its charter, it is also backed with a set of publicly available internet-based implementations. A UDDI registry is a set of one or more implementations which comply with the UDDI specification. The "UDDI Business Registry" is the name of a special set of UDDI implementations which member companies have provided for public access. Each implementation interoperates to share registrations.
UDDI addresses a number of business problems. For the mid-sized manufacturer that interacts with numerous on-line customers, each of whom has a differing interface with its own set of standards and protocols, UDDI provides a way to broaden and simplify B2B interaction through web service description. For the flower shop in Australia that wants to be "plugged in" to every marketplace in the world but doesn't know how, UDDI offers a simple one-stop location for service discovery. For the B2B marketplace that needs to get catalog data for relevant suppliers in its industry, along with connections to shippers, insurers, etc., UDDI offers easy integration. And because UDDI provides a standards-based profile for all electronic services, it enhances accessibility.
The vision of the UDDI organization is a three-stage process. The first stage was to build the specification on a set of recognized standards, such as XML and SOAP, and a shared vision of open protocols. Next, UDDI has been expanded into a common web services "stack": interoperating implementations compliant with the UDDI specification avoid confusing customers, and the specifications are public and developed in an inclusive process. The final part of the vision is to transition the work to a standards body after completing three cycles of implementation-backed specification development. UDDI is currently developing this third version of the specification.
Version 1 of the UDDI Specification was published in September, 2000. Public registry site implementations of the UDDI Business Registry have been in continuous operation ever since. Both IBM and Microsoft currently operate "nodes" (implementations) which make up the registry. The UDDI Version 2 Specification was published in June of 2001. Availability of Beta implementations is imminent. Both Hewlett Packard and SAP have announced their intention to operate public registry implementations as part of the UDDI Business Registry in the version 2 time frame as well. Version 3 of the UDDI Specification is currently under development.

How UDDI Works

A UDDI registry contains programmatic descriptions of businesses and the services they support. It also contains references to specifications (called Technical Models, or "tModels") which describe how web services work. It is built upon a programming model and schema which are platform and language agnostic.
Business and standards organizations are the primary source of registry information. Populating the registry has several steps. Software companies, standards bodies and programmers populate the registry with descriptions of various tModels which describe specifications common to an industry vertical or business. Businesses populate the registry with descriptions of the services they support. UDDI assigns a universally unique identifier (UUID) to each tModel and business registration and stores them in the registry. Marketplaces, search engines, and business applications then query the registry to discover the services of other companies. Businesses then use this data to facilitate easier integration with each other over the web. This can become a dynamic process where search and discovery automatically adapt to the available services.
Business and service data in the registry can be thought of in three categories: "white pages," "yellow pages," and "green pages."

White pages contain information about a business, such as its name, a set of multi-language text descriptions, and contact information such as addresses, phone numbers, fax numbers, web sites, etc. They also contain a set of optional identifiers by which a business may be known, such as a Dun & Bradstreet D-U-N-S® number.

Yellow pages consist of business categorizations. UDDI supports three built-in standard taxonomies in version 1: the NAICS taxonomy of industry codes, the UN/SPSC taxonomy of products and services, and a geographical taxonomy of location codes based on ISO 3166. Taxonomies are implemented as name-value pairs, and any valid taxonomy data can be attached to a business's white page or to a business service.

Finally, green pages specify how to bind to a service provider. They contain technical information about how to invoke a business's service, including references to specifications for web services and support for pointers to various file- and URL-based discovery mechanisms, if required. UDDI uses a nested data model of businesses, their services and the related service binding information.
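The nested model is easy to picture in code. The hedged Java sketch below mirrors the businessEntity → businessService → bindingTemplate → tModel nesting described above; the class and field names echo the spec's element names, but this is an illustration of the data model, not the actual UDDI schema or any client library's API.

    import java.util.List;
    import java.util.UUID;

    // Illustrative mirror of UDDI's nested data model (not the real schema classes).
    class BusinessEntity {                 // "white pages" + "yellow pages"
        UUID businessKey;                  // assigned by the registry
        String name;
        String description;
        List<String> categoryBag;          // taxonomy entries, e.g. NAICS codes
        List<BusinessService> services;
    }

    class BusinessService {                // a service the business supports
        UUID serviceKey;
        String name;
        List<BindingTemplate> bindings;    // "green pages"
    }

    class BindingTemplate {                // how to bind to and invoke the service
        UUID bindingKey;
        String accessPoint;                // e.g. the URL of the service endpoint
        List<UUID> tModelKeys;             // references to the specs the service implements
    }

    class TModel {                         // a Technical Model: a reusable spec reference
        UUID tModelKey;
        String name;
        String overviewUrl;                // where the actual specification lives
    }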

Android


Introduction

Android is a software platform and operating system for mobile devices, based on the Linux kernel, and developed by Google and later the Open Handset Alliance. It allows developers to write managed code in the Java language, controlling the device via Google-developed Java libraries. Applications written in C and other languages can be compiled to ARM native code and run, but this development path isn't officially supported by Google. Android is a software stack for mobile devices that includes an operating system, middleware and key applications. This beta version of the Android SDK provides the tools and APIs necessary to begin developing applications on the Android platform using the Java programming language. The unveiling of the Android platform on 5 November 2007 was announced with the founding of the Open Handset Alliance, a consortium of 48 hardware, software, and telecom companies devoted to advancing open standards for mobile devices. Google has made most of the Android platform available under the Apache free-software and open-source license.

Android Architecture

The major components of the Android operating system are described below.

Features

  • Handset layouts: The platform is adaptable to both larger, VGA displays and traditional smartphone layouts, and provides a 2D graphics library and a 3D graphics library based on the OpenGL ES 1.0 specification.
  • Storage: The database software SQLite is used for data storage purposes.
  • Connectivity: Android supports a wide variety of connectivity technologies including GSM/EDGE, CDMA, EV-DO, UMTS, Bluetooth, and Wi-Fi.
  • Messaging: SMS and MMS are available forms of messaging, including threaded text messaging.
  • Web browser: The web browser available in Android is based on the open-source WebKit application framework.
  • Dalvik virtual machine: Software written in Java can be compiled into Dalvik bytecode and executed in the Dalvik virtual machine, a specialized VM implementation designed for mobile device use, although not technically a standard Java Virtual Machine.
  • Media support: Android will support advanced audio/video/still media formats such as MPEG-4, H.264, MP3, AAC, OGG, AMR, JPEG, PNG, and GIF.
  • Additional hardware support: Android is fully capable of utilizing video/still cameras, touch screens, GPS, accelerometers, and accelerated 3D graphics.
  • Development environment: Includes a device emulator, tools for debugging, memory and performance profiling, and a plug-in for the Eclipse IDE.
  • Market: Similar to the App Store on the iPhone, the Android Market is a catalog of applications that can be downloaded and installed directly to target hardware over the air, without the use of a PC. Currently only freeware applications are supported, although Google has announced plans to allow developers to offer paid applications as well.

Open Handset Alliance and Android

The barrier to open application development began to crumble in November 2007 when Google, under the Open Handset Alliance, released Android. The Open Handset Alliance is a group of hardware and software developers, including Google, NTT DoCoMo, Sprint Nextel, and HTC, whose goal is to create a more open cell phone environment. The first product to be released under the alliance is the mobile device operating system, Android. (For more information about the Open Handset Alliance, see www.openhandsetalliance.com.)
With the release of Android, Google made available a host of development tools and tutorials to ease would-be developers onto the new system. Help files, the platform software development kit (SDK), and even a developers' community can be found at Google's Android website, http://code.google.com/android. This site should be your starting point, and I highly encourage you to visit it.

Brief History of Embedded Device Programming

For a long time, cell phone developers comprised a small sect of a slightly larger group of developers known as embedded device developers. Seen as a less "glamorous" sibling to desktop (and later web) development, embedded device development typically got the proverbial short end of the stick as far as hardware and operating system features, because embedded device manufacturers were notoriously stingy on feature support. Embedded device manufacturers typically needed to guard their hardware secrets closely, so they gave embedded device developers few libraries to call when trying to interact with a specific device.

Embedded devices differ from desktops in that an embedded device is typically a "computer on a chip." For example, consider your standard television remote control; it is not really seen as an overwhelming achievement of technological complexity. When any button is pressed, a chip interprets the signal in a way that has been programmed into the device. This allows the device to know what to expect from the input device (keypad) and how to respond to those commands (for example, turn on the television). This is a simple form of embedded device programming. However, believe it or not, simple devices such as these are definitely related to the roots of early cell phone devices and development.

Most embedded devices ran (and in some cases still run) proprietary operating systems. The reason for choosing to create a proprietary operating system rather than use any consumer system was really a product of necessity. Simple devices did not need very robust and optimized operating systems. As a product of device evolution, many of the more complex embedded devices, such as early PDAs, household security systems, and GPSs, moved to somewhat standardized operating system platforms about five years ago. Small-footprint operating systems such as Linux, or even an embedded version of Microsoft Windows, have become more prevalent on many embedded devices.

Around this time in device evolution, cell phones branched from other embedded devices onto their own path. This branching is evident when you examine their architecture. Nearly since their inception, cell phones have been fringe devices insofar as they run on proprietary software: software that is owned and controlled by the manufacturer and is almost always considered to be a "closed" system. The practice of manufacturers using proprietary operating systems began more out of necessity than any other reason. That is, cell phone manufacturers typically used hardware that was completely developed in-house, or at least hardware that was specifically developed for the purposes of running cell phone equipment. As a result, there were no openly available, off-the-shelf software packages or solutions that would reliably interact with their hardware.
Since the manufacturers also wanted to guard their hardware trade secrets very closely, some of which could be revealed by allowing access to the software level of the device, the common practice was, and in most cases still is, to use completely proprietary and closed software to run their devices. The downside to this is that anyone who wanted to develop applications for cell phones needed to have intimate knowledge of the proprietary environment within which it was to run. The solution was to purchase expensive development tools directly from the manufacturer. This isolated many of the "homebrew" developers. (A growing culture of homebrew developers has embraced cell phone application development. The term "homebrew" refers to the fact that these developers typically do not work for a cell phone development company and generally produce small, one-off products on their own time.)

Another, more compelling "necessity" that kept cell phone development out of the hands of the everyday developer was the hardware manufacturers' solution to the "memory versus need" dilemma. Until recently, cell phones did little more than place and receive phone calls, track your contacts, and possibly send and receive short text messages; they were not really the "Swiss army knives" of technology they are today. Even as late as 2002, cell phones with cameras were not commonly found in the hands of consumers. By 1997, small applications such as calculators and games (Tetris, for example) had crept their way onto cell phones, but the overwhelming function was still that of the phone itself. Cell phones had not yet become the multiuse, multifunction personal tools they are today. No one yet saw the need for Internet browsing, MP3 playing, or any of the multitude of functions we are accustomed to using today. It is possible that the cell phone manufacturers of 1997 did not fully perceive the need consumers would have for an all-in-one device. However, even if the need was present, a lack of device memory and storage capacity was an even bigger obstacle to overcome. More people may have wanted their devices to be all-in-one tools, but manufacturers still had to climb the memory hurdle.

To put the problem simply, it takes memory to store and run applications on any device, cell phones included. Until recently, cell phones did not have the amount of memory available to them that would facilitate the inclusion of "extra" programs. Within the last two years, the price of memory has reached very low levels, and device manufacturers now have the ability to include more memory at lower prices. Many cell phones now have more standard memory than the average PC had in the mid-1990s.

So, now that we have the need and the memory, we can all jump in and develop cool applications for cell phones around the world, right? Not exactly. Device manufacturers still closely guard the operating systems that run on their devices. While a few have opened up to the point where they will allow some Java-based applications to run within a small environment on the phone, many do not allow this. Even the systems that do allow some Java apps to run do not allow the kind of access to the "core" system that standard desktop developers are accustomed to having. While cell phones running Linux, Windows, and even PalmOS are easy to find, as of this writing, no hardware platforms have been announced for Android to run on.
HTC, LG Electronics, Motorola, and Samsung are members of the Open Handset Alliance, under which Android has been released, so we can only hope that they have plans for a few Android-based devices in the near future. With its release in November 2007, the system itself is still in a software-only beta. This is good news for developers, because it gives us a rare advance look at a future system and a chance to begin developing applications that will run as soon as the hardware is released.

System and Software Requirements

To develop Android applications using the code and tools in the Android SDK, you need a suitable development computer and development environment, as described below.

Supported operating systems:
  • Windows XP or Vista
  • Mac OS X 10.4.8 or later (x86 only)
  • Linux (tested on Linux Ubuntu Dapper Drake)

Supported development environments:
  • Eclipse IDE
      • Eclipse 3.2, 3.3 (Europa)
      • Eclipse JDT Plugin (included in most Eclipse IDE packages)
      • JDK 5 or JDK 6 (JRE alone is not sufficient)
      • Not compatible with Gnu Compiler for Java (gcj)
      • Android Development Tools plugin (optional)
  • Other development environments or IDEs
      • JDK 5 or JDK 6 (JRE alone is not sufficient)
      • Not compatible with Gnu Compiler for Java (gcj)
      • Apache Ant 1.6.5 or later for Linux and Mac, 1.7 or later for Windows

Anatomy of an Android Application

There are four building blocks to an Android application:
  • Activity
  • Intent Receiver
  • Service
  • Content Provider

Not every application needs to have all four, but your application will be written with some combination of these. Once you have decided what components you need for your application, you should list them in a file called AndroidManifest.xml. This is an XML file where you declare the components of your application and what their capabilities and requirements are. See the Android manifest file documentation for complete details.

Activity

Activities are the most common of the four Android building blocks. An activity is usually a single screen in your application. Each activity is implemented as a single class that extends the Activity base class. Your class will display a user interface composed of Views and respond to events. Most applications consist of multiple screens. For example, a text messaging application might have one screen that shows a list of contacts to send messages to, a second screen to write the message to the chosen contact, and other screens to review old messages or change settings. Each of these screens would be implemented as an activity. Moving to another screen is accomplished by starting a new activity. In some cases an activity may return a value to the previous activity; for example, an activity that lets the user pick a photo would return the chosen photo to the caller.

When a new screen opens, the previous screen is paused and put onto a history stack. The user can navigate backward through previously opened screens in the history. Screens can also choose to be removed from the history stack when it would be inappropriate for them to remain. Android retains history stacks for each application launched from the home screen.
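As a minimal sketch of a single-screen activity: the class name and the R.layout.main resource are assumptions, and the onCreate signature follows later SDK releases, which may differ slightly from the beta described here.

    import android.app.Activity;
    import android.os.Bundle;

    // One screen of the application: a single class extending Activity.
    public class ContactListActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // R.layout.main is an assumed layout resource describing the Views
            // that make up this screen.
            setContentView(R.layout.main);
        }
    }

The activity would also be declared in the application's AndroidManifest.xml alongside its other components.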
Intent and Intent Filters

Android uses a special class called an Intent to move from screen to screen. An intent describes what an application wants done. The two most important parts of the intent data structure are the action and the data to act upon. Typical values for action are MAIN (the front door of the activity), VIEW, PICK, EDIT, etc. The data is expressed as a URI. For example, to view contact information for a person, you would create an intent with the VIEW action and the data set to a URI representing that person.

There is a related class called an IntentFilter. While an intent is effectively a request to do something, an intent filter is a description of what intents an activity (or intent receiver, see below) is capable of handling. An activity that is able to display contact information for a person would publish an IntentFilter saying that it knows how to handle the action VIEW when applied to data representing a person. Activities publish their IntentFilters in the AndroidManifest.xml file.

Navigating from screen to screen is accomplished by resolving intents. To navigate forward, an activity calls startActivity(myIntent), as sketched after the list below. The system then looks at the intent filters for all installed applications and picks the activity whose intent filters best match myIntent. The new activity is informed of the intent, which causes it to be launched. The process of resolving intents happens at run time when startActivity is called, which offers two key benefits:
  • Activities can reuse functionality from other components simply by making a request in the form of an Intent.
  • Activities can be replaced at any time by a new Activity with an equivalent IntentFilter.
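A hedged sketch of forward navigation by intent resolution: the contact URI is a placeholder, and the Intent.ACTION_VIEW constant name follows released SDKs (the beta text above refers to the action simply as VIEW).

    import android.app.Activity;
    import android.content.Intent;
    import android.net.Uri;

    public class ViewContactExample extends Activity {
        void viewContact() {
            // The action (VIEW) plus a data URI describe what we want done;
            // the system resolves this against all published IntentFilters
            // and launches the best match.
            Intent myIntent = new Intent(Intent.ACTION_VIEW,
                    Uri.parse("content://contacts/people/1")); // placeholder URI
            startActivity(myIntent);
        }
    }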
Intent Receiver

You can use an IntentReceiver when you want code in your application to execute in reaction to an external event, for example, when the phone rings, when the data network is available, or when it's midnight. Intent receivers do not display a UI, although they may use the NotificationManager to alert the user if something interesting has happened. Intent receivers are registered in AndroidManifest.xml, but you can also register them from code using Context.registerReceiver(). Your application does not have to be running for its intent receivers to be called; the system will start your application, if necessary, when an intent receiver is triggered. Applications can also send their own intent broadcasts to others with Context.broadcastIntent().

Service

A Service is code that is long-lived and runs without a UI. A good example of this is a media player playing songs from a playlist. In a media player application, there would probably be one or more activities that allow the user to choose songs and start playing them. However, the music playback itself should not be handled by an activity, because the user will expect the music to keep playing even after navigating to a new screen. In this case, the media player activity could start a service using Context.startService() to run in the background to keep the music going. The system will then keep the music playback service running until it has finished. (You can learn more about the priority given to services in the system by reading Lifecycle of an Android Application.) Note that you can connect to a service (and start it if it's not already running) with the Context.bindService() method. When connected to a service, you can communicate with it through an interface exposed by the service. For the music service, this might allow you to pause, rewind, etc.

Content Provider

Applications can store their data in files, an SQLite database, or any other mechanism that makes sense. A content provider, however, is useful if you want your application's data to be shared with other applications. A content provider is a class that implements a standard set of methods to let other applications store and retrieve the type of data that is handled by that content provider.

Lifecycle of an Android Application

In most cases, every Android application runs in its own Linux process. This process is created for the application when some of its code needs to be run, and will remain running until it is no longer needed and the system needs to reclaim its memory for use by other applications.

An important and unusual feature of Android is that an application process's lifetime is not directly controlled by the application itself. Instead, it is determined by the system through a combination of the parts of the application that the system knows are running, how important these things are to the user, and how much overall memory is available in the system. It is important that application developers understand how different application components (in particular Activity, Service, and IntentReceiver) impact the lifetime of the application's process. Not using these components correctly can result in the system killing the application's process while it is doing important work.

A common example of a process lifecycle bug is an IntentReceiver that starts a thread when it receives an Intent in its onReceiveIntent() method and then returns from the function. Once it returns, the system considers that IntentReceiver to be no longer active, and thus its hosting process no longer needed (unless other application components are active in it). Thus, it may kill the process at any time to reclaim memory, terminating the spawned thread that is running in it. The solution to this problem is to start a Service from the IntentReceiver, so the system knows that there is still active work being done in the process (a sketch appears at the end of this section).

To determine which processes should be killed when low on memory, Android places them into an "importance hierarchy" based on the components running in them and the state of those components. These are, in order of importance:

  1. A foreground process is one holding an Activity at the top of the screen that the user is interacting with (its onResume() method has been called) or an IntentReceiver that is currently running (its onReceiveIntent() method is executing). There will only ever be a few such processes in the system, and these will only be killed as a last resort, if memory is so low that not even these processes can continue to run. Generally, at this point the device has reached a memory paging state, so this action is required in order to keep the user interface responsive.
  2. A visible process is one holding an Activity that is visible to the user on-screen but not in the foreground (its onPause() method has been called). This may occur, for example, if the foreground activity has been displayed with a dialog appearance that allows the previous activity to be seen behind it. Such a process is considered extremely important and will not be killed unless doing so is required to keep all foreground processes running.
  3. A service process is one holding a Service that has been started with the startService() method. Though these processes are not directly visible to the user, they are generally doing things that the user cares about (such as background MP3 playback or background network data upload or download), so the system will always keep such processes running unless there is not enough memory to retain all foreground and visible processes.
  4. A background process is one holding an Activity that is not currently visible to the user (its onStop() method has been called). These processes have no direct impact on the user experience. Provided they implement their activity lifecycle correctly (see Activity for more details), the system can kill such processes at any time to reclaim memory for one of the three previous process types. Usually there are many of these processes running, so they are kept in an LRU list to ensure that the process most recently seen by the user is the last to be killed when running low on memory.
  5. An empty process is one that doesn't hold any active application components. The only reason to keep such a process around is as a cache to improve startup time the next time a component of its application needs to run. As such, the system will often kill these processes in order to balance overall system resources between these empty cached processes and the underlying kernel caches.

When deciding how to classify a process, the system picks the most important level of all the components currently active in the process. See the Activity, Service, and IntentReceiver documentation for more detail on how each of these components contributes to the overall lifecycle of a process.
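To make the lifecycle fix above concrete, here is a hedged sketch using the class names of later SDK releases, where IntentReceiver became BroadcastReceiver and onReceiveIntent() became onReceive(); SyncService is an assumed Service subclass of your own.

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;

    // Reacting to an external event without losing the process:
    // hand long-running work to a Service instead of a spawned thread.
    public class NetworkUpReceiver extends BroadcastReceiver {
        @Override
        public void onReceive(Context context, Intent intent) {
            // Wrong: start a worker thread here and return; the system may then
            // consider the process idle and kill it, taking the thread with it.
            // Right: start a Service, so the system knows work is still active.
            context.startService(new Intent(context, SyncService.class)); // SyncService: assumed
        }
    }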