This module explains the different kinds of memory, some common memory organizations, and the basics of conserving memory.

In the early days of computing, the instruction memories of mainframes were incredibly small by today's standards, often only hundreds or thousands of bytes. This small capacity forced programmers to make each instruction count and to save only the values that were truly necessary. Fortunately, just as processors have increased exponentially in speed, program memory has increased similarly in capacity. Even though memory now seems abundant, there are still general practices to keep in mind when using it. Smaller platforms, such as microcontrollers, are still limited to memory capacities in the kilobytes.

How memory is organized

In most memory architectures, memory is divided into parts. The basic principle is that separate sections of memory can serve specific purposes, and each section can be accessed more efficiently. Types of memory include the following:

  • Instruction Memory is a region of memory reserved for the program's actual assembly code. This memory may have restrictions on how it can be written or accessed, because frequent changes to an application's instructions are not expected. Because the size of instruction memory is known when the program compiles, this section of memory can be segmented by hardware, software, or a combination of the two.
  • Data Memory is a region of memory where temporary variables, arrays, and other information used by a program can be stored without resorting to long-term storage (such as a hard disk). This section of memory is allocated during the course of the program as more room for data structures is needed.
  • Heap Memory is an internal memory pool that tasks dynamically allocate from as needed. As functions call other functions, the new (callee) function's data must be loaded into the CPU, and the previous (caller) function's data must be saved so that it can be restored when the callee finishes executing (on most systems this per-call record is kept in a region called the stack). The deeper function calls go, the larger this portion of memory needs to be.
Often, the heap memory and the data memory compete directly for space while the program is running, because both the depth of the function calls and the size of the data memory fluctuate with the situation. This is why it is important to return the heap memory a task uses to the memory pool when the task finishes.

Memory allocation in languages

The organization of memory varies among compilers and programming languages. In most cases, the goal of a memory management system is to make the limited resource of memory appear infinite, or at least far more abundant than it really is, so that the application programmer does not have to worry about where memory will come from. In the earliest days of mainframes, when each byte of memory was precious, a programmer might account for every address by hand to ensure there was enough room for the instructions, heap, and data. As programming languages and compilers matured, algorithms were developed to handle this task so that the computer could manage its own memory.
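A short C sketch (illustrative only; names like `table` and `task` are invented for the example) shows how a modern compiler and runtime take over this bookkeeping: the programmer declares what is needed, and the toolchain decides which region each object lives in.

```c
#include <stdlib.h>

int table[64];  /* data memory: the compiler/linker reserves this address range */

void task(void) {
    int counter = 0;  /* per-call storage: reclaimed automatically on return */

    /* Dynamic storage: the runtime allocator finds room at run time,
       so the programmer never tracks addresses by hand. */
    int *buffer = malloc(64 * sizeof *buffer);
    if (buffer != NULL) {
        buffer[0] = counter;
        free(buffer);  /* hand the block back to the allocator's pool */
    }
}
```

None of these three declarations names an address; the division into instruction, data, and dynamically managed regions happens entirely in the toolchain and runtime, which is the automation the paragraph above describes.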

Source:  OpenStax, Introduction to the texas instruments ez430. OpenStax CNX. Jun 19, 2006 Download for free at http://cnx.org/content/col10354/1.6
