

Wednesday, 17 July 2013

Variables and Data Storage Questions & Answers


 Where in memory are my variables stored?
Variables can be stored in several places in memory, depending on their lifetime. Variables that are defined outside any function (whether of global or file static scope), and variables that are defined inside a function as static variables, exist for the lifetime of the program's execution. These variables are stored in the "data segment." The data segment is a fixed-size area in memory set aside for these variables. The data segment is subdivided into two parts, one for initialized variables and another for uninitialized variables.
Variables that are defined inside a function as auto variables (that are not defined with the keyword static) come into existence when the program begins executing the block of code (delimited by curly braces {}) containing them, and they cease to exist when the program leaves that block of code.
Variables that are the arguments to functions exist only during the call to that function. These variables are stored on the "stack". The stack is an area of memory that starts out small and grows automatically up to some predefined limit. In DOS and other systems without virtual memory, the limit is set either when the program is compiled or when it begins executing. In UNIX and other systems with virtual memory, the limit is set by the system, and it is usually so large that it can be ignored by the programmer.
The third and final area doesn't actually store variables but can be used to store data pointed to by variables. Pointer variables that are assigned to the result of a call to the malloc() function contain the address of a dynamically allocated area of memory. This memory is in an area called the "heap." The heap is another area that starts out small and grows, but it grows only when the programmer explicitly calls malloc() or other memory allocation functions, such as calloc(). The heap can share a memory segment with either the data segment or the stack, or it can have its own segment. It all depends on the compiler options and operating system. The heap, like the stack, has a limit on how much it can grow, and the same rules apply as to how that limit is determined.
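As a rough sketch (the variable and function names here are just for illustration), the following C fragment shows where each kind of variable typically lives:

#include <stdlib.h>

int global_count = 1;        /* data segment (initialized part) */
static int file_total;       /* data segment (uninitialized part) */

void example(void)
{
    static int calls;        /* data segment; keeps its value between calls */
    int local = 0;           /* stack; exists only while example() runs */
    char *buf;               /* the pointer itself lives on the stack */

    buf = malloc(100);       /* the 100 bytes it points to live on the heap */
    free(buf);               /* heap memory must be released explicitly */
}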

 Do variables need to be initialized?
No, variables do not need to be initialized, although all variables should be given a value before they are used, and a good compiler will help you find variables that are used before they are set to a value. Variables defined outside a function, or defined inside a function with the static keyword, are initialized to 0 for you if you do not explicitly initialize them.
Automatic variables are variables defined inside a function or block of code without the static keyword. These variables have undefined values if you don't explicitly initialize them. If you don't initialize an automatic variable, you must make sure you assign to it before using the value.
Space on the heap allocated by calling malloc() contains undefined data as well and must be set to a known value before being used. Space allocated by calling calloc() is set to 0 for you when it is allocated.
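The following fragment (with made-up names) sketches these rules:

#include <stdlib.h>

int global_value;                     /* implicitly initialized to 0 */

void init_example(void)
{
    static int calls;                 /* also starts at 0 */
    int local;                        /* value is undefined until assigned */
    int *p, *q;

    local = 5;                        /* assign before reading it */
    p = malloc(10 * sizeof(int));     /* heap contents are undefined */
    q = calloc(10, sizeof(int));      /* heap contents are set to 0 */
    free(p);
    free(q);
}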

 What is page thrashing?
Some operating systems (such as UNIX or Windows in enhanced mode) use virtual memory. Virtual memory is a technique for making a machine behave as if it had more memory than it really has, by using disk space to simulate RAM (random-access memory). The 80386 and higher Intel CPU chips, and most other modern microprocessors (such as the Motorola 68030, SPARC, and PowerPC), include a piece of hardware called the Memory Management Unit, or MMU.
The MMU treats memory as if it were composed of a series of "pages." A page of memory is a block of contiguous bytes of a certain size, usually 4096 or 8192 bytes. The operating system sets up and maintains a table for each running program called the Process Memory Map, or PMM. This is a table of all the pages of memory that program can access and where each is really located.
Every time your program accesses any portion of memory, the address (called a "virtual address") is processed by the MMU. The MMU looks in the PMM to find out where the memory is really located (called the "physical address"). The physical address can be any location in memory or on disk that the operating system has assigned for it. If the location the program wants to access is on disk, the page containing it must be read from disk into memory, and the PMM must be updated to reflect this action (this is called a "page fault"). Hope you're still with me, because here's the tricky part. Because accessing the disk is so much slower than accessing RAM, the operating system tries to keep as much of the virtual memory as possible in RAM. If you're running a large enough program (or several small programs at once), there might not be enough RAM to hold all the memory used by the programs, so some of it must be moved out of RAM and onto disk (this action is called "paging out").
The operating system tries to guess which areas of memory aren't likely to be used for a while (usually based on how the memory has been used in the past). If it guesses wrong, or if your programs are accessing lots of memory in lots of places, many page faults will occur in order to read in the pages that were paged out. Because all of RAM is being used, for each page read in to be accessed, another page must be paged out. This can lead to more page faults, because now a different page of memory has been moved to disk. The problem of many page faults occurring in a short time, called "page thrashing," can drastically cut the performance of a system.
Programs that frequently access many widely separated locations in memory are more likely to cause page thrashing on a system. So is running many small programs that all continue to run even when you are not actively using them. To reduce page thrashing, you can run fewer programs simultaneously. Or you can try changing the way a large program works to maximize the capability of the operating system to guess which pages won't be needed. You can achieve this effect by caching values or changing lookup algorithms in large data structures, or sometimes by changing to a memory allocation library which provides an implementation of malloc() that allocates memory more efficiently. Finally, you might consider adding more RAM to the system to reduce the need to page out.
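As a minimal sketch of the access-pattern point (the array size is chosen only for illustration), the second function below strides across memory in large jumps, touching many more pages per unit of work than the first and making thrashing more likely when RAM is scarce:

#define ROWS 4096
#define COLS 4096
static int grid[ROWS][COLS];      /* about 64MB with 4-byte ints */

long sum_row_major(void)          /* walks memory sequentially */
{
    long sum = 0;
    int r, c;
    for (r = 0; r < ROWS; r++)
        for (c = 0; c < COLS; c++)
            sum += grid[r][c];
    return sum;
}

long sum_column_major(void)       /* strides COLS * sizeof(int) bytes per step */
{
    long sum = 0;
    int r, c;
    for (c = 0; c < COLS; c++)
        for (r = 0; r < ROWS; r++)
            sum += grid[r][c];
    return sum;
}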

 What is a const pointer?
The keyword const (a type qualifier) is a promise the programmer makes to the compiler that the value of a variable will not be changed after it is initialized. The compiler enforces that promise as best it can by refusing to compile code that modifies a variable declared const.
A "const pointer," or more correctly, a "pointer to const," is a pointer which points to data that is const (constant, or unchanging). A pointer to const is declared by putting the word const at the beginning of the pointer declaration. This declares a pointer which points to data that can't be modified. The pointer itself can be modified. The following example illustrates some legal and illegal uses of a const pointer:
const char  *str = "hello";
char  c = *str;   /* legal */
str++;            /* legal */
*str = 'a';       /* illegal */
str[1] = 'b';     /* illegal */
The first two statements here are legal because they do not modify the data that str points to. The next two statements are illegal because they modify the data pointed to by str.
Pointers to const are most often used in declaring function parameters. For instance, a function that counted the number of characters in a string would not need to change the contents of the string, and it might be written this way:
int my_strlen(const char *str)
{
        int count = 0;
        while (*str++)          /* reads, but never writes, the string */
        {
            count++;
        }
        return count;
}
Note that non-const pointers are implicitly converted to const pointers when needed, but const pointers are not converted to non-const pointers. This means that my_strlen() could be called with either a const or a non-const character pointer.
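For example, both of these calls are legal (the names here are just for illustration):

const char *msg = "hello";     /* pointer to const data */
char buffer[] = "world";       /* modifiable array of char */

int a = my_strlen(msg);        /* already a const char *, passed as is */
int b = my_strlen(buffer);     /* char * implicitly converted to const char * */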
