Sunday, 27 April 2014

VMware and its possible issues..

An error caused the resume operation to fail. Preserve the suspended state and correct the error, or discard the suspended state.

This issue may occur when the VMware virtual machine is put into a suspended state while the host machine is shutting down.

To remove the suspend state from the virtual machine:
  1. Close VMware Workstation.
  2. Locate the virtual machine's folder. 
  3. Delete the .vmss and .lck files.
  4. Open the .vmx (virtual machine configuration) file in a text editor. For more information, see Editing the .vmx file of a VMware Workstation and VMware Player virtual machine (2057902).
  5. Find the line that starts with:

    checkpoint.vmState
  6. Remove everything between the quotation marks. It should look like:

    checkpoint.vmState = ""
  7. Save and close the .vmx file.
Warning: Deleting the suspend state has the same effect as performing a hard reset of the virtual machine, or pushing the Reset button on a physical computer, in that any unsaved data in open applications is lost.


 Unable to start the VM with the following error:- The VMware Authorization Service is not running.


 
Solution:- Open Services by typing services.msc in the Run dialog and pressing Enter. Restart the VMware Authorization Service and try again.


Friday, 29 November 2013

Some C++ concepts from daily programming life



Standardization: Available and released versions of C++

Year    C++ Standard               Informal name
1998    ISO/IEC 14882:1998         C++98
2003    ISO/IEC 14882:2003         C++03
2007    ISO/IEC TR 19768:2007      C++TR1
2011    ISO/IEC 14882:2011         C++11
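
As a quick way to check which of these standards a compiler is building against, the predefined __cplusplus macro can be inspected. A minimal sketch (note that some compilers, older MSVC in particular, keep reporting 199711L unless explicitly told otherwise):

#include <iostream>

int main() {
    // __cplusplus is 199711L for C++98/03 and 201103L for C++11.
#if __cplusplus >= 201103L
    std::cout << "Compiled as C++11 or later: " << __cplusplus << "\n";
#else
    std::cout << "Compiled as C++98/03: " << __cplusplus << "\n";
#endif
}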


Managed and Unmanaged objects in C++


The concept of managed and unmanaged objects is not really a C++ concept; it comes from Microsoft .NET technology and is essentially about memory management and garbage collection.
If you are using normal, native C++, your application itself is responsible for deleting all the memory it has allocated dynamically, so the developer has to be very careful about object lifetimes. If memory is deleted sooner than its use is over, the application may crash; if memory is not deleted at all, the application leaks memory.
Environments like Java and .NET solve this problem with a garbage collector. The developer no longer has to delete objects or worry about their lifetime; the garbage collector does that for him.
In the 'native' .NET languages (like C#), the whole language works with the garbage-collector concept. To make the transition from normal, plain C++ applications to .NET easier, Microsoft added some extensions to its C++ compiler, so that C++ developers could also benefit from the advantages of .NET.
Whenever you use normal, plain C++, Microsoft talks about unmanaged, or native, C++. If you use the .NET extensions in C++, Microsoft talks about managed C++.

 If your application contains both, you have a mixed-mode application.
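
As an illustration of the native-C++ side of this, here is a minimal sketch (the Report class and function names are made up). It shows why the developer must manage lifetimes by hand in unmanaged code, and how a smart pointer can automate the cleanup; note that a smart pointer is not the .NET garbage collector, it simply removes the manual delete:

#include <memory>
#include <string>

struct Report { std::string text; };

void unmanagedStyle() {
    Report* r = new Report();   // allocated dynamically
    r->text = "quarterly";
    // ... use r ...
    delete r;                   // forget this and the memory leaks;
                                // delete too early and later use crashes
}

void managedLikeStyle() {
    std::unique_ptr<Report> r(new Report());
    r->text = "quarterly";
    // ... use r ...
}                               // memory released automatically here

int main() {
    unmanagedStyle();
    managedLikeStyle();
}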


Strict weak ordering

A binary predicate* that allows a more general interpretation of equivalence.
Typically this means you do not compare objects strictly value by value but apply a more subjective comparison. One way to achieve this is to overload operator<() (or supply a comparison functor) and put your own weaker, subjective comparison logic in it.

This concept is also used by the ordered associative containers (such as std::set and std::map) to order their keys, and by algorithms such as std::sort.
*Predicate:- a simple function that returns a boolean value, usually the result of some computation.
$Note- more to come soon on this topic.
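
A small sketch of the idea (the CaseInsensitiveLess functor is a made-up example): a comparator that ignores case defines a strict weak ordering in which "Apple" and "APPLE" are equivalent without being byte-for-byte equal, and std::set uses that ordering for its keys:

#include <algorithm>
#include <cctype>
#include <iostream>
#include <set>
#include <string>

// Compares strings ignoring case. This is a strict weak ordering:
// "Apple" and "APPLE" are equivalent keys even though they are not equal.
struct CaseInsensitiveLess {
    bool operator()(const std::string& a, const std::string& b) const {
        return std::lexicographical_compare(
            a.begin(), a.end(), b.begin(), b.end(),
            [](unsigned char x, unsigned char y) {
                return std::tolower(x) < std::tolower(y);
            });
    }
};

int main() {
    std::set<std::string, CaseInsensitiveLess> s;
    s.insert("Apple");
    s.insert("APPLE");                 // ignored: equivalent under the ordering
    std::cout << s.size() << "\n";     // prints 1
}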



To suppress a particular warning in the Visual C++ (MSVC) compiler



#pragma warning( disable : 4786)
#pragma warning( disable : 4503)
#pragma warning( disable : 4290)
#pragma warning( disable : 4996)

Note:- Use the number of the warning you want to suppress (for example C4786).
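
These pragmas are specific to the Microsoft Visual C++ compiler. A common pattern, sketched below, is to disable a warning only around the code that triggers it and then restore the previous state with push/pop:

#include <cstring>

// MSVC-specific sketch: silence warning C4996 (deprecated CRT functions
// such as strcpy) only for this function, then restore the old state.
#pragma warning(push)
#pragma warning(disable : 4996)
void legacyCopy(char* dst, const char* src) {
    strcpy(dst, src);   // would otherwise trigger C4996 on MSVC
}
#pragma warning(pop)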



Why the sizeof operator cannot be overloaded


sizeof cannot be overloaded because built-in operations, such as incrementing a pointer into an array, implicitly depend on it. Consider:
 X a[10];
 X* p = &a[3];
 X* q = &a[3];
 p++; // p points to a[4]
// thus the integer value of p must be
// sizeof(X) larger than the integer value of q

Thus, sizeof(X) could not be given a new and different meaning by the programmer without violating basic language rules.
Note:- observe how sizeof(X) is added to obtain the new pointer value.

Courtesy stroustrup.com
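
A minimal runnable sketch of the same point (the struct members are arbitrary): the byte distance between adjacent array elements is exactly sizeof(X), which is what the built-in ++ relies on:

#include <cstdio>

struct X { double d; int i; };   // any element type works

int main() {
    X a[10];
    X* p = &a[3];
    X* q = p;
    ++p;   // p now points to a[4]
    std::printf("sizeof(X)       = %lu\n", (unsigned long)sizeof(X));
    std::printf("byte difference = %lu\n",
                (unsigned long)((char*)p - (char*)q));   // same value
}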

In C++ we cannot define our own operators

Allowing this could lead to subtle bugs, because different readers may have different opinions about how such an operator should behave just by looking at it.

For example,
3 ** 2 might mean 3*2*2 to me, but to someone else it may make more sense as 3^(2*2).
So, to avoid any such confusion, C++ does not allow us to define our own operators.
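
To be clear, overloading an existing operator token for your own types is allowed; what the language forbids is inventing a brand-new token such as **. A minimal sketch (the Power type is a made-up example):

#include <cmath>
#include <iostream>

struct Power { double base; };

// Overloading an existing operator token for a user-defined type is fine...
double operator^(Power p, int exp) { return std::pow(p.base, exp); }

// ...but a new token cannot be introduced:
// double operator**(double b, int e);   // error: ** is not an operator

int main() {
    std::cout << (Power{3.0} ^ 2) << "\n";   // prints 9
}

Note that even an overloaded ^ keeps the (low) precedence of the built-in ^, which is exactly the kind of reader confusion the restriction tries to limit.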

Difference between Scope, Lifetime and Visibility of objects/variables

Lifetime is the total time between an object's creation and destruction. 
Scope is those sections of the code where an identifier (whether it represents a variable, a function or anything else) is legally accessible. 
Visibility is those sections where the identifier is in scope and has not been masked by the reuse of the identifier in a more local scope. 
Lifetime >= Scope >= Visibility. 

A global variable might have a lifetime that equals the entire execution of the program and would always be in scope, but would not be visible in a function where its name was reused for a local variable.
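
A small sketch of that last point (the names are made up): the global counter stays alive and in scope inside show(), but it is not visible there because the local counter masks it:

#include <iostream>

int counter = 10;   // lifetime: the whole program run

void show() {
    int counter = 99;                 // masks the global name
    std::cout << counter << "\n";     // prints 99 (the visible local)
    std::cout << ::counter << "\n";   // prints 10 (global reached via ::)
}

int main() {
    show();
}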



Significance of arrow operator?



In C, a->b is precisely equivalent to (*a).b. The "arrow" notation was introduced as a convenience; accessing a member of a struct via a pointer was fairly common and the arrow notation is easier to write/type, and generally considered more readable as well.
C++ adds another wrinkle as well though: operator-> can be overloaded for a struct/class. Although fairly unusual otherwise, doing so is common (nearly required) for smart pointer classes.
That's not really unusual in itself: C++ allows the vast majority of operators to be overloaded (although some almost never should be, such as operator&&, operator|| and operator,).
What is unusual is how an overloaded operator-> is interpreted. First, although a->b looks like -> is a binary operator, when you overload it in C++ it's treated as a unary operator, so the correct signature is T::operator->(), not T::operator->(U) or something of that order.
The result is interpreted somewhat unusually as well. Assuming foo is an object of some type that overloads operator->, foo->bar is interpreted as meaning (foo.operator->())->bar. That, in turn, restricts the return type of an overloaded operator->. Specifically, it must return either an instance of another class that also overloads operator-> (or a reference to such an object) or else it must return a pointer.
In the former case, a simple-looking foo->bar could actually mean "chasing" through an entire (arbitrarily long) chain of instances of objects, each of which overloads operator->, until one is finally reached that can refer to a member named bar. For an (admittedly extreme) example, consider this:
#include <iostream>

class int_proxy {
    int val;
public:
    int_proxy(): val(0) {}
    int_proxy &operator=(int n) { 
        std::cout<<"int_proxy::operator= called\n";
        val=n; 
        return *this; 
    }
};

struct fubar {
    int_proxy bar;
} instance;

struct h {
    fubar *operator->() {
        std::cout<<"used h::operator->\n";
        return &instance;
    }
};

struct g {
    h operator->() {
        std::cout<<"used g::operator->\n";
        return h();   
    }
};

struct f {
    g operator->() { 
        std::cout<<"Used f::operator->\n";
        return g();
    }
};

int main() {
    f foo;

    foo->bar=1;
}
Even though foo->bar=1; looks like a simple assignment to a member via a pointer, this program actually produces the following output:
Used f::operator->
used g::operator->
used h::operator->
int_proxy::operator= called
Clearly, in this case foo->bar is not (even close to) equivalent to a simple (*foo).bar. As is obvious from the output, the compiler generates "hidden" code to walk through the whole series of overloaded -> operators in various classes to get from foo to (a pointer to) something that has a member named bar (which in this case is also a type that overloads operator=, so we can see output from the assignment as well).

{http://stackoverflow.com/questions/13023320/the-arrow-member-operator-in-c}


Pointers vs References. When to use and why?

Note:- Not covering the definitions

1) References can never be null. 
 
  string& str1;             // Error! References must  be initialized  
  string str("xyzzy");  
  string& str1 = str;         // okay, str1 refers to str  

There is no such restriction on pointers. So whenever we use a pointer we tend to check whether it might be null before using it. We don't have to bother about this when using references; in this way references are safer to use.

2) A pointer may be reassigned to point to other objects as well. This is not the case with a reference: it stays bound to the object with which it was initialized.

Hence, use pointers when there is a possibility of referring to different objects (or to nothing at all).
Use a reference when you want to refer to exactly one object.
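
A minimal sketch of the difference described in point 2 (the variable names are made up):

#include <iostream>

int main() {
    int a = 1, b = 2;

    int* p = &a;     // a pointer can be reseated...
    p = &b;          // ...and now refers to b
    // p = NULL;     // ...and may legally be null, so callers must check

    int& r = a;      // a reference is bound to a for its whole lifetime
    r = b;           // does NOT rebind; it assigns b's value (2) into a

    std::cout << a << " " << *p << "\n";   // prints "2 2"
}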

Note:- Whenever you are implementing certain operators (such as assignment), always use references.


Wednesday, 30 October 2013

Nested Classes... brief discussion

Nested classes are used for hiding implementation details, i.e. for hiding the helper classes themselves.
PROS:

class List
{
    public:
        List(): head(NULL), tail(NULL) {}
    private:
        class Node
        {
              public:
                  int   data;
                  Node* next;
                  Node* prev;
        };
    private:
        Node*     head;
        Node*     tail;
};
Here I don't want to expose Node, because other people may decide to use the class, and that would hinder me from updating my class: anything exposed is part of the public API and must be maintained forever. By making the class private, I not only hide the implementation, I am also saying "this is mine and I may change it at any time, so you cannot use it".

CONS:-
One more example,
Let's imagine the following code :

class A
{
   public :
      class B { /* etc. */ } ;

   // etc.
} ;
Or even:
class A
{
   public :
      class B ;

   // etc.
} ;

class A::B
{
   public :

   // etc.
} ;
So:
  • Privileged access: A::B has privileged access to all members of A (methods, variables, symbols, etc.), which weakens encapsulation
  • A's scope is a candidate for symbol lookup: code inside B will see all symbols from A as possible candidates for a symbol lookup, which can confuse the code
  • Forward declaration: there is no way to forward-declare A::B without giving a full declaration of A
  • Extensibility: it is impossible to add another class A::C unless you are the owner of A
  • Code verbosity: putting classes into classes only makes headers larger. You can still separate this into multiple declarations, but there's no way to use namespace-like aliases, imports or usings.

Conclusion:-
As a conclusion, barring exceptions (e.g. when the nested class is an intimate part of the nesting class... and even then...), the flaws outweigh the perceived advantages by magnitudes.
On the pro side, you isolate this code, and if it is private you make it unusable except from the enclosing ("outside") class...
- Courtesy: Stack Overflow
------------------------------------------------------------------------------------------------------------
Reference link:- http://publib.boulder.ibm.com/infocenter/comphelp/v8v101/index.jsp?topic=%2Fcom.ibm.xlcpp8a.doc%2Flanguage%2Fref%2Fcplr061.htm

Saturday, 19 October 2013

Notes on Memory management


Note:- These notes are just bullet points for refreshing knowledge of memory management. They are not written with the intention of explaining any concept or serving as a reference.

1.       Main memory and the registers built into the processor itself are the only storage that the CPU can access directly.

2.       Registers are generally accessible within one cycle of the CPU clock.

3.       Protection of the memory space is provided by two registers-
             ·         Base register (relocation register)- holds the smallest legal physical address.
             ·         Limit register- specifies the size of the range.

4.       Protection of memory is accomplished by having the CPU hardware compare every address generated in user mode (i.e. generated by user code) against the above two registers. Any attempt to access memory outside that range is treated as a fatal error.
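
A minimal sketch of that check (the function name and numbers are made up for illustration), assuming the base register holds the smallest legal physical address and the limit register holds the size of the range:

#include <cstdio>

// True if the address generated in user mode falls inside [base, base + limit).
bool isLegal(unsigned long addr, unsigned long base, unsigned long limit) {
    return addr >= base && addr < base + limit;
}

int main() {
    std::printf("%d\n", isLegal(14346, 14000, 1000));   // 1: inside the range
    std::printf("%d\n", isLegal(15500, 14000, 1000));   // 0: trap - fatal error
}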

5.       The runtime mapping from virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU).

6.       Dynamic loading – a routine is not loaded into memory until it is called.
           ·         Relocatable linking loader – loads the desired routine into memory and updates the program's address tables to reflect this change.

7.       Static linking – the system language libraries are treated like any other object module and are combined by the loader into the binary program image.
  In dynamic linking – linking is postponed until execution time.

8. Version information of a library is included in both the program and the library. More than one version of a library can be loaded into memory, and each program uses its own version information to decide which copy of the library to use.

9. Roll in & roll out – the swapping of a process out of memory and back in again for execution.

10. Relocation register = base register of the process.
Physical address = logical address + relocation (base) register value.
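(Illustrative example: if the relocation register holds 14000, a logical address of 346 maps to physical address 14000 + 346 = 14346.)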

11. Transient operating-system code – OS code that is not required very frequently.

12. Variable partition scheme (MVT) – satisfy a request of size ‘n’ from the list of available free holes.
       There are several common strategies for this problem:
·         First fit
·         Best fit
·         Worst fit


      13. External fragmentation – it exists when there is enough total memory space to satisfy the request but the available spaces are not contiguous.
      
      14. Internal fragmentation – the unused memory that is internal to a partition. This occurs when we break physical memory into fixed-sized partitions and allocate memory in units based on block size.

      15. Compaction – shuffling the memory contents so as to place all the free memory together in one large block.

      16.  Backing store – the storage (usually a fast disk) where a swapped-out process is kept.
      
       17. Paging is a memory-management scheme that permits the physical address space of a process to be non-contiguous.
·         It avoids external fragmentation.
·         Compaction is not needed.
·         It also avoids the problem of fitting memory chunks of varying sizes onto the backing store.

       18. Frames – physical memory is broken into fixed-sized blocks called frames.
             Pages – logical memory is broken into blocks of the same size called pages.
       Page size is defined by the hardware.
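(Illustrative example: with a page size of 4096 bytes, logical address 8195 falls on page 2 at offset 3, because 8195 = 2 × 4096 + 3.)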
       
        19. Frame table – the data structure that has one entry for each physical page frame, indicating whether the latter is free or allocated and, if allocated, to which page of which process it belongs.

        20.   For protection, protection bits associated with each frame are used.

               We have access-type bits and a valid/invalid bit associated with each entry in the page table.