What is the difference between binary to decimal and binary to octal?
Binary to octal is easier: take the binary digits in groups of three bits, convert each group to a single digit from 0 to 7, and the resulting digits, read in order, form the octal number.
For example, take the binary number
111101001010
To convert it to octal, split it into groups of three bits:
111 is 7
101 is 5
001 is 1
010 is 2
So the octal number is 7512.
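As a rough sketch of this grouping rule in Python (the helper name bin_to_oct is made up for illustration):

```python
def bin_to_oct(bits: str) -> str:
    """Binary string to octal string by 3-bit grouping (illustrative helper)."""
    # Pad on the left so the length is a multiple of 3.
    width = (len(bits) + 2) // 3 * 3
    bits = bits.zfill(width)
    # Each 3-bit group becomes one octal digit.
    return "".join(str(int(bits[i:i + 3], 2)) for i in range(0, len(bits), 3))

print(bin_to_oct("111101001010"))  # -> 7512
```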
What is the principle for converting between binary, octal, decimal, and hexadecimal on a computer?
Binary, octal, decimal, and hexadecimal are the four number systems most commonly used with computers; their bases are 2, 8, 10, and 16 respectively. They can all be converted to one another, and the conversions rest mainly on division with remainder (when converting from decimal) and on positional weights (when converting to decimal).
The following are some common conversion methods:
Decimal to binary: repeatedly divide by 2 until the quotient is 0, then read the remainders in reverse order (see the sketch after this list).
Binary to decimal: sum by positional weight, multiplying each bit by the corresponding power of 2 (also sketched below).
Binary to octal: group the bits into threes from right to left; each group maps to one octal digit.
Binary to hexadecimal: group the bits into fours from right to left; each group maps to one hexadecimal digit.
Octal to hexadecimal: first convert the octal number to binary, then regroup the binary into fours to get hexadecimal.
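A minimal Python sketch of the first two rules, assuming non-negative integers (the function names are made up for illustration):

```python
def dec_to_bin(n: int) -> str:
    """Decimal to binary: divide by 2 repeatedly, read remainders in reverse."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))  # remainder = next bit, low-order first
        n //= 2
    return "".join(reversed(remainders))

def bin_to_dec(bits: str) -> int:
    """Binary to decimal: sum each bit times the matching power of 2."""
    return sum(int(b) * 2 ** i for i, b in enumerate(reversed(bits)))

print(dec_to_bin(796))           # -> 1100011100
print(bin_to_dec("1100011100"))  # -> 796
```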
For example:
Decimal 796 to binary: 796 ÷ 2 = 398 remainder 0; 398 ÷ 2 = 199 remainder 0; 199 ÷ 2 = 99 remainder 1; …; reading the remainders from last to first gives 1100011100.
Binary 1100011100 to decimal: (1 × 2^9) + (1 × 2^8) + (0 × 2^7) + … + (0 × 2^0) = 512 + 256 + 16 + 8 + 4 = 796.
Octal 14 to hexadecimal: each octal digit expands to three bits (1 → 001, 4 → 100), giving 001100; regrouped into fours from the right, that is 0000 1100, which is hexadecimal C.
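For a quick cross-check, Python's built-in conversions reproduce all three worked examples:

```python
print(bin(796))              # 0b1100011100
print(int("1100011100", 2))  # 796
print(hex(int("14", 8)))     # 0xc  (octal 14 is hexadecimal C)
```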