Converting decimal fractions to binary on a computer

How do you convert a decimal number with a fractional part to binary?

A decimal fraction N is converted to binary by repeatedly multiplying N by 2 and writing down the integer part produced at each step, in order.

Example 1-5: convert the decimal fraction 0.5625 to a binary number.

Solution:

0.5625 x 2 = 1.125 The integer part is 1 and the decimal part is 0.125

0.125 x 2 = 0.25 The integer part is 0 and the decimal part is 0.25

0.25 x 2 = 0.5 The integer part is 0 and the decimal part is 0.5

0.5 x 2 = 1.0 The integer part is 1 and the decimal part is 0

Thus, the binary representation of 0.5625 is 0.1001

Continue in this way until the fractional part becomes 0 or the required precision is reached (for some values the fractional part never becomes 0). Repeated multiplication by two thus yields a sequence of integer parts; arranged in the order obtained, a1a2a3 ... is the binary representation of the fraction N.
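The multiply-by-2 procedure above can be sketched in Python (a minimal sketch; the function name and the `max_bits` cutoff for non-terminating fractions are my own choices):

```python
def frac_to_binary(x, max_bits=16):
    """Convert a fraction 0 <= x < 1 to a binary string by repeatedly
    multiplying by 2 and collecting the integer part of each product."""
    bits = []
    while x > 0 and len(bits) < max_bits:
        x *= 2
        bit = int(x)            # the integer part becomes the next bit
        bits.append(str(bit))
        x -= bit                # keep only the fractional part
    return "0." + "".join(bits) if bits else "0.0"
```

Running it on the worked example reproduces the result: `frac_to_binary(0.5625)` yields `"0.1001"`.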

Computer decimal to binary algorithm

The steps of the decimal-to-binary algorithm are as follows:

1. Integer conversion:

Divide the decimal number by 2 repeatedly until the quotient is 0, then write the remainders in reverse order.

2. Fraction conversion:

For fractions, the reverse direction (binary to decimal) is relatively simple: multiply each binary digit by 2 to the nth power, where n starts at 0 for the digit just left of the point and increases by one for each position moving left, and starts at -1 for the digit just right of the point and decreases by one for each position moving right; then add up the products.
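The positional rule just described can be sketched in Python (a minimal sketch; the function name is my own, and the input is assumed to be an unsigned binary string, optionally with a point):

```python
def binary_to_decimal(s):
    """Convert a binary string such as '1100101' or '0.1001' to decimal
    by summing digit * 2**n with n counting down from just left of the
    point and continuing -1, -2, ... to the right of it."""
    if "." in s:
        int_part, frac_part = s.split(".")
    else:
        int_part, frac_part = s, ""
    value = 0.0
    # Integer digits: n runs from len(int_part) - 1 down to 0.
    for i, d in enumerate(int_part):
        value += int(d) * 2 ** (len(int_part) - 1 - i)
    # Fractional digits: n runs -1, -2, ...
    for i, d in enumerate(frac_part):
        value += int(d) * 2 ** (-(i + 1))
    return value
```

For instance, `binary_to_decimal("1100101")` gives 101 and `binary_to_decimal("0.1001")` gives 0.5625, matching the examples in this article.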

Friends who have just started learning about computers soon encounter conversions among binary, decimal, octal, hexadecimal, and so on. Today we will work through them one by one and see how decimal and binary are converted to each other.

Decimal is in universal use worldwide: carry one on reaching ten, carry two on reaching twenty, and so on. Binary is a system widely used in computing technology. It represents numbers using the two digits 0 and 1; its base is 2, its carry rule is "carry one on reaching two", and its borrow rule is "borrow one as two".

Binary conversion to decimal:

Multiply each digit of the integer part by the corresponding power of 2, and each digit of the fractional part by the corresponding negative power of 2, then add everything up. If the bit string is a signed code whose leading bit is 1, the number it represents is negative; in that case invert the bits first and then convert. For example, 11111001 has a leading 1, so invert it to get 00000110.

00000110 corresponds to decimal 6, so 11111001 corresponds to decimal -6.
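The invert-then-negate reading used in this example corresponds to a one's-complement interpretation of the bit string. A minimal sketch, assuming a fixed-width unsigned-looking bit string as input (the function name is my own):

```python
def ones_complement_to_int(bits):
    """Interpret a fixed-width bit string in one's complement:
    a leading 0 means a plain positive number; a leading 1 means
    invert every bit and negate the result."""
    if bits[0] == "0":
        return int(bits, 2)
    inverted = "".join("1" if b == "0" else "0" for b in bits)
    return -int(inverted, 2)
```

With this rule, `ones_complement_to_int("11111001")` returns -6, matching the example above. (Note that in two's complement, which computers actually use for storage, 11111001 would instead represent -7.)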

How to convert between decimal and binary

The conversion between decimal and binary is done in four steps:

1. Convert the integer part of the decimal number to binary: divide it by two repeatedly and take the remainder at each step.

For example, 101/2 = 50, the remainder is 1, 50/2 = 25, the remainder is 0, 25/2 = 12, the remainder is 1, 12/2 = 6, the remainder is 0, 6/2 = 3, the remainder is 0, 3/2 = 1, the remainder is 1, 1/2 = 0, the remainder is 1.

2. Write the remainders out from last to first (they are produced low bit first, so reverse the order): the example above gives 1100101, which is the binary representation of 101.
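Steps 1 and 2 can be sketched in Python (a minimal sketch for non-negative integers; the function name is my own):

```python
def int_to_binary(n):
    """Convert a non-negative decimal integer to a binary string by
    repeated division by 2, collecting remainders low bit first and
    then reversing them."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))   # remainder is the next low bit
        n //= 2
    return "".join(reversed(remainders))
```

As a check, `int_to_binary(101)` returns `"1100101"`, matching the worked division chain above.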

3. Convert the fractional part of the decimal number to binary: keep multiplying the fraction by 2 and taking the integer part at each step until no fraction remains. Note that not every decimal fraction has a finite binary representation.

For example, 0.75 * 2 = 1.50, giving integer part 1; then 0.50 * 2 = 1.0, giving integer part 1.

4. Writing the integer parts out in order gives 0.11, the binary representation of 0.75.

To turn a binary number into a decimal number, just do the math the other way around.
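The four steps can be combined into one converter: repeated division handles the integer part and repeated multiplication handles the fractional part. A sketch under those assumptions (the function name, the `max_bits` cutoff, and the input value 101.75 are my own illustrative choices):

```python
def decimal_to_binary(x, max_bits=16):
    """Convert a non-negative decimal number to a binary string:
    repeated division by 2 for the integer part, repeated
    multiplication by 2 for the fractional part."""
    n = int(x)
    frac = x - n
    # Integer part: remainders are produced low bit first,
    # so prepend each one to reverse the order.
    int_bits = ""
    while n > 0:
        int_bits = str(n % 2) + int_bits
        n //= 2
    int_bits = int_bits or "0"
    # Fractional part: the integer part of each product is the next bit.
    frac_bits = ""
    while frac > 0 and len(frac_bits) < max_bits:
        frac *= 2
        frac_bits += str(int(frac))
        frac -= int(frac)
    return int_bits + ("." + frac_bits if frac_bits else "")
```

Combining the two worked examples from this article, `decimal_to_binary(101.75)` returns `"1100101.11"`.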

The use of decimal for human arithmetic may have something to do with the fact that humans have ten fingers. Aristotle claimed that the universal use of decimal was simply a result of the anatomical fact that the vast majority of people are born with 10 fingers. Indeed, of the written notation systems developed independently in the ancient world, almost all were decimal, the exceptions being the Babylonian cuneiform numerals, which were base 60, and the Mayan numerals, which were base 20. However, most of these decimal notation systems were not positional.

Binary is a widely used number system in computing technology. Binary data is represented with the two digits 0 and 1. Its base is 2, its carry rule is "carry one on reaching two", and its borrow rule is "borrow one as two"; it was studied systematically by the German mathematician and philosopher Gottfried Leibniz. Modern computer systems are essentially binary, and data in a computer is mainly stored in two's-complement form. A binary digit in a computer corresponds to a tiny switch, with "on" for 1 and "off" for 0.

The invention and application of the computer in the 20th century, often called one of the major hallmarks of the third technological revolution, rests on binary: a digital computer can only recognize and process codes made up of strings of the symbols "0" and "1", so its mode of operation is precisely binary. In the 19th century, the logician George Boole showed that reasoning about logical propositions could be transformed into a kind of algebraic calculation over the symbols "0" and "1". Binary is a base-2 system with 0 and 1 as its basic symbols; because it uses only these two digits, it is simple, convenient, and easy to realize electronically.