Acronym BIT

Although it shares its phonemes with the English noun “bit”, which means a piece or a small part of a whole, the two concepts are completely different.

The definition corresponding to this word is the unit of measurement of information in computer science, equivalent to a choice between two equally probable alternatives.
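To make that definition concrete, here is a minimal Python sketch (not part of the original text; the function name information_content is an illustrative assumption) that computes the information carried by an event of a given probability:

```python
import math

def information_content(probability):
    """Self-information, in bits, of an event with the given probability."""
    return -math.log2(probability)

# A choice between two equally probable alternatives carries exactly 1 bit.
print(information_content(0.5))  # 1.0
```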

This unit is a digit of the binary numbering system used in digital computers. While the decimal system uses ten digits, the binary system uses only two: 0 and 1. A bit can therefore represent either a 0 or a 1, and each value can be given a meaning such as “off” or “on”, or “high” or “low” (referring to the electric current), although in electronics and information technology “off” and “on” are the usual reading.
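As a small illustration of the contrast between the two systems (the variable names are illustrative), this Python sketch writes the same number in decimal and in binary and reads each binary digit as off or on:

```python
number = 13

# Decimal representation uses the digits 0-9; binary uses only 0 and 1.
print(str(number))          # "13"
print(format(number, "b"))  # "1101"

# Each binary digit can be read as off (0) or on (1).
for digit in format(number, "b"):
    state = "on" if digit == "1" else "off"
    print(digit, "->", state)
```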

Since a single bit can represent only two values, representing more information requires a greater number of bits. If two bits are used, there are only four possible alternatives (enumerated below, with a short sketch after the list):

- 0 0, which means both are off.

- 0 1, where 0 means off and 1 means on.

- 1 0, where 1 means on and 0 means off.

- 1 1, which means both are on.
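A minimal Python sketch that enumerates these alternatives for any number of bits, using only the standard library (the helper name combinations is an assumption for illustration):

```python
from itertools import product

def combinations(n_bits):
    """Enumerate all 2**n_bits patterns of n_bits binary digits."""
    return ["".join(bits) for bits in product("01", repeat=n_bits)]

print(combinations(2))       # ['00', '01', '10', '11'] -- the four alternatives above
print(len(combinations(3)))  # 8: each extra bit doubles the alternatives
```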

Through sequences of bits, any value can be encoded: words, images, or numbers.
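For instance, a word can be turned into a sequence of bits and decoded back; this minimal sketch assumes Python's standard UTF-8 text encoding:

```python
word = "bit"

# Encode the word as bytes, then write each byte as 8 binary digits.
bits = "".join(format(byte, "08b") for byte in word.encode("utf-8"))
print(bits)  # 011000100110100101110100

# The process is reversible: the same bits decode back to the word.
decoded = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
print(decoded.decode("utf-8"))  # bit
```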

From this concept are formed the nibble, made up of 4 bits, and the byte, made up of 8. It should be clear that an octet and a byte are not the same thing, since a byte can consist of 6, 7, 8, or 9 bits; on most current computers a byte has 8 bits, although there may be exceptions.
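A small sketch of the relationship between a byte and its two nibbles, using shifting and masking (the variable names are illustrative):

```python
byte = 0b10110100  # one 8-bit byte

high_nibble = (byte >> 4) & 0xF  # upper 4 bits
low_nibble = byte & 0xF          # lower 4 bits

print(format(high_nibble, "04b"))  # 1011
print(format(low_nibble, "04b"))   # 0100
```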

When this magnitude is applied to CPUs or microprocessors, saying that they are 4-, 8-, 16-, 32-, or 64-bit refers to the size of the microprocessor's internal registers and, at the same time, to the processing capacity of its arithmetic logic unit. A 16-, 32-, or 64-bit microprocessor has registers of that size, and its arithmetic logic unit performs operations on 16, 32, or 64 bits respectively. These processors perform operations on values the size of their registers, but they can also work on submultiples of that size, depending on their design.
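The effect of a fixed register size can be simulated by masking a result to a given width; this sketch (the helper name add_with_width is an assumption) shows how the same addition wraps around in an 8-bit register but fits in a 16-bit one:

```python
def add_with_width(a, b, width):
    """Add two values as a register of `width` bits would: keep only the low bits."""
    mask = (1 << width) - 1
    return (a + b) & mask

# 200 + 100 = 300 overflows an 8-bit register but fits in a 16-bit one.
print(add_with_width(200, 100, 8))   # 44  (300 mod 256)
print(add_with_width(200, 100, 16))  # 300
```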

In addition to its use in defining the capacity of a computer, this unit is also used to classify the colors of an image. A 1-bit image has a value of 0 or 1 at each point, so each point is either black or white, while an 8-bit image can show up to 256 colors.
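Since each added bit doubles the number of combinations, the color count at each depth is 2 raised to the number of bits, as this small sketch shows:

```python
for depth in (1, 8, 16, 24):
    # Each bit doubles the number of representable colors: 2**depth.
    print(f"{depth}-bit image: {2 ** depth} colors")

# 1-bit image: 2 colors (black or white)
# 8-bit image: 256 colors
```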

“The first thing they explained in the computer technician course was what the value of the bit is and what it is for.” In this example, it is used in the sense of its function in computer science.

“They recommended that he take the wedding ceremony photos in 16-bit.” This refers, in this case, to its application to an image.

“Bit was defined by the engineer Claude Shannon, who first used this term in 1948.” Here, it refers to the creator of this word.