# Dealing with Bit Sets in Swift

Posted on February 5, 2016

**Update 10/17:** *This post has been updated to Swift 4.*

**Update 10/16:** *This post has been updated to Swift 3.*

Swift provides convenient fixed-size integers and the usual set of bitwise operators you already know, so dealing with bit sets would seem pretty straightforward.

But you’ll soon discover that the language and the standard library always put safety first, and that dealing with bits and different integer types requires a few more type conversions than you may be used to. This post describes the gotchas you should be aware of.

Before explaining what I mean, let’s get up to speed with the basics of integer types and bitwise operations.

*Get the playground for this article from GitHub or zipped.*

## Integer Types and Bitwise Operators

Swift offers a set of integer types with different fixed lengths and signedness: **Int/UInt**, **Int8/UInt8** (8 bits), **Int16/UInt16** (16 bits), **Int32/UInt32** (32 bits) and **Int64/UInt64** (64 bits).

The types Int and UInt are platform dependent: they are equivalent to **Int32/UInt32** on 32-bit platforms and to **Int64/UInt64** on 64-bit platforms. The other types have the specified length regardless of the target platform you compile for.

Fixed-length types are extremely useful in combination with bitwise operators because they make explicit the size of the data you are working on; when performing operations on single bits you’ll rarely use the platform-dependent `Int` or `UInt`.

Variables with a fixed size integer type can be initialized with a binary, octal or hexadecimal value this way:

```
var int1: UInt8 = 0b10101010
var int2: UInt8 = 0o55
var int3: UInt8 = 0xA7
```

In regards to bitwise operations, Swift supports what you’d expect: **unary NOT** (`~` operator), **AND** (`&` operator), **OR** (`|` operator), **XOR** (`^` operator) and the **left and right shifts** (`<<` and `>>` operators).

It’s important to remember that for unsigned integers, shifting left or right by the given number of positions inserts zeros at the opposite end. For signed integers, right shifts instead preserve the sign, inserting a copy of the sign bit rather than a zero.
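A quick sketch of the difference, using an 8-bit value whose most significant bit is set:

```swift
let u: UInt8 = 0b1000_0000 // 128, most significant bit set
let s: Int8 = -128         // same bit pattern, 0b1000_0000

u >> 1 // 64 (0b0100_0000): a zero was inserted on the left
s >> 1 // -64 (0b1100_0000): the sign bit was replicated
```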

For integers more than one byte long, Swift also provides a few useful computed properties to deal with endianness: `littleEndian`, `bigEndian` and `byteSwapped`, which respectively convert the current integer representation to little endian, to big endian, or to the opposite endianness.

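For example, `byteSwapped` reverses the byte order regardless of the platform (a quick sketch):

```swift
let value: UInt32 = 0x12345678
let swapped = value.byteSwapped // 0x78563412, bytes in reverse order

// On a little-endian platform, bigEndian performs the same swap,
// while littleEndian returns the value unchanged.
```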
One last thing: is there a way to find out whether we are on a 32-bit or a 64-bit platform?

Sure, but considering that the Builtin module is not accessible, we can only compare the size of *Int* with one of the two fixed-size integers that correspond to the two supported platform widths:

```
MemoryLayout<Int>.stride == MemoryLayout<Int32>.stride // Are we on a 32-bit platform? Nope.
```

I’m using the `stride` property here, but in this case `size` would have worked as well.
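As an aside, Swift 4’s `FixedWidthInteger` protocol also exposes a `bitWidth` property that makes the same check a bit more direct:

```swift
let is32Bits = (Int.bitWidth == 32) // Are we on a 32-bit platform? Nope.
```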

## Integer type conversions

Swift does not perform implicit type conversions, and as you may have already noticed when doing mixed-type arithmetic, you need to explicitly convert the variables in your expression to a type big enough to hold the result.

When multiple integers appear in the same expression, Swift can only infer the type of untyped integer literals when they are used alongside variables of a single type; as before, no implicit conversion toward the bigger integer type is performed.

Let’s see an example of what is allowed and what isn’t:

```
var u8: UInt8 = 1
u8 << 2          //4: The literal 2 is inferred to be a UInt8 and u8 is
                 //   shifted to the left by 2 positions
var by2: Int16 = 1
u8 << by2        //Error: Operands of different types, doesn't compile
u8 << UInt8(by2) //4: It works, we manually converted the Int16 value,
                 //   but this is NOT SAFE!
```

Why is it not safe, you may ask?

Because when converting a bigger integer type to a smaller one, or an unsigned integer to a signed one, Swift does not perform any truncation of the contained value, and *fails at runtime* when the value being converted overflows the receiving type.

Keep this in mind: it’s extremely important when you perform conversions on integers that were entered by the user or that come from some external component.
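When the value comes from an untrusted source, the failable `init(exactly:)` initializer is a way to check instead of crashing: it returns `nil` rather than trapping when the value doesn’t fit:

```swift
let big = 300
let attempt = UInt8(exactly: big) // nil: 300 doesn't fit in 8 bits
let fits = UInt8(exactly: 200)    // Optional(200): this one fits
```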

But luckily, Swift provides a way to perform *bit-truncating* conversions using the `init(truncatingIfNeeded:)` initializer, quite useful when you are performing operations on bits and are **not interested in the actual decimal value of the integer**.

```
var u8: UInt8 = UInt8(truncatingIfNeeded: 1000)
u8 // 232
```

In this sample we converted the *Int* 1000, which has a binary representation of `0b1111101000`, to a *UInt8*, keeping only the 8 least significant bits and discarding everything else. That way we obtained 232, which has a binary representation of `0b11101000`.

And this works the same way for every combination of Int*n* or UInt*n* types: the sign of a signed *Int* is ignored and the bit sequence is simply used to initialize the new integer variable. Between signed and unsigned integers of the same size, `init(bitPattern:)` is also available, but the result is the same as the usual truncating conversion.
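For example, between `Int8` and `UInt8`:

```swift
let minusOne: Int8 = -1                      // bit pattern 0b11111111
let asUnsigned = UInt8(bitPattern: minusOne) // 255, same bits reinterpreted
let asSigned = Int8(bitPattern: UInt8(200))  // -56, 200 is 0b11001000
```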

The only drawback of this safety-first/no-assumptions approach is that when you need to perform a lot of type conversions, your code starts to become bloated with all those truncating initializers.

But luckily, in Swift we can extend base types with new functionality, and we can use this to add utility properties that truncate to a specific size to all the integer types, for example:

```
extension Int {
    public var toU8: UInt8 { return UInt8(truncatingIfNeeded: self) }
    public var to8: Int8 { return Int8(truncatingIfNeeded: self) }
    public var toU16: UInt16 { return UInt16(truncatingIfNeeded: self) }
    public var to16: Int16 { return Int16(truncatingIfNeeded: self) }
    public var toU32: UInt32 { return UInt32(truncatingIfNeeded: self) }
    public var to32: Int32 { return Int32(truncatingIfNeeded: self) }
    public var toU64: UInt64 {
        return UInt64(truncatingIfNeeded: self) // No difference if the platform is 32 or 64 bits
    }
    public var to64: Int64 {
        return Int64(self) // No difference if the platform is 32 or 64 bits
    }
}
extension Int32 {
    public var toU8: UInt8 { return UInt8(truncatingIfNeeded: self) }
    public var to8: Int8 { return Int8(truncatingIfNeeded: self) }
    public var toU16: UInt16 { return UInt16(truncatingIfNeeded: self) }
    public var to16: Int16 { return Int16(truncatingIfNeeded: self) }
    public var toU32: UInt32 { return UInt32(bitPattern: self) } // Bit pattern kept, doesn't trap on negatives
    public var to32: Int32 { return self }
    public var toU64: UInt64 {
        return UInt64(truncatingIfNeeded: self) // No difference if the platform is 32 or 64 bits
    }
    public var to64: Int64 {
        return Int64(self) // No difference if the platform is 32 or 64 bits
    }
}

var h1 = 0xFFFF04
h1.toU8  // 0x04, instead of UInt8(truncatingIfNeeded: h1)
var h2: Int32 = 0x6F00FF05
h2.toU16 // 0xFF05, instead of UInt16(truncatingIfNeeded: h2)
```

## Common Bitwise Patterns

Now, let’s see some of this in action with some common bitwise operation patterns, also as an excuse to talk about something really useful that is not available out of the box in Swift.

### Multi-byte Component Extraction

A combination of AND and right shift is commonly used to extract single bits or bytes from longer sequences. Let’s see an example where we extract the individual color components from the RGB representation of a color:

```
let swiftOrange = 0xED903B
let red = (swiftOrange & 0xFF0000) >> 16 //0xED
let green = (swiftOrange & 0x00FF00) >> 8 //0x90
let blue = swiftOrange & 0x0000FF //0x3B
```
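The inverse operation, packing the components back into a single value, is just left shifts combined with OR (a short sketch that redeclares the components to stay self-contained):

```swift
let r = 0xED, g = 0x90, b = 0x3B
let repacked = (r << 16) | (g << 8) | b // back to 0xED903B
```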

Here we isolate the bits we are interested in by performing an *AND* with a bitmask that has a `1` only in the positions we want to keep in the result; the rest is zeroed out. To obtain an 8-bit representation of the component, we then shift the result of the *AND* to the right: by 16 positions for the red component (2 bytes to the right) and by 8 for the green component (1 byte to the right).

And that’s it. This masking+shifting pattern has a wide range of applications, but when used in sub-expressions it can make your code unreadable really fast. So why not implement it as a subscript on the integer types? In other words, why not add the ability to access the single byte components of an integer with an index, like we do with arrays?

For example, let’s add a subscript to `UInt32`:

```
extension UInt32 {
    public subscript(index: Int) -> UInt32 {
        get {
            precondition(index >= 0 && index < 4, "Byte set index out of range")
            return (self & (0xFF << (index.toU32 * 8))) >> (index.toU32 * 8)
        }
        set(newValue) {
            precondition(index >= 0 && index < 4, "Byte set index out of range")
            // Mask newValue to a single byte before inserting it
            self = (self & ~(0xFF << (index.toU32 * 8))) | ((newValue & 0xFF) << (index.toU32 * 8))
        }
    }
}

var i32: UInt32 = 982245678 // HEX: 3A8BE12E
print(String(i32, radix: 16, uppercase: true)) // Printing the hex value
i32[3] = i32[0]
i32[1] = 0xFF
i32[0] = i32[2]
print(String(i32, radix: 16, uppercase: true)) // HEX: 2E8BFF8B
```

### The magical XOR

Some of you may know XOR from the simple and quite useless *XOR* cipher, which lets you encrypt a bit stream with a key and then recover the original value by performing another *XOR* with the same key.
For the sake of brevity, let’s use a message and a key of the same size:

```
let secretMessage = 0b10101000111110010010101100001111000 // 0x547C95878
let secretKey = 0b10101010101010000000001111111111010 // 0x555401FFA
let result = secretMessage ^ secretKey // 0x12894782
let original = result ^ secretKey // 0x547C95878
print(String(original,radix:16,uppercase:true)) // Printing the hex value
```

The same property of *XOR* can be used for other things; the simplest one is the XOR swap, which swaps two integer variables without an additional temporary variable:

```
var x = 1
var y = 2
x = x ^ y
y = y ^ x // y is now 1
x = x ^ y // x is now 2
```

Not really useful in Swift since you can do the same trick using tuples (see item #11 here).
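For comparison, the tuple version (or the standard library’s `swap`):

```swift
var a = 1
var b = 2
(a, b) = (b, a) // a is now 2, b is now 1
swap(&a, &b)    // and back again, using the standard library
```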

Another thing you can do with *XOR*, though I won’t describe it here, is building a variation of the classic doubly linked list: the XOR linked list, a way more interesting use of *XOR*. Learn more about it on Wikipedia.

### Double Negation: Is That Bit Set?

Another common pattern, similar to the first one presented, uses a bitmask in conjunction with a suspicious-looking double negation to find out whether a specific bit or bit pattern is set in the input sequence:

```
let input: UInt8 = 0b10101101
let mask: UInt8 = 0b00001000
let isSet = !!(input & mask) // If the 4th bit is set this is equal to 1.
                             // But this code is not valid in Swift...
```

The double negation is based on the specific behavior of the logical negation in C/C++ (and a few other languages) and on the fact that in C/C++ booleans are implemented with integers (0 as false, 1 as true). Quoting from the C99 standard:

> The result of the logical negation operator ! is 0 if the value of its operand compares unequal to 0, 1 if the value of its operand compares equal to 0. The result has type int. The expression !E is equivalent to (0==E).

Considering this, the behavior of the double negation becomes clearer. The first logical *NOT* turns the masked input into 0 if the operand was greater than 0, or into 1 if it was 0 (effectively inverting its boolean value, as we’d expect), while the second logical *NOT* flips it back to the original boolean value; but now the only possible results are 0 and 1.

But Swift has a proper boolean type, and the logical *NOT* operator works only on booleans. So, what can we do?

Let’s define a custom operator (usually I don’t really like them, but let’s make an exception) that implements the double negation for the `UInt8` type!

```
prefix operator ~~

prefix func ~~ (value: UInt8) -> UInt8 {
    return (value > 0) ? 1 : 0
}

~~7 // 1
~~0 // 0
let isSet = ~~(input & mask) // 1 as expected
```

As an improvement we could return a `Bool` instead of a `UInt8`, to use the result directly in conditional statements, but we’d lose the ability to embed it in other integer expressions.
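A sketch of that `Bool`-returning variant, using a different operator name (`~~~`, picked arbitrarily here) so it can coexist with `~~` in the same playground:

```swift
prefix operator ~~~

prefix func ~~~ (value: UInt8) -> Bool {
    return value > 0
}

let input2: UInt8 = 0b10101101
let mask2: UInt8 = 0b00001000
if ~~~(input2 & mask2) {
    print("The 4th bit is set")
}
```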

## Bitter: A library for bits manipulation

All the alternative approaches presented in this post are part of Bitter, a library that tries to offer a more “swifty” interface for bit set manipulation.

To recap, in Bitter (available via CocoaPods, Carthage or SwiftPM) you’ll find:

- Convenient computed properties for bit pattern truncating conversions
- Integer byte indexed subscripting for every integer type
- The double negation operator
- and more…

The library is still young and feedback is highly appreciated, feel free to try it out and open issues if something doesn’t work or you have ideas for additional features!

**Did you like this article? Let me know on Twitter!**