Swift is interoperable with Objective-C, so you can use C, Objective-C, and Swift types and code all within Swift. As discussed earlier in the chapter, when you assign an integer literal to a variable, Swift automatically declares it with the type Int, without your having to tell Swift you want an Int. In this example, you don’t tell Swift to make this constant an Int:
let theAnswerToLifeTheUniverseAndEverything = 42
Rather, Swift infers that it is an Int. Remember that on 32-bit platforms this Int will be the same size as an Int32, and on 64-bit platforms the same size as an Int64. If you don’t remember that, it won’t matter, because Int always matches the native word size of the platform you’re building for. Even though you have many different integer types available to you, unless you need an integer of a specific size, you should stick with Swift’s Int. When we say Int32, what we mean is a 32-bit integer. (This is similar to C.) You can also use UInt for unsigned (non-negative) integers, but Apple recommends that you stick with Int even if you know that your variable is never going to be negative.
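For reference, each fixed-size integer type exposes its range through its min and max properties, which makes the size differences easy to see; a minimal sketch (the constant names here are illustrative):

```swift
// Fixed-size integer types have fixed ranges; Int's size is platform-dependent.
let fixed8: Int8 = 127              // 8-bit signed: -128...127
let fixed32: Int32 = 2_147_483_647  // 32-bit signed
let unsigned: UInt8 = 255           // 8-bit unsigned: 0...255

print(Int8.min, Int8.max)    // -128 127
print(UInt8.min, UInt8.max)  // 0 255
print(Int.max)               // platform-dependent; widest on 64-bit systems
```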
Similarly, when you write any floating-point number (a number with a decimal point) and you don’t assign a type, Swift automatically declares it with the type Double. Swift gives you both Double and Float types. The difference between them is precision: Double holds around 15 significant decimal digits, whereas Float holds around 6. Here is an example of a Double in Swift:
let gamma = 0.57721566490153286060651209008240243104215933593992
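To see the precision difference for yourself, you can store the same literal in both types; a small sketch (the exact digits printed may vary slightly by platform):

```swift
// The same long literal stored at two precisions.
let asDouble: Double = 0.57721566490153286060651209008240243104215933593992
let asFloat: Float = 0.57721566490153286060651209008240243104215933593992

print(asDouble) // prints roughly 15 significant digits
print(asFloat)  // prints roughly 6-7 significant digits; the rest is lost
```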
Swift is strict about its types and how they get combined. If something is meant to be a String, and you give it an Int, then you will get an error. Swift needs you to be explicit with types. For example, this will not work:
var someInt = 5 // Inferred to be an Int
someInt + 3.141 // compile-time error
This produces a compile-time error because you can’t combine an Int and a Double. If you want to combine an Int and a Double, you must first convert the Int to a Double or vice versa, depending on your preference. Here we combine an Int and a Double by converting the Int to a Double:
var someInt = 5 // Inferred to be an Int
Double(someInt) + 3.141 // 8.141

var someInt = 5 // Inferred to be an Int
Float(someInt) + 3.141 // In this case 3.141 will be inferred to be a Float so
// it can combine with a Float

var someInt = 5 // Inferred to be an Int
Float(someInt) + Double(3.141) // This is a compile-time error and will not work
You can use the initializer (Float(someInt) or Double(someInt), etc.) of the number type to convert between types. For example, you can use Float() to convert any number type into a Float.
So again, when you want to perform any operation on two or more numbers, all sides of the operation must be of the same type. You’ll see this pattern often in Swift, and not just with numbers. For example, you cannot directly add a UInt8 and a UInt16 unless you first convert the UInt8 to a UInt16 or vice versa.
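That last point works just like the Int and Double case above: convert one side with the other type’s initializer before adding. A brief sketch (the constant names are illustrative):

```swift
let small: UInt8 = 200
let large: UInt16 = 1_000

// small + large              // compile-time error: mismatched integer types
let sum = UInt16(small) + large // convert the UInt8 up to UInt16 first
print(sum) // 1200
```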