I was wondering whether there’s a performance difference between matching the integers from two slices when the numbers are sorted versus when they’re not. I didn’t stress the hell out of the situation; I went up to 10k numbers.

For small sets, of course, the difference is not worth mentioning. For large slices, if you really, really care about performance, sorted values can come out ahead, but only if the values arrive already sorted; if you have to sort them each time, the cost of sorting eats the gain.
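To illustrate the “sort each time” caveat: sorting inside the measured path adds an O(n log n) step (`sort.Ints`) before any matching happens, so it only pays off if the matching step itself exploits the order. A minimal sketch, with made-up slice values and a `countMatches` helper that mirrors the nested-loop approach below:

```go
package main

import (
	"fmt"
	"sort"
)

// countMatches counts elements of xs that also appear in ys,
// using the same nested-loop-with-break approach as the post.
func countMatches(xs, ys []int) int {
	count := 0
	for _, x := range xs {
		for _, y := range ys {
			if x == y {
				count++
				break
			}
		}
	}
	return count
}

func main() {
	xs := []int{5, 3, 9, 1}
	ys := []int{9, 2, 5, 7}

	// Sorting each time costs O(n log n) up front; the nested loop
	// afterwards is still O(n*m), so nothing is saved here.
	sort.Ints(xs)
	sort.Ints(ys)

	fmt.Println(countMatches(xs, ys)) // 5 and 9 appear in both: prints 2
}
```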

var a = []int{ ... }
var b = []int{ ... }

func IterateNotSorted() int {
	count := 0
	for _, i := range a {
		for _, j := range b {
			if i == j {
				count++
				break
			}
		}
	}
	return count
}

var c = []int{ ... }
var d = []int{ ... }

func IterateSorted() int {
	count := 0
	for _, i := range c {
		for _, j := range d {
			if i == j {
				count++
				break
			}
		}
	}
	return count
}

Fill in the slices with some numbers and test it yourself.

func BenchmarkIterateNotSorted(b *testing.B) {
	for n := 0; n < b.N; n++ {
		IterateNotSorted()
	}
}

func BenchmarkIterateSorted(b *testing.B) {
	for n := 0; n < b.N; n++ {
		IterateSorted()
	}
}

From what I remember from CS classes, it is better to prefer the sorted array over the unsorted one.
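Worth noting: the two functions above run the same nested loop, so sortedness only helps them indirectly (e.g. friendlier branch prediction). If the slices are guaranteed to be sorted, you can actually exploit the order with a two-pointer walk, which is O(n+m) instead of O(n*m). A sketch, assuming distinct values per slice (`countSortedMatches` is my name, not from the benchmark code):

```go
package main

import "fmt"

// countSortedMatches counts values present in both sorted slices by
// advancing two indices in lockstep, never revisiting an element.
func countSortedMatches(xs, ys []int) int {
	count, i, j := 0, 0, 0
	for i < len(xs) && j < len(ys) {
		switch {
		case xs[i] < ys[j]:
			i++
		case xs[i] > ys[j]:
			j++
		default: // equal: a match in both slices
			count++
			i++
			j++
		}
	}
	return count
}

func main() {
	xs := []int{1, 3, 5, 9}
	ys := []int{2, 5, 7, 9}
	fmt.Println(countSortedMatches(xs, ys)) // 5 and 9 match: prints 2
}
```

For 10k elements this is the difference between ~20k comparisons and up to 100 million, which dwarfs any branch-prediction effect the nested loop might enjoy.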