R: Remove Duplicates from Sorted Array
```r
remove_duplicates <- function(nums) {
  n <- length(nums)
  # Handle edge cases: empty or single-element array
  if (n == 0) {
    return(0)
  }
  if (n == 1) {
    return(1)
  }
  # Pointer for the position of the next unique element
  # Starts at 1 because the first element is always unique in a non-empty array
  unique_idx <- 1
  for (i in 2:n) {
    if (nums[i] != nums[unique_idx]) {
      # Found a new unique value: advance the write pointer and store it
      unique_idx <- unique_idx + 1
      nums[unique_idx] <- nums[i]
    }
  }
  # Return the count of unique elements
  return(unique_idx)
}
```
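A quick usage sketch (the function is repeated here in condensed form so the example is self-contained; note that R copies function arguments, so the deduplicated prefix lives in the function's local copy of the vector, and the caller receives only the count):

```r
remove_duplicates <- function(nums) {
  n <- length(nums)
  if (n <= 1) return(n)
  unique_idx <- 1
  for (i in 2:n) {
    if (nums[i] != nums[unique_idx]) {
      unique_idx <- unique_idx + 1
      nums[unique_idx] <- nums[i]
    }
  }
  unique_idx
}

# Five elements, three distinct values
k <- remove_duplicates(c(1, 1, 2, 2, 3))
print(k)  # 3
```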
This R function removes duplicate elements from a sorted array in-place. It iterates through the array using two pointers: one (`i`) to scan through all elements and another (`unique_idx`) to track the position where the next unique element should be written. It returns the count of unique elements. (Strictly speaking, because R copies function arguments on modification, the "in-place" rearrangement happens on the function's local copy of the vector; the caller sees only the returned count unless the vector is also returned.)
The `remove_duplicates` function has a time complexity of O(n) because it iterates through the array exactly once. The space complexity is O(1), since it rearranges the array without any significant auxiliary data structures. The `unique_idx` pointer effectively partitions the array: elements from index 1 to `unique_idx` are unique, while elements beyond `unique_idx` are either duplicates or not yet processed. When `nums[i]` differs from `nums[unique_idx]`, it must be a new unique element (the array is sorted, so equal values are adjacent); we then increment `unique_idx` and copy `nums[i]` into position `unique_idx`. Edge cases are the empty array (return 0) and the single-element array (return 1). The loop invariant is that the subarray `nums[1...unique_idx]` always contains only unique elements, in their original relative order.
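To observe the partition and invariant described above, a small variant can return both the count and the modified vector so the unique prefix can be inspected. The helper name `remove_duplicates_full` is hypothetical (not part of the original code); the algorithm is the same:

```r
# Hypothetical variant: returns the count AND the rearranged vector,
# so the unique prefix nums[1:count] can be examined directly.
remove_duplicates_full <- function(nums) {
  n <- length(nums)
  if (n <= 1) return(list(count = n, nums = nums))
  unique_idx <- 1
  for (i in 2:n) {
    if (nums[i] != nums[unique_idx]) {
      unique_idx <- unique_idx + 1
      nums[unique_idx] <- nums[i]
    }
  }
  list(count = unique_idx, nums = nums)
}

res <- remove_duplicates_full(c(0, 0, 1, 1, 1, 2, 2, 3))
res$count                   # 4
res$nums[1:res$count]       # unique prefix: 0 1 2 3
```

Elements past index `count` are leftover duplicates or unprocessed values, which is exactly the partition the two-pointer scheme maintains.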
```
function remove_duplicates(array):
    n = length of array
    if n == 0: return 0
    if n == 1: return 1
    unique_idx = 1
    for i from 2 to n:
        if array[i] != array[unique_idx]:
            unique_idx = unique_idx + 1
            array[unique_idx] = array[i]
    return unique_idx
```