Setting IFS to : makes it easy to read password file lines, treating each colon-separated field correctly. The original value of IFS is saved in old_ifs and restored after the loop. (We could also have used IFS=: read ..., but we would have to be careful to do so on both read statements.)
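For reference, here is a minimal sketch of the per-command form just mentioned: prefixing read with IFS=: changes IFS only for that one command, so no saving and restoring is needed. (The field names simply mirror the password-file layout.)

while IFS=: read user passwd uid gid fullname homedir shell
do
    printf "%s has UID %s\n" "$user" "$uid"
done < /etc/passwd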
Similar code applies to the users whose UID numbers are the same but whose usernames are different. Here too, we opt for simplicity: we give all such users a brand-new, unused UID number. (It would be possible to let, say, the first user of each pair keep the original UID number; however, this would require changing file ownership only on the system where the second user's files reside. Again, in a real-life situation, this might be preferable.)
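The newuids.sh helper is developed elsewhere in the book; its job is to print the requested number of UID values that do not collide with any UID already in use. Purely as an illustrative sketch of that idea (not the book's implementation), such a generator could be written with awk, taking a count and a file of in-use UIDs as its two arguments:

# Sketch only -- not the book's newuids.sh
# Arguments: COUNT  FILE-OF-UIDS-IN-USE
count=$1
in_use=$2
awk -v count="$count" '
    { used[$1] = 1 }                      # remember every UID already taken
    END {
        n = 0
        for (uid = 1; n < count; uid++)   # scan upward for free numbers
            if (!(uid in used)) {
                print uid
                n++
            }
    }' "$in_use"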
count=$(wc -l < dupids)                    # Total duplicate ids

# This is a hack, it'd be better if POSIX sh had arrays:
set -- $(newuids.sh -c $count unique-ids)

old_ifs=$IFS                               # Save original IFS
IFS=:
while read user passwd uid gid fullname homedir shell
do
    newuid=$1
    shift
    echo "$user:$passwd:$newuid:$gid:$fullname:$homedir:$shell"
    printf "%s\t%s\t%s\n" $user $uid $newuid >> old-new-list
done < dupids > unique3
IFS=$old_ifs                               # Restore original IFS
In order to have all the new UID numbers handy, we place them into the positional parameters with set and a command substitution. Then each new UID is retrieved inside the loop by assigning from $1, and the next one is put in place with a shift; a short standalone illustration of this idiom follows the listings below. When we're done, we have three new output files:
$ cat unique2                              Those who had two UIDs
ben:x:301:10:Ben Franklin:/home/ben:/bin/bash
jhancock:x:300:10:John Hancock:/home/jhancock:/bin/bash
$ cat unique3                              Those who get new UIDs
abe:x:4:10:Honest Abe Lincoln:/home/abe:/bin/bash
tj:x:5:10:Thomas Jefferson:/home/tj:/bin/bash
dorothy:x:6:10:Dorothy Gale:/home/dorothy:/bin/bash
toto:x:7:10:Toto Gale:/home/toto:/bin/bash
$ cat old-new-list                         List of user-old-new triples
ben 201 301
jhancock 200 300
abe 105 4 See next section about these
tj 105 5
dorothy 110 6
toto 110 7
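As a standalone illustration of the set-and-shift idiom (not part of the merging scripts), the following shows how values loaded into the positional parameters are consumed one at a time:

set -- 4 5 6 7          # load values into $1, $2, $3, $4
echo $1                 # prints 4
shift                   # now $1 is 5, $2 is 6, and so on
echo $1                 # prints 5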
The final password file is created by merging the three unique? files. While cat would do the trick, it'd be nice to merge them in UID order:
sort -k 3 -t : -n unique[123] > final.password
The sort options say to sort numerically (-n) on the third colon-separated field (-t : -k 3), that is, on the UID. The wildcard unique[123] expands to the three filenames unique1, unique2, and unique3. Here is the final, sorted result:
$ cat final.password
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
abe:x:4:10:Honest Abe Lincoln:/home/abe:/bin/bash
tj:x:5:10:Thomas Jefferson:/home/tj:/bin/bash
dorothy:x:6:10:Dorothy Gale:/home/dorothy:/bin/bash
toto:x:7:10:Toto Gale:/home/toto:/bin/bash
camus:x:112:10:Albert Camus:/home/camus:/bin/bash
jhancock:x:300:10:John Hancock:/home/jhancock:/bin/bash
ben:x:301:10:Ben Franklin:/home/ben:/bin/bash
george:x:1100:10:George Washington:/home/george:/bin/bash
betsy:x:1110:10:Betsy Ross:/home/betsy:/bin/bash
tolstoy:x:2076:10:Leo Tolstoy:/home/tolstoy:/bin/bash
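As a quick sanity check (not part of the book's procedure), we can verify that the merged file contains no duplicated UID numbers; with standard awk, sort, and uniq, no output means no duplicates:

awk -F: '{ print $3 }' final.password | sort -n | uniq -d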
Changing File Ownership
At first blush, changing file ownership is pretty easy. Given the list of usernames and new UID numbers, we ought to be able to write a loop like this (to be run as root):
while read user old new
do
    cd /home/$user          Change to user's directory
    chown -R $new .         Recursively change ownership, see chown(1)
done < old-new-list
The idea is to change to the user's home directory and recursively chown everything to the new UID number. However, this isn't enough. It's possible for users to have files in places outside their home directory. For example, consider two users, ben and jhancock, working on a joint project in /home/ben/declaration:
$ cd /home/ben/declaration
$ ls -l draft*
-rw-r--r-- 1 ben fathers 2102 Jul 3 16:00 draft10
-rw-r--r-- 1 jhancock fathers 2191 Jul 3 17:09 draft.final
If we just did the recursive chown, both files would end up belonging to ben's new UID, and jhancock would lose ownership of draft.final.