Hi,
I want to...
1. get the last 20 records from a tab-delimited .txt datafile
2. skip duplicates on one field
3. minimize i/o
I have some code, but I'm wondering what a more efficient way to do it might be. Here, I'm reading in the whole datafile....
my (%track, @newarr, $x);
my $count = 20;
open(IN, "< $datafile") || die "Couldn't open $datafile: $!\n";
my @update = <IN>;
close(IN) || die "Can't close $datafile: $!\n";
@update = reverse(@update);
for ($x = 0; $x < @update && @newarr < $count; $x++) {
    my ($status, $time, $desc, $PM) = split "\t", $update[$x];
    next if $track{$PM}++;          # skip duplicates on the PM field
    push(@newarr, $update[$x]);
}
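To actually minimize I/O, one option is to read the file from the end in fixed-size chunks instead of slurping and reversing the whole thing, stopping as soon as 20 unique records have been collected. Here is a sketch using only core Perl (`seek`/`read`); the `tail_unique` name and the 4096-byte chunk size are my own choices, and the fourth tab field is assumed to be PM as in your `split`:

```perl
use strict;
use warnings;

# Sketch of a low-I/O tail: read fixed-size chunks from the end of the
# file and stop as soon as we have $count records unique on the PM
# field.  tail_unique and the chunk size are illustrative, not from
# the original post.
sub tail_unique {
    my ($datafile, $count) = @_;
    my (%track, @newarr);
    open(my $in, '<', $datafile) or die "Couldn't open $datafile: $!\n";
    my $pos   = -s $in;               # start at the end of the file
    my $chunk = 4096;
    my $buf   = '';
    while ($pos > 0 && @newarr < $count) {
        my $len = $pos >= $chunk ? $chunk : $pos;
        $pos -= $len;
        seek($in, $pos, 0) or die "seek failed: $!\n";
        read($in, my $block, $len) == $len or die "read failed: $!\n";
        $buf = $block . $buf;
        # Text before the first newline may be a partial line that
        # continues in an earlier chunk, so hold it back unless we
        # have reached the start of the file.
        my $nl = index($buf, "\n");
        next if $nl < 0 && $pos > 0;
        my @lines = $pos > 0 ? split(/\n/, substr($buf, $nl + 1))
                             : split(/\n/, $buf);
        $buf = $pos > 0 ? substr($buf, 0, $nl + 1) : '';
        for my $line (reverse @lines) {               # newest lines first
            my ($status, $time, $desc, $PM) = split /\t/, $line;
            next if !defined($PM) || $track{$PM}++;   # skip dups on PM
            push @newarr, $line;
            last if @newarr >= $count;
        }
    }
    close($in);
    return @newarr;   # newest-first, unique on the PM field
}
```

If installing from CPAN is an option, the File::ReadBackwards module wraps essentially this same chunked-from-the-end technique behind a simple `readline`-style interface.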