That's not as simple as it appears, especially if you are counting links.
You need to remove the link from the ARRAY, not a hash (the array is used
to preserve order; hashes are unordered). Removing from an array is more complicated precisely because it is ordered. If you blank out element 'n', there is an empty 'hole' where that value used to be, but the n-th slot still exists. A hash is simpler: delete $hash{key} removes the element outright, no problem.
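A quick standalone illustration of that difference (not part of the mod, just made-up data):

```perl
use strict;
use warnings;

# delete() on a hash removes the key entirely.
my %h = ( a => 1, b => 2, c => 3 );
delete $h{b};
print scalar(keys %h), "\n";   # prints "2"

# Blanking an array element leaves a 'hole': the slot still exists,
# so the array is the same length as before.
my @a = (1, 2, 3);
$a[1] = undef;
print scalar(@a), "\n";        # prints "3"
```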
So, you have to compact the array. In most languages this means copying the array, or shifting each element after the tossed one down and then cutting the array size by 1. Perl, of course, has a function to do that.
There are several ways to do this -- set the element to undef and then copy the old array to a new one, generate a new array as you iterate the old one, etc. Each of these has merit, and may be less CPU-intensive with better performance, but requires more code to make it work.
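For instance, the "generate a new array as you iterate" idea is one line with grep. A sketch with made-up data -- the Priority field matches the check used later in this post, but the list itself is invented:

```perl
use strict;
use warnings;

my @links = (
    { Name => 'A', Priority => 0 },
    { Name => 'B', Priority => 2 },
    { Name => 'C', Priority => 0 },
);

# grep copies the matching elements into a new, already-compacted
# array, so there is no hole to clean up and no index bookkeeping.
my @featured = grep { $_->{Priority} > 0 } @links;
my @regular  = grep { $_->{Priority} <= 0 } @links;

printf "featured=%d regular=%d\n", scalar @featured, scalar @regular;
# prints "featured=1 regular=2"
```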
Or, use the Perl function and worry about performance later if it's a problem.
The simplest solution is to use "splice" and shift everything down. This assumes there are only going to be a few "featured" or "official" sites in each category.
As you iterate the original sorted array, you need to REMOVE the element you found that matches the 'featured' criteria. Then, because the array has shrunk, you need to decrement both $numlinks and $i by one, so the next pass starts up again AT THE CURRENT POSITION (there are other ways of doing this -- such as testing $#array each iteration -- but that adds a computation per pass, and is easy to get subtly wrong).
This also makes 2 assumptions that I can't test (since I don't have this mod installed on my sites).
1) that the loop form lets you change the control variables from inside the loop. Note that Perl's foreach over a range -- for my $i (0 .. $numlinks - 1) -- precomputes the list, so assigning to $i or $numlinks inside the loop has NO effect on the iteration; a C-style for (my $i = 0; $i < $numlinks; $i++) loop does honor both changes.
2) that @$links_r properly dereferences the array.
Code:
#####################
for (my $i = 0; $i < $numlinks; $i++) {  ## C-style loop: $i and $numlinks CAN be changed inside
    my $tmp = $LINKDB->array_to_hash (${$links_r}[$i]);
    ## $tmp is now a reference to a hash of the Link values, _NOT_ an array reference!
    ## delete()ing from $tmp would do nothing to the original array elements.
    if ($tmp->{Priority} > 0) {
        $OUT{featured} .= &site_html_link ($tmp);
        splice (@$links_r, $i, 1);  ## dereference the array and remove this element
        $numlinks--;                ## the array is now one element shorter
        $i--;                       ## re-check the element that just shifted into slot $i
    }
}
#####################
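An alternative that avoids the $i/$numlinks bookkeeping entirely is to walk the array backwards: a splice then only shifts elements that have already been examined. A sketch with made-up data:

```perl
use strict;
use warnings;

my @links = ( { Priority => 1 }, { Priority => 0 }, { Priority => 2 } );

# Iterating in reverse means splice() never disturbs the indexes we
# have yet to visit, so no index correction is needed.
for my $i (reverse 0 .. $#links) {
    splice(@links, $i, 1) if $links[$i]{Priority} > 0;
}

print scalar(@links), "\n";   # prints "1"
```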
As a variable scoped to the subroutine, $numlinks is altered in the loop and will hold the new size of the array by the end of it.
This should properly preserve the link counting, and not go off the end of the array, but you might want to look at resetting the following variables:
Code:
$OUT{total} = $numlinks;
$total_links = $numlinks;
These values will include the "featured" links as well ("Official" in your case). You might want to keep track of the two different totals, and you can do that by creating new $OUT{value} variables to pass to the template.
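A sketch of carrying both counts to the template -- the key names featured_total and regular_total are my invention, so use whatever your templates actually expect, and the counts here are example values only:

```perl
use strict;
use warnings;

# Pretend the loop spliced out 32 featured links and left 968 regular ones.
my $featured_count = 32;    # example value
my $numlinks       = 968;   # example value

my %OUT;
$OUT{featured_total} = $featured_count;              # hypothetical key
$OUT{regular_total}  = $numlinks;                    # hypothetical key
$OUT{total}          = $featured_count + $numlinks;  # grand total for the template

print "$OUT{total}\n";   # prints "1000"
```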
If you use the old (larger) totals to index the array from this point on, Perl won't actually raise an error -- it quietly returns undef for array elements that no longer exist -- which can be even harder to spot than an outright failure.
I don't know if this works, so let me know.
Someone else may have a more 'elegant' solution to this, since I freely admit I'm not a code guru with any of this.
In any case, adding those lines shouldn't do any damage; at worst they'll generate an error.
This has the benefit of _NOT_ doing two SELECT statements, though you might actually get better performance if you did.
Code:
my $get_plinks = $LINKDB->prepare (" SELECT * FROM Links
WHERE CategoryID = ?
AND Priority = 'Official'
ORDER BY $LINKS{build_sort_order_category}
LIMIT 1000 ");
my $get_links = $LINKDB->prepare (" SELECT * FROM Links
WHERE CategoryID = ?
AND Priority != 'Official'
ORDER BY $LINKS{build_sort_order_category}
LIMIT 1000 ");
You then use $get_plinks to generate a list, and use that in the "featured" subroutine, then use $get_links in the regular subroutine.
My problem with this is that the UNION of the two searches above _SHOULD_ equal the original search query (without the Priority=xx), but it may _NOT_.
By scanning the original found list, and breaking it up, you eliminate some potential errors that could have you going around and around for hours or days before you realize it's differences in the SELECT statements that cause the problem.
Then again, by using two searches, if you have 1000 "unofficial" sites and 32 "official" sites, you'll find all 1032 sites. With the original single query you'd only find the first 1000, and depending on the sort parameter that could be the 32 "official" sites plus 968 "unofficial" sites, or 1000 unofficial sites and no featured sites at all.
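To make that truncation risk concrete, here is a pure-Perl simulation of the 1032-site example (no database involved; the numbers are from the example above):

```perl
use strict;
use warnings;

# 32 official + 1000 unofficial sites, but the query is capped at 1000 rows.
# If the sort happens to put the official sites first, they all survive
# (and 32 unofficial sites are silently dropped):
my @officials_first = ( ('official') x 32, ('unofficial') x 1000 );
my $kept = grep { $_ eq 'official' } @officials_first[0 .. 999];
print "$kept\n";   # prints "32"

# If the sort puts them last, the LIMIT drops every one of them:
my @officials_last = ( ('unofficial') x 1000, ('official') x 32 );
$kept = grep { $_ eq 'official' } @officials_last[0 .. 999];
print "$kept\n";   # prints "0"
```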
Anyway, let me know if this works.
-----------
The question of "how much" comes up regularly in the forum, and in email. In case anyone is interested in the "costs" of programming, this tiny (3-line) mod to an existing mod would probably be a $75-$100 billable item if you requested it from a programmer. The problem was analyzed, 4-5 different solutions were considered, one was picked and implemented, along with other 'possibilities.' References were checked, code was commented, and the logic was documented and written down. Of course that would also include the 'debugging' part, which I'm leaving to you. It's not just 3 lines of code. I'll do this from time to time now, just as a reality check. FWIW: this was a bit more complicated and tricky to work out than it appeared on the surface, because it uses arrays rather than hashes, where the rest of Links is all hashes.
-----------
http://www.postcards.com FAQ:
http://www.postcards.com/FAQ/LinkSQL/