Gossamer Forum
Home : Products : Gossamer Links : Version 1.x :

Feature request

Feature request
This could be done with Alt Categories (if they are made to co-exist better with sorts), but it would be nice if there were a way to automate the process.

What I was thinking would be for a category like:


Where there are nearly 700 colleges in the 30 subcategories (conferences). I would like to have another subcategory that is an alphabetical listing of the colleges in all those subcategories, so that people have a third option for searching for schools (in addition to the regular category structure and searches).

To do this manually with Alt Categories would be an immense project. What I'm thinking is a flag of some sort to turn this on for specified categories or subcategories, which would then include any subcategories below the one specified. I suppose this could get rather confusing if you specified one within another...

Obviously, one problem would be the sheer number of links that would end up on the large alphabetical page. I suppose it could be taken a step further by having it build a subcategory for each letter of the alphabet in the new section. Thus, for the example above, you would have:


Does this sound like something that could be done?

By the way, what's the thought on number of links per page vs. spanning vs. all-in-one-page, in terms of server load, usability, and page views? I go with 30 links per page, as I find the 10 per page that many SE's use annoyingly few. As a user, I think I'd rather see everything in one page, although waiting for everything to load (especially with a large category) is a bad thing...

Re: Feature request
This goes back to the question someone else asked. Look at letter.cgi; it's tied to search.cgi.

You can put a letter bar at the top of your category pages, and a user can click on the letter to get a page with items from that category starting with that letter.
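A rough sketch of how such a bar might be generated. The `letter` and `CategoryID` parameter names here are guesses for illustration; check what your letter.cgi actually expects before wiring this into a template.

```perl
#!/usr/bin/perl
# Sketch: build an A-Z letter bar for one category page, with each
# letter linking to letter.cgi. Parameter names are assumptions.
use strict;

sub letter_bar {
    my ($cat_id) = @_;
    return join ' | ',
        map { qq(<a href="/cgi-bin/letter.cgi?letter=$_&CategoryID=$cat_id">$_</a>) }
        'A' .. 'Z';
}

print letter_bar(42), "\n";
```

Dropping the `CategoryID` parameter would give you the site-wide (all-categories) version of the same bar.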

Another option I can see is a 'monitor' program that looks at the input a person makes. Let's say a person enters (can only enter) a school in the upper level category. The monitor program on "validate" looks at the school and makes some decisions about it. It decides what alt-categories it should belong to based on the letter (and can create the alt-category if it doesn't exist).
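The letter-to-category mapping at the heart of that monitor idea is simple. Here's a toy sketch; the "Alphabetical/X" naming scheme and the in-memory `%categories` store are made up for illustration (in Links you'd read and write the Category table instead):

```perl
#!/usr/bin/perl
# Sketch of the "monitor" idea: on validate, pick (and create if it
# doesn't exist) an alphabetical alt-category from the entry's first
# letter. %categories stands in for the real Category table.
use strict;

my %categories;    # name => id, pretend category table
my $next_id = 1;

sub alt_category_for {
    my ($title) = @_;
    my $first  = uc substr($title, 0, 1);
    my $letter = $first =~ /[A-Z]/ ? $first : '0-9';   # lump digits/punctuation together
    my $name   = "Alphabetical/$letter";
    $categories{$name} ||= $next_id++;                 # create on first use
    return $name;
}

print alt_category_for('Duke University'), "\n";   # Alphabetical/D
```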

This would not be as difficult as it seems, but I'd wait for the next Links release before tackling it, since it's only a few weeks away.

Re: Feature request
Thanks, pugdog.

It's rather ironic that the letter.cgi thread came to the top right around the time I posted this. I'll have to play around with it and see if it will do what I want.

If I understand correctly, this would place a letter bar at the top of all categories, and it would display results from every category? There isn't any easy way to specify certain categories and/or subcats, is there?

What about page spanning? Would following the nph-build.cgi example have much chance of working (I'm guessing yes, because it works in page.cgi)?

Re: Feature request
If you alter search.cgi to use a db->do format rather than a db->query format, you can pass anything you want into search.cgi. As long as you validate the input and escape anything that is going to be searched on, you can have the template generate a link to the modified search.cgi, passing a query string that contains the search parameter:

NAME Like 'A%' AND CategoryID='<%cat_id%>'

and passes this to the select command:

SELECT * FROM Links WHERE $query

Essentially you are making a specialized search.cgi to be called from the letter bar, which is generated on each category page with the correct category_id.
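A minimal sketch of what that specialized script's guts could look like, written against plain DBI (which the Links SQL wrappers sit on top of). Using placeholders instead of interpolating the query string sidesteps the escaping problem entirely; the `NAME`/`CategoryID` column names follow the example above, and `$dbh` stands for a database handle created elsewhere:

```perl
#!/usr/bin/perl
# Sketch of a lean "letter search": validate the input first, then run
# one placeholder query. NAME/CategoryID names follow the post; $dbh is
# assumed to be a DBI handle connected elsewhere.
use strict;

# Accept exactly one letter and one numeric category id, or refuse.
sub validate {
    my ($letter, $cat_id) = @_;
    return unless $letter =~ /^[A-Za-z]$/ and $cat_id =~ /^\d+$/;
    return (uc $letter, $cat_id);
}

sub letter_search {
    my ($dbh, $letter, $cat_id) = @_;
    my $sth = $dbh->prepare(
        'SELECT * FROM Links WHERE NAME LIKE ? AND CategoryID = ?'
    );
    $sth->execute("$letter%", $cat_id);   # e.g. 'A%' matches names starting with A
    return $sth->fetchall_arrayref({});   # arrayref of hashrefs, one per link
}

# Usage (not run here):
#   my ($letter, $cat_id) = validate($in{letter}, $in{CategoryID}) or die "bad input";
#   my $rows = letter_search($dbh, $letter, $cat_id);
```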

With OOP and modular concepts, there is no reason why you have to have ONE script do everything. As long as the logic for which script to call is in place, and there is no ambiguity, use separate scripts for individual jobs. Do-it-all scripts bring no benefit, only a performance hit.

For instance, if you have one sort of search that requires date parsing or processing, and you need to use a module like Date::Manip to get all the features you want, you are _probably_ better off making that a separate search script that is called if the search involves dates. You avoid all the overhead of loading the modules each time.

On a busy server, the server would do better running different lean scripts than one bloated one over and over. Especially if most of the bloat was infrequently used.

As long as you play by the rules in writing each script, there is no real maintenance problem either. Changes to the modules shouldn't affect them, and if you need to change one script's behaviour you don't have to worry about affecting them all.

Sometimes you end up going full circle -- one of your "lean" scripts starts doing everything the original one did, but may do it better. And, it does it your way.

Anyway, think modular, think OOP.

Sometimes, there is a lot of penalty for using objects (you end up loading the whole object when you only need a part of it) but it does keep you honest and keep all your code maintainable.

So, the next best way to keep things lean, is to write access scripts that only load the objects needed for the job. If you aren't going to deal with dates, don't load a date object.

Or, if all you were going to have jump.cgi do is jump a person to a link, and not update or track any hits or stats, it _might_ be better (performance-wise) to make a direct call through DBI to find the matching record and grab the URL field, rather than load the DBSQL.pm module and all the other overhead. _BUT_ once you start asking jump.cgi to work with the Links database, it's _SAFER_ to follow the rules.
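If you did go the direct route, the whole lookup might be something like this. The `ID`/`URL` field names are assumptions, and `$dbh` is again a plain DBI handle made elsewhere:

```perl
#!/usr/bin/perl
# Sketch of a stripped-down jump: one validated id in, one URL out,
# no hit tracking and no DBSQL.pm. Field names are assumptions.
use strict;

sub jump_url {
    my ($dbh, $id) = @_;
    return unless defined $id and $id =~ /^\d+$/;   # validate before touching the db
    my ($url) = $dbh->selectrow_array(
        'SELECT URL FROM Links WHERE ID = ?', undef, $id
    );
    return $url;
}

# In the CGI itself you'd then send the redirect:
#   print "Location: $url\n\n";
```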

Make sense?

So, back to the original point. If you know exactly what you want this type of search to do, write a lean, mean search_nn.cgi script to do it. Don't "add" that functionality to the existing search script.

You might not want 200 different search scripts, but a couple of dozen that are really just wrappers for the appropriate SQL calls might be better than one script that can handle all 30 different types of searches.

Remember: if the all-in-one script has to load 5 modules while each targeted search loads only 2 or 3, the all-in-one is 40k while the targeted ones are 11k, and the all-in-one has to make 8 or 9 decisions to figure out what sort of query it has while a targeted one only has to make one, then the system/cpu/resource savings start to add up.

I'm not sure how this applies -- if at all -- under mod_perl, but even without any start-up (compile, load, etc.) penalty, you still have the overhead of multiple decisions for each query when (if?) one would do.

I'm not sure there is one right answer for all situations. In mine, I'm centralizing all the calls and access routines into the Links SQL modules and HTML_Templates.pm, and putting all the "configuration" stuff into Links.pm. Each add-in has an $ADD_IN_NAME{} hash variable attached to it, and has access to the "site-wide" $LINKS{} variables. Anything the add-in needs beyond the $LINKS{} variables is added that way; it can go into the %GLOBALS hash, or into the key=>value pairs sent to any routines.

I'm making all access to the database from the website go through one of the .cgi programs, but I'm starting to target the .cgi programs to do specific jobs.

For me, creating multiple scripts based on the same data interface is easier to maintain. For others, one large script for each main function might be better. But that means one change can kill a whole site/script.

Example: My original postcards script was an all-in-one that loaded 3 modules on every invocation and was pretty much brute force. I've reduced its size by over 80%, at the "penalty" of loading Links.pm, HTML_Templates.pm and DBSQL.pm to do all my database access. I've _gained_ all the features (and uniformity) of Links, and thus increased the maintainability and integration with the site. Additionally, I can split the pick-up-card function into a pickup.cgi: a stripped-down program that makes about 3 function calls.

I'm looking at a performance increase (I hope) over the way it was being done before (not even counting the elimination of the flat-file accesses).

At the same time, the internal logic of the program is completely consistent with the site, all the templates use the same codes and code formats, etc.

This got a little long, but people seem to be striking out in different directions -- finding 3rd party scripts, using PHP, tinkering with this and that.... The more I looked around -- and I looked at almost everything out there -- the more I kept coming back to what I had here.

I think if people spent more time looking at what the Links code does, what the sequence of steps is, and what each different "action" does, a world of possibilities INSIDE the framework that's already on your server would suddenly become available.

For instance.... Think about creating tables to hold data that normally disappears or is calculated a dozen times an hour. Use PHP or SSI to pull these values into "dynamic" pages. None of the overhead and problems of flat-file access exist with MySQL. _use_ databases and tables instead.
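Writing such a cache table is one `REPLACE` per value (REPLACE INTO being the MySQL idiom for insert-or-overwrite). The `Cache` table and its columns here are invented for illustration, and `$dbh` is a DBI handle connected elsewhere:

```perl
#!/usr/bin/perl
# Sketch of the cache-table idea: store a value that would otherwise be
# recomputed a dozen times an hour (say, a link count), so SSI/PHP pages
# can fetch it with one cheap SELECT. Table/column names are made up.
use strict;

sub cache_value {
    my ($dbh, $name, $value) = @_;
    $dbh->do(
        'REPLACE INTO Cache (Name, Value, Updated) VALUES (?, ?, NOW())',
        undef, $name, $value
    );
}

# Usage (not run here):
#   cache_value($dbh, 'link_count', 693);
# and on the page side, roughly: SELECT Value FROM Cache WHERE Name = 'link_count'
```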

Anyway... this got long, and sort of off the point. The point is/was/should be that sometimes taking search.cgi or jump.cgi or add.cgi, changing it to do one specific job well, and calling that script when needed is better than trying to change the main script to do it all.