Gossamer Forum

Plug-in two step


I think I have the problem more defined now.

The plug-ins and the plug-in system need to handle a third option, not just stop/continue -- Commit.

Here's why.

Let's follow the uploading of a file.

A data stream is uploaded.
If the pre-hook gets it, it doesn't know whether the basic data passed inspection.
If the post-hook gets it, the Add routine still doesn't know whether the data passes inspection.

A way needs to be created in the main access utilities to handle the data, but call the plug-in BEFORE the "COMMIT".

On return, the plug-in would send "COMMIT" or "STOP".

If it is COMMIT, the routine knows the plug-in OK'd all its data, and the routine can now safely update. The plug-in can then either be re-entered on a post-hook with 'commit' set (so it knows where it's at) and finish up, or be skipped if it does its work in a pre- and commit hook.

Is this making sense?

What has to happen is that routines that "do something" to links have to have a way to hook into the routine before it commits its data, but after it has done everything else.

This means logically looking at the various subroutines that have hooks, and figuring out where the best place to hook in is. Pre- and post-hooks make broad sense, but so does a COMMIT hook.
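To make the three-way idea concrete, here is a rough dispatcher sketch. Links itself is Perl, so this Python is purely illustrative, and all names (HookResult, run_add, write_record) are hypothetical:

```python
from enum import Enum

class HookResult(Enum):
    CONTINUE = "continue"   # proceed with the normal flow
    STOP = "stop"           # abort; nothing gets written
    COMMIT = "commit"       # plug-in approved its data; safe to write

def run_add(record, pre_hooks, commit_hooks, post_hooks, write_record):
    """Dispatch pre-, commit-, and post-hooks around a single write."""
    data = dict(record)

    # Pre-hooks see the raw form data, before validation.
    for hook in pre_hooks:
        result, data = hook(data)
        if result is HookResult.STOP:
            return None  # bounce back to the caller; nothing written

    # ... the main routine would clean and validate `data` here ...

    # Commit-hooks run after validation but BEFORE the write,
    # so a plug-in can still veto without anything to roll back.
    for hook in commit_hooks:
        result, data = hook(data)
        if result is HookResult.STOP:
            return None

    write_record(data)  # the actual database update ("COMMIT")

    # Post-hooks run with the commit already done.
    for hook in post_hooks:
        _, data = hook(data)
    return data
```

The point of the sketch is only the ordering: a veto in a commit-hook still costs nothing, because the write has not happened yet.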

I hope this is clearer.

PUGDOG® Enterprises, Inc.
FAQ: http://LinkSQL.com/FAQ

Re: Plug-in two step
Why can't the pre-hook just check its own data and, if it has problems, return an error? The only downside is the user won't see all errors on one screen, but that should be OK.



Gossamer Threads Inc.
Re: Plug-in two step
The plug-in would have to completely rewrite and bypass the existing routine.

In some cases, that would be a _good_thing_

But, if a bunch of plug-ins were trying to do the same thing, each of their changes would have to incorporate the other plug-ins'. Also, if Links were updated, the plug-in core code would have to be updated.

The whole idea is to make the plug-ins independent of the links modules and the other changes to the data stream.

Think of the old rules for writing Star Trek books. The characters could go through hell, but they had to be the same when the story ended, so that another book could pick up and start over. The reader knew that all this stuff had happened, but the characters were largely unchanged (Spock got better at the mind-meld, the Corbomite Maneuver was used twice, etc.).

If the rules aren't set now at the beginning, writing plug-ins is going to become a nightmare!

A pre-hook should be able to safely assume it gets the data BEFORE this routine does its stuff, but it might be AFTER some other plug-in got the data. Therefore, the plug-in should only do things that won't affect anything else's transformations.

The plug-in author should be reasonably comfortable that the subroutine will do its own stuff (consider it the "official" plug-in) but still leave the data stream as expected.

If the routine starts to write data to the database, or change, drop, or hide data before the plug-in on the tail-end hook gets it, then the routine (or official plug-in) is misbehaving.

There is no reason a plug-in author should have to rewrite the data access routines or the core routines. The whole point is to allow the plug-ins to survive updates, upgrades and other plug-ins.

Maybe there needs to be one more layer of abstraction -- Commit.pm

Someone in the plug-in chain has to send the data to Commit, or generate an error that bounces back up saying why it wasn't committed. That error data needs to be handed back to the originating routine, so it can be re-presented to the user to fix.
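One way to picture that Commit gate (a Python sketch with hypothetical names -- the real layer would presumably be a Perl Commit.pm): every validator must approve, and a failure carries the rejected data and the reason back up the chain for re-presentation.

```python
class CommitError(Exception):
    """Carries the rejected data and a reason back to the caller."""
    def __init__(self, reason, data):
        super().__init__(reason)
        self.reason = reason
        self.data = data

def commit(data, validators, write):
    """Last gate before the write: every validator must approve,
    otherwise the original data bounces back untouched."""
    for check in validators:
        problem = check(data)   # returns None, or a reason string
        if problem is not None:
            raise CommitError(problem, data)
    write(data)
    return data
```

The caller would catch CommitError and re-present `err.data` with `err.reason` to the user, instead of having to reverse-engineer what a half-finished routine left behind.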

This problem is painfully obvious with the modify_upload_file routine.

If modify didn't write all the passed-in fields to the Changes table, I would have no way to pass the data to the _post_hook to operate on the file:

1) delete it if it doesn't pass inspection, and then roll back, un-do the Changes table, re-present all the data to the user, etc.

2) store it under a pre-validate name until validated, then delete the old file, move this one into place, and update the main record.

(and even some other specific options).

Here's my poor attempt at flow:

-> _pre_hooks   -> [form_data_passed_in] [additional_tags] ->
-> main_routine -> [form_data_passed_in] [form_data_cleaned_and_validated]
                   [additional_tags] [additional_tags_cleaned_and_validated_or_ignored]
                   [SUCCESS/ERROR] ->
-> Commit_hooks -> [form_data_passed_in] [form_data_final]
                   [additional_tags] [additional_tags_final] ->
-> Commit_actions/final_actions ->
-> _post_hooks  -> [form_data_passed_in] [form_data_final]
                   [additional_tags] [additional_tags_final]
                   {actions completed}
-> return
At any point, the data needs to be able to be backed out and sent back to the calling routine, no matter what plug-ins have tried to chew it up. Or it can be bounced as an error.
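A minimal sketch of that back-out requirement (Python, hypothetical names): run the plug-in steps against a working copy, and if any step blows up, hand the untouched original back to the caller along with the reason.

```python
import copy

def with_rollback(data, steps):
    """Run plug-in steps over a copy of the data; if any step raises,
    return the untouched original plus the error, never a half-chewed record."""
    snapshot = copy.deepcopy(data)   # what the caller gets back on failure
    work = copy.deepcopy(data)       # what the plug-ins are allowed to chew on
    try:
        for step in steps:
            work = step(work)
        return work, None
    except Exception as err:         # any plug-in failure backs everything out
        return snapshot, str(err)
```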

This is not a problem if there are 5 people all doing different things, and following their own rules.

But, if everyone starts to do it, things will get ugly really, really fast.

How can I prevent another mod from overriding my plug-in, since my plug-in overrides the main routine? Maybe theirs requires the main routine.

If we all can't expect the same data stream in, and out, and in the middle, then we have no way of operating on it properly.

I don't know if this means there has to be another object -- $IN and $DATA, where $DATA is the data block being passed from one routine to the next, and contains all the signal information from each plug-in.
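A rough sketch of what such a $DATA block might carry (Python, all names hypothetical): the working record, plus the signal each plug-in in the chain left behind, so any later hook can see what already happened.

```python
class PluginData:
    """Hypothetical $DATA block: the record moving down the chain,
    plus the signal every plug-in reported along the way."""
    def __init__(self, record):
        self.record = dict(record)   # working copy of the data stream
        self.signals = {}            # plug-in name -> status it reported
        self.errors = []             # reasons to bounce back to the user

    def signal(self, plugin, status, reason=None):
        self.signals[plugin] = status
        if status == "error" and reason:
            self.errors.append(f"{plugin}: {reason}")

    def ok(self):
        return not self.errors
```

With a shared block like this, a plug-in doesn't have to guess whether another plug-in already ran, vetoed, or committed -- it reads the signals instead.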

The simple plug-in system is really cool. But it's starting to break down for me already, and I don't consider this stage of my mods really "complicated" yet. The nasty stuff is yet to come :)

Some thoughts. Maybe I'm missing something. I often do. You have to look at logic from the same position, or it can seem very, very different.

PUGDOG® Enterprises, Inc.
FAQ: http://LinkSQL.com/FAQ