That depends on your specific application.
I needed to upgrade with only 400 links; other people can get by with many, many more.
If your links are short, you have little activity, and you use direct linking rather than jump.cgi, you can grow a pretty large database.
I think the question is not so much 'how big can it get before I upgrade?' but really 'how long can I afford NOT to?'
If you have an active, growing site with a lot of visitors and additions or changes, then SQL will make a big difference to you. If you are just compiling a list of URLs and not much is going on, then Links 2.0 is probably fine for the long run.
What will probably happen is that benchmarks comparing the two versions will start to emerge, and people will start pointing out other reasons to upgrade.
FWIW ... I still have my Links 2.0 site as the main one, but I haven't added to it in over 3 weeks because, IMHO, it's bumping against its performance ceiling at 400 links. I've split the site and have LinkSQL running the newer area, since it has fewer modifications, and the difference in performance is striking. My beta site for the old area is going to go live any day now, and the performance difference there is striking too.
But the most amazing difference is how easy it is to effect mods and additions once you get the hang of it. There are no file-handling routines to worry about, since everything is just another call to the SQL database with a different table name. Data from one area of the program -- or even from other databases (programs, applications, etc.) -- is available in a single call (well, 2 lines).
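To give you an idea, here's a minimal sketch using Perl's DBI module -- the database name, table, and column names here are made up for illustration, not the actual Links SQL schema:

    use DBI;

    # Connect once; DBI and DBD::mysql handle the socket and protocol.
    my $dbh = DBI->connect('DBI:mysql:links_db', 'user', 'password')
        or die $DBI::errstr;

    # The "2 lines" -- prepare a query and run it. Swap the table name
    # and you're reading a completely different part of your data.
    my $sth = $dbh->prepare('SELECT Title, URL FROM Links WHERE CategoryID = ?');
    $sth->execute(42);

    while (my ($title, $url) = $sth->fetchrow_array) {
        print "$title => $url\n";
    }

    $dbh->disconnect;

No flock(), no temp files, no rewriting the whole database just to change one record.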
The amount of data a MySQL table can hold is at _least_ 1 million rows, 40 columns, and I forget how many bytes. That's just ONE table, not the whole database, and that's only what has been known to work solidly for 2+ years. Compare that to the data corruption people have had with various flat-file add-ons holding only a few hundred records!
Access to the MySQL database is much faster, and it handles the queuing of requests. All of that is optimized, very fast C code. The more requests you have for information, the bigger the speed difference you will see between Links 2.0 and LinkSQL in requests handled per unit time -- and thus, better response from your server overall.
50 to 100 requests per minute to the SQL engine is a reasonable load on a good machine. How well do you think your server would handle 50-100 requests per minute on Links 2.0 (I'm not talking about the HTTPD serving PAGES, I'm talking about CGI requests for a link -- a search, etc.) if your database were more than 150 or 200k in size? Just making those requests would kill your server, even if it had 500 MB of RAM.
With SQL, those requests are queued and handled sequentially (or perhaps simultaneously, depending on the engine and the access), but you are not slurping the database into memory each time, reading through it, and parsing out the data. You are requesting it from the SQL engine, which handles it efficiently and passes the results back.
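Here's the difference in a nutshell -- again just a sketch, with a hypothetical '|'-delimited flat file and a hypothetical Links table, not the real Links 2.0 or Links SQL code:

    # Flat-file style (Links 2.0-ish): every request re-reads and
    # re-parses the ENTIRE file, no matter how few records match.
    open my $fh, '<', 'links.db' or die $!;
    while (my $line = <$fh>) {
        chomp $line;
        my @fields = split /\|/, $line;   # assuming '|'-delimited records
        print "$fields[1]\n" if $fields[1] =~ /perl/i;
    }
    close $fh;

    # SQL style: the engine does the searching, and only the matching
    # rows ever come back to your script.
    use DBI;
    my $dbh = DBI->connect('DBI:mysql:links_db', 'user', 'password')
        or die $DBI::errstr;
    my $sth = $dbh->prepare('SELECT Title FROM Links WHERE Title LIKE ?');
    $sth->execute('%perl%');
    while (my ($title) = $sth->fetchrow_array) {
        print "$title\n";
    }
    $dbh->disconnect;

Multiply the flat-file version by 50-100 requests per minute and you can see where the RAM and CPU go.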
The MySQL site has some really good performance data.
Moving from Links 2.0 to LinkSQL will shock you.
Granted, Links 2.0 is a more mature product, but it's also limited.
I'll NEVER (well, never say never) write another non-SQL database application again. There is no reason to. The suffering you go through trying to make flat files behave is eliminated with SQL.
NOTE: I'm using SQL to refer to both the query language and the engine, and specifically in this case to MySQL... I wouldn't touch M$-SQL and ODBC. Why? How many sites do you visit that give you "ODBC error xx in module xxx.asp" or "Configuration error"? How many times do you see something like that from a Unix-based system? I've started to collect M$ error messages, just to tally them. I've gotten one Unix message, and that was "Connection refused, over 300 connections already" -- which I think is legitimate, since they were probably running on an Intel box.