
Mailing List Archive: MythTV: Users

Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question)

 

 



mtdean at thirdcontact

Aug 31, 2010, 4:50 PM

Post #1 of 18
Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question)

On 08/31/2010 05:31 PM, Michael T. Dean wrote:
> On 08/31/2010 05:24 PM, Yeechang Lee wrote:
>> Michael T. Dean says:
>>> I'm planning to make changes to mythfilldatabase so that it always
>>> retrieves all of the data (at minimum tomorrow through +13) for
>>> Schedules Direct users.
>> Would --refresh-all --refresh-today be sufficient to accomplish this
>> today?
> No. That would be /very/ bad. The problem now is that
> mythfilldatabase makes 2 separate requests--one to get each of today
> and +13. (And, in truth, it can make additional requests to get +12
> and +11 and ... if it detects significant holes in the listings.)
> Using --refresh-all makes /13/ requests (+1 through +13) and adding
> --refresh-today adds a 14th request.
>
> AIUI, Robert has said that pulling all the data most likely wouldn't be
> a problem (more testing still required, so please bear with us :) if
> it was done as a single request for all 14 days of listings. To
> handle this properly and reliably on all our users' systems, we need
> some changes to the code and, possibly, to the MythTV database
> schema. We're planning these changes, but they will take some time.
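The request arithmetic quoted above can be illustrated with a small sketch (not MythTV code; the function name is made up for illustration):

```python
# Sketch of the request counts described above: the default run makes 2
# TMS requests (today and +13), --refresh-all makes 13 (+1 through +13),
# and adding --refresh-today makes a 14th.
def request_count(refresh_all=False, refresh_today=False):
    if refresh_all:
        n = 13          # one request per day, +1 through +13
        if refresh_today:
            n += 1      # today becomes a 14th request
        return n
    return 2            # default: one for today, one for +13

assert request_count() == 2
assert request_count(refresh_all=True) == 13
assert request_count(refresh_all=True, refresh_today=True) == 14
```

The point of --dd-grab-all, by contrast, is to fetch all 14 days in a single request.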

And, now it's time to come clean. What I hadn't said is that we
actually have full support for grabbing all days of listings data from
TMS in a single pull. It's been this way for quite some time (including
in 0.23-fixes, and possibly even 0.22-fixes).

After discussion with some of the Schedules Direct board and a lot of
effort, including testing, by people such as Robert Eden and Chris
Petersen, we've decided to "advertise" this approach. Since it works in
0.23-fixes as well as trunk, users may enable its use immediately.

However, use of --dd-grab-all has not been optimized, so it can take
significantly more CPU and RAM than a "normal" run of mythfilldatabase.
Users with resource-limited backend systems may not be able to use the
argument. We also ask that those users who cannot use --dd-grab-all do
not use --refresh-all, either. Instead, they should run with default
refresh options.

Please see http://svn.mythtv.org/trac/changeset/26033 for more
information (including how to enable its use in automatic
mythfilldatabase runs), and--if you have a sufficiently-powerful master
backend system--please enable --dd-grab-all. It will help TMS as much
as it helps you.
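For those wanting a head start, enabling it might look like the following sketch. The `MythFillDatabaseArgs` settings key and `mythconverg` database name are assumptions based on typical MythTV installs, not taken from this thread; treat the changeset above as the authoritative instructions.

```shell
# Try a one-off manual run first:
mythfilldatabase --dd-grab-all

# Hedged sketch: add --dd-grab-all to the arguments the backend passes to
# mythfilldatabase on automatic runs ("MythFillDatabaseArgs" is the usual
# settings-table key; verify against the changeset before relying on it).
mysql mythconverg -e \
  "UPDATE settings SET data = '--dd-grab-all' WHERE value = 'MythFillDatabaseArgs';"
```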

(And, for those who are wondering, even though I was holding out on you
when it came to information, I wasn't silently benefiting from what I
knew. I have never used --dd-grab-all on my systems. That said, I will
be enabling it, now that we have approval to do so.)

Enjoy,
Mike
_______________________________________________
mythtv-users mailing list
mythtv-users [at] mythtv
http://mythtv.org/cgi-bin/mailman/listinfo/mythtv-users


ylee at pobox

Aug 31, 2010, 10:34 PM

Post #2 of 18
Re: Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question)

Michael T. Dean <mtdean [at] thirdcontact> says:
> However, use of --dd-grab-all has not been optimized, so it can take
> significantly more CPU and RAM than a "normal" run of
> mythfilldatabase. Users with resource-limited backend systems may
> not be able to use the argument.

Thanks for the information. My Pentium 4 3.0GHz frontend/master
backend with 2GB RAM and no swap usage completed a --dd-grab-all run
in 17 minutes with perhaps 30 seconds of that spent downloading the
feed. No change in RAM usage according to free -m (about 900MB free on
the -/+ buffers/cache line); nothing unusual in top. Admittedly I
wasn't recording anything at the time but I don't expect the option to
pose an issue; was your warning more directed to those running
backends on the likes of a Via box with 512MB RAM?

--
MythTV FAQ Q: "Cheap frontend/backend?" A: Revo, $200-300 @ Newegg
Q: "Record HD cable/satellite?" A: Hauppauge HD-PVR, $200 @ Newegg
Q: "Can't change Live TV channels w/multirec!" A: Hit NEXTCARD key
More answers @ <URL:http://www.gossamer-threads.com/lists/mythtv/>


ozzy.lash at gmail

Aug 31, 2010, 10:39 PM

Post #3 of 18
Re: Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question)

On Tue, Aug 31, 2010 at 6:50 PM, Michael T. Dean
<mtdean [at] thirdcontact> wrote:
> *snip*
>
> However, use of --dd-grab-all has not been optimized, so it can take
> significantly more CPU and RAM than a "normal" run of mythfilldatabase.
>  Users with resource-limited backend systems may not be able to use the
> argument.

I just tried it on a newish AMD quad core system with 4 Gig of RAM
thinking "I don't have a resource limited system". While I was doing
a single recording over firewire and simultaneous commercial flagging,
the run took a little over 45 minutes (only about 2.5 minutes of that
spent downloading the data). I feel so humbled! I have 2 sources: one
for clear QAM, which only has 50 channels or so (maybe less), and one
for firewire, which has a few hundred (actually it says 502 in the
mythfilldatabase output).
Clearing the data for source 1 for all 14 days took about 3 minutes,
and clearing the data for the firewire source took about 30 minutes.
Does this point to something I need to tune in my database setup? I
have some tweaks to my mysql settings that were suggested a long time
ago on this list if you had a lot of memory (probably 2 gig back then).
Here they are:

key_buffer_size = 192M

query_cache_limit = 2M
query_cache_size = 64M
query_cache_type = 1

table_cache = 128
myisam_sort_buffer_size = 192M
sort_buffer_size = 2M
read_buffer_size = 2M
join_buffer_size = 2M
read_rnd_buffer_size = 2M


I looked at the listings at one point during the run through mythweb,
and they were all cleared, which worried me about using this option
all the time. What would happen if a recording were to start during
this interval?

Bill


mtdean at thirdcontact

Aug 31, 2010, 11:16 PM

Post #4 of 18
Re: Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question)

On 09/01/2010 01:39 AM, Ozzy Lash wrote:
> On Tue, Aug 31, 2010 at 6:50 PM, Michael T. Dean wrote:
>> *snip*
> I just tried it on a newish AMD quad core system with 4 Gig of RAM
> thinking "I don't have a resource limited system". While I was doing
> a single recording over firewire and simultaneous commercial flagging,
> the run took a little over 45 minutes (only about 2.5 downloading the
> data). I feel so humbled! I have 2 sources, one for clear QAM which
> only has 50 channels or so (maybe less) and one for firewire which has
> a few hundred (actually it says 502 in the mythfilldatabase output).
> Clearing the data for source 1 for all 14 days took about 3 minutes,
> and clearing the data for the firewire source took about 30 minutes.
> Does this point to something I need to tune in my database setup? I
> have some tweaks to my mysql settings that were suggested a long time
> ago on this list if you had a lot of memory (probably 2 gig back then)
> Here they are:

It's possible that the slow DB update was primarily due to DB locking
caused by the fact that the database was in use while recording. It's
also possible that your lineup--550 channels--may just be asking a lot of
even your system.

If the run didn't cause any problems with your recordings, you can
continue to use --dd-grab-all, even if it does take a long time to complete.

Regardless, if you decide not to use it with 0.23-fixes, you may want to
try again when you upgrade to 0.24 (after it's released). It will have
some optimizations that may make it much less resource intensive, even
for a 550-channel system. In truth, I expect with 0.24, your
--dd-grab-all run time will be almost the same as a run time without
that argument.

As far as the DB optimizations go, I'll leave it to others to help. The
one thing I will say, however, is that having the database's binary data
files on the same file system that you're using to record could have a
huge impact on performance. In truth, the best setup puts the database
on a separate spindle.

> I looked at the listings at one point during the run through mythweb,
> and they were all cleared, which worried me about using this option
> all the time. What would happen if a recording were to start during this
> interval?

That I can't tell you. If the scheduler runs without any listings in
the database, it won't record anything. In truth, though, if your
system is taking 45 minutes to complete the run, you are probably better
equipped to test that "what if" than anyone else, since you have such a
large window in which to place the recording start. :) I'd be very
interested to see your results.

Mike


mtdean at thirdcontact

Aug 31, 2010, 11:28 PM

Post #5 of 18
Re: Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question)

On 09/01/2010 01:34 AM, Yeechang Lee wrote:
> *snip*
> Thanks for the information. My Pentium 4 3.0GHz frontend/master
> backend with 2GB RAM and no swap usage completed a --dd-grab-all run
> in 17 minutes with perhaps 30 seconds of that spent downloading the
> feed. No change in RAM usage according to free -m (about 900MB free on
> the -/+ buffers/cache line); nothing unusual in top. Admittedly I
> wasn't recording anything at the time but I don't expect the option to
> pose an issue; was your warning more directed to those running
> backends on the likes of a Via box with 512MB RAM?

It was directed at everyone as a "test-first--not my fault if things
don't work" type of warning. Basically, if you choose to use this
argument, you will get the most-current listings possible, but you take
responsibility if it overloads your backend system today or some time in
the future. :)

In truth, those of us who use this are helping to build up the first
real-world data on how it operates. So, tales of your experiences--both
good and bad--are appreciated. Think of it as kind of "beta" testing
the feature.

And, in keeping with that thought, please note that inefficiencies in
the --dd-grab-all code aren't considered bugs. Optimization is a
planned feature, but if --dd-grab-all proves too much for your hardware,
please don't open tickets for it unless they include patches. It's not
expected to work for everyone, especially in 0.23-fixes.

Mike


cynical at penguinness

Aug 31, 2010, 11:50 PM

Post #6 of 18
Re: Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question)

On 8/31/10 4:50 PM, Michael T. Dean wrote:

> *snip*
>
> However, use of --dd-grab-all has not been optimized, so it can take
> significantly more CPU and RAM than a "normal" run of mythfilldatabase.
> Users with resource-limited backend systems may not be able to use the
> argument. We also ask that those users who cannot use --dd-grab-all do
> not use --refresh-all, either. Instead, they should run with default
> refresh options.

*snip*

Neat! As an additional data point, here are my results:

I ran it on my combo machine, which consists of:

C2D 8400
2 gig RAM
mythbackend version: branches/release-0-22-fixes [22594]
Mythbuntu 9.10 64-bit install (if it's not broke yet...)

(Was built before VDPAU existed)

and the only real thing I noticed was that the load average went up to
0.25 (normally around .01 to nada when idle).

The SQL server is a separate machine altogether. All told, it took
less than a minute to run.

I call this a shiny thing to use. Thanks for the info!


asherml at gmail

Sep 1, 2010, 5:30 AM

Post #7 of 18
Re: Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question)

On Sep 1, 2010, at 2:16 AM, Michael T. Dean wrote:

> On 09/01/2010 01:39 AM, Ozzy Lash wrote:
>> On Tue, Aug 31, 2010 at 6:50 PM, Michael T. Dean wrote:
>
>> I looked at the listings at one point during the run through mythweb,
>> and they were all cleared, which worried me about using this option
>> all the time. What would happen if a recording were to start during this
>> interval?
>
> That I can't tell you. If the scheduler runs without any listings in the database, it won't record anything. In truth, though, if your system is taking 45mins to complete the run, you are probably better equipped to test that "what if" than anyone else, since you have such a large working time in which to put the recording start. :) I'd be very interested to see your results.

Does --dd-grab-all handle the table update differently than the --refresh variants? I don't recall the tables being cleared (and therefore the listings going empty) for an extended period of time using the old arguments. Of course, I haven't paid this close attention to a mythfilldatabase run in years.

Another performance datapoint: 2GHz AMD 4850e, 2GB DDR2, took ~30min to do Boston OTA (13 channels) and Boston DirecTV (357 channels) lineups, with a mythcommflag of a HD-PVR show running in the background. No noticeable effect on playback or memory usage (peaked @34MB resident, ~350MB total). Downloaded ~3MB of data in 96s. I'm using 0.23-fixes. Nearly half of the total time was spent clearing data for the DirecTV source.

David.



lynchmv at gmail

Sep 1, 2010, 6:32 AM

Post #8 of 18
Re: Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question)

On Tue, Aug 31, 2010 at 7:50 PM, Michael T. Dean
<mtdean [at] thirdcontact> wrote:
> However, use of --dd-grab-all has not been optimized, so it can take
> significantly more CPU and RAM than a "normal" run of mythfilldatabase.
>  Users with resource-limited backend systems may not be able to use the
> argument.  We also ask that those users who cannot use --dd-grab-all do not
> use --refresh-all, either.  Instead, they should run with default refresh
> options.
>

Data point -

P4 3.0 GHz
1.5GB RAM
MySQL and recordings on separate drives
OTA source with 12 channels
Dish Network with 148 channels

mythfilldatabase --dd-grab-all took 9 minutes to run, with a
mythcommflag job running. The mythcommflag job stayed above
mythfilldatabase in top output and memory usage was actually pretty
consistent.


ozzy.lash at gmail

Sep 1, 2010, 8:45 AM

Post #9 of 18
Re: Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question)

On Wed, Sep 1, 2010 at 1:16 AM, Michael T. Dean <mtdean [at] thirdcontact> wrote:
> *snip*
>
> As far as the DB optimizations go, I'll leave it to others to help.  The one
> thing I will say, however, is that having the database's binary data files
> on the same file system that you're using to record could have a huge impact
> on performance.  In truth, the best setup puts the database on a separate
> spindle.
>
>> I looked at the listings at one point during the run through mythweb,
>> and they were all cleared, which worried me about using this option
>> all the time.  What would happen if a recording were to start during this
>> interval?
>
> That I can't tell you.  If the scheduler runs without any listings in the
> database, it won't record anything.  In truth, though, if your system is
> taking 45mins to complete the run, you are probably better equipped to test
> that "what if" than anyone else, since you have such a large working time in
> which to put the recording start.  :)  I'd be very interested to see your
> results.
>

The DB is on a separate disk. Looking at top during the run,
mythcommflag was on top, followed by mythbackend, followed by mysqld
(at least most of the time, occasionally I would see mythfilldatabase
peek onto the first page, but not very often). I don't think the
memory usage was really high or anything, but I'll have to check
again. I'll probably wait until 0.24 (I'm running 0.23 fixes from the
debian multimedia repository on debian unstable) to put it in
production, but I'll try to give it another shot on an idle system,
and maybe set up a recording to start during a run to see what
happens.

Bill


hobbes1069 at gmail

Sep 1, 2010, 9:28 AM

Post #10 of 18
Re: Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question)

Additional data point:

AMD X2 2.4GHz
2GB RAM
HDHomeRun ATSC (2 tuners, 1 lineup)

real 0m53.903s
user 0m12.110s
sys 0m7.128s


mythtv-users2 at dwilga-linux1

Sep 1, 2010, 1:01 PM

Post #11 of 18
Re: Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question)

On 9/1/10 11:45 AM, Ozzy Lash wrote:
> On Wed, Sep 1, 2010 at 1:16 AM, Michael T. Dean<mtdean [at] thirdcontact> wrote:
>> *snip*
>> It's possible that the slow DB update was primarily due to DB locking caused
>> by the fact that the database was in use while recording. It's also
>> possible that your lineup--550 channels--may just be asking a lot of even your
>> system.
>>
One thing I'll say about this possibility: if locking is the reason, it
would probably help to convert your DB tables to InnoDB. This engine
supports row-level locking, whereas MyISAM locks the entire table
whenever a write is occurring. The downside of InnoDB is that it
generally requires more memory and can be somewhat slower for some
operations.

--
Dan Wilga "Ook."



mtdean at thirdcontact

Sep 1, 2010, 1:07 PM

Post #12 of 18
Re: Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question)

On 09/01/2010 11:45 AM, Ozzy Lash wrote:
> *snip*
> The DB is on a separate disk. Looking at top during the run,
> mythcommflag was on top, followed by mythbackend, followed by mysqld
> (at least most of the time, occasionally I would see mythfilldatabase
> peek onto the first page, but not very often). I don't think the
> memory usage was really high or anything, but I'll have to check
> again. I'll probably wait until 0.24 (I'm running 0.23 fixes from the
> debian multimedia repository on debian unstable) to put it in
> production, but I'll try to give it another shot on an idle system,
> and maybe set up a recording to start during a run to see what
> happens.

This sounds a lot like the MySQL behavior you'd see if your processor
were scaled to its lowest frequency for the entire run. It might be worth
another test or 2--run once today (maybe while recording something you
don't care about) to watch the CPU frequency, and if it stays low, run
again tomorrow after telling the CPU to go to full speed. If that makes
it run better, then just script it to freq up, then call
mythfilldatabase --dd-grab-all, then freq down, and set the script as
your mythfilldatabase program.
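Such a wrapper could be sketched as follows; the governor names and sysfs paths are assumptions that vary by kernel and distro, so adjust for your system.

```shell
#!/bin/sh
# Sketch of the wrapper described above: raise the CPU governor, run the
# grab, then restore the previous governor. Set this script as the
# mythfilldatabase program in the backend's settings.
GOV=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
old=$(cat "$GOV")
echo performance > "$GOV"        # freq up
mythfilldatabase --dd-grab-all   # the actual listings run
echo "$old" > "$GOV"             # freq down
```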

Also, I highly recommend running optimize_mythdb.pl on a daily basis.
It may not help /that/ much, but it shouldn't hurt.

Mike



gaberubin at gmail

Sep 1, 2010, 1:18 PM

Post #13 of 18
Re: Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question)

On Wed, Sep 1, 2010 at 1:07 PM, Michael T. Dean <mtdean [at] thirdcontact> wrote:
> Also, I highly recommend running optimize_mythdb.pl on a daily basis.  It
> may not help /that/ much, but it shouldn't hurt.
>

This brings up another question I have. I have cron jobs to optimize
and backup the database daily. Does it matter which goes first (i.e.,
is it possible that optimizing could damage the database so I would
want a backup prior to that operation)?


ozzy.lash at gmail

Sep 1, 2010, 1:29 PM

Post #14 of 18
Re: Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question)

On Wed, Sep 1, 2010 at 3:07 PM, Michael T. Dean <mtdean [at] thirdcontact> wrote:
>  On 09/01/2010 11:45 AM, Ozzy Lash wrote:

> This sounds a lot like the MySQL behavior you'd see if your processor were
> scaled to its lowest frequency the entire run.  Might be worth another test
> or 2--run once today (maybe while recording something you don't care about)
> to watch the CPU frequency, and if it stays low, run again tomorrow after
> telling the CPU to go to full speed.  If that makes it run better, then just
> script it to freq up, then call mythfilldatabase --dd-grab-all, then freq
> down, and set the script as your mythfilldatabase program.


I'll take a look this evening. This is a relatively new dedicated
backend system that I set up in my basement to take the load off my
aging (and sometimes overheating) core2duo system in the living room
that was both a backend and frontend system.

It is possible I haven't configured CPU scaling, or have it
misconfigured. I would think, though, that if it isn't configured it
would run at full speed all the time.

Bill


hansonorders at verizon

Sep 1, 2010, 1:56 PM

Post #15 of 18 (4101 views)
Permalink
Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question) [In reply to]

Just another data point:

Backend only
Xeon X3210 2.13 GHz Quad-core
4 GB ECC DDR2 667 RAM
LinHES 6.03 i686

OS/DB on one spindle; recordings/videos on several other spindles. ;)

Box basically idle at the time of the run.

$ mythfilldatabase --remove-new-channels --dd-grab-all

97s for listings to d/l.
3 lineups (69 chan, 69 chan and 72 chan)
total run time: 15m 18s

Mike


mtdean at thirdcontact

Sep 1, 2010, 1:56 PM

Post #16 of 18 (4105 views)
Permalink
Re: Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question) [In reply to]

On 09/01/2010 04:18 PM, Gabe Rubin wrote:
> On Wed, Sep 1, 2010 at 1:07 PM, Michael T. Dean wrote:
>> Also, I highly recommend running optimize_mythdb.pl on a daily basis. It
>> may not help /that/ much, but it shouldn't hurt.
> This brings up another question I have. I have cron jobs to optimize
> and backup the database daily. Does it matter which goes first (i.e.,
> is it possible that optimizing could damage the database so I would
> want a backup prior to that operation)?

That's kind of a chicken-and-egg problem. It's possible that repairing a
crashed table (repair is one of the things optimize_mythdb.pl does)
could cause damage or data loss (especially if mysqld crashes while
repairing a table, though data loss is actually possible even without a
crash). That said, if you have any crashed tables, those tables can't be
included in a SQL-based backup. So, when you have a crashed table, you
can't back up the database because of the crashed table***. But, if you
repair the table, the repair can cause damage or data loss (though the
likelihood of either is rather small).

So, basically, when you get to the point where data loss is most likely
to occur--when you have a crashed table--the most important backup is
yesterday's backup. The mythconverg_backup.pl script will actually keep
older backups around, rotating them out as new ones are created. This
means that when you lose a table today, you have yesterday's backup to
save you.

That said, I've been running optimize_mythdb.pl daily since it was first
committed and haven't had any problems with it. I do back up the
database with the mythconverg_backup.pl script, so that helps me to
worry less. :)

Mike

***Or, at least, you'd have to tell mysqldump not to back up the crashed
tables--in which case you're not backing up the crashed table, which is
the one most likely to undergo loss.
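One reasonable reading of the answer above is to run the backup before the optimization, so a pre-repair dump always exists. A crontab sketch of that ordering--the script paths and times here are assumptions, not canonical locations:

```cron
# m  h  dom mon dow  command              (paths and times are examples only)
30   3  *   *   *    /usr/share/mythtv/mythconverg_backup.pl
15   4  *   *   *    /usr/share/mythtv/optimize_mythdb.pl
```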


mtdean at thirdcontact

Sep 1, 2010, 2:05 PM

Post #17 of 18 (4106 views)
Permalink
Re: Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question) [In reply to]

On 09/01/2010 08:30 AM, David Asher wrote:
> On Sep 1, 2010, at 2:16 AM, Michael T. Dean wrote:
>> On 09/01/2010 01:39 AM, Ozzy Lash wrote:
>>> I looked at the listings at one point during the run through mythweb,
>>> and they were all cleared, which worried me about using this option
>>> all the time. What would happen if a recording were to start during
>>> this interval?
>> That I can't tell you. If the scheduler runs without any listings in the database, it won't record anything. In truth, though, if your system is taking 45mins to complete the run, you are probably better equipped to test that "what if" than anyone else, since you have such a large working time in which to put the recording start. :) I'd be very interested to see your results.
> Does --dd-grab-all handle the table update differently than the --refresh variants? I don't recall the tables being cleared (and therefore the listings going empty) for an extended period of time using the old arguments. Of course, I haven't paid this close attention to a mythfilldatabase run in years.

No, the difference is that the default refresh does a single day at a
time. Therefore, it deletes 24 hrs worth of listings, then populates
them, then deletes 24 hrs worth of listings, then populates them, etc.
The --dd-grab-all refresh deletes the listings, then populates them, and
the blank listings are much more noticeable because it takes longer to
delete and repopulate all 14 days (and, for some users, /much/ longer :).
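The per-day vs. all-at-once distinction can be modeled in a few lines of shell. This is a toy illustration, not MythTV code: it only counts the worst-case number of simultaneously empty days under each strategy.

```shell
#!/bin/sh
# Toy model of the listings gap.  Per-day refresh deletes and refills one
# day at a time, so at most one day is ever empty; --dd-grab-all deletes
# all 14 days before any repopulation starts.
days=14

# Per-day: delete day d, repopulate day d, move on.
per_day_gap=0
empty=0
for _d in $(seq 1 "$days"); do
    empty=$((empty + 1))          # day _d deleted
    if [ "$empty" -gt "$per_day_gap" ]; then per_day_gap=$empty; fi
    empty=$((empty - 1))          # day _d repopulated
done

# Grab-all: everything is deleted before anything is refilled.
grab_all_gap=$days

echo "per-day worst gap:  $per_day_gap day(s)"
echo "grab-all worst gap: $grab_all_gap day(s)"
```

That 14-day window of empty listings is why the blank period is so much more noticeable (and why a recording starting mid-run is a more interesting "what if") with --dd-grab-all.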

Mike


ylee at pobox

Sep 1, 2010, 3:38 PM

Post #18 of 18 (4086 views)
Permalink
Re: Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question) [In reply to]

M.A.E.M. Hanson <hansonorders [at] verizon> says:
> $ mythfilldatabase --remove-new-channels --dd-grab-all
>
> 97s for listings to d/l.
> 3 lineups (69 chan, 69 chan and 72 chan)
> total run time: 15m 18s

I also use --remove-new-channels, and your runtime of 15 minutes is
the only one (other than that 45-minute outlier) that resembles my 17
minutes. Could that option account for the difference between our times
and the one- to two-minute times others have reported?

--
MythTV FAQ Q: "Cheap frontend/backend?" A: Revo, $200-300 @ Newegg
Q: "Record HD cable/satellite?" A: Hauppauge HD-PVR, $200 @ Newegg
Q: "Can't change Live TV channels w/multirec!" A: Hit NEXTCARD key
More answers @ <URL:http://www.gossamer-threads.com/lists/mythtv/>