All times are UTC - 6 hours
PostPosted: Sun Mar 21, 2010 12:21 pm 
Joined: Tue Jan 18, 2005 2:07 am
Posts: 1530
Location: California
I have implemented a new architecture for my MythTV deployment that is optimized for instant access and low power consumption -- posting in case others are interested. At a high level:

1. The storage platform is a Dlink-321 dual-drive NAS with 2 "green" drives and 3.5TB of storage. It consumes 8 watts idle and 13-15 watts busy. Average power consumption is 11.2 watts, based on 5 days of sampling with my Kill A Watt meter. The storage tier contains all recordings, transcodes and metadata that enables my front end to operate.

2. The recording and transcode platform is a home-built machine with an E8400 processor and 2 digital PCI-based tuner cards. It runs the MythTV backend and does all transcoding. It also extracts all metadata from the database and places it in XML files on the storage platform for use by the front end. This machine is only powered on when recording and transcoding -- it is NOT needed for playback. It is typically powered up 8 hours a day; the remaining 16 hours it is powered down. Average power consumption is 21 watts, as measured by a Kill A Watt meter.

3. The front end is an Acer Revo R1600 PC, single-core Atom, NVIDIA ION LE, 2 GB RAM running Windows XP. I am using the branch of XBMC that supports "DSPLAYER" for watching recordings and Hulu Desktop for Hulu. This machine draws 20 watts when powered up and less than 1 watt when suspended, and can resume reliably from a suspended state in a few seconds. It depends only on the always-on storage platform for playback; it has no dependencies on the recording and transcode platform.

Pulling it all together, my average power consumption, assuming 3 hours of TV watching a day, is 37 watts. This is comparable to a Series 3 TiVo. Some other notes:

1. The storage platform has been tested at a load of 4 concurrent hi-def recordings and 1 concurrent hi-def playback without problems.

2. I am using gigabit networking. While the Dlink-321 cannot run at full gigabit speeds, it is fast enough.

3. I am working on the glue layer required to enable the use of a Popcorn Hour as a front-end platform.

4. All shows of interest are transcoded to XviD so that members of the household can watch their favorite recordings on their laptops. This is necessary because most recordings are now hi-def, and hi-def bitrates don't play reliably across the wireless network.
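A transcode job like the one in note 4 might be scripted along these lines. This is an illustrative sketch, not Marc's actual setup -- it assumes an ffmpeg build with libxvid and libmp3lame support, and the scaling/quality settings are arbitrary:

```python
import subprocess

def xvid_cmd(src, dst):
    """Build an ffmpeg command line that turns a hi-def recording into a
    laptop/wireless-friendly XviD .avi. All settings here are illustrative."""
    return ["ffmpeg", "-i", src,
            "-c:v", "libxvid", "-qscale:v", "5",   # XviD video, mid-range quality
            "-vf", "scale=-2:480",                 # downscale to 480 lines
            "-c:a", "libmp3lame", "-b:a", "128k",  # MP3 audio
            dst]

def transcode(src, dst):
    subprocess.check_call(xvid_cmd(src, dst))
```

A lower `-qscale:v` means higher quality and bigger files; the downscale is what keeps the bitrate wireless-friendly.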
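As a rough sanity check of the ~37 watt average: weight each tier's draw by its duty cycle. All inputs are the figures quoted above; the suspended front end is approximated at 1 watt:

```python
def average_power(nas_w, backend_avg_w, fe_active_w, fe_suspend_w, tv_hours):
    """Time-weighted average draw of the whole system, in watts."""
    fe_avg = (fe_active_w * tv_hours + fe_suspend_w * (24 - tv_hours)) / 24.0
    return nas_w + backend_avg_w + fe_avg

# NAS at 11.2 W always on, backend averaging 21 W over the day,
# front end at 20 W for 3 hours of TV and ~1 W suspended the rest:
print(round(average_power(11.2, 21.0, 20.0, 1.0, 3), 1))  # ~35.6 W, same ballpark as 37 W
```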

_________________
Marc

The views expressed are my own and do not necessarily reflect the views of my employer.


Last edited by marc.aronson on Sat Mar 27, 2010 1:15 am, edited 1 time in total.


PostPosted: Sun Mar 21, 2010 8:53 pm 
Joined: Sat Apr 21, 2007 6:55 pm
Posts: 306
Location: CA,USA
marc.aronson wrote:
2. The recording and transcode platform is a home-built machine with an E8400 processor and 2 digital PCI-based tuner cards. It runs the MythTV backend and does all transcoding. It also extracts all metadata from the database and places it in XML files on the storage platform for use by the front end. This machine is only powered on when recording and transcoding -- it is NOT needed for playback.


I'm a bit confused by this - so this is an MBE? But it's not required for playback?? I'm guessing the metadata extraction is so the FEs can play back w/o the BE?

I installed my MBE in a virtual machine (on a server I run 24x7 for various reasons). I then put my tuner cards in an SBE which is woken/shut down for recordings, as per this wiki I wrote.
http://www.knoppmythwiki.org/index.php?page=WakeSBEOnly

Nice setup Marc!

_________________
Paul O'Flynn


PostPosted: Sun Mar 21, 2010 9:18 pm 
Joined: Tue Jan 18, 2005 2:07 am
Posts: 1530
Location: California
poflynn wrote:
marc.aronson wrote:
2. The recording and transcode platform is a home-built machine with an E8400 processor and 2 digital PCI-based tuner cards. It runs the MythTV backend and does all transcoding. It also extracts all metadata from the database and places it in XML files on the storage platform for use by the front end. This machine is only powered on when recording and transcoding -- it is NOT needed for playback.


I'm a bit confused by this - so this is an MBE? But it's not required for playback?? I'm guessing the metadata extraction is so the FEs can play back w/o the BE?


Paul, you hit the nail on the head. It is an MBE, and the metadata extraction creates XML files that enable XBMC to function as an effective front end without the MBE running. The XML files contain the episode synopsis and other info. Deletion of recordings is managed through the rules set up on the MBE that limit the max # of retained episodes, etc.
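For illustration, a per-episode ".nfo" file of the kind described might look roughly like this (all field values invented; the tag set follows XBMC's episode nfo convention):

```xml
<episodedetails>
  <title>Hunting the Edge of Space</title>
  <aired>2010-04-06</aired>
  <plot>How the telescope changed our view of the universe.</plot>
</episodedetails>
```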

The remaining power optimization challenge is separating the "recording platform" from the "transcoding platform", as the processing power requirements of a "recording platform" are minimal.

Perhaps the biggest problem is that I'm so much of a geek that I'm having fun spending this much time on building my own "green" DVR :-).

On a totally different topic -- are you going to join us when Cecil comes into town this weekend?

_________________
Marc

The views expressed are my own and do not necessarily reflect the views of my employer.


PostPosted: Mon Mar 22, 2010 11:15 am 
Joined: Thu Sep 07, 2006 11:20 am
Posts: 389
Marc,

Sounds interesting. Can you provide more info on this step: "It also extracts all metadata from the database and places it in XML files on the storage platform for use by the front end."

Thanks!


PostPosted: Mon Mar 22, 2010 11:47 pm 
Joined: Tue Jan 18, 2005 2:07 am
Posts: 1530
Location: California
nharris wrote:
Marc,

Sounds interesting. Can you provide more info on this step: "It also extracts all metadata from the database and places it in XML files on the storage platform for use by the front end."

Thanks!

I have a Python script that:

1. Generates an ".nfo" file for each episode that contains the episode title, airdate and episode description.

2. Creates a symbolic link, named "filename.tbn", to the ".png" preview image.

3. Creates a symbolic link to the recording, much like the old "mythlink" script did.

This script runs at 3 minutes and 33 minutes past the hour. XBMC uses the ".nfo" and ".tbn" files to populate the "video library TV SHOW" view with appropriate info. My approach differs from some of the others out there in the following ways:

1. Mythbox and the "myth://" source: These both require that the MythTV backend is running.

2. mythicallibrarian: This depends on successfully looking up the episode on one of the TV information websites. My method depends only on the data that is already stored in the MythTV database.

I also had to do some fiddling with the "season number" & "episode number" to get things to work.
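A minimal sketch of the three steps above (names, layout and the `rec` fields are illustrative, not my actual code; it would be run from cron at :03 and :33 past the hour):

```python
import os
import xml.etree.ElementTree as ET

def export_episode(rec, library_dir):
    """rec: dict with 'title', 'subtitle', 'airdate', 'description' and
    'recording' (full path to the file on the storage platform)."""
    show_dir = os.path.join(library_dir, rec["title"])
    os.makedirs(show_dir, exist_ok=True)
    base = os.path.join(show_dir, "{} - {}".format(rec["airdate"], rec["subtitle"]))

    # 1. ".nfo" file with title, airdate and description for XBMC's library
    root = ET.Element("episodedetails")
    ET.SubElement(root, "title").text = rec["subtitle"]
    ET.SubElement(root, "aired").text = rec["airdate"]
    ET.SubElement(root, "plot").text = rec["description"]
    ET.ElementTree(root).write(base + ".nfo", encoding="utf-8")

    # 2. ".tbn" symlink to MythTV's ".png" preview image
    os.symlink(rec["recording"] + ".png", base + ".tbn")

    # 3. mythlink-style symlink to the recording itself
    ext = os.path.splitext(rec["recording"])[1]
    os.symlink(rec["recording"], base + ext)
```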

Hope that helps -- let me know if you want more details...

_________________
Marc

The views expressed are my own and do not necessarily reflect the views of my employer.


PostPosted: Tue Mar 23, 2010 6:38 am 
Joined: Thu Sep 07, 2006 11:20 am
Posts: 389
Sure. Details and scripts would be welcome. I am currently investigating MythicalLibrarian and it has limitations (like the one you point out). Your approach seems better since it uses Myth's DB to generate all the program info.


PostPosted: Wed Mar 24, 2010 11:05 am 
Joined: Thu Sep 07, 2006 11:20 am
Posts: 389
Marc,

Thanks again for sharing your setup. I have been looking into your approach and reading the XBMC wiki.

http://wiki.xbmc.org/index.php?title=Im ... rt_Library

Any chance you would share your python code?

Thanks,

-Nathan


PostPosted: Wed Mar 31, 2010 12:40 am 
Joined: Tue Jan 18, 2005 2:07 am
Posts: 1530
Location: California
I've started a document that better describes my new setup -- you can find a copy of it here.

A copy of the scripts can be found here

_________________
Marc

The views expressed are my own and do not necessarily reflect the views of my employer.


PostPosted: Wed Mar 31, 2010 1:38 pm 
Joined: Sun Sep 25, 2005 3:50 pm
Posts: 1013
Location: Los Angeles
marc: I built a package in the LinHES repos called mythtv-contrib for mythtv-user created "stuff." If you'd like I can include your script(s) and documentation in the package. If yes, I'd prefer the documentation in plain text so it could be read in a console, but if a PDF is all you have, we could deal with that for now.

_________________
Mike
My Hardware Profile


PostPosted: Wed Mar 31, 2010 10:21 pm 
Joined: Tue Jan 18, 2005 2:07 am
Posts: 1530
Location: California
mihanson wrote:
marc: I built a package in the LinHES repos called mythtv-contrib for mythtv-user created "stuff." If you'd like I can include your script(s) and documentation in the package. If yes, I'd prefer the documentation in plain text so it could be read in a console, but if a PDF is all you have, we could deal with that for now.


Mike, that sounds good. Is there a specific time when you need this by? Nathan is trying it out and we're troubleshooting some issues related to the handling of Unicode strings.

In terms of the doc -- it's going to be difficult to convert everything to plain text, but I can convert the basic Linux install procedure easily. Can I provide both an abbreviated text-only version and the full PDF file?

_________________
Marc

The views expressed are my own and do not necessarily reflect the views of my employer.


PostPosted: Wed Mar 31, 2010 11:36 pm 
Joined: Sun Sep 25, 2005 3:50 pm
Posts: 1013
Location: Los Angeles
marc.aronson wrote:
Mike, that sounds good. Is there a specific time when you need this by?

No. That's the beauty of a rolling release. :)

marc.aronson wrote:
Nathan is trying it out and we're troubleshooting some issues related to the handling of Unicode strings.

By all means work out the kinks first. Let me know when it's "ready."

marc.aronson wrote:
In terms of the doc -- it's going to be difficult to convert everything to plain text, but I can convert the basic Linux install procedure easily. Can I provide both an abbreviated text-only version and the full PDF file?
Sure. The purpose of the text-only version was to make it as easy as possible for anyone to read up on how to use your scripts. We can always refer them to the PDF for more info.

_________________
Mike
My Hardware Profile


PostPosted: Thu Apr 01, 2010 5:04 am 
Joined: Tue Jan 18, 2005 2:07 am
Posts: 1530
Location: California
OK -- I'll let you know when it's ready!

Is anyone else interested in trying it out?

_________________
Marc

The views expressed are my own and do not necessarily reflect the views of my employer.


PostPosted: Thu Apr 01, 2010 7:05 am 
Joined: Thu Sep 07, 2006 11:20 am
Posts: 389
Here is a quick summary of the different scripts out there for MythTV and XBMC integration.

#1) mythlink.pl
http://svn.mythtv.org/trac/browser/trun ... ythlink.pl
This is what is currently used for the "pretty" cron job. I have been using this with Samba instead of Myth's UPnP and it works well.

#2) mythvidexport.py
http://svn.mythtv.org/trac/browser/trun ... dexport.py
The script is run as a user job to copy recordings into your video archive. It will determine if it's a movie or a TV show and try to scrape the metadata. Could be nice to borrow code from, but not what we need for XBMC.

#3) mythicalLibrarian
http://code.google.com/p/mythicallibrarian/
http://wiki.xbmc.org/?title=MythicalLibrarian
This script is run as a user job. There are a bunch of options, but basically it copies recordings into an XBMC-friendly directory structure with an XBMC-friendly file name. It will determine if it's a movie or a TV show and try to scrape the metadata. It puts the metadata into a ".nfo" file (an XML file which XBMC understands) and puts the commercial skip data into a ".txt" file.

#4) xbmc_mythlink.py
Marc's script, which is still in progress. Works like mythlink.pl by creating symbolic links, but uses XBMC-friendly names and directory structure. It also puts the metadata and/or MythTV DB data into a ".nfo" file (an XML file which XBMC understands). It's currently missing export of the commercial skip data into a file XBMC understands.

#5) mythbox
http://code.google.com/p/mythbox/
This is an XBMC application script (runs inside of XBMC). It connects to the MythTV database and presents that info inside of XBMC.

I have tried all these scripts. I believe the solution lies in an approach like #4 with the features and metadata capabilities of #2 and #3.


PostPosted: Thu Apr 01, 2010 9:23 am 
Joined: Tue Jan 18, 2005 2:07 am
Posts: 1530
Location: California
nharris wrote:
Here is a quick summary of the different scripts out there for MythTV and XBMC integration.

#1) mythlink.pl
http://svn.mythtv.org/trac/browser/trun ... ythlink.pl
This is what is currently used for the "pretty" cron job. I have been using this with Samba instead of Myth's UPnP and it works well.

#2) mythvidexport.py
http://svn.mythtv.org/trac/browser/trun ... dexport.py
The script is run as a user job to copy recordings into your video archive. It will determine if it's a movie or a TV show and try to scrape the metadata. Could be nice to borrow code from, but not what we need for XBMC.

#3) mythicalLibrarian
http://code.google.com/p/mythicallibrarian/
http://wiki.xbmc.org/?title=MythicalLibrarian
This script is run as a user job. There are a bunch of options, but basically it copies recordings into an XBMC-friendly directory structure with an XBMC-friendly file name. It will determine if it's a movie or a TV show and try to scrape the metadata. It puts the metadata into a ".nfo" file (an XML file which XBMC understands) and puts the commercial skip data into a ".txt" file.

#4) xbmc_mythlink.py
Marc's script, which is still in progress. Works like mythlink.pl by creating symbolic links, but uses XBMC-friendly names and directory structure. It also puts the metadata and/or MythTV DB data into a ".nfo" file (an XML file which XBMC understands). It's currently missing export of the commercial skip data into a file XBMC understands.

#5) mythbox
http://code.google.com/p/mythbox/
This is an XBMC application script (runs inside of XBMC). It connects to the MythTV database and presents that info inside of XBMC.


A few more options:

#6. XBMC's built-in "myth://" source. Connects directly to the MythTV backend.

#7. mythSExx -- kind of a stripped-down version of mythicalLibrarian.

nharris wrote:
I have tried all these scripts. I believe the solution lies in an approach like #4 with the features and metadata capabilities of #2 and #3.


I also tried all of them before writing my script. Your observations are good -- I'd add the following:

#1 works fine if you don't care about seeing episode synopsis info.

#3 and #7 are very dependent on scraping data from sources like thetvdb.com. If they can't make a "heuristic" match, you're out of luck.

#5 and #6 are probably the best bet if you don't separate the storage platform from your backend and your backend is always on. They speak the Myth protocol and provide the ability to delete recordings as well as watch them. Mythbox is still evolving.

The main benefit of #4 (my script) is that the MythTV backend can be powered down and all information about the episode is extracted from the MythTV database -- no scraping/lookup is needed. It does use external lookup to get the description of the series and associated fanart. If this lookup fails, the only downside is a less elegant look at the series level.
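The "everything from the database" idea boils down to a query like the one below. The real store is MySQL; sqlite3 is used here only to keep the sketch self-contained, and the column names follow the common `recorded` table layout, which varies across MythTV versions:

```python
import sqlite3

# All of the fields the ".nfo" files need, straight from MythTV's DB.
QUERY = ("SELECT title, subtitle, description, starttime, basename "
         "FROM recorded ORDER BY starttime")

# In-memory stand-in for the MythTV database, with one fake recording.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE recorded (title TEXT, subtitle TEXT, "
             "description TEXT, starttime TEXT, basename TEXT)")
conn.execute("INSERT INTO recorded VALUES (?, ?, ?, ?, ?)",
             ("Nova", "Hunting the Edge of Space", "Telescopes.",
              "2010-04-06 20:00:00", "1234_20100406200000.mpg"))

for title, subtitle, desc, start, basename in conn.execute(QUERY):
    # Everything needed for the export is already here -- no web lookup.
    print(title, "-", subtitle, "->", basename)
```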

Nathan,

A few quick questions:

1. Any update on the Unicode issue?

2. What other capabilities of 2 and 3 are missing?

I don't use commercial skipping, but I will look to see if there is a way to feed that data into XBMC.
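One plausible route -- XBMC can read commercial breaks from a ".edl" sidecar file next to the recording (action code 3 marks a commercial break), so the frame-based skip list from mythcommflag could be converted. The skip-list string format and the 29.97 fps figure below are assumptions, not something verified in this thread:

```python
def skiplist_to_edl(skiplist, fps=29.97):
    """Convert a frame-range list such as "0-899,5394-7192" (assumed
    mythcommflag output format) into XBMC .edl text; each line is
    "start_seconds<TAB>end_seconds<TAB>action", with action 3 = commercial."""
    lines = []
    for chunk in skiplist.split(","):
        start, end = (int(f) for f in chunk.split("-"))
        lines.append("%.2f\t%.2f\t3" % (start / fps, end / fps))
    return "\n".join(lines) + "\n"

# Would be written next to the recording as "<recording>.edl" for XBMC.
print(skiplist_to_edl("0-899,5394-7192"))
```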

_________________
Marc

The views expressed are my own and do not necessarily reflect the views of my employer.


PostPosted: Fri Apr 16, 2010 8:51 am 
Joined: Fri Apr 16, 2010 8:44 am
Posts: 8
nharris wrote:
Sure. Details and scripts would be welcome. I am currently investigating MythicalLibrarian and it has limitations (like the one you point out). Your approach seems better since it uses Myth's DB to generate all the program info.


I wrote mythicalLibrarian and I have no idea what you guys are talking about.

mythicalLibrarian uses thetvdb to rename files if the information is available. Once the information has been available to mythicalLibrarian, it is always available: it will work offline until it updates its own database again. mythicalLibrarian is not affected by any internet access limitations. It will rename any and all shows in the best possible way.

There is no other better option, and I can guarantee that.

I started the project mythicalLibrarian 6 months ago, worked on it diligently every day, asked for problem reports, took in ideas...


    Renames Episodes to Title S##E## (Subtitle)
    Renames Generic Programming to Title S0E0 (Subtitle)
    In the event of internet failure, episodes will be handled as generic programming
    Renames Movies to Movie (Year)
    Generates RSS feeds to keep you up to date
    Will render your files still human-readable in the event you lose your database
    Maintains consistency by symlinking back to original file
    Allows mythtv to serve files
    Allows mythtv to delete files
    Provides Ubuntu desktop notifications upon completed jobs
    Provides episode lookups based upon zap2it ids referenced to thetvdb
    Provides episode lookup based upon subtitle or original airdate
    Optional primary and secondary Episodes, Movies, and Shows folders for NAS users
    Creates COMSKIP files from mythcommflag
    Creates NFO files for generic programming
    Tracks and deletes created helper files if main video file is deleted
    Sends notifications to XBMC
    Updates XBMC's library
    Provides detailed logs and daily report in the /daily report folder
    Showtranslations allows for title renaming if guide data is incorrect


It does not get any better. The only thing it does not do is download artwork. I leave that to the front-end.


 








Powered by phpBB® Forum Software © phpBB Group
