PostPosted: Fri Jun 10, 2005 9:11 am 
Joined: Mon Jun 06, 2005 2:35 pm
Posts: 30
So someone on another MythTV-related forum posted a question regarding setting up MythTV for his entire 108+ person residential college/dorm.

This is an interesting question.

Disclaimer: I have not deployed a MythTV setup as of yet, so some of my ideas might be off-base once reality hits the theory of MythTV. I plan to set one up in the next month or so (I just purchased my two cards).

So how could one do this? This gentleman did not have a lot to spend, so it was a moot but interesting point.

My ideas on the requirements of MythTV:
1 capture card per TV.
1 capture card per user.

This would allow each TV to be able to watch Live TV and each user to record their own shows.

So if we're looking at this basic system (not allowing for PiP, multiple simultaneous recordings, and that kind of thing), a 108-person setup for a college dorm would need, at minimum, this:

162 Capture cards (108 "users," 54 rooms w/ 2 people in each room, each room 1 TV).

These are, of course, assumptions.

So let's say we go the live-TV route with per-user recording. Maybe ... 4 GB per user.

So let's say we need ... oh, 1 terabyte just to be safe (108 users at 4 GB each is only about 432 GB).

Of course you can't put that many video capture cards in one computer. Let's say, oh, 4 per computer.

So you need, let's say, about 42 computers (162 cards at 4 per machine, rounded up a little).

(BTW, what if you put MythTV on a back-end that was a rack server ... I don't imagine the video capture cards working on it, so you couldn't have a whole back-end array on racks, but you could maybe put the primary backend with no card .... is that even possible as long as you have other slave back-ends with cards attached?)

That's a lot. Let's assume 100 GB per computer. That is 4200 GB of storage. Not too bad.
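For concreteness, here is that arithmetic spelled out, a rough sketch using the assumptions above (card counts, 4 cards per machine, 100 GB per machine):

Code:
# Back-of-envelope totals for the "one card per TV plus one per user" plan.
# All figures are this post's assumptions, not measured values.
rooms = 54
users = 108
capture_cards = rooms + users                        # 1 per TV + 1 per user = 162
cards_per_machine = 4                                 # assumed PCI-slot limit
backends = -(-capture_cards // cards_per_machine)     # ceiling division = 41 (rounded up to 42 above)
storage_gb = 42 * 100                                 # 100 GB per machine
print(capture_cards, backends, storage_gb)            # 162 41 4200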

You'd also probably want to distribute these around the building in various locations in order to minimize the cooling needs of their storage location. (42 computers would get hot. Hey, 42. Deep Thought was right.)

The network would be interesting as well. I imagine an old 100 Mbps network might fold under this load unless you did some serious network topology adjustment. (I.e., I am not sure a flat network would work. You might need zones which would primarily serve specific areas of the building, but this might also break the ability to access saved recordings from anywhere.)

So, in short, I think 1 Gbps networking would have to be the rule. (I would hope this would work. Someone mentioned ~7 Mbps per stream, but let's assume 10 Mbps. Times 54 ... 540 Mbps. Might be cutting it close.)
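A quick sanity check of that bandwidth worry (10 Mbps per stream is a pessimistic round-up of the quoted ~7 Mbps figure):

Code:
# Peak network load if all 54 rooms stream at once.
streams = 54
mbps_per_stream = 10       # ~7 Mbps was the quoted figure; 10 is a pessimistic round-up
total_mbps = streams * mbps_per_stream
print(total_mbps)          # 540 Mbps: too much for 100 Mbps, but fits on gigabit with headroom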

Now that I think of it, you'll also need either a lot of cable ports in the wall or a lot of really good RF splitters which do not degrade signal. That could be interesting.

Of course now you have to consider the clients.

I make that about 54 clients, all higher-end than the servers, plus the requisite VGA-to-TV converters.

Another way to look at it is this:

If the Dorm has a cable jack in each room, put a backend/frontend in each room. Maybe then use the uber-server with no capture card in the basement or some other server storage place as the primary backend.

You would need 3 capture cards in each client/slave backend in each room. Plus you won't need all the cable jacks or RF splitters.

This will reduce your CPU number from 42+54 to 54+1, so 96 to 55.

The real question is cost -- you would need more expensive frontends in order to use the combined frontend/backend approach. But overall hardware cost still might be less, or at least comparable, and you have fewer machines to maintain.

So we're looking at:
1. 54 CPUs w/ 3 capture cards each, 100 GB HD each.
2. 162 capture cards.
3. 5400 GB of storage.
4. One 1 Gbps network.
5. 1 dedicated server with no capture cards.

Okay, so let's try to get the costs:
1. 54 Clients @ ~$1000 (Good CPU built ground up - $500, VGA to TV ~ $100, 3 capture cards ~$300, Gigabit Ethernet ~$100);
2. 1 Dedicated backend server @ ~$1,000 (basic nice server, could be a tower);
3. Gigabit network installation ????

So, hardware alone, maybe a $55,000 project?
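Spelled out, the rough cost math behind that figure looks like this (all prices are the guesses above):

Code:
# Hardware-only estimate for the per-room frontend/backend plan.
clients = 54
client_cost = 500 + 100 + 300 + 100   # CPU build + VGA-to-TV + 3 capture cards + gigabit NIC
master_backend = 1000                 # dedicated backend with no capture cards
total = clients * client_cost + master_backend
print(total)                          # 55000 -> the ~$55,000 figure (network install not included)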

Any thoughts? Just random ideas .....


PostPosted: Fri Jun 10, 2005 5:19 pm 
Joined: Mon May 10, 2004 8:08 pm
Posts: 1891
Location: Adelaide, Australia
pacergh wrote:
162 Capture cards (108 "users," 54 rooms w/ 2 people in each room, each room 1 TV).

These are, of course, assumptions.

You only need one capture card per concurrent recording. Do you really have 162 channels someone might want to record at the same time?


PostPosted: Fri Jun 10, 2005 9:22 pm 
Joined: Fri May 21, 2004 11:55 pm
Posts: 1206
Location: Silicon Valley, CA
Greg's on the right track. If your users can agree on, say, 50 channels of programming, you could set it up to record those channels on dedicated cards. Plus, you could record ALL the content from those channels. Say the channels broadcast 20 hours/day; that's 1000 hours of programming per day. If you are using PVR cards or equivalent, you get what, 1 GB/hour? Less? Call it 500 GB to 1 TB per day. Not too shabby.
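As a rough sketch, that daily storage math works out like this (the 1 GB/hour rate is an assumption for PVR-class MPEG-2 recordings):

Code:
# Storage needed per day if you record everything on the agreed channel lineup.
channels = 50
broadcast_hours_per_day = 20
gb_per_hour = 1.0                     # assumed MPEG-2 bitrate from a PVR-x50 class card
gb_per_day = channels * broadcast_hours_per_day * gb_per_hour
print(gb_per_day)                     # 1000 GB/day, i.e. roughly 1 TB of new recordings daily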

_________________
Do you code to live, or live to code?
Search LinHES forum through Google


PostPosted: Sat Jun 11, 2005 3:16 pm 
Joined: Tue Dec 07, 2004 12:04 pm
Posts: 369
Liv2Cod wrote:
Greg's on the right track. If your users can agree on, say, 50 channels of programming, you could set it up to record those channels on dedicated cards. Plus, you could record ALL the content from those channels. Say the channels broadcast 20 hours/day; that's 1000 hours of programming per day. If you are using PVR cards or equivalent, you get what, 1 GB/hour? Less? Call it 500 GB to 1 TB per day. Not too shabby.


I think the real bottleneck is currently storage. At appx. maximum density/drive ({P,S}ATA), you're paying about $750/TB retail these days.

The more interesting bit to me is setting up a series of threads to perform aging-driven and storage-space-driven transcoding.

Shows might be initially transcoded into MPEG-4 after recording (assuming we're stuck with MPEG-2 encoder chips primarily, such as the PVR cards) to save space. Then as they age, rather than deleting them immediately to make room, they're transcoded into lower and lower quality recordings based on various expiration profiles that might incorporate a show's popularity, etc.
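Something like this, as a minimal sketch of that expiration-profile idea; the thresholds, profile names, and popularity cutoff below are invented for illustration, and the actual re-encoding would be handed off to MythTV's transcoder/job queue:

Code:
# Map a recording's age and popularity to a target quality tier instead of deleting it.
def pick_profile(age_days, views):
    if age_days < 7:
        return "mpeg4-high"       # fresh recordings keep near-original quality
    if age_days < 30 or views > 10:
        return "mpeg4-medium"     # older but popular shows degrade more slowly
    if age_days < 90:
        return "mpeg4-low"
    return "expire"               # only the oldest, least-watched material is deleted

print(pick_profile(3, 0), pick_profile(45, 2))   # mpeg4-high mpeg4-low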

Maybe you set aside a few TB for 5-10 days of current programming, plus a few additional TB for programs flagged as interim archives, and a few additional for permanent archives?

You'd probably want to start incorporating stand-alone hardware RAID chassis. I recommend the ARC-5010 and ARC-6010, since they're ~$600-$700 for 5 drives and do RAID 0, RAID 1 and, most importantly, RAID 5.

Then, there's movie and music storage...

...so...is MySQL multi-threaded? :)

-brendan


PostPosted: Sat Jun 11, 2005 9:51 pm 
Joined: Thu Mar 25, 2004 11:00 am
Posts: 9551
Location: Arlington, MA
brendan wrote:
I think the real bottleneck is currently storage. At appx. maximum density/drive ({P,S}ATA), you're paying about $750/TB retail these days.

Huhn? 250 GB PATA drives can easily be had for $120. That's under $500 per raw TB (OK, so it's only a decimal TB and only unformatted at that...)

I think the real secret is to play the stone soup game. Everybody probably has a PC. That means you probably only need the capture cards and then you play distributed computing games with existing hardware for the rest...


PostPosted: Sun Jun 12, 2005 8:07 pm 
Joined: Tue Dec 07, 2004 12:04 pm
Posts: 369
tjc wrote:
brendan wrote:
I think the real bottleneck is currently storage. At appx. maximum density/drive ({P,S}ATA), you're paying about $750/TB retail these days.

Huhn? 250 GB PATA drives can easily be had for $120. That's under $500 per raw TB (OK, so it's only a decimal TB and only unformatted at that...)


My caveat was "at appx. maximum density/drive" ... and retail. I was thinking of the 400 GB and 500 GB drives from Seagate and Hitachi. I'll stick with that, but I also see your point ... however, my experiences so far with the sources of the current crop of 250 GB drives (Maxtor and WD) haven't been so hot ... but I suppose that's mitigated if you go RAID-5.

The unspoken assumption is that you can create larger arrays using larger base disks. My back-of-envelope calculations indicated that you're probably better off price-wise with those 400 GB drives, because you'll end up buying fewer of the hardware RAID-5 units.
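That comparison is easy to reproduce with whatever prices you trust; here's a sketch (the drive and chassis prices in the example are rough 2005 guesses, and RAID-5 parity and formatting overhead are ignored):

Code:
# Compare the cost of building N terabytes from different drive sizes,
# counting the 5-bay hardware RAID chassis (~$650 each) needed to hold them.
def build_cost(tb_needed, drive_gb, drive_price, chassis_price=650, bays=5):
    drives = -(-(tb_needed * 1000) // drive_gb)   # ceiling: drives required
    chassis = -(-drives // bays)                  # ceiling: chassis required
    return drives * drive_price + chassis * chassis_price

print(build_cost(5, 250, 120))   # 5 TB from 250 GB drives at ~$120 each
print(build_cost(5, 400, 300))   # 5 TB from 400 GB drives at an assumed ~$300 each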

tjc wrote:
I think the real secret is to play the stone soup game. Everybody probably has a PC. That means you probably only need the capture cards and then you play distributed computing games with existing hardware for the rest...


Yikes! :shock:

I was more thinking about a system you could depend on to find recent programs! :P

-brendan


PostPosted: Sun Jun 12, 2005 9:54 pm 
Joined: Thu Mar 25, 2004 11:00 am
Posts: 9551
Location: Arlington, MA
What? It's not that radical an idea... Most of the pieces already exist... Set up a P2P mirroring system using something like BitTorrent. Spare disk space automatically gets used for stashing recordings, like using spare cycles for SETI@home. Extend auto-expiration to try to preserve stuff that doesn't have at least one mirror...
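A minimal sketch of that expiration rule, assuming each box can ask the P2P index how many mirrors a recording has (that lookup, and the tuple format, are hypothetical):

Code:
# Order expiration candidates so recordings with at least one mirror go first;
# the last surviving copy of anything is only deleted as an absolute last resort.
def expiration_order(recordings):
    """recordings: list of (title, age_days, mirror_count) tuples."""
    mirrored = sorted((r for r in recordings if r[2] >= 1), key=lambda r: -r[1])
    unmirrored = sorted((r for r in recordings if r[2] == 0), key=lambda r: -r[1])
    return mirrored + unmirrored

print(expiration_order([("news", 40, 2), ("finale", 60, 0), ("movie", 10, 1)]))
# [('news', 40, 2), ('movie', 10, 1), ('finale', 60, 0)]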


PostPosted: Sun Jun 12, 2005 10:07 pm 
Joined: Mon Oct 06, 2003 10:38 am
Posts: 4978
Location: Nashville, TN
Better yet, if you could manage to make the whole thing ride on top of some sort of automatically configuring distributed file system, much like Google does. Every file is mirrored in several places, so if one machine drops out it doesn't hurt anything. I read somewhere that when a machine at Google dies they just unplug it and don't even take the time to remove the box.

_________________
Have a question? Search the forum and have a look at the KnoppMythWiki.

Xsecrets


PostPosted: Mon Jun 13, 2005 12:04 am 
Joined: Tue Dec 07, 2004 12:04 pm
Posts: 369
tjc wrote:
What? It's not that radical an idea... Most of the pieces already exist... Set up a P2P mirroring system using something like BitTorrent. Spare disk space automatically gets used for stashing recordings, like using spare cycles for SETI@home. Extend auto-expiration to try to preserve stuff that doesn't have at least one mirror...


Ah, OK. I was thinking of something more formal (e.g. a permanent dorm- or apartment-building-wide system with separate storage servers) vs. the more ad-hoc system (where every PC added becomes part of the storage tapestry) that you propose.

As long as either approach has some sort of self-healing way of dealing with dead or off-lined storage devices, with no impact on content availability, whether implemented via the centralized or the distributed mechanism, I am sold!

So, when's the rollout date? :)

-brendan


PostPosted: Fri Jul 01, 2005 12:21 am 
Joined: Tue May 10, 2005 9:57 pm
Posts: 21
The way I would do it would be a little more ... server-oriented. I would have them just run KnoppMyth on their regular computers as a live boot-up CD and have it do only the frontend. Then buy a higher-end master backend and a bunch of lower-end slave backends. Put 4-5 PVR-500s in each one, and two 300 GB hard drives in each one.

By my calculations, even 10-15 of these backend servers would be enough (if you wanted to save space, I guess you could rackmount them). Lower-end motherboards would be fine, as the hardware does the encoding. I don't know if a gigabit network would be required (it probably wouldn't hurt), but that would just add to the cost, and you would either have to buy more expensive motherboards or sacrifice one more PCI slot to a gigabit Ethernet adapter.

So: $250 for two 250 GB hard drives, $200 for a case (rack mount), $200 for a motherboard/CPU combo (for the lower-end slave backends with built-in gigabit), $20 for a CD-ROM (just in case?), $650 (5 x $130) for 5 dual tuners on each backend server, and $100 for RAM in each one (which is probably overkill). That comes out to around $1,420, but to round it let's say $1,500 a server. Say you want 15 of those (150 channels' worth, 7.5 terabytes); that comes to around $22,500. You could probably set up the system for less than $25,000 spent on hardware. Have them just use a live KnoppMyth frontend on their own computers, but even if you bought Xbox frontends for each person (108 x $168) it comes out to about $18,000. Add that to the top part, and it's still less than your original $55,000 solution. If you only need 108 channels (one per user) you could drop 3 or 4 of those backends, taking up to $6,000 off the backend figure.
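Here's that arithmetic written out (all prices are the estimates above):

Code:
# Per-server and total costs for the rack-of-slave-backends plan.
per_server = 250 + 200 + 200 + 20 + 5 * 130 + 100   # disks, case, mobo/CPU, CD-ROM, 5 PVR-500s, RAM
servers = 15
backend_total = servers * 1500                       # per_server is 1420; rounded up to 1500 each
xbox_frontends = 108 * 168                           # optional per-user frontends
print(per_server, backend_total, xbox_frontends)     # 1420 22500 18144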

A system like that would make it much easier to administer and maintain if the central servers are somewhere you can access them. (For admin purposes, add $1,000 for a keyboard/LCD pull-out, $500 for a 16-port KVM, maybe another $1,000 for a good server rack, and $1,000 for some good rackmount UPSes. That's $3,500 for admin gear, and it's still less than the original.)

It seems that in your first assessment you forgot that there are now dual-tuner cards. They will save you a LOT if you use them (in terms of needing many fewer systems).


PostPosted: Fri Jul 08, 2005 3:29 pm 
Joined: Wed Feb 18, 2004 9:07 pm
Posts: 89
Here's a simple (hopefully not too simple) distributed approach:

Set up all 54 rooms with a single frontend/backend unit. Segregate the boxes into groups of nine, placed on distinct subnets.
Nominate one of these units to be the subnet master backend, with the other eight set up as slaves to it.
You now have six subnets of nine units, and six master backends. These nine-unit "squads" all function autonomously.
Now place an aggregator at the center of it all which crawls through all servers on all the subnets, keeping a unique copy of each recording.

Clients will then be able to play back recordings:
a) stored within their own subnet
b) stored on the aggregator, available via MythWeb on the aggregator. Files "roll off" the aggregator by date priority -- either created or last accessed.
c) As long as any master or slave retains a copy of a program, it will still be available to anyone in the network, by going to the MythWeb of the master backend for that subnet. It may take some hunting, but then there are only six subnets.

Segregating the entire network into smaller subnets eliminates the need for Gigabit ethernet across the network.

Equip each unit with 250GB storage ($125) and a $130 tuner. Add the MoBo, video, memory ($250) and you're at $505 per unit.

For the "aggregator" choose any old server ($500) and add 2 TB network storage ($4000?)

Total cost = 54 x $505 + $4500 = $31,770 (plus any additional networking hardware, if any)
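Or, spelled out (same numbers as above):

Code:
# Cost roll-up for the six-subnet plan.
per_unit = 125 + 130 + 250             # 250 GB disk + tuner + mobo/video/memory = $505
units = 54
aggregator = 500 + 4000                # old server + ~2 TB of network storage (rough guess)
print(units * per_unit + aggregator)   # 31770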

The key assumptions and benefits here are:
--In a group of nine boxes, serving 18 individuals, you're seldom going to have the demand to record MORE THAN nine unique programs at one time. However, if that should happen, there's always a possibility that the tenth (and unrecorded) program was coincidentally recorded on a different node.

--Popular episodes (spread through word of mouth) will linger longer (or forever) on the aggregator.

--Very little "custom" software is needed here. The nine-node subnets are merely a matter of MythTV configuration. The aggregator would need a script to crawl all servers on the network, gathering new programs and deleting old ones (a rough sketch follows this list).

--This assumes machine maintenance won't be an issue. I don't know what happens if a client goes down somewhere, or if one of the master backends gets disconnected, shut down, or hosed; everyone on that subnet is screwed until the admin gets it back up again...
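As a rough sketch of that crawl script, assuming each subnet master exports its recordings directory to the aggregator over NFS (the mount points, archive path, and 30-day retention window are invented for illustration; a real version would also pull recording metadata from each backend):

Code:
import os, shutil, time

MASTERS = ["/mnt/subnet-%d" % n for n in range(1, 7)]   # one NFS mount per subnet master (assumed)
ARCHIVE = "/srv/aggregator/recordings"
MAX_AGE_DAYS = 30

def crawl():
    os.makedirs(ARCHIVE, exist_ok=True)
    # Gather: keep one unique copy of every recording seen on any master.
    for mount in MASTERS:
        for name in os.listdir(mount):
            src, dest = os.path.join(mount, name), os.path.join(ARCHIVE, name)
            if os.path.isfile(src) and not os.path.exists(dest):
                shutil.copy2(src, dest)
    # Roll off: delete archived files older than the retention window (by modification time).
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for name in os.listdir(ARCHIVE):
        path = os.path.join(ARCHIVE, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)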

All in all, an excellent project. Is this a new building, or are you retrofitting? Who's footing the bill for all of this? What's the timeline?

_________________
R5Fxx

Home-Brewed Mythic Dragon Clone
200GB WD IDE /myth
2x250GB WD IDE (lvm) /myth/tv

