My primary motivation for doing this was security. I’ve been reading some forensic logs lately, and a great way to shoot yourself in the foot is to have your critical servers perform “push” backups to somewhere else.
A typical example of this would be if I have a database server, and on this happy database server I have a crontab entry that creates a nightly snapshot of all the databases and then runs a little script to scp them all over to a backup server somewhere.
On first impression, this sounds fine. You’ve got a production database server. You’ve got a remote backup server. You’ve got nightly jobs that run. Awesome! Well protected!
*bzzzzzz* You’re protected from innocent hardware failure, sure, but what happens when someone manages to compromise your database server and log in? Regardless of their motivation, if they can read your nightly backup script, you are unbelievably screwed. Not only are they pwning your production database server, they now have access to your remote backup server too, and any historical data might as well be kissed goodbye.
I wanted to be defensive and consider this scenario a “when” and not an “if”. My solution is nothing new, but it’s something I don’t see suggested much: make the backup server PULL from the production database server using public/private SSH keys. For example: the backup server has crontab entries that tell it to log in to your production systems and copy the same databases you were previously telling the production servers to worry about. You can also limit the commands the backup user is allowed to issue on the production side. The backup server should also be behind a firewall, on a private network, isolated from the outside world except for outbound connections toward the production servers.
Now not only do you have a happy database server, you have an isolated backup server with absolutely no references to where or how the backups are stored.
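As a rough sketch of the pull setup (the hostnames, script path, and schedule here are all hypothetical, not from any particular deployment):

```shell
# On the PRODUCTION server, in ~backup/.ssh/authorized_keys (one line).
# "command=" forces this key to only ever run the dump script, and the
# no-* options disable tunnels and interactive shells:
command="/usr/local/bin/dump-databases.sh",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAA...key... backup@backupserver

# On the BACKUP server, a crontab entry that pulls the dump nightly.
# Whatever command we ask for, the forced command above runs instead,
# writing the dump to stdout, which we compress and store locally:
15 3 * * * ssh backup@db.example.com | gzip > /srv/backups/db-$(date +\%F).sql.gz
```

The `command=` restriction is the key piece: even if the backup server’s key were stolen, it could only trigger the dump script, never get a shell on production.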
An added benefit is that if you need to deploy any new production servers, all of their backup requirements can be centralized instead of adding multiple crontab entries to multiple new servers at each deployment stage. (It’s also a pain if your backup strategy changes significantly, as you’d have to maintain many distributed references to your backup infrastructure.)
Obviously this is primarily for smaller scale systems. If you’re running Oracle 10g and have the infrastructure to support distributed RAID arrays full of redundant backups with offline tape libraries, well you’re probably here by accident then. However, if you’re running PostgreSQL or MySQL on a VPS somewhere, I hope this has been thought provoking at the least.
(Later I’ll post some actual How To code/commands to implement this.)
A handy command line to mass rename files (in this case, changing file extensions) that works in both Linux and OSX:
ls -d *.cbz | sed 's/\(.*\)\.cbz$/mv -v "&" "\1.cbr"/' | sh
In this example, all .cbz files are renamed to .cbr (as I noticed some comic book archives I had were wrong).
An even more concise version using ‘basename’ was mentioned, but OSX seems to have different default extension handling.
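If you’d rather avoid generating commands with sed, a plain shell loop using POSIX parameter expansion does the same job (a sketch; the `.cbz`/`.cbr` extensions are just this example’s) and behaves identically on Linux and OSX:

```shell
#!/bin/sh
# Rename every .cbz file in the current directory to .cbr.
# "${f%.cbz}" strips the old extension; quoting handles filenames
# containing spaces, which the ls|sed pipeline can trip over.
for f in *.cbz; do
  [ -e "$f" ] || continue       # no matches: the glob stays literal, skip it
  mv -v -- "$f" "${f%.cbz}.cbr"
done
```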
The bane of having an awesome media server, regardless of whether it houses audio, video, photos, or all of the above, is that you have to rely on physical devices somewhere to store the data…
Having recently had yet another hard drive decide to begin spewing bad sectors, I went on the hunt for recovery information for the overly complicated LVM2 system I was running. The immediate prospect of having to mirror the entire ~2TB filesystem before being able to run a repair made my head hurt. After acquiring a replacement disk for the one in poor health, I was tempted to try the standard Linux command ‘dd’ with some ignore-errors and pad-blocks options, but then I happened to stumble upon TestDisk, which sounded extremely versatile and useful. However, what I was most impressed with was their extremely informative MediaWiki-based site and in particular the Damaged Hard Disk area, with references to two different ‘dd rescue’ tools, notably Antonio Diaz’s ddrescue utility. Essentially, after you tell it the bad disk and somewhere (file or other disk) to write the data, it’s fully automated. If you make sure to use the logfile feature, it can even resume and pick up where it left off if your recovery process is interrupted for any reason.
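A sketch of the invocation (the device names and logfile path are assumptions — triple-check which disk is which before running anything, since the second argument gets overwritten):

```shell
# First pass: copy the failing disk to the replacement, skipping
# unreadable areas to grab as much healthy data as fast as possible.
# The logfile records progress, so an interrupted run can resume.
ddrescue -n /dev/sdb /dev/sdc rescue.log

# Second pass: go back and retry only the bad areas, up to 3 times.
ddrescue -r3 /dev/sdb /dev/sdc rescue.log
```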
If you’ve had hard disk/CD/DVD failures for whatever reason, I strongly suggest looking at the TestDisk page, as it runs across >6 operating systems and supports >17 different filesystem derivations – oh, and their site is very helpful. Have at it!
Peter Chabada posted a lengthy, nitty-gritty list of 40+ possible improvements to Linux desktops (mainly Gnome). Worth a read for ideas: http://chabada.sk/better-desktop/
I’d been having some annoyances with RDP (Remote Desktop) over SSH. The primary annoyance stemmed from the Win2k/XP client not allowing you to connect to your local IP, regardless of port, forwarded or otherwise. Luckily, it isn’t actually clever enough to know that the 127.0.0.2 address is also tied to the loopback device (one of Microsoft’s little liberties that turns out to actually be handy – who knew?!). So here’s a solution that’ll save you the struggle, and the cash of buying an application such as WiSSH, which is entirely unnecessary.
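The trick, sketched with hypothetical hostnames: bind the forwarded port to 127.0.0.2 instead of 127.0.0.1, then point the RDP client at 127.0.0.2, which it doesn’t recognize as loopback and therefore happily connects to:

```shell
# From an SSH client on the Windows box (e.g. OpenSSH under Cygwin):
# forward 127.0.0.2:3389 through the tunnel to the target machine's
# RDP port. "rdp-target.internal" and the gateway are example names.
ssh -L 127.0.0.2:3389:rdp-target.internal:3389 user@ssh-gateway.example.com

# Then, in the Remote Desktop client, connect to: 127.0.0.2
```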
I was originally commenting on an article on Engadget but felt like expanding it a bit here.
Ok, what everyone has to remember is that the user interface of whatever is presenting you with your >1,000 DVD library has to be not just good, but GREAT. The only GREAT interface I’ve found is from Kaleidescape, but sadly that’s only available inside their $20k media server (nuts!). Looking at the Niveus and Escient screenshots, they look like rejected 80s MTV visuals… Don’t even get me started on MythTV, Meedio, DVD Lobby or the like; goodness.
Although not leaps and bounds better, I am very happy with an Xbox running Xbox Media Center. I have my 2TB library ripped to my file server, and the Xbox Media Center software is by far the most friendly and elegant way to access it. However, even this isn’t a match for the Kaleidescape UIs. Frankly, I’ve given up and settled for writing my own interface in Flex. Not sure when it’ll be released, but it’ll definitely be free when it’s done.
Just wanted to add my thoughts to this debate in the hopes of tempering all these recent product announcements and reminding people that a >1,000 DVD library is really pretty useless when you can only see ~9-12 covers on screen at a time… Oh, and the Sony XL-1 Digital Living System does look quite impressive, but it’s still bound by the Microsoft MCE2005 limitations. Sony did have some UI screenshots of a different media server, but I can’t seem to locate them right now – more on that later.