Work

RDPLauncher

TL;DR: Here's the Link:
RDPLauncher

I use RDP a lot and had some scripts to let me launch lots of RDP sessions without having to enter my random-generated passwords over and over. I wasn't happy with how I was handling those passwords so I've made it more secure using gpg and KeePassXC. Last night I made it compatible with Windows and MSTSC which will be uploaded here shortly once it's cleaned up a bit.

Basically I'll click a shortcut for whatever host, which runs my launcher. I get prompted for my GPG passphrase, which reads from an encrypted file containing my KeePassXC passphrase, which is then used to retrieve the user password for launching the RDP session.

Gpg-agent uses a cache-TTL to "hold the door open" for 10 minutes by default, so I can launch a bunch of sessions and only type my passphrase once.
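That 10-minute window is just gpg-agent's default cache TTL. If you want a longer window, a couple of lines in ~/.gnupg/gpg-agent.conf do the trick (the values here are examples in seconds, not necessarily what I run):

```shell
# ~/.gnupg/gpg-agent.conf
default-cache-ttl 600    # seconds of inactivity before re-prompting (10 min)
max-cache-ttl 7200       # hard ceiling regardless of activity (2 hours)
```

A `gpgconf --reload gpg-agent` picks the new values up without logging out.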

Requirements:

- gpg client and running gpg-agent (gpg4win, etc) with a private key set up, etc.
- cygwin if you're running Windows
- KeePassXC (or some other key store that has a command-line interface to query the database; in the beginning I was just using the gpg file with user/password pairs, so that works too)

The tool has a few neat features:

- If run from the command line with no arguments, it will prompt for user/pass/host/domain, good for one-off sessions to machines I won't log into much. That's great since I spend all my time in terminal windows, and it saves me from bouncing between keyboard and mouse while entering credentials.

- If launched with -b, it prompts you for information for a one-off connection, but will also build a new shortcut launcher from a template. Useful for the first connection to a machine you know you're going to use a lot. (Linux/Mac only)

- Automatically tunnel sessions over ssh. This means I can launch RDP sessions on my Mac and they'll seamlessly proxy through my work laptop to the VPN.

For tunneling, I am taking an arbitrary range of 200 ports and incrementing them based on what's currently listening. If there's already a process listening on port 6201, then try 6202 etc until there's an open one. So I can easily open 20-30 ssh tunneled sessions each with its own ssh process which will close down when the RDP window closes. 200 is "probably overkill", which means it might just be barely enough in the real world.
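The hunt itself is nothing fancy. A minimal sketch of the idea, using bash's /dev/tcp in place of the netstat check the real script does (the range and check method here are illustrative):

```shell
# Walk a fixed range of local ports and take the first one that has
# no listener. A successful connect means the port is taken, so the
# first refused connection wins.
port=""
for candidate in {6300..6500}; do
  if ! (exec 3<>"/dev/tcp/127.0.0.1/$candidate") 2>/dev/null; then
    port="$candidate"
    break
  fi
done
echo "using local port $port"
```

That port then becomes the local end of the ssh -L forward, with one ssh process per RDP window.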

The launcher shortcut mechanics are a bit different on my Linux and Mac machines so I split the -b script builder piece out based on OS. On Linux, I use KDE/Plasma, and so I generate these as KDE desktop files which look like this:

#!/usr/bin/env xdg-open
[Desktop Entry]
Comment[en_US]=
Comment=
Exec=/home/xrayspx/bin/rdplauncher.sh -h it-host.xrayspx.com -d xdomainx -u xrayspx
GenericName[en_US]=
GenericName=host.xrayspx.com
Icon=remmina
MimeType=
Name[en_US]=
Name=host.xrayspx.com
Path=
StartupNotify=true
Terminal=false
TerminalOptions=
Type=Application
X-DBUS-ServiceName=host.xrayspx.com
X-DBUS-StartupType=
X-KDE-SubstituteUID=false
X-KDE-Username=

On the Mac side, I use shell scripts with the extension .rdp (which conflicts with Microsoft's client, but I don't care since I never use their client anyway). Those just launch using Terminal, so it does pop a terminal for a fraction of a second, but I really don't have a problem with that.

The launcher for that looks like:

#! /bin/bash
rdplauncher.sh -h host.xrayspx.com -d xdomainx -u xrayspx &

If I call it with AppleScript or Automator instead of a bash script as above, none of the password retrieval process works. I think it short circuits and sends the output back to the AppleScript rather than the bash script which ran the command. If I can get that working that would be ideal.

The mechanics on Windows are similar to the Mac method: a .bat file which launches the bash script via Cygwin:

C:\cygwin64\bin\mintty.exe -w hide -e /bin/bash -l -c '/home/user/bin/rdplauncher.sh -h host -u username -d domain'

On Windows at least the Cygwin window it creates is hidden from the user, so that's nice.


Lots of RDP

Music: 

Annie Lennox - Why?

Do you do lots of RDP? Like lots and lots? I do, and even with password management it's annoying. I tend to use generated passwords for all my normal user, Domain Admin user and obviously Administrator accounts. That means lots of workarounds to deal with those passwords while doing bulk RDP sessions.

A typical use case for me is to RDP to 20 machines at a time, run a thing, wait, and log out. I've always scripted this, but not always in strictly the safest way. Plaintext passwords stored in a script, or read off disk. The philosophy is "if someone can read this script, I've already lost the game anyway", but still it's ugly and sick, and so I fixed it. In my defense, the Red Team never did pop my laptop...

I already use gpg-agent to facilitate unpacking of log files. On my syslog servers I roll logs over hourly, gzip them and then gpg encrypt them to my key. Then I can download a bunch of them, run my logunpack script, enter my passphrase once and since gpg-agent caches that credential for a period of time, decrypt all my files in one go.

What I wanted here was basically a way to have keepassxc.cli "hold the door open" and cache the passphrase like gpg-agent does. So what I've done is to use gpg-agent itself for that purpose. I have a GPG-encrypted file containing my KeePassXC passphrase, and I open it using gpg-agent, so it can be reused until gpg-cache-ttl expires.
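Wiring that up only takes a couple of commands. A sketch, where the key ID, paths and entry name are placeholders (note that upstream ships the CLI binary as keepassxc-cli):

```shell
# One-time setup: encrypt the KeePassXC passphrase to your own GPG key
echo -n 'keepassxc-passphrase-here' | gpg -e -r you@example.com -o ~/bin/kp.gpg

# At launch time: gpg decrypts it (gpg-agent caches the GPG passphrase),
# and the result unlocks the database for a single entry lookup
gpg -q -d ~/bin/kp.gpg | keepassxc-cli show -s ~/path/to/db.kdbx "some-entry"
```

Every launch within the cache TTL skips the passphrase prompt entirely.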

I've also always had slightly different copies of this script for use cases of "Fullscreen on my laptop" and "fullscreen on larger displays", so I have a switch here for "resolution" as well. "fs" for fullscreen or "fsbm" for "big monitors". Since I'll never go to my office again, that's pretty much never going to get used. The default for the $res value will remain 1280x960. Reasonable enough.

I also added prompts so that it'll ask for host, domain, user and password if you run the script with no switches from a shell. So /that/ will be super useful to me when I have to do a one-off connection to some remote host but don't need a whole launcher for it. While I'm at it, I put in the -b switch so that you can have it generate a launcher based on that input. That saves me hand-editing a template when I add a new RDP host.

I use Linux, but this should work with minimal-if-any changes on Mac and Windows/Cygwin, both of which can run xfreerdp and gpg-agent. I have a good automated ssh-tunneled RDP setup for my Mac, so I might try using that with this so I can use a 4k display for those "busy RDP days".

Being that I do run Linux, here's how I launch this. KDE desktop files like this:


xrayspx@dummyhost:~/rdps$ cat windowsmachine
#!/usr/bin/env xdg-open
[Desktop Entry]
Comment[en_US]=
Comment=
Exec=/home/xrayspx/bin/rdplauncher.sh -h windowsmachine -d domain -u xrayspx
GenericName[en_US]=
GenericName=windowsmachine
Icon=remmina
MimeType=
Name[en_US]=
Name=windowsmachine
Path=
StartupNotify=true
Terminal=false
TerminalOptions=
Type=Application
X-DBUS-ServiceName=windowsmachine
X-DBUS-StartupType=
X-KDE-SubstituteUID=false
X-KDE-Username=

So anyway, here's the thing. Oh good, the code tag doesn't work anymore, so all my whitespace is gone. Ah well, fuck it, I'll fix it with &nbsp; like a fucking old man:

--------------------------------------------

#! /bin/bash

while getopts ":h:d:u:p:r:t:b" options
do
  case "${options}" in
    b) build="1" ;;
    h) host=${OPTARG};;
    d) domain=${OPTARG};;
    u) user=${OPTARG};;
    p) pval="$OPTARG";;
    r) rval="$OPTARG";;
    t) tunnel="1"; tval="$OPTARG";;
    ?) printf "Usage: %s [-b build new launcher] [-h hostname] [-d domain] [-u user] [-p password id (id of password in gpg file)] [-r resolution (fs, fs-bigmon)] [-t tunnel host, e.g. 10.6.5.4 to ssh tunnel through 10.6.5.4]\n" "$0"
       exit 2;;
  esac
done

#Lowercase usernames for later matching, set width x height which I may make variable someday.
user=$(echo "$user" | tr '[:upper:]' '[:lower:]')
width="1280"
height="960"
os=$(uname -s)

#Set tunneling to default for this client
tunnel="1"

#Set default tunnelling host for this client
tval="xrayspx-zbook"

#Call with no switches to use as a command-line client for one-off RDP sessions
#Or use with -b to build a new RDP launcher shortcut
if [ -z "$host" ]
then
  read -p "Hostname:" host
  read -p "Username:" user
  read -s -p "Password:" pass
  echo ""
  read -p "Domain:" domain
  if [ "$build" = "1" ]
  then
    if [ "$tunnel" = "1" ]
    then
      rdp="rdplauncher.sh -h $host -u $user -d $domain -t $tval"
    else
      rdp="rdplauncher.sh -h $host -u $user -d $domain"
    fi
    if [ "$os" = "Darwin" ]
    then
      echo "#! /bin/bash" >> ./$host.rdp
      echo "$rdp &" >> ./$host.rdp
    elif [ "$os" = "Linux" ]
    then
      cp ~/rdps/rdp.template ~/rdps/$host
      sed -i "s/rdp.template/$rdp/" ~/rdps/$host
      sed -i "s/host.template/$host/g" ~/rdps/$host
    fi
  fi
fi

if [ -z "$pass" ]
  then

  kpass=$(GPG_AGENT_INFO="" gpg -q -d ~/bin/kp.gpg)
  if [ "$domain" = "domain1" ]
    then

    if [ -z "$user" ]
      then
      user="xrayspx"
      kpentry="domain1user"
    fi
    if [ "$user" = "xrayspx" ]
      then
      kpentry="domain1user"
    fi
    if [ "$user" = "administrator" ]
      then
      kpentry="domain1admin"
    fi
  fi

  if [ "$domain" = "domain2" ]
    then
      if [ -z "$user" ]
        then
        user="xray-domainadmin"
        kpentry="domain2domainadmin"
      fi
      if [ "$user" = "xray-domainadmin" ]
        then
        kpentry="domain2domainadmin"
      fi
      if [ "$user" = "administrator" ]
        then
        kpentry="domain2admin"
      fi
      if [ "$user" = "xrayspx" ]
        then
        kpentry="work"
      fi
  fi

  if [ "$domain" = "local" ]
    then
      if [ "$user" = "administrator" ]
        then
        kpentry="domain1localadmin"
      fi
  fi
  pass=$(echo "$kpass" | keepassxc.cli show -s ~/Nextcloud/kees/keep.kdbx "$kpentry" | grep "Password:" | awk -F "Password: " '{print $2}')
fi

if [ "$tunnel" = "1" ]
then
  for i in {6300..6500}
  do
    proxexist=$(netstat -nat | grep "127.0.0.1.$i" | grep LIST | awk '{print $2}')
    if [ -z "$proxexist" ]
    then
      #echo "iteration $i"
      #echo "hostname $host"
      ssh -c aes256-ctr -N -L $i:$host:3389 "xrayspx@$tval" &
      sshpid=$!
      echo "sshpid $sshpid"

      sleep 3

      if [ "$rval" = "fs" ]
      then
        cmd="xfreerdp +clipboard +compression /cert-ignore /w:$width /h:$height /bpp:16 /v:127.0.0.1:$i /u:$user /d:$domain /p:$pass /t:$host /f /floatbar"
      else
        cmd="xfreerdp +clipboard +compression /cert-ignore /w:$width /h:$height /bpp:16 /v:127.0.0.1:$i /u:$user /d:$domain /p:$pass /t:$host /dynamic-resolution"
      fi
      $cmd
      kill $sshpid; echo "killed pid $sshpid"
      exit 0
    fi
  done
else
  if [ "$rval" = "fs" ]
  then
    cmd="xfreerdp +clipboard +compression /cert-ignore /w:$width /h:$height /bpp:16 /v:$host /u:$user /d:$domain /p:$pass /f /floatbar"
  else
    cmd="xfreerdp +clipboard +compression /cert-ignore /w:$width /h:$height /bpp:16 /v:$host /u:$user /d:$domain /p:$pass /dynamic-resolution"
  fi
  $cmd
fi

exit 0

--------------------------------------------


Bouncing from Kodi to EmulationStation, and back

Music: 

Ninety-Nine And A Half (Won't Do) - Wilson Pickett

Update:

----
As pointed out on the RetroPie forum, just add the loop in autostart.sh, duh: I searched for a while before writing this thing and if I'd seen anyone mention that I'd have just done that instead.

while :
do
kodi
emulationstation
done

I also think it makes a more sensible default for RetroPie to implement. That's all I actually wanted at the start.

However...

Now I've added Features. I can hijack my loop and add one-off commands.

So now there's a Desktop button in my Kodi main menu that will touch a file to cause the loop to gracefully exit Kodi and send me to a desktop session. When I leave the desktop session, it takes me back to Kodi. So that's pretty goddamn convenient.

-----

Because if there's one thing I love, it's having to sysadmin my TV.

Like most reasonable people I use a Kodi mediacenter to run my TV. Lately this has been on a Raspberry Pi 4 running RetroPie. Generally people boot RetroPie into EmulationStation and use it as an emulator, such as on an arcade cabinet. I'm also one of those people.

But in this case I primarily use the TV to watch TV shows and movies, but also want to run console games, so I upgraded to a better RPi and migrated from LibreElec to RetroPie.

RetroPie lets you choose whether to boot into EmulationStation or Kodi, which is fine, and the idea is that if you quit Kodi, it loads ES so you can play games. That works fine. Once. The trouble is in going the other way. If you quit EmulationStation, you exit to a shell. If you run Kodi from within the Ports menu in EmulationStation, well, now you're running both ES and Kodi. This also changes the behavior the next time you quit Kodi to play a game. You end up back in the Ports menu with Kodi highlighted, because ES never quit.

So, that's what I fixed.

The way the RetroPie tool works is they create a script at /opt/retropie/configs/all/autostart.sh. If you have Kodi booting first, it will have two lines:

kodi-standalone
emulationstation

That script gets run at login time for the pi user. Basically it runs Kodi, and autostart.sh is still running. When Kodi exits, it runs ES and autostart.sh exits. If you wanted to you could just put 1000 lines of:

kodi-standalone
emulationstation
kodi-standalone
emulationstation
kodi-standalone
...

However that's ugly, so I kind of daemon-fied it with a bash script of my own that I wanged together in like 10 minutes, and then I launch that through their autostart.sh. I didn't want to replace their script with mine because the RetroPie one could get regenerated with an upgrade or if I hit something in RetroPie-config. It's safer to have their script call mine.

So what I do is I start with whichever application is passed to me in the command line:

autolaunch.sh -f kodi

Then I start an infinite loop and, based on what application the script is called with, it will start the first application. When that app exits, I change the value of the variable so that the next time it loops, it runs the other one:


#! /bin/bash

while getopts f: name
do
  case $name in
    f) fval="$OPTARG";;
    ?) printf "Usage: %s [-f application to start]\n" "$0"
    exit 2;;
  esac
done

while :
do
  if [ "$fval" = "kodi" ]
  then
    kodi-standalone
    fval="emulationstation"
  elif [ "$fval" = "emulationstation" ]
  then
    emulationstation
    fval="kodi"
  fi
done

Downsides and ToDo's:

Obvious downside is that this makes it difficult to get a shell at the console of the machine. However, I can count on one hand the number of times I've had to do that in the last 6 years or so of running my TV from a Raspberry Pi, so I really don't care.

A definite ToDo is to add some level of process control and general safety so I don't somehow end up running a bunch of instances of Kodi and ES. I did test with "Restart Emulationstation", so it would pick up new games, and it seemed to work as expected. It didn't launch another instance of Kodi or anything.

My main ToDo is to have the ability to use more launchers. Basically right now I have a "Games" menu item in my Kodi main menu, I hit it, it just runs the Kodi "Quit" command, which causes ES to start. Same thing in ES, though I'm just quitting it using the context menu at the moment.

I'd like to be able to add a "Desktop Session" button to quit Kodi or ES and launch a desktop with a browser for those very rare times I want a browser on my TV. This would also solve the "can't get a local shell" problem, at least mostly. I could add a "quit to shell" in this way obviously as well. I think the best way to do this is to stop the script as I exit Kodi and restart it with a new starting value, like -f startx. Kind of like if it were a real system daemon.

However I think in my case, since I'm not a very good programmer, I'm going to just bang this out with a file in /var/tmp or somewhere which carries the "Next Command", so rather than update $fval as I am now, I'd check that file and have it read in each loop to set fval. That would allow me to hijack it from outside the loop.

So I'm in Kodi, if I quit, it's going to set $fval to "emulationstation" and load ES. However, if I run a shell script, and /then/ quit or killall kodi-standalone, that shell script can populate /var/tmp/nextcommand or whatever with "startx".

Then, when Kodi quits, it sets $fval to ES, the next loop comes, but instead of just launching ES, we check to see if there's a value in nextcommand. If there is, set $fval to that and run it instead.

Then you'll start an X session, and when that quits, it should take me back to Kodi.
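That check only needs a few lines. A sketch, where the file path and function name are assumptions, not what's in the script yet:

```shell
# check_next: if an outside process left a command name in the override
# file, return (echo) it and consume it; otherwise return the fallback.
check_next() {
  local next_file="/var/tmp/nextcommand"
  local fallback="$1"
  if [ -s "$next_file" ]; then
    cat "$next_file"
    rm -f "$next_file"    # one-shot: the override only applies once
  else
    echo "$fallback"
  fi
}
```

Then the loop would set fval=$(check_next emulationstation) instead of assigning it directly, and any script that writes "startx" to the file hijacks the next iteration.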

I seem to recall Kodi's internal tools are pretty good, and I can combine "run this external command" with "run this internal 'quit' command" and assign that to a menu "Action". Just need to remember where all that stuff is.


Running the Lattice of Convenience

Music: 

New Order - 5 8 6

Since posting about the week of 1983 TV Guide viewing, I've had questions from some people wondering about the storage and other hardware and software we use for our media library. It's really not very complicated to do, though I do have preferences and recommendations.

So here's what we've got.

Motivation:

Mainly I don't like the level of control streaming companies have. That they monitor everything we do, and that stuff comes and goes from services like Netflix and Amazon Prime on their timeline, not mine. I don't like the concept of paying for things like Spotify so that I can rent access to music I already own.

I realized like 15 years ago that while we often spent $200/$300 per week on CDs earlier in our marriage, Natalie and I were drifting away from actually listening to it much, because who wants to dig around for a CD to hear one song, then move to another CD. Ultimately, the same applies to movies, we have lots of DVDs, and I don't want to have to dig through booklets just to watch a couple of James Bond movies.

It's super easy to maintain, and we like being able to watch Saturday morning cartoons, "Nick-at-Nite" or throw on music videos while we play arcade games and eat pizza. Once up and running, it's all pretty much push-button access to all the media we like.

Media:

- 2000-2500 CDs (Maybe 200GB of music)

- Couple hundred movies, really probably not as many as most people.

- Lots of TV shows. Space-wise, this is where it adds up fast when you're ripping a box-set of 10 seasons of some show.

- Commercials, mainly from the '80s and '90s, but I'll grab anything fun that strikes us.

- Music videos. We have an overall collection of around 2000, and a subgroup of about 700 which represent "'80s arcade or pizza place" music. That's music that was just ubiquitous when we were growing up in the '80s and early '90s, and you heard it all the time whether you liked it or not. I've since come to appreciate these songs and bands in a way I didn't when I was a dickhead punk kid.

So all told, there's about a 5TB library of stuff, mainly TV shows, but also a decent music library that needs to get maintained and served.

Hardware:

- Ripping machines - Mainly, all I need is the maximum number of DVD trays I can get my hands on. There's nothing special here. My tools work on Mac or Linux so I can work wherever. We have one main Mac Pro that has 2x 8TB drives mirrored which hold the master copy of the media collection.

- NAS - Seagate GoFlex Home from like 10 years ago. I think I originally bought this with a 1TB drive, and have since upgraded it twice, which is kind of a massive pain. Now it's got an 8TB drive which has a copy of the media library from our main machine. I'll get into the pros and cons of this thing below.

- Raspberry Pi - I have a multi-use RaspberryPi which does various tasks to make things convenient and optimizing TV viewing. There are a handful of scripts which create random playlists every night for various categories of music videos, TV shows (Sitcoms, 'BritBox', 'Nick-at-Nite'), etc. It also runs mt-daapd, which I'll get into below.

- Amazon Fire Sticks - We have a couple of them. I'm not super impressed with their 8GB storage limit, but I'm definitely happy enough for the money they cost. They're cheap, around $20 now, and they do what they say on the box. Play video. I have side-loaded Kodi 17.x, but they seem not to quite have the resources for 18.x, though I'm really not sure why not. It's just slower.

- The Shitphone Army - I've got obsolete phones (Samsung Galaxy S4-ish) around the house and decent speakers set up so we can have music playing while doing the dishes for example.

Software:

- Kodi - I mentioned Kodi, which is just an excellent Free Software media library manager. Kodi gets /such/ a bad rap because of all the malware infected pirate boxes for sale, but you never see much from people who actually use it to manage a locally stored library of media they own. Can't recommend it enough. Get familiar with customizing menus in Kodi and making home-screen buttons linking directly to playlists. It's worth it and makes it look nice and easy to use.

- mt-daapd - I'm running out of patience with music streaming, though everything does work right now. mt-daapd just basically serves up a library of music using the DAAP protocol, which iTunes used to use.

- DAAP (Android app) - This could be great, but it seems to be completely un-maintained, and somewhat recently moved from being open source to closed, so unless I have an off-line copy of the source, there go my dreams of updating it. But it works well on the Shitphone Army and on the road so we can basically stream from anywhere. Other DAAP players for Android are pretty much all paid applications, and none of them seem to work better particularly than DAAP.

- Scripts - A handful of poorly written scripts for ripping DVDs and maintaining the library (below)

Recommendations:

Players - While the Fire Sticks work great, they're really very dependent on having constant access to Amazon. Were I installing mainly a Kodi machine, it would be much better to use a Raspberry Pi either with a direct-connected drive or mounting a network share. It's super easy to set up with ready-to-go disk images which boot straight into Kodi.

Playlists - Create lots of playlists. Playlists and randomizing things are two things that Kodi is terrible at, so I don't try to make it do it. These scripts run nightly on the Raspberry Pi and make .M3Us for us.

Filenames - Have a good naming convention. All my playlists are M3Us of just lists of files. That means that you don't get Kodi's metadata database with the pretty titles and descriptions, and so the files must be named descriptively enough that you can tell what episode you're looking at from the list of filenames. My template is "Name of the Show - S02E25 - Title of the Episode". Kodi's scrapers work well with that format and it makes it easy enough to fire up the Nick-at-Nite playlist and decide where to jump in.

At various times, I've considered parsing a copy of the Kodi database to suck out the metadata and add it in before the file location. In an M3U, that looks like this:

#EXTINF:185,Ian Dury & The Blockheads - There Ain't Half Been Some Clever Bastards
/mnt/eSata/filestore/CDs/Ian Dury & The Blockheads/Ian Dury And The Blockheads The Best Of Sex & Drugs & Rock & Roll/17 There Ain't Half Been Some Clever Bastards.mp3

It seems like having all that sqlite stuff happening would add a lot of overhead to generating playlists, and having well-named files saves me from having to worry about it, so I haven't bothered.

Storage - Though I use a "Home NAS" product that overall I've been pretty happy with, it does irritate me. Consumer market stuff is /so/ proprietary that it's quite hard to just get to the Linux system beneath and customize it the way you see fit. Specifically in the case of the GoFlex, "rooting" it even involved replacing Seagate's customized version of SSH with a vanilla one. Screw that up and you brick the device. I also run into network bottleneck issues with that thing. While you can enable jumbo frames, for instance, the CPU gets pegged when syncing new content; I believe I'm running out of network or disk buffer, which is kind of unacceptable in a NAS device.

Building it today, I'd just use a Raspberry Pi 3 with a USB drive enclosure. For the time being, my growth curve is still (barely) pacing along with the largest "reasonably priced" drives on the market. My ceiling is about $200 per drive when I do upgrades, because I am a very cheap man.

I have no opinion on consumer RAID arrays. I can only imagine consumer RAID based NASs come with all the shit I hate about the GoFlex. Yes, I'm biased against consumer grade garbage tech and that's probably not going to change. I'll have to buy one someday I'm sure, but for now it's all being kept simple.

Backups - Keep backups. While I have multiple copies of everything, it does make me somewhat nervous that the only part of the media library currently being backed up off-site is the MP3 collection. That's got to change, and rsync is your friend. Ultimately I'll probably end up upgrading my home Internet from 20Mb/2Mb to something which will allow me to sync over a VPN tunnel to somewhere off-site (friend's house, work...).

Sample Scripts:

Here are some samples of the shitty bash scripts that run this whole nonsense. I know the better ways to write these, but the fastest possible way to hammer these out worked well enough and there's no way I'm going to bother going back and fixing them to be honest.

Rip CDs

I use an application called Max on the Mac to rip CDs. I think its usefulness might be coming to an end, and I'm not sure what to do about that. It uses (used?) the MusicBrainz database to automatically fingerprint and tag discs, but it seemed to have problems with the last CD I ripped. You can run iTunes side by side with Max and drag the metadata over from there, so maybe that works well enough?

Anyway, I use that because I rip to both 320k CBR MP3 and FLAC. I have a shitload of stuff that really should be re-ripped since they're 128k and no FLAC, but I've so far been unmotivated to do so.

I wrote a bunch of stuff to move all the output files around and update iTunes libraries. Honestly I don't rip a whole lot of new music, which is a shame and which I should really fix.

Rip DVDs

DVD ripping is a lot more fragile than it should be. Good software like Handbrake is bullied into removing the ability to rip protected DVDs, and things are being pushed toward the commercial. I use mencoder in the script below.

DVD titles are sketchy at best, and as far as I know, you can't really fingerprint a DVD and scrape titles in the way you can with CDs. So I do what I can. I take whatever title the DVD presents and make an output directory based on that name plus a timestamp. That way if you're doing a whole box set and all the DVD titles are the same they're at least writing out to separate directories and not overwriting each other.

As far as file naming goes, unfortunately we don't live in the future yet, and that's all down to manually renaming each output file. I use the information from TVDB, not IMDB, since that's the default library used by Kodi's scrapers. Sometimes the order of things is different between that and IMDB (production order vs airing order vs DVD order issues plague this whole enterprise).

#! /bin/bash

timestamp=`date +%m%d%Y%H%M`
pid="$$"
# keep the Mac awake while this script (our own pid) is running
caffeinate -w $pid &

id=$(drutil status | grep -m1 -o '/dev/disk[0-9]*')
if [ -z "$id" ]; then
  echo "No Media Inserted"
  exit 1
fi

name=`df | grep "$id" | grep -o /Volumes.* | awk -F "Volumes\/" '{print $2}' | sed 's/ /_/g'`
echo $name
dir="$name-$timestamp"
mkdir /Volumes/Filestore/dvdrip-output/$dir

echo $dir

for title in {1..100}
do
  /Applications/mencoder dvd://$title -alang en -ovc lavc -lavcopts vcodec=mpeg4:vhq:vbitrate="1200" -vf scale -zoom -xy 640 -oac mp3lame -lameopts br=128 -o /Volumes/Filestore/dvdrip-output/$dir/$title.avi
done
chmod -R 775 /Volumes/Filestore/dvdrip-output/$dir

Playlist Script

The simplest Music Videos one below just looks at one directory of videos and one directory of TV commercials and randomizes all the content into an M3U. The more complicated ones have dozens of directories, and I'm sure I'm doing this array-building the wrong way. I'm sure I could have a text file with the un-escaped directory names I want and read that to build the array, either way, it really doesn't matter because if I want to add a TV series, I still have to edit a file, so this works fine. I've also thought about having a file in each directory like ".tags" that I search for terms in, like "comedy,nickatnite,british" and build the array from that, I dunno, sounds like work.

#! /bin/bash

# gather videos plus commercials into one newline-separated list
files=$(find ./ -type f; find ../../Commercials -type f)

printf '%s\n' "$files" | sort -R | grep -v dvd_extras | grep -v "./$" | grep -v "\.m3u" | grep -v -i ds_store | grep -v ".nzb" | grep -v ".srt" > full-collection-random.m3u
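If I ever did that ".tags" thing, it would look something like this; the tag format, file name and output name here are all hypothetical:

```shell
# Each show directory carries a .tags file like "comedy,nickatnite,british";
# build the directory list from a tag instead of a hand-edited array.
tag="nickatnite"
dirs=()
while IFS= read -r tagfile; do
  if grep -q "$tag" "$tagfile"; then
    dirs+=("$(dirname "$tagfile")")
  fi
done < <(find . -name .tags -type f)

# then feed whatever matched into the usual find-and-shuffle pipeline
for d in "${dirs[@]}"; do
  find "$d" -type f
done | sort -R > "$tag-random.m3u"
```

Adding a series to a category would then be one line in one file next to the media, rather than an edit to the script.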

- rsync the TV library. I have several of these, one for TV shows, one for movies, music videos, mp3s etc. It's just somewhat faster to only sync the thing I'm actually adding content to, rather than have to stat the entire library every time I rip a single DVD. The TV show sync tool also deals with the playlists, which are actually created on the NAS drive, so they have to be copied local before syncing or else they'll just get destroyed every day.

This checks to see if the NAS volume is mounted, if not it will mount it and re-run the script.

#! /bin/bash

mounted=`cat /Users/xrayspx/xrayspx-fs01/.touchfile`

if [ "$mounted" == "1" ]
then
  cp ~/xrayspx-fs01/Common/TV\ Shows/1\ -\ Playlists/* /Volumes/Filestore/Common/TV\ Shows/1\ -\ Playlists/

  rsync --progress -a --delete /Volumes/Filestore/Common/TV\ Shows/ ~/xrayspx-fs01/Common/TV\ Shows/

  ~/bin/umounter.sh
  exit 0
else
  mount -t smbfs //192.168.0.2/filestore ~/xrayspx-fs01/
  ~/bin/synctv
fi


Today In Donuts Annoying Me News

Music: 

Blur - Coffee and TV

For a couple years now, I've been telling Natalie that if I had a couple bucks and an inclination to build a thing or interact with people, I'd Do It Right. I'd make fresh donuts daily. If I worked at Red Arrow, I'd make the case that the absurd Milford, NH third shift should dedicate themselves to making bread for the rest of the day. Instead, they recently discontinued that shift.

So in that vein... I have a lot of music videos. To go with them, I've downloaded a bunch of '80s commercials for Nostalgia's Sake. Things like cereal, BMX bikes, Underoos, Schoolhouse Rock, and since you can guess I'm from the Boston market, Spags and Dunkin' Donuts.

At Spags, they'll save you money:

These old commercials, pre- and post-Fred, all tout the freshness of their product. It's the "Freshest you can buy". Then they stopped making donuts on site. There is no longer a fryer in the building at this point. I guess they nuke or salamander their croissant, bagel or muffin sandwiches and formed eggs; I dunno, they do something to make them hot, but none of this stuff could be mistaken for "fresh".

Sadly, I imagine that by and large, they were right. They /are/ the freshest donuts most people can get. Their 1980, pre-Fred accusations that "most supermarkets' donuts are made by machine" actually came true in my store around 1994 or 1995, the day the woman* who fried and filled and decorated the donuts every day moved to Health and Beauty Aids, which was coincidentally the day before /I/ learned to "make" donuts and muffins.

Muffins were from mix, plus any other stuff like frozen blueberries, cinnamon apple chunks or cranberries or whatever else we'd mix in. Muffins were easy, but donuts were even easier. They came in frozen now, so all we did was heat 'em up, glaze them and put them out. We still glazed, filled, sugared and dipped them by hand, and the "baking" process wasn't that bad; they tasted fine, I guess. I never really ate them much. You tend not to want to eat muffins if you're covered in muffin mix for 5 hours every day. Now that I think of it, even though we were frying donuts daily, there's no way that wasn't just frozen dough that we bought in. Don't get the impression these were scratch-made, just rolled and fried on site. By the time I stopped making donuts and muffins, we had transitioned away from even making the muffins from mix; they now came in as frozen batter in muffin cups and plastic trays. We'd just move them to metal trays, bake 'em and serve 'em up. That was probably '95 or so at the latest.

The couple of times I've gone into Dunkin's (as it's now trying to rebrand itself officially) in the last 2 decades, I've come away with the impression that they don't even do that much anymore. I don't see that there's even room for a table to glaze and decorate anymore. Do donuts just come in all pre-filled and room temperature ready to put out? Like the Krispy Kremes at the convenience store?

There is still a good donut place in our town, but I imagine even they aren't actually scratch-making anything. It's just fresher than Dunkin's. Meanwhile I'm pretty sure the baker my parents went to in the '70s and '80s hand-made everything in his shop from scratch from donuts to birthday cakes.

* - Hey Millennials, here's some trivia: I remember the woman who made the donuts was about 23 or 24, and that she and her husband, who drove forklift as I recall, and with whom she owned a house, had managed to save enough in their water jug full of change to go on vacation in the Caribbean. Just sayin'.


Antique Desk And Its Dazor Task Lamp


This just made me unreasonably happy today, so I am gonna have to share it.

A couple of years ago we found an antique drafting table for pretty cheap money at a local shop; I think we paid maybe $200 or so for it. It had been used and taken very good care of for...80 years? Maybe more? It had a Kilroy on it. So we snatched it up and replaced Natalie's less beautiful portable drafting table with it. It's a real monster, like 48" x 36", and in great shape, well built, though it does have quite a twist to it. It'll last Natalie forever.

Since then, one problem she's had was getting an adequate task light. She had a plastic fluorescent arm-light, but it was nowhere near long enough to cover the new desk. And, you know, face it, it looked like junk.

So the other day we were in another local consignment shop and I spotted a monster arm-light for $25. Natalie didn't like that it was fluorescent (will we be able to get bulbs...) and wasn't sure about the mount; it didn't clamp, it looked like it screwed to the desk. As is my way, I needled her for a couple days and let it work on her that she needed to check it out. No one's gonna stop making fluorescent bulbs, and even if they do, we can get an LED adapter, or just rewire it all the way to the plug for LED. Today she went back and grabbed it, and score, it was on sale, now $18.

We hadn't really looked at it, but turns out it's a Dazor from 1950. As we were trying to figure out how this was supposed to attach (none of the hardware was there), I figured we could get a couple of set screws with wingnuts and big ass washers, drill the desk (Natalie was not a fan), and just bolt it in.

So she started measuring up the distance between the screws, and found that they exactly matched the existing holes someone had already drilled in the desk. We were just re-uniting the drafting table with its long-lost lamp!

The table top had to be turned around so the holes were at the back, so that was an hour well spent, but it all lined up and the lamp dropped right in. It was at that point that I thought it was too cool and started writing this post. Then I found an interesting bit of trivia: Dazor was founded by Harry Dazey, of the Dazey Churn and Manufacturing company in St. Louis. We happen to have a good-size collection of Dazey ice crushers, a can opener, and one of what's probably a small handful of portable stands left in existence. This makes Natalie super happy, because we've finally completed the Dazey set.

So here we are, desk, lamp, and ice crusher:

The thing that impresses me the most about this is that in the spare parts section on Dazor's site they list all the various switches and ballasts so you can repair your lamps. Not only are ours still fully in stock, but they've only got 7 listed switches and 5 ballasts, which I'm sure cover virtually every product they've ever made. Simplicity and rugged construction = happy customers forever.

Not only were these lamps built to outlive your granddad (and they did, obviously), but you can still get parts for 'em if they ever do let the magic smoke out! Right from the manufacturer. Try that with literally any other product, especially now. Man. I mean, I get that if you make a lamp, and that lamp lasts forever, then you never sell another one to that customer, and your company dies. But the other side of that coin is that you end up the standard in task lighting, forever, with multi-generational product loyalty.

We'll probably end up buying brand new Dazor lamps for spaces like our office workbench once it's built, and I fully expect them to last just as well as this one clearly has.


Roadside America


Natalie and I are on a kind of meta-road trip. We're not actually going to see The Thing, but we're seeing the roadside attractions which have sprung up around The Thing to amuse and draw in visitors.

Today we went to Roadside America, which is a massive O gauge model railroad layout. 6000 feet, assembled over 60 years of one Laurence Gieringer's life, from when he was 9 until he died.

Natalie took tons of photos, but I put up 3 short videos covering about 15% or so, along one short edge:

Mine, mountains, farms:

The zoo:

Midcentury Downtown:


My Life Is Going To Suck Without Net Neutrality


There are so many things I do which are likely to suffer with Net Neutrality's loss.

I run my own mail, web and cloud sharing services on a VPS that I maintain. Owncloud syncs all my devices, I use IMAP and webmail. I also run lots of "consumer" stuff for myself. I own 2500 CDs which I've ripped and share for my own personal use. I have playlists. I can connect with DAAP from my phone, and listen to my own CD collection, music I have paid for, Spotify style. I know people are saying "Spotify will work just fine", but what if I don't want to use Spotify?

These are all encrypted, personal connections. Nothing illegal is happening here. I'm not filesharing or streaming torrents or any other grey-area services. It's just all my personal stuff, owned and manually copied by me, shared to myself. No one gets ripped off here.

I can plug my Amazon Fire stick or Raspberry Pi into any TV and use Kodi to stream my own MP3s or movies, etc. I can use it to watch Amazon Prime or Netflix as well. Kodi also has a wealth of plugins to watch content from sources such as the PBS website. We all can watch Nova, or Julia Child, or even Antiques Roadshow over the Internet, for free, legally. This may all suffer when backbone providers and local ISPs can both decide which packets have priority over other traffic. PBS could be QoS'd out of the budgets of millions.

(Note *) I don't own a Nest or any other IOT garbage, but I have toyed with the idea of building my own, running on infrastructure I build. I don't want Google to know what temperature my house is right now. And I don't want some mass hack of 500 Million Nest users or idiot IOT lightbulbs to let some Romanian turn my furnace off in the middle of February either.

So yeah, losing Net Neutrality could effectively disable all of this. Small hosts like me could be QoS'd off of the Internet entirely, unless we pay extra /at both ends/. Pay my hosting provider to pay their backbone providers to QoS my address at a decent speed. Then pay my consumer ISP to QoS my traffic so I can reach "The Good Internet", like they do in Portugal.

This is going to cut my lifeline to my own data, hosted by me on my own machines. Am I going to have to pay an additional "Get Decent Internet Access Beyond Google, Spotify, Facebook and Twitter" fee to the Hampton Inn just so we don't get QoS'd away from our own stuff? It's bad enough that the individual hotel can effectively do this already today, but the hotels are at least limited by the fact that they're in competition with each other, and if they have ridiculously shitty Internet that you can't check your mail over, well, people would notice that. Backbone providers pretty much have no such direct consumer accountability. No one's going to say "well, fuck that, I'm not going to route over AT&T anymore", but they might say "Hilton has shitty Internet, I'm going to Marriott".

One of the most demoralizing parts of this is that the rule-makers just don't get it. I already know they don't care, but former FCC Chair Michael Powell's statement, which boils down to "You can still use Facebook, (Amazon) Alexa, Google and Instagram, just like you can now", misses the point, whether deliberately or not. That most "consumers" will be fine isn't the point. The point is that everyone should be equal, and all traffic should be routed equally.

* The risk to my information is proportional to the value an attacker places on the information. Could a state actor target my email server and read my mail? Yeah, the Equation Group or Fancy Bear or some Eastern European ID theft ring could probably exploit some flaw in whatever software serves my VPS, or flat out order the ISP to give them access to my stuff, but why? What does the NSA gain by ransacking my mail server? Not much. How about criminal attackers? Same story: there's not much value in targeting me individually. They /would/, however, expose 1.5 billion Yahoo accounts all at once, and have that entire corpus of mail to search against, plus passwords they could use to try and attack everyone's bank accounts all at once.


Kitchen Designs

Music: 

Van Halen - Panama

As anyone who reads Natalie's site, or who has been around either of us for more than five minutes in the last six months will know, we've been in the middle of a kitchen renovation for...way, way too long now. Since I did the actual layout design (twice) Natalie asked that I write up how that process went and how we progressed from the original layout, through to what we've got now.

The original kitchen layout was less than ideal in many key ways. It was basically a galley kitchen which acted as a footpath from a hallway at one end where there was an external door, a restroom, and our living room through to the dining room and the main part of the house (office, library, bedrooms). This split the workflow of the kitchen between the "sink side" where the doors were and the "stove side". In amongst that were afterthoughts like "oh hey someone should put a fridge here" or "who wants a laundromat?". It wasn't great.

One of the biggest problems was that these two opposing doors weren't lined up. The dining room side door was a good 30" from the wall, which gave enough space for the countertop, even though the end of the counter did intrude into the door trim an inch or so. The other door however was maybe 20" or so from the wall, meaning that if you ran countertop right to the end of the room, you'd be intruding 5" or so into the door opening.

This is illustrated in this rough sketch of the beginning state and a couple of photos:

Since my imagination is limited, I originally planned our new layout based on the layout as we had it here. This means that to get to the (newly finished) breakfast and laundry area one would go out that hallway-side door, then out what used to be the exterior door into what used to be the porch to eat breakfast or wash clothes.

Thus the new design ended up looking like this, around three walls, with the left-hand side wall still being entirely blank, since there was a fridge and doorway there. We figured we'd put posters there like we had in the past:

Sink Side (top of the above image):

Dining Room Side:

"Stove Side":

You get a sense for how conventional my thinking was, to the point of comically over-engineering to try and shoehorn as much crap as we could in the same space. The awkward doorway was rather elegantly handled by the fact that that tall-ass broom closet (21" wide full-height cabinet in the diagram) is only 15" deep, so it would give nearly two feet between the door and where that lazy susan, with its 45 degree angled door would "guide" you into the room, helpfully saving the reproductive organs of any guy who staggers through that door without really looking.

But what a mess. Take the refrigerator. We knew that any fridge we bought in the Shiny New Future was going to be much wider than the 29.5" GE Home Depot special we had, so I had to plan for that with spacers that could be removed, or custom cabinetry that could be ripped out when we bought a new one. And all the cramming in of bookshelf space wherever we could fit it. And that half-height cabinet above the fridge slammed all the way to the ceiling, ugh. It was just forced.

At some point around the fourth or fifth sink we decided on, I could no longer shoehorn it into this design. We were wavering between a fully integrated Elkay with a built in steel backsplash and countertop, and the one we ultimately got, which is a more conventional, but still huge (FIFTY FOUR INCHES FUCK YEAH!) drop-in with left and right side drainboards. This simply blew my model all to hell. I spent a few days in Omnigraffle screwing around to make space for that full-countertop monster. At a basic level the problem was that the full steel countertop sink had to line up directly to the edge of a Youngstown cabinet on both sides, since it couldn't really overhang them. Everything under that sink would then need to be custom carpentry.

I had to find a third way. So I completely changed my outlook. That doorway is annoying me and is going to cause me to lose a testicle? GET RID OF THE DOORWAY. We're taking the thing down to studs anyway. Put the fridge there, where it will be convenient and out of the way. Let's make a huge (45 inch) entryway from that breakfast area, which will also let light flood in from the massive window out there.

So what we ended up with is a far superior layout both for foot traffic flow, and for kitchen workflow. We changed the layout from a "Galley" style kitchen to a more traditional 3-sided model with entrances to the breakfast area on one side and the dining room on the other. It adds a slight zig-zag to get to the living room & restroom, but it's really, really minimal.

That plan looks more like this, with the walls in the same order, starting at what used to be the sink area.

Here's the top-down:

Dining-room facing:

Sink wall:

As you can see, we /did/ save the front of that sink:

Stove wall:

As you can see from the photos, our contractor and his subs have done a phenomenal job of executing this design. It's exactly as we envisioned it from day one, and we couldn't be happier with their work. Stay tuned for the "Complete" complete photos which I'm sure will be coming shortly on Natalie's site.

Throughout this process Natalie and I have had slightly different goals. She wants the Ultimate Vintage Kitchen, which, I think we can all agree, has been achieved. I wanted to see how close I could get to a professional-quality, ergonomically correct and functional space. I think we've ultimately achieved that as well, with an industrial-quality sink and faucet fixture that fit perfectly into the retro aesthetic we wanted. It just took a mental break on my part to force the pieces together.

If anyone needs them, I'll update when I've posted the set of Omnigraffle stencils I whacked up to fit all this stuff together. They are proportionally correct to each other, and there are some in the stencils which didn't ultimately make it into the room, since they are "cabinets we own", but we just couldn't jam any more crap in there :-) If anyone can figure out a good way to represent these crazy corner cabinets and lazy susans in 2D I would very much appreciate your input. It's not like I live with a goddamn graphic designer or anything.


You know what, no, they don't


Because if people could remember what Earth was like 100 years ago, they'd know that the best things to happen in the last 100 years are based around the idea that if we all work together, then when we're old, we will take care of each other. And when we're young, rather than work like adults, we will teach our children with the collective knowledge of our species so we can continue to advance. We can afford to take care of those who can't work like the rest. Too much of the time, we choose not to take care of those people.
