Duply, Duplicity, and The Best Laid Plans

I was going to go to bed at a reasonable hour tonight. Honest.

The background is too long to explain, but in short I was trying to configure Bitwarden_RS, an open-source Rust implementation of the Bitwarden platform. I’m using an OpenVZ VPS on an older OS image, so Docker deployments of even the stripped-down Bitwarden_RS were out. So I grabbed their code, built it, and struggled through spinning up a service-to-service nginx configuration. I got everything working. But before I took a dependency on a new service, I wanted two things: self-starting (systemd to the rescue) and backups.
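For the self-starting half, here’s the shape of it. A minimal sketch of a unit file, with the service name, install path, and user all assumed rather than copied from my actual setup:

cat > /etc/systemd/system/bitwarden_rs.service <<'EOF'
[Unit]
Description=Bitwarden_RS server
After=network.target

[Service]
User=bitwarden
WorkingDirectory=/opt/bitwarden_rs
ExecStart=/opt/bitwarden_rs/bitwarden_rs
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable --now bitwarden_rs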

Wait, whoops.

It turns out that around January, Microsoft turned off the legacy OneDrive API. Apparently my backup alerts never fired, and I haven’t had a successful backup since January. Well, that’s not good. So over the last twenty-four-and-change hours, I’ve been trying to figure out what’s going wrong.

Problem 1: 410 response from OneDrive API (“gone forever”).

A bit of investigation, and it appears that someone implemented the updated OneDrive API for duplicity! It’s been in since… right after the last stable release.

Problem 2: Daily builds from Duplicity are still targeting the last stable release (0.7.19). I needed 0.8.0.

The dev channel builds don’t appear to be working. So I grabbed the source code.

Problem 3: The duplicity source code is under-documented.

No Makefile — but if you fumble around a bit, you discover the “setup.py” script actually does incredibly useful things — like building. And installing. I had to fumble through a few dependencies, but… yay.
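For anyone following along at home, the build-from-source dance is the standard distutils one. A minimal sketch, assuming the 0.8.0 source tree and Python 2 (the dependency list will vary by distro):

cd duplicity-0.8.0
python setup.py build
sudo python setup.py install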

Problem 4: The apt-get package for duply has a dependency on duplicity, and the duplicity install from a local build isn’t recognized by apt-get.

Problem 5: … and can’t be suppressed.

Okay, so I worked around it. The “equivs” package lets you create a stub package to fulfill requirements. So I used “apt-cache showpkg duply” to find out which version of duplicity was required, modified the faux duplicity package configuration, and used “dpkg -i duplicity.deb” to install it.

Problem 6: I couldn’t install it because duplicity 0.4.whatever would break other things.

FINE. duplicity 0.8.0 it is. “apt-get install duply” worked fine afterwards.
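For reference, the full equivs dance ends up looking something like this; the control-file edits and the resulting filename are my best reconstruction, not copied from my shell history:

sudo apt-get install equivs
equivs-control duplicity-stub
# edit duplicity-stub: set "Package: duplicity" and "Version: 0.8.0"
equivs-build duplicity-stub
sudo dpkg -i duplicity_0.8.0_all.deb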

Problem 7: duplicity did NOT work fine.

So I tried rebuilding it, reinstalling it. Installing pip. Updating pip. Checking that I had python3 installed. Installing python3 pip. Rebuilding duplicity. Learning “find . -name '_librsync.so'”.

Actual problem? setup.py, which I alluded to earlier, installs all of duplicity to /usr/local/lib/python2.7/dist-packages/duplicity/, but /usr/local/bin/duplicity starts with “#!/usr/bin/env python3” — note the version. Change duplicity to use python2.7 — and bam, everything’s working!
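In shell terms, the whole fix is a one-line shebang edit. A sketch (the sed invocation is mine, not from my shell history):

head -1 /usr/local/bin/duplicity    # shows: #!/usr/bin/env python3
sudo sed -i '1s/python3/python2.7/' /usr/local/bin/duplicity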

Problem 8: No, not really. Duplicity (and duply) report an error trying to reach the Microsoft Graph (the new API) — invalid scope. There’s exactly one blog post I could find that talks about how to set up the OAuth flow for duplicity to OneDrive, and I got the same error when I tried to follow its steps.

The solution, in hindsight, is painfully obvious. Delete the file ~/.duplicity_onedrive_oauthtoken.json, and the OAuth flow will be re-run! Unfortunately, you have to remember the file is there. It’s not documented anywhere I could find. New OAuth flow -> permissions -> everything works!
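The fix itself is tiny (the profile name here is the same one that shows up in Problem 9):

rm ~/.duplicity_onedrive_oauthtoken.json
duply blogs status    # the next duply run re-prompts for OAuth authorization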

Problem 9: No, not really. “duply blogs status” reports no backups!

duply/duplicity actually stores files in a OneDrive folder based on “username” — maybe there’s a setting to change that default naming, but that’s what it is. So the various files are stored in OneDrive:user@office-monkey.com@serverbackups/whatever/*. Except, they’re not. They were stored in OneDrive:user%40office-monkey.com@serverbackups/whatever. Apparently, at some point in the last few months, duplicity stopped URL-encoding the username. How did I find this out? I started a full backup of the largest data set! It backed up over a gig of data that had already been backed up. I moved all my existing backups over into the new path, and now all the backup patterns can see them!

Problem 10: I’m still up at 2:27am.

Solution: Good night!

How I Accidentally Torched My Wife’s Blog, and How I (Mostly) Recovered It


Let’s rewind a year. My wife’s first published book is in editing. We discuss her need for a professional author’s website. I take a look at what I’m paying for Windows hosting with Arvixe, and think about how irritating it’s been and how many things I can’t control, and how their service has been generally awful.

I’m a computer guy. I do random things, they generally work. Hardware, software — whatever. Rocks that do very fast math make sense to me. But… I wanted off Arvixe, and this was an excuse to make it a Project.

First I moved all our domains off Arvixe to Namesilo. Between their pricing and their functionality, I was happy. Specifically — because they didn’t offer hosting, no one was ever going to try to convince me to use their hosting services. Great! Then I started poking at VPS solutions. A step up from shared hosting, and nearly as much control as physical hardware for substantially less cost. I priced out a bunch of configurations at different vendors, and in the end went with RamNode (I have an entire set of data on prices and functionality, but it’s now a year out of date — aka, worthless). I will admit that RamNode’s documentation left something to be desired — but I was happy with what I managed to set up. With NameSilo providing nameservers, I didn’t need to worry about any functionality RamNode had in that regard — just a VPS.

And then I learned how to configure a VPS. I started with a LEMP stack. Yes. LEMP. Not LAMP. Instead of Apache, I used NGINX. Instead of MySQL, I used MariaDB. And, because I didn’t know better, instead of PHP I used HHVM. And I got it all working. A site per configuration file, linked from sites-available into sites-enabled. I set up duply to back up both the web content and the database. I scheduled crontab jobs to run duply incremental backups six days a week, and a full backup every Sunday. I backed up all the NGINX configurations on the same schedule. I even checked that the backups were working, and that the logs had no errors.
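That per-site layout is the stock nginx convention: the real config lives in sites-available, and a symlink in sites-enabled turns it on. A sketch, with a stand-in domain:

ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
nginx -t && systemctl reload nginx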

Over time, I found out that HHVM broke NextGen Gallery for WordPress, so I swapped in fastcgi-php for all the sites. I stripped out common code and put it into reusable snippets. I set up Let’s Encrypt SSL certificates for every site. When my father-in-law passed away, I set up a new domain with an email address, hosting, a blog, and content in under an hour — including a brand-spankin’-new SSL certificate. Because I had set up everything with scriptlets in the past, it was easy. I even threw Cloudflare in the middle.

Then I ran into a problem. I was trying to configure a MediaWiki installation — should be minor, right? — and discovered I didn’t know the root user password for the MariaDB installation. Well, damnit. I tried everything I could think of, and came up empty. But hey — if I uninstalled MariaDB, wiped the data directories MariaDB used, installed MariaDB again, configured it fresh (with the same WordPress-intended username and password), then reimported the same tables from the existing SQL dump collected by the duply scripts — I’d be golden! I’d know the root user password, so I could create a new user and new tables for MediaWiki, and the data would still be there for all the other sites. Perfect! I’d be using the backup strategy I so carefully put in place!

Note, gentle reader, that if that last sentence filled you with dread, you are WISE.

Now, I didn’t do this blindly. I double-checked that the backups were there, that the backup logs were clean, and that the blogs.dump.sql (such a great name!) was there from the last run.

cd /var/www 
ls blogs.dump.sql

Perfect!

So I uninstall MariaDB, wipe its data directory, back up its configuration, reinstall MariaDB, make sure to pay attention to the password this time, recreate the WordPress-expected user, and import blogs.dump.sql. Total time: under five minutes. The Uptime Robot alarm didn’t even trigger. I checked the sites, and everything still looked good.
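For the record, the sequence looked roughly like this (Debian-flavored package and service names assumed, and every bit as destructive as it reads):

systemctl stop mysql
apt-get remove --purge mariadb-server
rm -rf /var/lib/mysql    # wipe the data directory. Destructive!
apt-get install mariadb-server
mysql -u root -p < /var/www/blogs.dump.sql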

I then had to spend entirely too much time getting test.office-monkey.com to work, as default ports, default servers, and mistakes in reloading nginx configs annoyed me just enough to keep getting it wrong — but that was a side project. The site was working.

Until two days later, when my lovely wife told me she couldn’t access the admin console any more. Weirdly, it took her to the “signup for a new network site” page — not a 404, not a domain registration error. Then I just started making mistake after mistake…

I did two things concurrently. I spun up the host default URL to check configurations for her domain, and launched Cloudflare. To address the problem with Mediawiki, I had spent a LOT of time Purging Everything to try to address wrong pages being served. So without thinking too much about it… I purged everything for my wife’s blog. Right after I clicked it, I tabbed over to the host URL — and saw nothing was there. You can’t cancel a purge.

Every monitor on her blog tripped. I took a harder look at blogs.dump.sql:

ls -l

blogs.dump.sql was dated from last July. It most definitely did not have anything from her professional website — except the placeholder post.

I immediately started scrambling. I grabbed the entire contents of the SuperCache directory, which had most — but not all — of her pages cached. I grabbed it for my blog as well, but then I made mistake number two — I turned on SuperCache for every blog in the network — turns out, that clears the previous state (bye, every other site I was hosting!).

At some point in the middle of this, I went upstairs to tell her I may have just lost everything from her blog. She was surprisingly calm.

Now, on the plus side, all images, themes, and plugins were still there. Posts were not. BUT!

The SuperCache files? They had hints. The URL of a CSS file told me which theme was in use; a filename told me which icon she had picked. Between those, I was able to get the site to “look right” again. Then I started copying and pasting. I’d grab the raw HTML from the SuperCache page, put it into a new post, then set the “Publish Later” date to the historical date she had originally published it. That actually got me about 90% of the way there. But then I discovered not all the posts were cached.
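If you ever need to pull the same trick: WP Super Cache keeps its static copies as plain HTML on disk, one directory per URL. A sketch for finding them (the WordPress root here is a stand-in):

find /var/www/blogs/wp-content/cache/supercache/ -name 'index.html'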

But before I had started on this series of unfortunate events, I had poked at her Cloudflare analytics data — trying to get some idea “when” taking her site down for maintenance would be less impactful. While spelunking through it, I had noticed something everyone wants to see for a professional website: web crawlers.

So I started searching every search engine I could think of for the magic “cached page” result. Bing and Google helped. Baidu did not. And if you want to be depressed: https://en.wikipedia.org/wiki/Web_search_engine — take a look at how many are inactive. DuckDuckGo and Dogpile weren’t really helpful, either. But via judicious copy and paste, I was able to recover all but three blog posts. I even recovered her pages in addition to her posts, and linked them all into a menu that matched the previous structure (yay, SuperCache!). Interestingly, the Wayback Machine had indexed her site — but not saved anything from it. Jerks.

All told, less than two hours after I screwed the pooch, I had her blog back up at 98% — and all the posts that were missing were from almost a year ago.

Unfortunately, that was one blog out of several.

My personal blog (this one!) was down. So was her old personal blog. So was her dad’s memorial.

I postponed that while I went to make sure everything was backed up. I checked that the duply jobs were scheduled, and then that they ran. Everything worked perfectly.

So, why hadn’t they run in OVER A YEAR?

So, here’s the undocumented secret that caused all this mess (that, and me not looking before I leaped, or checking ls -l to make sure the dump had actually been updated):

10 1 * * * /usr/bin/duply /root/.duply/blogs incr > /var/logs/blogs.incr.log

Doesn’t look like much could be wrong with it, right?

Here’s the dirty part: duply {name} incr doesn’t run pre.

So everything was being correctly backed up, and versioned, and old versions cleaned up weekly — but the MySQL/MariaDB mysqldump command located in /root/.duply/blogs/pre was never run. I was particularly proud of that script — I had set up PHP to render the username and password from the WordPress config files, so the settings wouldn’t live in two places. But without pre being run, the duply backups had only backed up the files. Admittedly, all the files, but none of the post data.
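The pre script was something in this spirit. This is a reconstruction, not the original file, and the paths are invented:

# ~/.duply/blogs/pre (hypothetical reconstruction)
DB_USER=$(php -r 'include "/var/www/blogs/wp-config.php"; echo DB_USER;')
DB_PASS=$(php -r 'include "/var/www/blogs/wp-config.php"; echo DB_PASSWORD;')
mysqldump -u "$DB_USER" -p"$DB_PASS" --all-databases > /var/www/blogs.dump.sql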

So, first thing, I figured out how to fix that. I switched to daily use of the more basic “duply {name} backup” command, with the severely underdocumented full-if-older setting. I added a weekly purge command to wipe anything older than the last two full backups. I confirmed that duply {name} backup always runs pre.
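In duply terms, that’s a couple of conf settings plus two cron lines. These are real duply knobs as I understand them, but the values and log paths below are illustrative, not my actual config:

# in ~/.duply/blogs/conf
MAX_FULLBKP_AGE=1W
DUPL_PARAMS="$DUPL_PARAMS --full-if-older-than $MAX_FULLBKP_AGE"
MAX_FULL_BACKUPS=2

# crontab: daily backup (pre always runs), weekly purge of old full chains
10 1 * * * /usr/bin/duply /root/.duply/blogs backup > /var/log/blogs.backup.log
10 3 * * 0 /usr/bin/duply /root/.duply/blogs purge-full --force >> /var/log/blogs.purge.log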

Then I tried to figure out what to do…

NAS Review: QNAP TS-563: I’ll Keep This One

TL;DR: There’s a reason QNAP is one of the top players. The AMD chipset means no Plex hardware transcoding, but it’s still a lovely box.

Preface: I started trying to replace my 2-bay QNAP TS-220 when I realized that it wouldn’t support my new 8TB hard drives without new drive caddies (whoops). The ARM chipset was getting a bit old, and it was time to upgrade to something that will still be supported in five more years. Or two. Whichever. After months of shopping, I tried out the TerraMaster F5-420, and it just… wasn’t great. Some good hardware and decent performance (or so I thought), but a damaging level of observable software engineering naivete (you can read about it here).

Then, while I was returning the TerraMaster, B&H ran a sale on the QNAP TS-563. I had paid $299 for the TerraMaster, usually priced $499. The TS-563 was on sale for $409! Yes, it was $100 more expensive than I had paid, but man, that was a 5-bay QNAP at a lower price point than many of the 4-bay units I had looked at. Sure, it had an AMD chipset, but that was mostly going to impact power draw (still less than a desktop) and Plex encoding (which sometimes suffers because the software algorithms aren’t as good, and which is so situationally dependent for someone who doesn’t currently have a Plex setup or any recordings that it’s basically a distraction). It was “only” the 2GB model, but I found you could upgrade the RAM — so I would, if it became necessary.

B&H does not offer Prime shipping, but I could afford to wait a week. I started setting it up the day after I got it, but it took a few days to put this review together.

PACKAGING: 4/5 – A slightly prettier outer box than the TerraMaster, but slightly harder to unpack — they made it deep, while TerraMaster had gone “wide.” In addition, their accessories weren’t as nicely packaged as TerraMaster’s, and I had to actually go look for a screwdriver!

PHYSICAL: 4/5 – Metal trays. Both cases were nice, but the metal caddies make a world of difference — even if it’s meaningless once they’re assembled. I do like the internal power supply as well. Even though the TerraMaster trays had come up higher on the sides, the QNAP trays were just more substantial. However, bay 3 — and only bay 3 — had issues with insertion. Not at the backplane, but at the front. I tried several times, and always had issues — but just with Bay 3. Bonus points to QNAP for not labelling the trays as being associated with specific bays.

SOUND: 4/5 – Maybe I just expect too much. The device is nearly silent on idle, but drive noise is audible when under load.

INSTALLATION: 4/5 – QNAP, if anything, has gotten easier to install. Their website (accessed via SSL) gives you a bunch of options for setting the device up — including one that’s entirely cloud based! I opted out of that option, and used their QFinder application. Nice installer, signed installer.

USAGE: 2/5 – Logged in remotely via my default browser, which QFinder invoked correctly, completed all the setup, including updating the firmware. No hiccups. No goofs. Possibly a little too helpful in the UI, and too much going on, but I’m borderline competent — someone who knows more might have appreciated the extra information (“What’s a Thin Volume? What’s a Thick Volume?”), while a complete novice would likely have loved how much help they offered through the web flow. The web flow was also really nice — and bug free. No CSS errors, no minor mistakes.

PERFORMANCE: 4/5 – Faster than the TerraMaster! Surprisingly slow on Thin Volumes! Which I may never use, so the 4/5 is somewhat spurious.

Read performance saturates the Gigabit Ethernet connection (roughly 100MB/s usable). I tested via an isolated subnet behind a router (the Archer C7) supporting no other devices, connected via CAT6 cables. I used LAN Speed Test (registered!) to try a random assortment of 100 file sizes between 2MB and 5GB written to the default public share on the device, with Network Recycle Bin turned on. LAN Speed Test writes a file, then reads it back to verify it, then deletes it. I tried four different RAID configurations, all with the same five 8TB drives; in all cases I waited while the drives configured, then restarted the NAS, then waited until the web interface indicated the array was “Good.” I tried: RAID5, RAID5 with encryption, RAID6, and RAID6 with encryption. By the time I got to RAID6 I did have a few other things to do (attaching a bad USB device to a system can kill even network I/O — did you know?), so the data is a bit noisier on that test.

Encrypted results vary far more wildly than the unencrypted results. Unencrypted READ speeds for both RAID5 and RAID6 hovered above 100MB/s at all file sizes. WRITE operations on both RAID5 and RAID6 were about 85MB/s, clearly not saturating the network bandwidth, and probably constrained by the requisite parity calculations. Surprisingly, RAID6’s two distinct parity calculations didn’t impact throughput more significantly — but I don’t have CPU utilization information for this run, so I can’t guarantee that two cores were involved in RAID6 versus only one for RAID5. The noise in the RAID5 write data makes me wonder if I did something wrong, but the average is really clear. RAID6 I already acknowledged I harmed a bit, but removing the obvious noise makes a pretty clear picture: RAID6 doesn’t substantially impact READ or WRITE performance.

Encrypted READS were still saturating the network on both RAID5 and RAID6. WRITE speeds dropped by about 5MB/s, except for some data outliers on RAID5 that I can’t explain. Still, average write performance was around 80MB/s.

Overall, average READ/WRITE performance trounces the TerraMaster: READS consistently saturate the network, with WRITES still at acceptable levels.

Wait! What’s that bottom row?

In addition to RAID levels, QNAP also offers mechanisms to virtualize constrained disk systems on top of a RAID array. They can be used for quota enforcement and block-level snapshots… and I’m sure they can be used for other things, but please see earlier where I admitted I hadn’t read all the documentation yet. I will likely be directly using RAID5 or RAID6, but I’ll read before I make a final decision. I decided to attempt a “Thin” volume solution, configured on top of the RAID disk storage. I chose Thin as it is “dynamically resized,” and I figured it had to have worse performance than the Thick (fixed size) option.

It appears that WRITE performance on Thin Volumes with Snapshots is even worse than on encrypted RAID6.

So, will I use Thin Volumes? Probably not — but again, read the documentation. And, in every case, it’s still more performant than the TNAS F5-420.

APPLICATIONS: 5/5 – Wow. QNAP has most apps I could want! Well, they no longer have CrashPlan, but that’s not their fault.

CONCLUSION: 5/5 – I’m sure there are better options out there. I will, at some point, want to upgrade the minimal 2GB of RAM. But QNAP delivers an altogether killer package, with room for a technologist to play (Virtualization support!) as well as an abundance of information for the novice. Even though I will never use the majority of the apps, they are there. The performance easily tops the TerraMaster I had to compare it to, and the software package is far more polished — even if I did miss the screwdrivers. And Bay 3. WTH, Bay 3? All in all, I feel confident that my data will still be there in the morning.


NAS Review: TerraMaster F5-420: Some nice touches, but not worth the risk

(Amazon won’t let me write a review because I got a price other customers can’t get — I doubled up a publicly available coupon with a Lightning deal. Not my fault. But I took the time to prepare to warn people, so I’m writing this anyway.)

TL;DR: Seemingly acceptable hardware perceived as untrustworthy and unreliable due to numerous unprofessional and unreliable software engineering behaviors on display. I just can’t trust it.

My 2TB 2-bay NAS is getting a bit long in the tooth. For many things, my QNAP TS-220 still works great after four and a half years. But capacity-wise, it’s no longer there. Along with half the NAS-friendly US population, I picked up the easily shuckable BestBuy-exclusive WD Easystore external 8TB hard drives — but discovered that my TS-220 came with incompatible bays. Rather than spend $80 on new drive caddies, I started a months-long idle browsing process, seeking something newer and more powerful. Mostly I had been focusing on QNAP and Synology, because who hasn’t heard of QNAP and Synology. I was wrestling with the price point, however — especially as I jumped from looking at the 453B to the TVS series. So when TerraMaster popped up in “not as well known” reviews, I did remember the brand, but considered them only casually. The lack of documentation and reviews means there aren’t many forum posts or reviews even mentioning the F5-420 (go look, and see how quickly you start getting results for the F2 instead).

Then I hit pay dirt. The Amazon app, that evil money-sucking pocket-demon, alerted me to a $99-off lightning deal on the Noontec TerraMaster. I poked around, and was astonished to see NOONTEC was ALSO offering a $100-off coupon — and free headphones, but I could only redeem one coupon. $300 for a FIVE-bay system that should be roughly as powerful as the QNAP TS-453 I had been considering, and the higher-RAM model at that? Done!

It arrived the next day! And sat for almost a month before I had time to try it out. I scanned my hard drives, discovered four of the six 8TB drives were actually WD REDs, meaning there was only a risk the case wouldn’t support the two WD white-label drives. Finally, I had a chance to install.

PACKAGING: 5/5 – Some wasted space, but generally good layout. Bagged screws identifying 3.5″ (“HDD”) vs. 2.5″ (“SSD”) compatibility (with spares left over); a SCREWDRIVER! Taking it out, it just feels well put together. I’m a big fan of a metal case!

PHYSICAL: 3/5 – … And then you discover the non-locking trays (known) are flimsy plastic affairs. They verged on wiggling while I was mounting the hard drives. I appreciated the dual ethernet ports, although I don’t currently have a need for them. Given the size of the overall unit versus the size of the individual drive bays, I’m not surprised they had to use an external power brick — but I still don’t love external power bricks. The biggest physical drawback is definitely the trays, especially when contrasted with the metal case. I always felt I was on the verge of destroying the tray when I put it in or took it out. Hopefully, though, you wouldn’t have to do that too often.

SOUND: 4/5 – The sound from the system itself (the fan, mostly) wasn’t noticeably audible at any point. The hard drives, however, were. I can’t be sure that a better job could have been done, but it definitely didn’t feel like there was any attempt to dampen the interior sound.

INSTALLATION: 1/5 – Up until I went to configure the device, I only felt slightly awkward about using the hardware. I’ve backed Kickstarter campaigns — hardware is HARD. A little bit of “immaturity” in case design is a minor problem, and I was impressed with the ease of connecting to the backplate. So, there was still a chance. Then I went to their website (http://start.terra-master.com)… and started wondering what level of company I was working with. I have an SSL certificate. Sure, it’s from Let’s Encrypt, but it’s not hard to do. I tried to visit their HTTPS page, and it was rejected. Their corporate website (www.noontec.com) has the same issue. I’ve seen startups with more secure websites. Now, admittedly, I wasn’t sending private information over the wire — but I was left with an increasing level of skepticism. I selected my model from the drop-down (seemingly randomly ordered), and clicked next — where I got my second surprise. The download link for the manual had a sibling link (“Download Link 2”) pointing to Dropbox. From past experience, I know that internet connections from China to the rest of the world are not 100% reliable, and I can understand why they’d want a backup — but using Dropbox just registered as surprisingly unprofessional. A second VPS, perhaps, with nothing but the long-lived content? There are ways to do this…

I clicked through, and downloaded TerraMaster_TNAS_for_win_V3.0.zip — again, with the second link being via Dropbox. (A quick diagnosis now makes me think they’re using their “connect remotely to your TNAS” functionality to host these files… which is surprisingly clever, but doesn’t save the feeling.) What’s in http://dl.tcloudme.com/cn/TerraMaster_TNAS_for_win_V3.0.zip? Huh. That’s… scary.

In my day job, I program. Or engineer. Or sometimes go to meetings. But for several years, off and on, I’ve been using a product called “Visual Studio” to write software. And what was in that zip file was CLEARLY the zipped output from a build. Some highlights:

  • TerraMaster.vshost.exe – VS creates a “{appName}.vshost.exe” to serve as the virtual process that it can debug through. It started in VS2005, and it only really affects engineers… unless your software team is so amateur you don’t pay attention to your build artifacts. For the curious: MSDN article on the subject. Now, this file won’t break the application — it just shouldn’t be there.
  • TerraMaster.exe.CodeAnalysisLog.xml – Okay, this one might be more concerning. If I were going to attack this app, this gives some idea where. It calls out insecure — or possibly insecure — coding practices. Now, this isn’t the software running on the TNAS itself, but it’s a hint of an engineering culture. Good: they use code analysis tools! Bad: they seem to be making some really curious choices about P/Invoked APIs…
  • Newtonsoft.Json.dll – Hey! I use this a lot! Version 9… wait, that’s at least a year old. Might not be an issue, but still…
  • TerraMaster.application – Well, that makes no sense whatsoever. That’s a ClickOnce deployment manifest… For those less embedded in Windows development: ClickOnce isn’t great for end-user software, but it’s killer for Line-of-Business applications. ClickOnce provides you with a mechanism to create a stub application which will automatically update on publication of a new version. Think App Store-type model, but without central control. However, this manifest doesn’t even point to TerraMaster’s insecure website, so there’s zero reason for it to be there.
  • Copyright date: 2015. Wait, what? Version 1.0.0.0.
  • TerraMaster.exe.config – Well, that’s mostly normal and boring and why are there fields down here for UserName and Password? Is that how this is configured to store the information locally?? And why is jitDebugging=”true”?
  • TerraMaster.pdb – Debug symbols? Really?

None of these things are inherently dangerous — but most of them shouldn’t be there! Newtonsoft.Json.dll is a REASONABLY new version, and upgrading arbitrarily can introduce risks, so that’s okay. And TerraMaster.exe.config is even expected. But I would never expect to see a PDB file in a shipped product (you can capture a memory dump without it!). The CodeAnalysisLog and the .vshost.exe are just ridiculous. Someone with less knowledge might just wonder which to click (which is what they tweeted at me on Twitter when I complained), but for me, it was a bunch of signs of an amateur engineering organization.

UPDATE: As I’m writing this, I went back through the start.terra-master.com flow — and version 3.1 of the software has since been released! This has a proper installer and MSI, but, for me, the damage is done.

USAGE: 2/5 – So I used the app to find the IP address for the TerraMaster F5-420. It was reasonably fast to do so. The app hinted at other functionality (file management), but I wasn’t inclined to try and see if the apparent images were actually clickable. I just double-clicked the listed device, and it launched Internet Explorer. Not my default browser, mind you: it’s apparently hard-coded to launch INTERNET EXPLORER. INTERNET EXPLORER. I’ll say that again: INTERNET EXPLORER. Yes, if you’re on any version of Windows from the last fifteen years you’re guaranteed that it will be installed… but Windows does a kick-ass job of executing “https://255.255.255.255” based on the default handler — at least for the last ten years. Who the hell launches Internet Explorer?

I grab the web link and hop over to Firefox, and start the configuration process. It asks me to register my own email address, then allows you only 30-60 seconds to receive the email and enter the verification code. This is email, which is by definition a best-effort, non-real-time communication medium. I timed out twice and then gave up (another black mark). It identified my installed drives (both — I only had two installed on this first attempt), and prompted me to select drives and define which RAID level I wanted. I clicked through, there was a nice progress ring, and then I was in. It then asked if I wanted to update the firmware to 3.1! Sure! If it was broken, I wanted to know anyway. I approved the installation, the device restarted, I went back in and… WTF. Launch the Control Panel, and there’s an unformatted list of links. Click into one, and it’s a little better — the content is there, even if broken. I deleted the RAID cluster I had temporarily formed, and turned the entire unit off to add three more drives. Turned it back on… and had to go through the initialization steps again. Apparently it doesn’t do well if you delete all the storage, which is alarming. Go to try to create a RAID6, and the UI just… doesn’t work. The drop-downs are visible, but changing the value doesn’t always work. Try to enable encryption, and SOMETIMES it prompts you for your password. SOMETIMES. The Create button doesn’t consistently work, either. I was rapidly losing patience at this point.

I gave up, and came back the next day. I opened the F12 debug tools in Firefox to observe network activity (my theory was that it wasn’t loading either a CSS or JavaScript file) — but no 404s. I continued trying to use the broken UI, and almost threw the entire unit out the window. Then I remembered there was another option: I hit Ctrl+F5, which forces the entire site to reload, bypassing the cache… and things magically started working. What does this tell me? Noontec’s engineers don’t properly version their resource files, and set an entirely too-long TTL for a local-network device. The UI was broken because I had logged in — once — the day before, to the previous version. They have no version qualifier in their resource paths (which would automatically force a reload), and have a minimum of several hours in their time-to-live (I assume at least 24 hours) for a device typically on a local network. Is this bad? No. Again, it’s amateur. On a public website, you wouldn’t go back to it — depending on when you last visited, the entire site would stop working. For static content, this isn’t as big a deal. For dynamic, interactive content, this is crippling — hence I almost couldn’t test it out. Once I figured out THEIR bug, I was finally able to configure a 5-drive RAID array and get down to testing it.

PERFORMANCE: 3/5 – This may be an unfair rating, and partially arbitrary. Once I have another device I can configure in a RAID5/RAID6, I’ll update this.

Read performance pretty much saturates the Gigabit Ethernet connection (roughly 100MB/s usable). I tested via an isolated subnet behind a router (the Archer C7) supporting no other devices, connected via CAT6 cables. I used LAN Speed Test (registered!) to try a random assortment of 20 file sizes between 2MB and 5GB written to the default public share on the TNAS device, with Network Recycle Bin turned on. The TNAS was not otherwise doing anything, nor was the source desktop. LAN Speed Test writes a file, then reads it back to verify it, then deletes it. I tried four different RAID configurations, all with the same five 8TB drives; in all cases I waited while the drives configured, then restarted the TNAS, then waited until the TNAS web interface indicated the array was “Good.” I tried: RAID5, RAID5 with encryption, RAID6, and RAID6 with encryption.

Encrypted results vary far more wildly than the unencrypted results. Unencrypted READ speeds for both RAID5 and RAID6 hovered above 95MB/s at all file sizes. WRITE operations on both RAID5 and RAID6 were about 50MB/s, clearly not saturating the network bandwidth, and likely constrained by the requisite parity calculations. Surprisingly, RAID6’s two distinct parity calculations didn’t impact throughput more significantly — but I don’t have CPU utilization information for this run, so I can’t guarantee that two cores were involved in RAID6 versus only one for RAID5.

Encrypted READS were closer to 90MB/s — with a single client. WRITE speeds similarly dropped by about 5MB/s.

Because RAID5/Encrypted had a lot more noise than any other configuration, I did a second run against RAID5 with encryption enabled — but this time with 100 samples.

While these numbers bore out the same general indicators, the run was also very noisy, and led me into my next set of data.

With encryption enabled, CPU utilization on all cores is hammered at about the same rate:

If you notice, all four CPUs show a nearly identical noise pattern, with frequent spikes to 100% utilization. What was I doing? I had the TNAS web interface open to capture this data, and was running the aforementioned RAID5/encrypted speed test. This tells me the device may support RAID and encryption — but it’s not designed for it. But this is a $500 (or to me, $300) device, so the fact that it’s not built for hardcore processing isn’t a dreadful black mark.

So why 3/5, and why the equivocation? I believe the CPU load and the saturation point on the RAID5/6 writes are too high. But until I have another “recent” device to compare it to, it’s just a whim. Read speeds seem fine. I expected RAID5 write speeds to be higher than RAID6’s (RAID5 should be 3x slower than a direct write due to the parity block, with RAID6 4x slower). But… I could be wrong.

APPLICATIONS: 1/5 – Just as with QNAP and Synology devices, TerraMaster OS devices (“TNAS” devices) offer a list of web-installer apps for extending the functionality of TOS 3. The list of available apps is nearly impossible to find online (I couldn’t find it while shopping, at least), so here’s the list. Please note the number of apps at “v1.0”, meaning they’ve NEVER BEEN UPDATED.

  • Emby Server v3.2
  • Elephant Drive v3.1
  • Transcoding v1.0, Description “null”
  • Mail Server v1.0
  • MySQL Server v1.0
  • Transimission v1.0
  • WordPress v4.8.1 (current: 4.9.2)
  • SugarCRM v6.5.23
  • Apache Tomcat v1.2
  • Node.js v5.8 (current: 8.9.4 LTS, 9.4.0 Current)
  • rclone v1.37
  • iTunes Server v1.1
  • Aria2 v1.32
  • DLNA Media Server v1.1.5
  • Net2FTP
  • Gcc Build tools v1.0
  • SVN Server v1.0 (“Version management tool, which is frequently used in the software development project and can realize storage, sharing and privilege management of history versions such as codes and documents.”)
  • Java Virtual Machine v1.0
  • Plex Media Server v1.10
  • Dropbox Sync v41.4.80
  • Clam Antivirus v1.0

In short: 21 apps. One without any description. At least two painfully out of date. One with such poor information (“null”) that I’m not even sure why I should install it. Java is currently on version 10, not 1.0. I didn’t bother tracking down the others. The selection is just dismal — but it is there. You can even log in via SSH and install your own apps — but if you want apps that are already configured to work with your NAS? You’re out of luck.

CONCLUSION: 2/5 – A good external case and polite packaging (they include a screwdriver!) were brought down by cheap plastic trays; solid READ performance was brought down by unexpectedly low WRITE performance; and the nail in the coffin was the numerous small unprofessional choices their engineering team made.

I mentioned the installation package issue and the CSS issue to a friend, and his response was, “This is the product you’re going to trust your data to?” He’s right. When it comes right down to it, I do not feel comfortable trusting this unit with my data — even though I have other offsite backups. In my delusional moments, I want Docker and virtualization support and to be able to play with it — but I am not going to complain that this device doesn’t have what it doesn’t have. But a certain level of polish is expected for me to trust irreplaceable data to a unit, and there was just one too many instances of rolling my eyes and shaking my head at the “amateur hour” exhibition. Even the new 3.1 discovery software — now with an installer! — is unsigned, and the MSI is authored by “Default Company Name”, created on “6/21/1999”. This might be a fine device if I just wanted to play with it — but I can’t trust it.

Hello, nosey people!

In the process of updating my résumé, I realized that in this increasingly Facebook-social-networking time, people might very well take the presence of a personal domain in the email address of an applicant as an invitation to look at their life.

Go ahead. Feel free. I mostly write about my cats, it seems, and I haven’t had a chance to update in over a year.

For the record, my cat is awesome. I’ll take a 5% pay cut in exchange for being able to bring him to work with me each day (not really).

(I’ve also managed to accidentally trash my theme while trying to update my WordPress installation… but really, you shouldn’t be thinking about hiring me to do visual design, anyway. You should hire Albert Lee of Yellow Devil Designs.)

Random Notes From a Random Day

  • Passing a bus on a highway is easy. Trying to pass a bus before you get to a particular exit can be more challenging.
  • The vending machine supply company has caught on. We now have two full rows of chocolate-frosted Hostess Donettes in our vending machine. The one half-row sold out every week.
  • The generic burn-aid cream in the first-aid kit in the kitchenette smells funny.
  • The largest sterile gauze pad in the first-aid kit in the kitchenette is just about the right size for the burn on my left forearm.
  • Running a 5k after only two weeks of physical therapy requires an increase in the quantity of ice packs we own.
  • Watching Doctor Who lends me a weak British accent at random points throughout the day.
  • I have two lunch meetings tomorrow. I’m going to eat at the first one (with a friend), and perhaps snack at the one featuring my new boss.
  • Mandrina shaved four minutes off her 5k time in her first race on Saturday. I added four minutes to the time from my last race. The universe likes balance.
  • The single-serve “iCup” vending machine in the kitchenette provided a hot-cocoa tease today. It cycled, and then… nothing. I used the other iCup machine, and got my cup of coffee.
  • I have a lot of work to do this week.
  • I have now almost set up a photo gallery so I can upload any of the oodles of pictures I’ve taken over the last five years.

Twitter Updates for 2010-01-06

  • How long do I have to wait for the other party to call in for a teleconference before I can just throw in the towel and go on with my day? #
  • @cytherea I went with ten minutes. No one else called in. #
  • Three weeks ago I was looking forward to the meeting that was about to start. An opportunity to shine, etc. Now all I want to do is skip it. #
  • @pfqrst But November is over! #
  • After 7 hours, my 12 hour maximum strength Mucinex starts wearing off. #
  • Drunk Frisbee Golf on Wii Sports Resort? Okay! (I'm the little-bit-drunk one.) #

Twitter Updates for 2010-01-05

  • One of my coworkers doesn't think I can make it 72 hours (down to 70 hours now) without losing my cool completely. He may be right. #
  • What drink goes with Italian (eggplant parmesan, I think), black and white cookies (none to share, sorry), and wanting to kill coworkers? #
  • (The euphemistic kind of "kill", obviously.) #
  • (I'm not stupid enough to Tweet any intended homicides. I'd just change jobs instead.) #
  • (Note to self: Update resume…) #
  • I don't want to know what my wife thought I was going to do when I pulled my wallet out to get my work ID out. I fear it would be expensive. #