Opinion: Petition with 3.9M+ signatures to overturn the election is a road that undermines democracy

What We Know: Petition to overturn the election, 3.9 million signatures

This time, I’m posting mostly opinions, but here are the key facts I’ll be commenting on:

On Change.org there is a petition to overturn the election in Clinton’s favor.  At the time of this posting, it has been signed by over 3.9 million people:

Electoral College: Make Hillary Clinton President on December 19

Opinion: Reasonable sentiment, wrong solution

Many Americans have long argued that the Electoral College is a broken system and have called for its reform or outright abolition.  Reform isn’t a crazy idea.

Hillary Clinton did win the popular vote, and many are saying that in a democracy this should be enough to make her president.  At first blush, this too seems quite rational.

I voted for neither major party candidate.  So, why would I oppose flipping the vote?

I am against so-called “faithless electors” voting contrary to how they were bound in the election.

To me, this is at best a dereliction of the electors’ duty to represent the people and the states.  Furthermore, I believe we would see serious (hopefully unintended) consequences.

Yes, Hillary Clinton would be president. Then the other half of the country would feel that the rules and laws everyone thought they were playing by had been trampled.

If you think there’s anger today, imagine the level of anger we would see from tens of millions of people who would then feel betrayed.

The very foundations of the Constitution and our government as a whole would be called into question, and respect for law and order would devolve.

We need to keep the peace, and we need to work on ways to improve our institutions of government.

We are already seeing some glimmers of hope that Donald Trump understands compromise and will work toward it.  All presidents break campaign promises.  Meanwhile, let’s not forget that the Republican party, while in power, is far from unified.

I do believe at this juncture that America’s system of checks and balances can and will survive almost any of Trump’s detractors’ worst-case scenarios.

That’s precisely why we need to maintain the integrity of our government.  The Electoral College, for better or worse, is part of that foundation today.

Let’s preserve as much of the integrity of our system as we can to protect all of us.

Is the Electoral College Totally Wrong?

I do support some aspects of the Electoral College that favor concepts like states’ rights.  The more local control we have, the freer people are in many ways.  Your vote “counts more” when more power sits at the state or local level.  How many decisions do you want made by someone across the country?

To maintain this sort of state and local autonomy, each state, no matter how small, needs a certain voice and power.

Our Founding Fathers were not perfect people.  Yes, some did own slaves.  The rights of native peoples were utterly trampled.  They were probably pretty hypocritical at times.  But … they also gave us a system that’s worked pretty well for the last couple of centuries.  They did more good than harm for our nation as it is today.

So, let’s give the tried and true ideas the respect they deserve.  I think a read of things like The Federalist Papers (I read them in high school) would remind us how much effort they put into finding the right balances.

Respect the rights of the majority AND the minority.  We should always strive to evolve systems to do that better.

From Austin, Texas, I bid you peace.

Why I’m removing the “Fake Protests” Twitter post

UPDATE:  Yup, I’m pulling it.  Details below.

Dear Twitterverse, Girls and Boys, Republicans, Democrats, Libertarians, Peoples of the Green Party and More,

Yes, I got it wrong.

While there’s no such thing as absolute certainty, I now believe that the buses I photographed on Wednesday, November 9, were for the Tableau Conference 2016 and had no relation to the ongoing protests against President-elect Trump.

This information was provided to me by multiple professional journalists, and I do still have some faith in humanity.  🙂


Right Context, Wrong Facts?

I remain skeptical about just how much manipulation has occurred behind the scenes of many political events, but I do believe that these specific buses were used for a technology conference — nothing more.

I don’t know whether Donald Trump was talking about me (his comment came 24 hours after my post), but he’s among many with doubts:

So, Why Remove?

I initially believed it was in the best interest of everyone to keep the Tweet live while augmenting the story.  I will indeed post a screenshot for posterity.

The realities of Twitter mean that many people see the Tweet without seeing my follow-ups and corrections.  If I leave the Tweet up, retweets of the original will keep circulating without the corrections alongside, and people will not know that the Tweet is incorrect.

As I have said before, I value the truth.  I will remove the Tweet so more people can have a higher proportion of truth in their lives.  I also want us all to refrain from repeating information that is likely untrue so that we can have greater credibility when our evidence is stronger.  (Less “boy who cried wolf”)

Why Not Remove?

There are some risks in removing this Tweet.

They include:

  • It gives the impression that censorship has occurred — something I’m against
  • Some people will believe I was pressured to remove the Tweet or did so purely out of self-interest
  • It may reduce the dialog among all of us

Let’s not be afraid to say things when we aren’t completely sure, but let’s provide the right qualifiers and probabilities when we can.

Rapid discourse won’t always be fact-checked.  If it had to be, much of it simply wouldn’t occur.  That could be as bad as occasionally getting it wrong.  It’s not journalism, and it’s not to be held to the same bar.  (But this only works when everyone understands the “bar” is lower and exercises skepticism — another longer conversation for another time.)

Many Thanks!

I appreciate the conversations I’ve had with many in the “Twitterverse”.

I have tried to be magnanimous to many, including those who disagree with me.  I value our ability to discuss with each other in a civil and respectful tone regardless of where our views may stand.  I’ve seen a good amount of that in these past few days, and I’d love to see more.

An Apology and a Promise

I would like to express my sincere apologies to anyone who feels misled.  I can assure you my intentions have always been for the best.  Now that I know just how far these things can spread so quickly, I’ll be more careful to give you the right information should there be a next time.

To the Future!

I am neither a professional blogger nor a professional journalist. I do hope to find more ways to make a difference.  Being involved in political discourse is vital to democracy.

All the best,

Eric  🙂

Original Tweet here:


My “Fake Protest” Claims and America’s Angry Division

On Wednesday, a few minutes after 5 pm, upon leaving a meeting near downtown Austin, I chanced upon a large group of buses parked just east of I-35 on 5th Street.  I snapped a few pictures and was on my way.

Later that day, I noticed news reports of protests downtown and near the University of Texas campus.  I had dealt with closed streets and unusual traffic patterns toward the south end of downtown (below 8th Street) that day, and some pictures of the protests looked more like the south end of downtown than the area near the capitol, so I presumed the buses had something to do with the protests.

Casually, I texted a few friends and then made a Twitter post.  I post on Twitter just a few times a year, and until yesterday I had about 40 followers.


The response was massive (about 15,000 retweets in the first 36 hours), and even the local news commented:

Fox 7 News: Protests across US and Austin accused of being fake by some on social media

And, I’m told, for a fleeting moment it made the front page of Reddit as well:


Was I flat wrong?  Perhaps!

It turns out Tableau was holding a massive conference, one having nothing to do with politics, less than a mile away.  Could these have been buses for Tableau’s shenanigans?  I hope they don’t mind me linking to the schedule from that same day:

Tableau 2016 Conference Schedule

And so, I posted this:

Does Anyone Care if I was right or wrong?  Sadly, not enough.

In the three hours since I posted it, my alternative (and possibly true) view of reality has garnered a whopping 8 retweets and 11 likes.

What’s going on?  The systems that carry information to us all are filtered by what’s sensational — not by what’s true.

To be cynical for a minute, people are surprisingly uninterested in truth but very interested in whatever helps them make their own case.  This is probably human nature, but is it healthy?  Is that really the process by which we want to make big decisions about our future?

A Few Words from the Middle of the Road

First of all, I voted for Gary Johnson.  I’m a supporter of neither Trump nor Hillary, but I did consider voting for both of them for different reasons.

I’m a secular independent who leans Republican.  I often describe my political views as “little ‘l’ libertarian with a heart.”

A few of my key political positions:

  • Reduce taxes on both individuals and businesses
  • Encourage the repatriation of wealth
  • Pro business
  • Freedom of religion
  • Pro gay marriage
  • Cover pre-existing health conditions for every American
  • Pro gun
  • Increase the availability of visas for foreigners while reducing illegal immigration

As you can see, I don’t fully align with any candidate.  Let’s promote new voices so that we can have the dialog necessary to reach real and lasting solutions.

Parting Words

I want to set an example.  I can be wrong, and I can admit it when I am.  I will strive for the truth, and I ask you to do the same.

Let’s respect and defend the rights that make America … America!  Let’s respect each other, and let’s give each side a chance to be heard.

Whether we like what’s going on in the streets today or not, remember those people have a voice.  I do not want to live in an America where people cannot make their views heard.

I ask everyone to do their best to tone down some of the anger and find compromises if not collaborations that can move us toward a better America.




A NAS in every household will help you and archaeologists. Do it now!

Our lives are digital.  Our cameras are no longer film.  Our notes are no longer postcards.  The USPS is having a hard time staying in business.

To get really deep about this … thousands of years from now, archaeologists will see our world as vividly as on the day your iPhone or DSLR captured it. That is … if the data’s still around.

We’re losing data left and right because we aren’t practicing good ways of storing it.

Stop spreading your digital existence across 12 devices (including the long-retired ones in the attic, garage, dumpster, or Goodwill pile that you never copied data from).  Keep a definitive copy of everything in one place.

It’d be a shame if cave paintings outlived our digital pictures, and right now that’s scarily possible.

If we could just centralize and manage it better, then maybe we could also have an easier time archiving it all.

So, let’s get practical!

First off … problems … how data was stored in the dark ages:

  • Cloud services.  They keep things accessible, can help centralize and they’re often inexpensive.  Cloud services miss the boat on your precious pictures and home movies because:
    • Your internet is too slow, and while Google et al are working on this, it’ll be a while yet.
    • Easy-to-use cloud storage providers are charging too much.
    • Inexpensive cloud storage providers are usually too hard to use.
  • The hard drive inside your computer can die at any time, and it’s probably not big enough.  Plus, it’s harder (not impossible) to share that stuff with say … your smart TV … and the rest of your family.
  • Portable/external hard drives.  Don’t get me started.  No.  I own far too many, and I have no clue what’s on most of them.  Plus, a third of them are broken — in some cases with precious photos or bits of source code lost forever.

Solution:  Get a Network Attached Storage device.  Today.  Without delay.

Why?  If you can centralize everything, it’s easier to back up.  You also have super fast access to it, and everybody in your home can share (or not — they do have access control features).

I have serious love for Synology’s devices for three reasons:

  1. They integrate with Amazon’s Glacier service.  To me, this is a killer feature.  Now I can store every single one of my selfies, vacation pictures, inappropriate home movies, etc. in a very safe place until my credit card stops working.  At $10 per terabyte per month, that credit card should work a while.  Glacier is a good deal.
  2. Seriously awesome, fully featured software.
  3. Quality, fast hardware.

All at a price that, while not the cheapest, doesn’t particularly break the bank.

Now, I’ll assume that if you’re anything like me, you want speed.  You want access to your data, or you’re not going to use that NAS the way it’s supposed to be used.

You’re also not going to invest in a 24-drive enterprise SSD NAS because … well … you’re a home user.

So, some guidelines:

  • Buy at least twice as much storage as you think you need.  Your estimate is low.
  • Plan to upgrade/replace in 3 years.  You don’t have to make a perfect buying decision — nor do you have to buy for eternity.  Plan to MIGRATE! — which is why you’ll want hardware fast enough that you can copy data off it before the earth crashes into the sun!
  • Don’t plan to add more hard drives anytime soon.  Fill all the drive bays.
  • Buy the largest available drives.
  • Forget SSD.  SSD is too small and far too expensive for the storage you want.  Buy more drives and get the performance advantages of having more drives instead.
  • Plan on backing up every computer you own to the NAS — size appropriately — and then some.  (A rough sizing sketch follows this list.)
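To put rough numbers on the guidelines above, here is a back-of-the-envelope sizing sketch in Ruby.  Every figure in it (the 4TB starting estimate, the 4-bay box, the ~$10/TB/month Glacier rate mentioned earlier) is an assumption for illustration, not a recommendation:

# Rough sizing sketch -- all figures are assumptions, not measurements.
estimated_tb    = 4.0                    # what you think you need today
planned_tb      = estimated_tb * 2       # "buy at least twice as much": your estimate is low
bays            = 4                      # e.g. a 4-bay unit with every bay filled
drive_tb        = 4.0                    # largest drives you can reasonably buy
usable_tb       = (bays - 1) * drive_tb  # roughly one drive's worth lost to redundancy
glacier_monthly = planned_tb * 10        # USD, at the ~$10/TB/month figure quoted above

puts "Plan for #{planned_tb} TB; #{bays} x #{drive_tb} TB drives give roughly #{usable_tb} TB usable."
puts "Backing that up to Glacier runs about $#{glacier_monthly.round} per month."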

My Picks

With price and performance in mind, I’ll wade through Synology’s mess of models and tell you what makes sense in my opinion:

Recommendation 1:  Synology DS414

  • Four drives provide 16TB physical space — 10-12TB usable with Synology’s own RAID.
  • Four drives provide better read performance than two or one
  • Spare fan just in case one fails
  • Link aggregation, but you’ll never use it.

Recommendation 2:  Synology DS214+

  • Fastest Synology two drive model.
  • Two drives with redundancy (mirroring).
  • For some users, the video playback features of the DS214play may be more appropriate, but it’s slower and more expensive.

Recommendation 3:  Synology DS114

  • Danger!  Just one drive — no redundancy.  You are backing up with Glacier, right?
  • Fast for a single drive NAS

All provide:

  • USB 3.0 port(s) to load your data from a portable drive
  • Gigabit ethernet
  • All that lovely Synology software!

Hard drives?

Personally, I’d buy the Western Digital Red 5400RPM NAS drives in 4TB.  Based on Amazon’s pricing, I don’t see much of a premium, if any, for getting the largest model on the market.  The larger the drives, the more benefit you get from your NAS, so I wouldn’t skimp.

If you really truly believe you won’t need the space, but you’d like the performance of four drives on the DS414, then you can save around 350 USD by purchasing 4x 2TB drives instead of 4x 4TB.

Your Network Needs Speed

Now, along with all that firepower in the NAS, you need the network to feed that speed addiction.

Get a good quality switch, and if you’re going to use your NAS over wireless, check out the Amped Wireless RTA 15.  Wired speeds will nearly always be faster, but I like wireless convenience just like you.

You’ll Love Speedy Backups

For extra credit, Apple’s Time Machine backup works really nicely with my NAS.  It works a lot faster when I plug in the ethernet cable.  On a Cisco 2960G switch (yes, I have some serious commercial-grade switches lying around), my late-model Apple MacBook Pro Retina did around 100 gigs in under 15 minutes.
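As a quick sanity check on those numbers (assuming decimal gigabytes and the usual gigabit ceiling), a few lines of Ruby show that run was close to wire speed, which is why the cable beats Wi-Fi:

# Back-of-the-envelope check of the backup speed above (assumed decimal units).
gigabytes = 100.0
seconds   = 15 * 60
observed  = gigabytes * 1000 / seconds        # ~111 MB/s observed
ceiling   = 1_000_000_000 / 8 / 1_000_000.0   # ~125 MB/s theoretical gigabit limit
puts "About #{observed.round} MB/s of a roughly #{ceiling.round} MB/s gigabit ceiling."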

Do I need a NAS in the future?

Possibly not, once bandwidth gets there and cloud offerings match up at the right price points.

Oh, and a little re-arrangement of the letters NAS … NSA.  User trust!  Yes, all this assumes user trust of cloud services.  Then again, the NSA can probably backdoor your NAS if they really want to.  Sorry.  Nothing’s perfect.

Happy Trails

Your mileage may vary.  My new DS414 was a religious experience.

Why Amazon’s EC2 Outage Should Not Have Mattered

This past week I got a call in the middle of the night from my team that a major web site we operate had gone down. The reason: Amazon’s EC2 service was having issues.

This is the outage that famously interrupted access to web sites ordinarily visited by millions of people, knocked Reddit alternately offline or into an emergency read-only mode for about a day (or more?) and drew mentions in the Wall Street Journal, MSNBC and other major news outlets.

In the Northern Virginia region where the outage occurred and where we were hosted, Amazon divides the EC2 service into four availability zones. We were unlucky enough to have the most recent copies of crucial data in exactly the wrong availability zone, and this made an immediate, graceful fail-over to another zone nearly impossible because the data was not retrievable at the time. Furthermore, we were unable to immediately transition to another region because our AMIs (Amazon Machine Images) were stuck in the crippled Northern Virginia region and we lacked pre-arranged procedures to migrate services.

Procedures to migrate to another region were in the works but not yet established. Having some faith in Amazon’s engineering team, we decided to stand pat. Our belief was that by the time we took mitigating measures, Amazon’s services would be back to life anyway. And … that proved to be true to the extent that we needed.

The lessons learned are these:
(1) Replicate your data across multiple Amazon regions
(2) Do (1) with your machine images and configuration as well
(3) For extra safety, do (1) and (2) with another cloud provider too
(4) It’s probably a good idea to also keep an off-cloud backup
(A sketch of what (1) and (2) can look like follows below.)
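For what it’s worth, here is a hedged sketch of what lessons (1) and (2) can look like in code today.  It uses the modern aws-sdk Ruby gem and EC2’s CopyImage API, neither of which existed at the time of this outage, and the region names and AMI ID are placeholders:

# Sketch only: copy an AMI to a standby region so instances can be launched
# there if the primary region degrades. Assumes AWS credentials are already
# configured; the AMI ID below is a placeholder.
require 'aws-sdk-ec2'

standby = Aws::EC2::Client.new(region: 'us-west-1')

resp = standby.copy_image(
  source_region:   'us-east-1',               # the affected region
  source_image_id: 'ami-0123456789abcdef0',   # placeholder AMI ID
  name:            'webapp-standby-copy'
)
puts "Standby AMI: #{resp.image_id}"
# Do the same for data: copy EBS snapshots across regions on a schedule
# (copy_snapshot), and replicate databases to a standby in another region.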

Had we already done just (1) and (2), our downtime would have been measured in minutes, not hours, as one of our SAs flipped a few switches … all WHILE STAYING on Amazon systems. Notice how Amazon’s shopping site never seemed to go down? I suspect they do this.

As for the coverage stating that Amazon is down for a third day and horribly crippled, I can tell you that we are operating around the present issues, are still on Amazon infrastructure and are not significantly impacted at this time. Had we completed implementation of our Amazon-only contingency plans by the time this happened, things would have barely skipped a beat.

So, take the hype about the “Great Amazon Crash of 2011” with a grain of salt. The real lesson is that in today’s cloud, contingency planning still counts. Amazon resources providing alternatives in California, Ireland, Tokyo and Singapore have hummed along without a hiccup throughout this time.

If Amazon would make it easier to move or replicate things among regions, this would make implementation of our contingency plans easier. If cloud providers in general could make portability among each other a point and click affair, that would be even better.

Other services such as Amazon’s RDS (Relational Database Service) and Elastic Beanstalk rely on EC2 as a sub-component. As such, they were impacted as well. The core issue at Amazon appears to have involved the storage component on which EC2 increasingly relies: EBS (Elastic Block Store). Ultimately, a series of related failures and the overload of the remaining online systems caused instability across many components within the same data center.

Moving into the future, I would like to see a world where Amazon moves resources automagically across data centers and replicates in multiple regions seamlessly. Also, I question the nature of the storage systems behind the scenes that power things like EBS, and until I have more information it is difficult to comment on their robustness.

Both users and providers of clouds should take steps to get away from reliance on a single data center. Initially, the burden by necessity falls on the cloud’s customers. Over time, providers should develop ways such that global distribution and redundancy happen more seamlessly.

Going higher level, components must be designed to operate as autonomously as possible. If a system goes down in New York City, and a system in London relies upon it, then London may go down as well. Therefore, a burden also exists to design software and/or infrastructure that carefully takes into account all failure or degradation scenarios.
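To make that last point a bit more concrete, here is a minimal, hedged sketch (assumed interfaces, not a prescription) of one way a component can degrade gracefully rather than cascade a remote failure: time-box the remote call and fall back to a locally cached answer.

# Sketch: time-box a remote dependency and fall back to a cached value.
# The block passed in stands for any remote call; the cache is a plain Hash.
require 'timeout'

def fetch_with_fallback(key, cache:, timeout_seconds: 2)
  Timeout.timeout(timeout_seconds) do
    result = yield            # the remote call
    cache[key] = result       # refresh the local copy on success
    result
  end
rescue Timeout::Error, StandardError
  cache.fetch(key, nil)       # serve the last known value rather than failing outright
end

# Hypothetical usage:
# prices = fetch_with_fallback(:prices, cache: local_cache) { pricing_service.latest }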

Ruby Developers: Manage a Multi-Gem Project with RuntimeGemIncluder (Experimental Release)

A couple of years ago in the dark ages of Ruby, one created one Gem at a time, hopefully unit tested it and perhaps integrated it into a project.

Every minute change in a Gem could mean painstaking work, often repeating various build, include and/or install steps over and over.  No more!

I created this simple Gem (a Gem itself!) that at run-time builds and installs all Gems in paths matching patterns defined by you.

I invite brave souls to try out this EXPERIMENTAL release now, pending a more thoroughly tested/mature release. Install RuntimeGemIncluder, define some simple configuration in your environment.rb or a similar place and use require as you normally would.

Below is the configuration I used to include everything in my NetBeans workspace with JRuby.

Download the Gem from http://rubyforge.org/frs/?group_id=9252

To install, go to the directory where you have downloaded the Gem and type:

gem install runtime-gem-includer-0.0.1.gem

(Soon you may be able to install directly from RubyForge by simply typing 'gem install runtime-gem-includer'.)

Somewhere before you load the rest of your project (in environment.rb, for example, if you’re using Rails), insert the following code:

trace_flag = "--trace"
$runtime_gem_includer_config = {
  :gem_build_cmd => "\"#{ENV['JRUBY_HOME']}/bin/jruby\" -S rake #{trace_flag} gem",
  :gem_install_cmd => "\"#{ENV['JRUBY_HOME']}/bin/jruby\" -S gem install",
  :gem_uninstall_cmd => "\"#{ENV['JRUBY_HOME']}/bin/jruby\" -S gem uninstall",
  :gem_clean_cmd => "\"#{ENV['JRUBY_HOME']}/bin/jruby\" -S rake clean",
  :force_rebuild => false,
  :gem_source_path_patterns => [ "/home/erictucker/NetBeansProjects/*" ],
  :gem_source_path_exclusion_patterns => []
}
require 'runtime_gem_includer'

If you are using JRuby and would like to just use the defaults, the following code should be sufficient:

$runtime_gem_includer_config = {
  :gem_source_path_patterns => [ "/home/erictucker/NetBeansProjects/*" ],
  :gem_source_path_exclusion_patterns => []
}
require 'runtime_gem_includer'

Now, in any source file, simply require your Gem as you normally would:

require 'my_gem_name'

And you’re off to the races!

Gems are dynamically built and installed at runtime (accomplished by overriding Kernel::require).  Edit everywhere, click run, watch the magic! There may be some applications for this Gem in continuous integration. Rebuilds and reloads of specified Gems should occur during application startup/initialization once per instance/run of your application.
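For the curious, the override itself boils down to something like the following sketch.  This is an illustration of the pattern only, not the Gem’s actual source, and rebuild_if_local_source is a hypothetical stand-in for the logic that matches your configured path patterns and shells out to the build/install commands:

# Illustration of the Kernel::require override pattern (not the real implementation).
module Kernel
  alias_method :original_require, :require

  def require(name)
    # Hypothetical helper: rebuild and reinstall the Gem if its source lives in
    # one of the configured gem_source_path_patterns.
    RuntimeGemIncluder.rebuild_if_local_source(name) if defined?(RuntimeGemIncluder)
    original_require(name)
  end
end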

Interested in source, documentation, etc.? http://rtgemincl.rubyforge.org/

More Efficient Software = Less Energy Consumption: Green Computing isn’t just Hardware and Virtualization

Green is a great buzzword, but the real-world driver for many “green” efforts is cost. Data center power is expensive. Years ago, Oracle moved a major data center from California to my town, Austin, Texas. A key reason: more predictably priced, cheaper power in Texas vs. California. What if Oracle could make the data center half the size and use half the power because its software ran more efficiently?

Your bank, your brokerage, Google, Yahoo, Facebook, Amazon, countless e-commerce sites and more often require a surprising number of servers.  Servers have traditionally been power-hungry things favoring reliability and redundancy over cost and power utilization.  As we do more on the web, servers do more behind the scenes.  The amount of computing power or various subsystem capabilities required varies drastically based on how an application works.

These days, hardware vendors across the IT gamut try to claim their data center and server solutions are more power efficient. The big push for consolidation and server virtualization (the practice by which one physical server functions as several virtual servers which share the hardware of the physical machine) does make some real sense.  In addition to using less power, such approaches often simplify deployment, integration, management and administration. It’s usually easier to manage fewer boxes than more, and the interchangeability facilitated by things like virtualization combined with good planning make solutions more flexible and able to more effectively scale on demand.

Ironically, the issue people seem to pay the least attention to is perhaps the most crucial: the efficiency of software.  Software orchestrates everything computers do.  The more computer processors, memory, hard drives and networks do, the more power they need and the bigger or more plentiful they must be.  The more operations servers must perform, the more servers, or the more power-hungry servers, one needs.  The software is in charge.  When it comes to the operations the computer performs, the software is both the CEO and the mid-level tactical managers that can make all the difference in the world.  If software can be architected, coded or compiled to run more efficiently, the number of operations per unit of work produced goes down.  Every operation saved means power saved.

Computers typically perform a lot of redundant or otherwise unneeded operations. For example, a lot of data is passed across the network not because it absolutely needs to be, but because it’s easier for a developer to build an app that operates that way or to deploy the application that way in production. There are applications that use central databases as caches when a local in-memory cache would not only be orders of magnitude faster but also burn less power. Each time data goes across a network, it must be processed on each end and often formatted and reformatted multiple times.

A typical web service call (REST, SOAP, etc.) – the so-called holy grail of interoperability, modularity and inter-system communication in some communities – is a wonderful enabler, but it does involve parsing (e.g. turning text data into things the computer understands), marshalling (a process by which data is transformed, typically to facilitate transport or storage) and often many layers of function calls, security checks and other things.  The use of web services is not inherently evil, but far more carbon gets burned making a web service call to a server across the country, or even inches away, than when the computer talks to its own memory.  It’s also a lot slower.

Don’t get me wrong, I’m a big believer in the “army of ants” approach. However, I see the next big things in power utilization being software driven. We’re going to reach a point where we’ve consolidated all we reasonably can, and at that point it’s going to be a focus on making the software more efficient.

If my code runs in a Hadoop-like cluster (Hadoop is open source software that facilitates computing across many computers) and the framework has tremendous overhead compared to what I’m processing, how much smaller could I make the cluster if I could remove that overhead? What if I process more things at once in the same place? What if I batch them more? What if I can reduce remote calls? What if I explore new languages like Go with multi-core paradigms?  What if widely deployed operating systems like Linux, Windows and Mac OS became more power efficient?  What if widely used apps consumed less power-hungry memory?  What if security software took fewer overhead CPU cycles?  Can we use multi-core processing more efficiently?

In most cases, performance boosts and power savings go hand in hand.  Oriented toward developers, here are a few of the more obvious areas for improvement.  Most are pre-existing good software design practices:

– Caching is the first obvious place:  (1) more caching of information, (2) less reprocessing of information, (3) more granular caching to facilitate caching where it was not previously done.  (A short sketch combining this with batching follows this list.)

– Data locality:  Do processing as close to where data resides as possible to reduce transportation costs.  Distance is often best measured not in physical distance but in the number of subsystems (both hardware and software) that data must flow through.

– Limit redundant requests:  Once you have something retrieved or cached locally, use it intelligently:  (1) collect changes locally and commit them to a central remote location such as a database only as often as you need to, (2) use algorithms that can account for changes without synchronizing as often with data on other servers.

– Maximize use of what you have:  A system burns power just by being on.  Use the system fully without being wasteful:  (1) careful use of non-blocking operations (things that move on instead of having the computer wait for a response from a component) in ways that let the computer do other things while it’s waiting;  (2) optimize the running and synchronization of multiple processes to balance use, process duration and inter-process communication such that the most work gets done with the least waiting or overhead.

– Choose the language, platform and level of optimization based on amount of overall resources consumed:  Use higher performance languages or components and more optimizations for sections which account for the most resource utilization (execution time, memory use, etc.).  Conversely, use easier to build or cheaper components that account for less overall resource use so that more focus can go to critical sections.  (I do this in practice by mixing Ruby, Java and other languages inside the JRuby platform.)
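To make the caching and batching items above concrete, here is a small, hedged Ruby sketch.  The remote object is assumed to respond to fetch and bulk_write (an invented interface for illustration); the point is simply that reads come from a local in-memory cache and writes are committed in batches rather than one network round trip at a time:

# Sketch: read-through in-memory cache plus batched writes to a remote store.
# `remote` is any object responding to fetch(key) and bulk_write(hash) -- an assumption.
class BatchedStore
  def initialize(remote, flush_every: 100)
    @remote      = remote
    @cache       = {}     # local in-memory cache (item 1 above)
    @pending     = {}     # changes collected locally (item 3 above)
    @flush_every = flush_every
  end

  def read(key)
    @cache[key] ||= @remote.fetch(key)   # hit the network only on a cache miss
  end

  def write(key, value)
    @cache[key]   = value
    @pending[key] = value
    flush if @pending.size >= @flush_every
  end

  def flush
    return if @pending.empty?
    @remote.bulk_write(@pending)         # one round trip for many changes
    @pending = {}
  end
end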

In certain applications, maybe we don’t care about power utilization or much at all about efficiency, but as applications become increasingly large and execute across more servers, development costs in some scenarios may become secondary to computing resources.  Some goals are simply not attainable unless an application makes efficient use of resources, and that focus on efficiency may pay unexpected dividends.

Developers, especially of large-scale or widely deployed applications: if we want to be greener, let’s focus on run-times, compilers and the new and yet-to-be-developed paradigms for distributed, massively multi-core computing.

There is a story that Steve Jobs once motivated Apple engineers to make a computer boot faster by explaining how many lifetimes of waiting such a boost might save.  Could the global impact of software design be more than we imagine?