If you’ll excuse the meme format, the disastrous launch of the video game No Man’s Sky, with its tale of fame, failure, and eventual turnaround, is an interesting story about shipping software.
Sean Murray and the rest of Hello Games turned a disastrous, incomplete launch of a game into a working product very quickly by shutting out critics, zeroing in on customer experience and feedback, and getting back to work on core features using something akin to painless software scheduling.
I didn’t pay much attention to this game back when it was released, but I certainly remember the internet drama from the poor launch. The history and retrospective behind it all make for interesting reading.
If you live in a modern, industrialized society, your role as an individual in the economy has likely been relegated (at some level) to that of a “consumer.”
Firms and industries produce goods, and you “consume” them.
There has been a trend for a while toward making more and more goods subscription-based, non-durable, or “consumable”.
Examples include the recent push toward subscription models for many kinds of products and the two-year smartphone upgrade cycle.
This idea of companies putting an artificial “expiration date” on products isn’t new to the digital age, however: GE and other manufacturers infamously formed the Phoebus cartel before WWII to cap the lifespan of light bulbs.
If you’re a savvy “consumer,” however, you can rebel against this trend.
…Then you finally get the focus group to agree that your software is worth $25 a month, and then you ask them how much they would pay for a permanent license and the same people just won’t go a penny over $100. People seriously can’t count.
The reason that the rich were so rich, Vimes reasoned, was because they managed to spend less money.
Take boots, for example. He earned thirty-eight dollars a month plus allowances. A really good pair of leather boots cost fifty dollars. But an affordable pair of boots, which were sort of OK for a season or two and then leaked like hell when the cardboard gave out, cost about ten dollars. Those were the kind of boots Vimes always bought, and wore until the soles were so thin that he could tell where he was in Ankh-Morpork on a foggy night by the feel of the cobbles.
But the thing was that good boots lasted for years and years. A man who could afford fifty dollars had a pair of boots that’d still be keeping his feet dry in ten years’ time, while the poor man who could only afford cheap boots would have spent a hundred dollars on boots in the same time and would still have wet feet.
—Terry Pratchett, Men at Arms: The Play
If you’ve bought a car, a pair of shoes, or an appliance lately, you’ve probably asked the question: “is it durable?”
Thanks to organizations such as Kelley Blue Book and Consumer Reports, you no longer have to rely on secondhand knowledge or brand reputation alone to evaluate a good. Instead, you can look at cold, hard data that tells you the quality of your purchase and how long it will last.
In microeconomics, goods are considered durable if they’re meant to stick around (like your kitchen appliances) and non-durable if they’re meant to be readily consumed or used destructively (e.g., that pizza you ordered last night).
Smartphones, I prefer durable
my old phone
My old phone through high school and my first years of college was a Samsung Galaxy S4. It was a great phone with an OLED screen, headphone jack, removable battery, SIM card, and MicroSD card slot, and it had a great life. I ended up carrying around three externally chargeable, swappable batteries for it as well. It lasted me many years.
Other brands’ hardware that I’ve encountered has used tactics to prevent you from performing your own maintenance, likely to diminish the secondhand market and force consumers to pay for repairs or buy newer models. I took training and became an Apple Certified Mac Technician in high school, and learned the many ins and outs of such products. It’s not a pretty world being under such a company’s thumb.
In other areas of technology, this practice has become increasingly common. At the forefront of this question is the right to repair movement: the idea that you, as the purchaser of a product, have certain rights to it, including those of “first sale.” You have the legal right to repair something that you bought, ostensibly because you bought it.
There were also laws on the books at one point saying that if you attempt to repair something (i.e., even open the darn thing), the company has no grounds to void your warranty. This has been eroded, however, by large companies doing the commendable act of simply ignoring the law.
My new phone:
For a while, OnePlus was a pioneer in the smartphone realm, offering repairable, user-configurable phones (including the ability to boot custom ROMs) to the mass market at low to moderate prices. They’ve since boosted their prices a bit past what they were in their heyday, but oh well.
I’ve had this phone for a few years now, along with my ThinkPad laptop. I noticed the battery wasn’t lasting quite as long on either of them, so I replaced both batteries. The laptop’s took no more than 60 seconds to swap. The OnePlus 5T, while tricky (curse you, whichever east-Asian line worker decided to put adhesive in there; that wasn’t in my YouTube tutorial!), wasn’t too bad to take apart.
Matt’s note: so far, the best place I’ve found to source parts for Chinese-made smartphones is Alibaba.com. I don’t know about others, though.
For all electronic devices though, it’s very important to get original OEM batteries from the manufacturer. Third-party ones are almost always (notoriously) of poor quality.
EDIT: my new laptop, very happy with it (cost me an ARM and a leg!):
I’ve decided to stick with my OnePlus 5T for a while. I’ve had decent luck finding spare parts on Alibaba.com, including OEM batteries, and have stocked up on a few. (You might as well, if you have a Chinese-made smartphone.) My next phone will likely be a PinePhone.
Cars, I prefer durable
my old car
Honda 2005 Odyssey with 220,000 miles, finally died
In 2006, when I was no more than probably four feet tall and still in elementary school, my parents bought a 2005 Honda Odyssey minivan with a rear entertainment center and onboard navigation. (Whoa! You may not know this, but in the years before smartphones, Google Maps, and mobile data, this was a huge deal. It used advanced “DVD-ROM technology,” with a disc drive under the driver’s seat to store navigation data for the whole U.S.)
This car saw many a family road trip, drive to and from school, and hauling lumber and equipment for scout projects during its prime. It was given to my older brother for his first car, and it carried football players, cargo, etc. until he graduated.
At this point, in ~2016, it was probably almost eleven years old with 150,000 or more miles on it. He hated driving a minivan, and would drive it real fast up and down a steep hill near our house. I half suspect his intention was to break it so he could drive something else. He failed.
So, it was my first car, and I kept it around when I went to college. I didn’t travel much anyways since I was busy with school, so I figured I could keep it around until it died.
It lasted through my time at college: at this point almost 14 years old, with 220,000-ish miles, three accidents in its life, and only oil changes, timing belts, and minor maintenance required.
In January of 2020, the oil pan blew out and the car slowly lost all its oil. After breaking down, I ended up pulling into the parking lot of a Popeyes, and called AAA for a tow.
At this point, I figured the repair costs would be more than the car was worth, so I was looking to get rid of it. I explained its history to my tow truck driver, and once I started talking, both he and the repair shop were making me decent offers for the car (at this point, a 14-plus-year-old car with 220,000-ish miles, three accidents, and its oil pan blown out).
Now that is value.
New Car
I got a 2018 Subaru Forester Premium with EyeSight and the Cold Weather Package.
I went car shopping recently, and had a great experience at CarMax. (You could probably save money by buying used directly consumer-to-consumer, a la Craigslist, and hiring a mechanic to check the car out, but who needs that hassle?) I figured I wanted a small crossover SUV for a good balance of size and utility.
At CarMax I got to sit in a bunch of different models and look at their interiors and such. While the Toyota RAV4s, Honda CR-Vs and HR-Vs, and others that I looked at were fine, I was unimpressed with the initial quality of some of the Mazdas’ and other makes’ interiors (they looked and felt quite cheap; I could imagine they weren’t as solidly built in other areas either).
I fell in love with the Subarus I saw, though, due to their build quality, aesthetic, and sturdiness, and I got to drive one around. The newer models have computer vision that helps with lane assist and will automatically cut the engine and apply the brakes to prevent accidents.
It’s a solidly-built car, and looking under the hood everything was very evenly laid out and looked easy to work on. I like how easy parts of the car are to take apart, and I appreciate that the spare tire kit under the trunk includes some extra tools like a screwdriver (which I often find myself using).
I got a used 2018 Premium with only a few thousand miles on it for a great price, and I hope to drive it for at least as long as the Honda minivan lasted.
Moral of the story?
You, as a “consumer,” get to make choices. Choose what you want, I guess.
Edit:
Upon review, if I were to go car shopping again, I would probably try to buy a Mazda, a Toyota RAV4, or a boring old Toyota Camry.
For the pothole-filled, hilly, and rough roads outlying Atlanta that I mostly drive on, I also feel Subaru’s smaller Crosstrek probably would have been a better-sized car for what I want. It is pretty nice, though, being able to keep my road bike in the back of my Forester with the seats folded flat, without even having to take off the bike’s front wheel.
The main reason I would buy a different car in hindsight is I’m concerned about the CVT transmission in my car:
If taken care of well, it should last a while, but mechanically it is more complex and fritzy than a normal multi-speed transmission. The same goes for my Subaru’s AWD versus a typical front-wheel-drive car: AWD is also a bit more mechanically complex and costly to maintain over the life of the car.
My ’18 Subaru Forester has a CVT from Aisin, the same group that supplies Toyota for their cars. This CVT should be more reliable than the dreaded Nissan/Jatco CVTs (I wouldn’t buy a car with one of the older ones, at the very least).
I would probably have gotten a bit more mechanical reliability (my biggest concern as a car buyer) with a Mazda or Toyota front-wheel drive over the life of the car. Parts may also have been cheaper with these options.
I hope my next car will be an EV that I have to do very little maintenance on, but I wouldn’t get a Tesla because I don’t like being forced to go to only the dealer for repairs.
Mazda has opted so far not to use CVT transmissions in its cars, and Toyota’s CVTs are made by Aisin, a vertically-integrated R&D arm of the company that seems to make good ones with generally fewer mechanical issues than their Jatco counterparts.
Mazda has also come out with a novel gasoline compression ignition engine, which can deliver diesel-like torque at low RPM. This is a game changer for SUVs, and helps with fuel economy. According to their investor letter, they’re looking to make a six-cylinder gasoline engine and a second-generation diesel engine using this technology.
[As a side note, it’s interesting that Toyota owns a 20% share of Subaru. Apparently this means they’ve collaborated in the past, which is cool. Mazda and Toyota have also collaborated: they run a plant together right next door to my home state of Georgia, in Alabama. I also find it interesting that Mazda cites investment in partnerships (such as the one with Toyota) as part of its medium-term business plan.]
So far, I’m loving my Subaru, though. It’s very solidly built, and the design of the car makes it really easy for tinkerers like me to pull parts in and out and mess with things. (I’ve been doing some electronics tinkering lately, including wiring in a dashcam, voltmeter, 5V 10A regulator, and AC power inverter.)
I kinda like the advantages of the four-cylinder 2.5L boxer engine in my Subaru, though. It’s good to stick with four cylinders instead of more for improved lifetime and reliability, and Toyota and Mazda also make good four-cylinder engines. I really like it so far, but we’ll see how it holds up.
I’m going to try to drain out and replace the CVT fluid at 30K mile intervals to ensure the transmission stays in good shape as a measure of preventative maintenance.
Apparently, Nissan cars from around the 2014 model year seem to struggle with CVT transmission issues the most. This Scotty guy says he wouldn’t buy a Nissan.
Overall, I’ve really liked my Subaru. The EyeSight feature really is quite useful and nice for longer drives. I’m not entirely happy with how the CVT operates (acceleration is often spongy, and slowing down on the interstate is hard sometimes when the car struggles to shift down in torque ratios, doing so rapidly in fits and starts). On the other hand, the safety features on Subarus are really nice, and probably something people overlook when buying a car more often than they should.
Speaking of safety, highway driving was a total nightmare in the Forester due to mirror blind spots until I bought one of these to snap on over my rearview mirror: amazon.com/gp/product/B07567QLTX
I originally wanted to buy and install a set of rally mirrors, but this worked more than well enough for what I needed.
There is something to be said for pragmatic programming, along with putting checks in place to make sure your systems and code are chugging along smoothly.
I personally know some developers who are very talented and can create wonderful pieces of software with no or little struggle. Because of these gifted individuals, our industry is full of high expectations. But the sad truth is: not everyone is a ninja/guru/rockstar developer.
And that’s exactly who I am: a mediocre developer. This article will guide you through surviving in the industry if you are not a genius.
Even though I said Rust is the programming language for the next 40 years due to its ability to let programmers write fast and safe code at a low level, it seems that human society has decided to build a bunch of useful things in Go(lang), a garbage-collected language from Google similar to Java (those &%#@^*$).
It's interesting how quickly Go has gained popularity, and how pervasive it is amongst the new stack sort of tools that are starting to take over software and enterprise computing patterns (i.e. Docker, Kubernetes, containers, hip new tech, code that mows your lawn and walks your dog, etc.).
My uninformed guess is that because it's garbage collected like Java (you don't have to hand resources back to the machine; you just stop using them and the runtime comes around and "picks up the trash"), it's open season for anyone who's used to working with higher-level languages. In addition, I would guess that anyone who's taken intro computer science classes (which are often taught in Java) could hop on over and start working with Go.
Also, Go seems to have some cool features that make it really useful for networked and multi-threaded code (where multiple tracks of execution happen at once on one or more CPU cores).
With all of this being said, it's probably a decent idea to learn a little bit of Go given how powerful and popular it is. For either reading code from the plethora of open source tools, or modifying or making your own tools, knowing a bit of Go could be useful.
When in Rome, do as the Romans and all that...
You may notice today's blog post is in a different format than usual. That's because I've decided to write it using Markdown on my GitHub so that I could write this diatribe, and so that I could include the laundry list of (useful?) tools that humans have decided to write in Go below.
memcached - Fast memcache server, which supports persistence and cache sizes exceeding available RAM
memcache - go memcached client, forked from YouTube Vitess
rend - A memcached proxy that manages data chunking and L1/L2 caches
YBC bindings - Bindings for YBC library providing API for fast in-process blob cache
Cloud Computing
LXD - Daemon based on liblxc offering a REST API to manage containers
Docker - The Linux container runtime. Developed by dotCloud.
Enduro/X ASG - Application Server for Go. Provides application server and middleware facilities for distributed transaction processing. Supports microservices-based application architecture. Developed by ATR Baltic.
Kubernetes - Container Cluster Manager from Google.
flamingo - A Lightweight Cloud Instance Contextualizer.
gocircuit - A distributed operating system that sits on top of the traditional OS on multiple machines in a datacenter deployment. It provides a clean and uniform abstraction for treating an entire hardware cluster as a single, monolithic compute resource. Developed by Tumblr.
gosync - A package for syncing data to and from S3.
juju - Orchestration tool (deployment, configuration and lifecycle management), developed by Canonical.
mgmt - Next Generation Configuration Management tool (parallel, event driven, distributed system) developed by @purpleidea, (a Red Hat employee) and the mgmt community.
rclone - "rsync for cloud storage" - Google Drive, Amazon Drive, S3, Dropbox, Backblaze B2, One Drive, Swift, Hubic, Cloudfiles, Google Cloud Storage, Yandex Files
ShipBuilder - ShipBuilder is a minimalist open source platform as a service, developed by Jay Taylor.
swift - Go language interface to Swift / Openstack Object Storage / Rackspace cloud files
Tsuru - Tsuru is an open source polyglot cloud computing platform as a service (PaaS), developed by Globo.com.
aws-sdk-go - AWS SDK for the Go programming language.
Command-line Option Parsers
argcfg - Use reflection to populate fields in a struct from command line arguments
autoflags - Populate go command line app flags from config struct
cobra - A commander for modern go CLI interactions supporting commands & POSIX/GNU flags
command - Add subcommands to your CLI, provides help and usage guide.
docopt.go - An implementation of docopt in the Go programming language.
flaq - Command-line options parsing library, POSIX/GNU compliant, supports struct tags as well as the Go's flag approach.
getopt - full featured traditional (BSD/POSIX getopt) option parsing in Go style
getopt - Yet Another getopt Library for Go. This one is like Python's.
gnuflag - GNU-compatible flag parsing; substantially compatible with flag.
go-commander - Simplify the creation of command line interfaces for Go, with commands and sub-commands, with argument checks and contextual usage help. Forked from the "go" tool code.
opts.go - lightweight POSIX- and GNU- style option parsing
pflag - Drop-in replacement for Go's flag package, implementing POSIX/GNU-style --flags.
subcommands - A concurrent, unit tested, subcommand library
uggo - Yet another option parser offering gnu-like option parsing. This one wraps (embeds) flagset. It also offers rudimentary pipe-detection (commands like ls behave differently when being piped to).
writ - A flexible option parser with thorough test coverage. It's meant to "just work" and stay out of the way.
cli - A Go library for implementing command-line interfaces.
mellium.im/cli — A library for parsing modern CLI apps including subcommands that may have their own flags and a built in help system. Designed to use a minimal API.
cmdline - A simple parser with support for short and long options, default values, arguments and subcommands.
go-getoptions - Go option parser inspired on the flexibility of Perl’s GetOpt::Long.
Command-line Tools
awless - A Mighty command-line interface for Amazon Web Services (AWS).
boilr - A blazing fast CLI tool for creating projects from boilerplate templates.
coshell - A drop-in replacement for GNU 'parallel'.
DevTodo2 - A small command-line per-project task list manager.
dsio - Command line tool for Google Cloud Datastore.
goveralls - Go integration for Coveralls.io continuous code coverage tracking system.
overalls - Multi-Package go project coverprofile for tools like goveralls
Cryptocurrency
Skycoin - Skycoin is a next-generation cryptocurrency written in Go. Skycoin is not designed to add features to Bitcoin, but rather improves Bitcoin by increasing simplicity, security and stripping out everything non-essential.
Cryptography
BLAKE2b - Go implementation of BLAKE2b hash function
cryptogo - some useful cryptography-related functions, including paddings (PKCS7, X.923), PBE with random salt and IV
cryptoPadding - Block padding schemes implemented in Go
dkeyczar - Go port of Google's Keyczar cryptography library
themis - Multi-platform high-level cryptographic library for protecting sensitive data: secure messaging with forward secrecy, secure data storage (AES256GCM); suits for building end-to-end encrypted applications
b - Package b implements B+trees with delayed page split/concat and O(1) enumeration. Easy production of source code for B+trees specialized for user defined key and value types is supported by a simple text replace.
Tuple - Tuple is a go type that will hold mixed types / values
vcard - Reading and writing vcard file in go. Implementation of RFC 2425 (A MIME Content-Type for Directory Information) and RFC 2426 (vCard MIME Directory Profile).
mxj - Marshal/Unmarshal XML doc from/to map[string]interface{} or JSON
xlsx - A library to help with extracting data from Microsoft Office Excel XLSX files.
goxlsxwriter - Golang bindings for libxlsxwriter for writing XLSX (Excel) files
simple-sstable - A simple, no-frills SSTable format and implementation in Go.
Databases
CockroachDB
cockroachdb - A Scalable, Survivable, Strongly-Consistent SQL Database
code.soquee.net/migrate — A library for generating, applying, and listing PostgreSQL database migrations using a mechanism that's compatible with Rust's Diesel.
reform - A better ORM for Go, based on non-empty interfaces and code generation.
go-queryset - 100% type-safe ORM for Go with code generation and MySQL, PostgreSQL, Sqlite3, SQL Server support.
Key-Value-Stores
bolt - Persistent key/value store inspired by LMDB.
dbm - Package dbm (WIP) implements a simple database engine, a hybrid of a hierarchical and/or a key-value one.
fs2/bptree - A memory mapped B+Tree with duplicate key support. Appropriate for large amounts of data (aka +100 GB). Supports both anonymous and file backed memory maps.
nodb - A pure Go embed Nosql database with kv, list, hash, zset, bitmap, set.
tiedot - A NoSQL document database engine using JSON for documents and queries; it can be embedded into your program, or run a stand-alone server using HTTP for an API.
sling - Network traffic simulator and test automation tool to send file requests through the HTTP or TCP protocol, control rate frequency, number of concurrent connections, delays, timeouts, and collect the response time statistics, mean, and percentiles.
errors - The juju/errors package provides an easy way to annotate errors without losing the original error context, and get a stack trace back out of the error for the locations that were recorded.
errors - errors augments and error with a file and line number.
goerr - Allows to make a separate(modular) and reusable error handlers. Exception-like panic() recover() mechanism using Return(error) and catching err := OR1(..)
codec-msgpack-binc High Performance and Feature-Rich Idiomatic Go Library providing encode/decode support for multiple binary serialization formats: msgpack
go-simplejson - a Go package to interact with arbitrary JSON
go-wire - Binary and JSON codec for structures and more
go-xdr - Pure Go implementation of the data representation portion of the External Data Representation (XDR) standard protocol as specified in RFC 4506 (obsoletes RFC 1832 and RFC 1014).
chart - Library to generate common chart (pie, bar, strip, scatter, histogram) in different output formats.
draw2d - This package provide an API to draw 2d geometrical form on images. This library is largely inspired by postscript, cairo, HTML5 canvas.
ebiten - A cross platform open-source game library with which you can develop 2D games with simple API for multi platforms. Cgo/c compiler setup not needed.
fourcc - Go implementation of FOURCC (four character code) (4CC) identifiers for a video codecs, compression formats, colors, and pixel format used in media files.
gotk3 - Go bindings for GTK3, requires GTK version 3.8
go.uik - A UI kit for Go, in Go. (project is closed)
go-webkit2 - Go bindings for the WebKitGTK+ v2 API (w/headless browser & JavaScript support)
Gowut - Gowut (Go Web UI Toolkit) is a full-featured, easy to use, platform independent Web UI Toolkit written in pure Go, no platform dependent native code is linked or called.
go-logging - Supports different logging backends like syslog, file and memory. Multiple backends can be utilized with different log levels per backend and logger.
gomol - A multi-output logging library designed for outputs that support additional metadata with log messages.
bĂogo - Basic bioinformatics functions for the Go language.
Breaker - Breaker enables graceful degraded mode operations by means of wrapping unreliable interservice interface points with circuit breaker primitives.
btcrpcclient - A Websocket-enabled Bitcoin JSON-RPC client.
cast - Safe and easy casting from one type to another in Go
CGRates - Rating system designed to be used in telecom carriers world
cpu - A Go package that reports processor topology
cron - A library for running jobs (funcs) on a cron-formatted schedule
daemonigo - A simple library to daemonize Go applications.
Go-PhysicsFS - Go bindings for the PhysicsFS archive-access abstraction library.
go.pipeline - Library that emulates Unix pipelines
go-pkg-mpd - A library to access the MPD music daemon
go-pkg-xmlx - Extension to the standard Go XML package. Maintains a node tree that allows forward/backward browsing and exposes some simple single/multi-node search functions
Prometheus Instrumentation/Metrics Client - This is a whitebox instrumentation framework for servers written in Go. It exposes programmatically-generated metrics automatically for use in the Prometheus time series collection and post-processing environment.
randat - Devel tool for generating random bytestrings and encoding files in code-friendly forms
recycler - A more flexible object recycling system than sync.Pool. Provides constructors and destructors for the objects as well as control over the length the free.
replaykit - A library for replaying time series data.
HTTPLab - HTTPLabs let you inspect HTTP requests and forge responses.
httpmock - Easy mocking of HTTP responses from external resources
stress - Replacement of ApacheBench(ab), support for transactional requests, support for command line and package references to HTTP stress testing tool.
sling - A Go HTTP client library for creating and sending API requests.
httptail - tools push stdout/stderr to http chunked
IMAP
go-imap - An IMAP library for clients and servers.
telnet - Package telnet provides TELNET and TELNETS client and server implementations, for the Go programming language, in a style similar to the "net/http" library (that is part of the Go standard library) including support for "middleware"; TELNETS is secure TELNET, with the TELNET protocol over a secured TLS (or SSL) connection.
telnet - A simple interface for interacting with Telnet connection
telnets - A client for the TELNETS (secure TELNET) protocol.
VNC
glibvnc - Go wrapper using CGO for the libvnc library.
Websockets
lib/websocket - A library for writing websocket client and server (using epoll)
Minio - Object Storage compatible with Amazon S3 API
libStorage - an open source, platform agnostic, storage provisioning and orchestration framework, model, and API
OpenEBS - Containerized, open source block storage for your containers, integrated tightly into K8s and other environments and based on distributed block storage and containerization of storage control
storage - An application-oriented unified storage layer for Golang
Strings and Text
allot - Placeholder and wildcard text parsing for CLI tools and bots
make.go.mock - Generates type-safe mocks for Go interfaces and functions.
code.soquee.net/testlog — A log.Logger that proxies to the Log function on a testing.T so that logging only shows up on tests that failed, grouped under the test.
webtf - Web app to graphical visualization of twitter timelines using the HTML5
Wikifeat - Extensible wiki system using CouchDB written in Golang
Freyr - Server for storing and serving readings from plant environment sensors. Integrates Golang API with ReactJS web app; uses Docker for testing/deployment.
Rickover - Job queue with a HTTP API, backed by Postgres
Web Libraries
Authentication
goth - Package goth provides a simple, clean, and idiomatic way to write authentication packages for Go web applications
authcookie - Package authcookie implements creation and verification of signed authentication cookies.
dgoogauth - Go port of Google's Authenticator library for one-time passwords
goauth - A library for header-based OAuth over HTTP or HTTPS.
goha - Basic and Digest HTTP Authentication for Go http client
hero - OAuth server implementation - be an OAuth provider with Go
httpauth-go - Package httpauth provides utilities to support HTTP authentication policies. Support for both the basic authentication scheme and the digest authentication scheme are provided.
httpauth - HTTP session (cookie) based authentication and authorization
totp - Time-Based One-Time Password Algorithm, specified in RFC 6238, works with Google Authenticator
go-otp - Package go-otp implements one-time-password generators used in 2-factor authentication systems like RSA-tokens. Currently this supports both HOTP (RFC-4226), TOTP (RFC-6238) and Base32 encoding (RFC-3548) for Google Authenticator compatibility
code.soquee.net/otp — A library for generating one-time passwords using HOTP (RFC-4226), and TOTP (RFC-6238). Includes less commonly used profiles, and custom time functions for flexible windows.
GoSrv - A Go HTTP server that provides simple command line functionality, config loading, request logging, graceful connection shutdown, and daemonization.
go-cron - A small cron job system to handle scheduled tasks, such as optimizing databases or kicking idle users from chat. The cron.go project was renamed to this for go get compatibility.
It’s amazing how much of the computing world runs on open source software nowadays. It’s very likely that the web browser and device you’re using to read this post are built on top of open source software (where anyone can look at or propose changes to the code), as is much of the infrastructure for the site you’re reading it on.
That’s why I was somewhat disappointed when I sat down to watch TV with my Xfinity (nice name change btw Comcast) set top box the other day and clicked:
show open source license information
that it seemed to be coded incorrectly and said:
no license text available
This is bad juju, considering that common licenses such as the GNU General Public License include a notice requirement if you use them to build your tech:
An interactive user interface displays “Appropriate Legal Notices”…
On an opposite note, I was very pleased to see this in the SquareCash App:
Retraction: to be fair to Comcast, they give back to open source through grants, a great practice which should be encouraged: innovationfund.comcast.com
For those not savvy, Cambridge Analytica (since renamed Emerdata Ltd) used ill-gotten data troves, behavioral profiling, ad micro-targeting via Facebook, and machine learning to win elections and influence opinions and politics towards a populist, hyper-conservative slant worldwide. Medium had a fantastic article about how these systems work:
While using big data in election campaigning is not new, Cambridge Analytica’s vast data troves, and its ability to microtarget on an individual basis, get feedback, and adjust its approach rapidly, were likely more effective than previous data-driven campaigning. That’s not to say that Cambridge Analytica was holding back on this front, however: (if I recall correctly) their data insights were used to great effect in determining the location, and sometimes even the topics, of Trump’s campaign rallies leading up to the 2016 election.
Cambridge Analytica laid out the blueprint for how to use these kinds of systems; it will be interesting to see what succeeds it in the coming years.
This is why internet privacy should be considered paramount. Taking a few simple steps to ensure your privacy online can go a long way towards thwarting these kinds of systems. Encourage others to do the same.
What is the lesson? The lesson here is that when you design software, you create the future.
If you’re designing software for a social network, the decision to limit message lengths, or the decision to use ML to maximize engagement, will have vast social impact which is often very hard to predict.
As software developers and designers, we have a responsibility to the world to think these things through carefully and design software that makes the world better, or, at least, no worse than it started out.
And when our inventions spin out of control, we have a responsibility to understand why and to try to fix them.
While my knowledge is at best surface level for many of these topics, the CNCF’s CTO Chris Aniszczyk has some interesting writings on what goes on in the development, governance, and organization of such substantial open source communities:
The introduction of the software container standard, emerging reduced cost “hybrid cloud” deployments, and competition between differing open source, proprietary, and alternative tool-set makers have created a diverse and thriving ecosystem for cloud-native computing.
Disclaimer:
All contents herein are the opinions of the author alone and carry no express guarantee of veracity nor prudence.
In other words, this is a new scheme of distributing and deploying software packaged in “neatly packed, standard containers” that can be run (almost) anywhere.
This saves the need for much of the tiresome, complicated, and (process-wise) unreliable system configuration that used to be needed alongside the deployment of software.
Containerized infrastructure managed with Kubernetes can also be scaled up or down to meet demand, making it an attractive option for many developers and businesses.
I also gave an example of how the business history of containers in global shipping has close parallels to that of containerization in software:
Pioneering businessman Malcolm McLean had the foresight to donate the design for his uniform intermodal shipping containers to the International Organization for Standardization (ISO) as he sought to make his container shipping fleet of re-purposed WWII ships able to dock and unload at any port worldwide.
Malcolm McLean, the American businessman who popularized modern intermodal shipping containers globally
In 1956, most cargoes were loaded and unloaded by hand by longshoremen. Hand-loading a ship cost $5.86 a ton at that time. Using containers, it cost only 16 cents a ton to load a ship, a 36-fold savings.
Just as McLean’s containers removed the need for labor-intensive “loading and unloading” of varied cargo across multiple modes of shipping by longshoremen, Docker containers are attractive because they promise that software will run in the same way on a laptop development environment as in a (potentially massively scaled) production deployment.
The software container only has to be built or “packed” one time. Then, it can be run in the same way anywhere.
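As a sketch of that “pack once, run anywhere” idea (the paths, image tags, and names below are hypothetical, not from any particular project), a minimal multi-stage Dockerfile compiles a service once and produces an image that runs identically on a laptop or in production:

```dockerfile
# Build stage: compile the (hypothetical) service once.
FROM golang:1.13 AS build
WORKDIR /src
COPY . .
RUN go build -o /service ./cmd/server

# Runtime stage: a small image that runs the same way anywhere
# a container runtime is available.
FROM debian:buster-slim
COPY --from=build /service /service
ENTRYPOINT ["/service"]
```

The multi-stage split is a common pattern: the build toolchain stays out of the shipped image, keeping the “container” that gets moved around small.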
Docker’s donation of Containerd:
Similar to McLean’s donation of his container design to the ISO, the Docker team donated their container runtime, containerd, as a standard to an emerging organization known as the Cloud Native Computing Foundation (CNCF), born out of the Linux Foundation.
The Docker team’s hope was likely that, like McLean, they could popularize the container standard gratis in order to reap the benefits on their true product: infrastructure for containers. In Docker’s case, this is the set of tools known as Docker Swarm, available in the paid version of Docker Enterprise.
Adoption of Google’s Kubernetes over Docker Swarm for container orchestration:
Google’s Kubernetes had already gained the popularity, maturity, and battle-tested robustness that made it an attractive alternative to Docker Swarm for orchestrating container deployments.
And now, a thriving, diverse, mature, open source container ecosystem:
In economic terms, open source back-end infrastructure can be considered a complement to proprietary front-end software products.
Smart companies proliferate complements to their products in order to increase demand.
Therefore, it’s easy to see why so many companies have a vested interest and have banded together to ensure the success of open source software and organizations like the Linux Foundation and the CNCF.
While the tools listed in the CNCF’s projects are all but guaranteed to be powerful, well supported, and widely adopted, the continued development of the container and cloud landscape has sparked a flurry of new entrants and supporting tools seeking to support and extend the functionality of containerized software and cloud native computing patterns.
This has resulted in an emergent ecosystem of healthy competition between companies, popular tools, and design patterns that strengthens the effectiveness of the landscape as a whole.
Foreword: the importance of a ‘common core’ open source infrastructure and tooling:
Think of the history of data access strategies to come out of Microsoft.
ODBC, RDO, DAO, ADO, OLEDB, now ADO.NET – All New! Are these technological imperatives? The result of an incompetent design group that needs to reinvent data access every goddamn year? (That’s probably it, actually.) But the end result is just cover fire.
The competition has no choice but to spend all their time porting and keeping up, time that they can’t spend writing new features.
Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and re-implementing and fixing bugs that crop up only on Windows XP. The companies who stumble are the ones who spend too much time reading tea leaves to figure out the future direction of Microsoft.
Given the possibility of a similar situation arising with dominant cloud providers, as well as the business competition considerations, the utility of an open source “common core” for back-end technology infrastructure is a useful consideration.
Intro: an analogy
For the sake of an analogy, let’s say you own a car manufacturer. You manufacture cars, SUVs, and light trucks at your factory.
For distribution, you’d like to ship your cars worldwide and on various modes of shipping. You want to be sure the cars don’t get dinged up during shipping and the various shipping infrastructure can handle moving them around. You decide on using the standard inter-modal shipping container and its existing shipping ecosystem.
There are three major providers that promise to ship as many cars as you like, but at a hefty price.
Aardvark shipping is the dominant shipper in the industry. They have great services, additional options, and make it easy to get going. But, they’re expensive, and they charge an extra fee if you want to transfer over to a different provider or do your own shipping part-way.
Azul shipping is a decent size, and promises good customer service. They also work with you pretty well if you’d like to do some or most of the shipping yourself.
Goodboy shipping pioneered some cool things with large-scale shipping, and have some other services that are really high-IQ stuff. They’re still a bit expensive though.
Your secret, however? You have a fondness for Cuban cigars, caviar, and fine dining. You decide that you’d like to cut out the middleman and ship the containers yourself if you can.
You find some like-minded car manufacturers and go over what you’ll need to get started:
Somewhere to put the design of your car containers once they’re ready to be shipped
Some kind of infrastructure for orchestrating the shipment of containers to customers
Some kind of system to monitor the containers
Some kind of system to have your cars be discoverable, and for them to communicate outside of their container during shipping (this part of the analogy is tortured).
Something that can coordinate standard types of shipments that you or other car manufacturers are likely to perform
Others!
You also figure you could save money and gain further flexibility with your shipping by taking a ‘hybrid approach’. You can do most of the shipping yourself, but if you have a surge of demand you can scale out to the bigger providers.
Resources:
By no means are the documents or items detailed here comprehensive, nor are they necessarily the best technical solutions from a design standpoint. As container technologies emerge, there may very well be a lead-in period where multiple competing platforms and tool-sets persist. Technical decisions of this nature should be undertaken carefully.
Regardless, it is likely that there is room for multiple standards in each tool-space to suit the needs of particular use cases.
1. The open source container repository:
What good is a fast car if you’ve got nowhere to go with it?
Or (better), what good are your home-grown software containers if you’ve got no way to distribute them across your systems?
Harbor is a container image and Helm chart repository currently incubating in the Cloud Native Computing Foundation (CNCF). It can be installed with some of the same container tools it seeks to support, and has multiple nifty features. From a passing glance, it seems to be the more popular of the two options presented here currently.
Red Hat Quay is a container image repository that is part of Red Hat’s all-encompassing hybrid cloud/Kubernetes OpenShift platform. While probably not as popular as Harbor, it may have more support and easier integration with other components of OpenShift due to (the now substantially sized) Red Hat’s backing.
Google developed Kubernetes for managing large numbers of containers. Instead of assigning each container to a host machine, Kubernetes groups containers into pods. For instance, a multi-tier application, with a database in one container and the application logic in another container, can be grouped into a single pod. The administrator only needs to move a single pod from one compute resource to another, rather than worrying about dozens of individual containers.
Google itself has made the process even easier on its own Google Cloud Service. It offers a production-ready version of Kubernetes called the Google Container Engine.
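The pod grouping described above can be sketched as a Kubernetes manifest. The names and image tags here are hypothetical, purely for illustration (in practice a database usually gets its own pod so it can be scheduled and scaled independently):

```yaml
# Hypothetical two-container pod: app logic and database are scheduled
# together and moved between compute resources as a single unit.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: app
      image: example/app:1.0   # hypothetical image
      ports:
        - containerPort: 8080
    - name: db
      image: postgres:11
```

Containers in the same pod share a network namespace, so the app can reach the database over localhost; the administrator manages the pod, not the individual containers.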
Kubernetes is the 800-pound gorilla of container orchestration. It powers some of the biggest deployments worldwide, but it comes with a price tag.
Especially for smaller teams, it can be time-consuming to maintain and has a steep learning curve. For what our team of four wanted to achieve at trivago, it added too much overhead. So we looked into alternatives — and fell in love with Nomad.
…
On top of that, the Kubernetes ecosystem is still rapidly evolving. It takes a fair amount of time and energy to stay up-to-date with the best practices and latest tooling. Kubectl, minikube, kubeadm, helm, tiller, kops, oc – the list goes on and on. Not all tools are necessary to get started with Kubernetes, but it’s hard to know which ones are, so you have to be at least aware of them. Because of that, the learning curve is quite steep.
…
Batteries not included
Nomad is the 20% of service orchestration that gets you 80% of the way. All it does is manage deployments. It takes care of your rollouts and restarts your containers in case of errors, and that’s about it.
The entire point of Nomad is that it does less: it doesn’t include fine-grained rights management or advanced network policies, and that’s by design. Those components are provided as enterprise services, by a third-party, or not at all.
I think Nomad hit a sweet-spot between ease of use and expressiveness. It’s good for small, mostly independent services. If you need more control, you’ll have to build it yourself or use a different approach. Nomad is just an orchestrator.
The best part about Nomad is that it’s easy to replace. There is little to no vendor lock-in because the functionality it provides can easily be integrated into any other system that manages services. It just runs as a plain old single binary on every machine in your cluster; that’s it!
Apache Mesos is a cluster manager that can help the administrator schedule workloads on a cluster of servers. Mesos excels at handling very large workloads, such as an implementation of the Spark or Hadoop data processing platforms.
Mesos had its own container image format and runtime built similarly to Docker. The project started by building the orchestration first, with the container being the side effect of needing something to actually package and contain an application. Applications were packaged in this format to be able to be run by Mesos…
Prominent users of Mesos include Twitter, Airbnb, Netflix, Paypal, SquareSpace, Uber, and more.
Docker Swarm is a clustering and scheduling tool that automatically optimizes a distributed application’s infrastructure based on the application’s lifecycle stage, container usage and performance needs.
Swarm has multiple models for determining scheduling, including understanding how specific containers will have specific resource requirements. Working with a scheduling algorithm, Swarm determines which engine and host it should be running on. The core aspect of Swarm is that as you go to multi-host, distributed applications, the developer wants to maintain the experience and portability. For example, it needs the ability to use a specific cluster solution for an application you are working with. This would ensure cluster capabilities are portable all the way from the laptop to the production environment.
Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Prometheus does one thing and it does it well. It has a simple yet powerful data model and a query language that lets you analyse how your applications and infrastructure are performing. It does not try to solve problems outside of the metrics space, leaving those to other more appropriate tools. Prometheus is primarily written in Go and licensed under the Apache 2.0 license.
Grafana provides a powerful and elegant way to create, explore, and share dashboards and data with your team and the world.
Grafana is most commonly used for visualizing time series data for Internet infrastructure and application analytics but many use it in other domains including industrial sensors, home automation, weather, and process control.
Grafana works with Graphite, Elasticsearch, Cloudwatch, Prometheus, InfluxDB & More.
Grafana features pluggable panels and data sources allowing easy extensibility and a variety of panels, including fully featured graph panels with rich visualization options. There is built in support for many of the most popular time series data sources.
Istio is an open source service mesh initially developed by Google, IBM and Lyft. The project was announced in May 2017, with its 1.0 version released in July 2018. Istio is built on top of the Envoy proxy which acts as its data plane. Although it is quite clearly the most popular service mesh available today, it is for all practical purposes only usable with Kubernetes.
United States Department of Defense is betting on Kubernetes and Istio:
As hybrid cloud strategies go, the U.S. military certainly is taking a unique approach.
Just like almost everything else, military organizations increasingly depend on software, and they are turning to an array of open source cloud tools like Kubernetes and Istio to get the job done, according to a presentation delivered by Nicholas Chaillan, chief software officer for the U.S. Air Force, at KubeCon 2019 in San Diego. Those tools have to be deployed in some very interesting places, from weapons systems to even fighter planes. Yes, F-16s are running Kubernetes on the legacy hardware built into those jets.
…
Chaillan and his team decided to embrace open source software as the foundation of the new development platform, which they called the DoD Enterprise DevSecOps Initiative. This initiative specified a combination of Kubernetes, Istio, knative and an internally developed specification for “hardening” containers with a strict set of security requirements as the default software development platform across the military.
…
The scale at which the DoD operates is unlike almost all commercial operations; Chaillan had to train 100,000 people on the principles of DevSecOps, not to mention the new tools.
Linkerd (rhymes with “chickadee”) is the original service mesh, created by Buoyant, which coined the term in 2016. It is the official service mesh project supported by the Cloud Native Computing Foundation. Like Twitter’s Finagle, on which it was based, Linkerd was originally written in Scala and designed to be deployed on a per-host basis.
Criticisms of its comparatively large memory footprint subsequently led to the development of Conduit, a lightweight service mesh specifically for Kubernetes, written in Rust and Go.
The Conduit project has since been folded into Linkerd, which relaunched as Linkerd 2.0 in July of 2018.
While Linkerd 2.x is currently specific to Kubernetes, Linkerd 1.x can be deployed on a per-node basis, thus making it a more flexible choice where a variety of environments need to be supported.
Many companies say they are going open-office to boost collaboration — but it’s actually for budgetary reasons. If that’s your company, be honest about it.
VANEK SMITH: Auslander says the constant movement and the bright plastic floor made it impossible to work.
AUSLANDER: Like, if you could climb inside a migraine headache, that’s what that felt like.
VANEK SMITH: Did you have, like, coping mechanisms?
AUSLANDER: Yeah, it was called my house.
VANEK SMITH: The Italian designer Gaetano Pesce said, yeah, he heard this, that people had trouble working in this new office.
…
VANEK SMITH: Pesce’s virtual office had a short life. A few years after it was completed, Chiat-Day moved to a more traditional space. Warren Berger, the design critic, visited just before the move.
BERGER: They’d already, you know, started the process of shutting it down and moving out. So it was a failed experiment clearly by that point.
VANEK SMITH: Still, says Berger, the office had a big impact. Companies like Google and Apple – and now basically everyone – adopted the ideas of couches and cafe spaces and mobile desks. And Pesce says he still sees pieces of his office when he travels. He saw one of the rolling desks in Milan, a plastic chair in Paris, a piece of the wall in Aspen.
After software, the most important tool to a hacker is probably his office. Big companies think the function of office space is to express rank. But hackers use their offices for more than that: they use their office as a place to think in. And if you’re a technology company, their thoughts are your product. So making hackers work in a noisy, distracting environment is like having a paint factory where the air is full of soot.
Companies like Cisco are proud that everyone there has a cubicle, even the CEO. But they’re not so advanced as they think; obviously they still view office space as a badge of rank. Note too that Cisco is famous for doing very little product development in house. They get new technology by buying the startups that created it– where presumably the hackers did have somewhere quiet to work.
This could explain the disconnect over cubicles. Maybe the people in charge of facilities, not having any concentration to shatter, have no idea that working in a cubicle feels to a hacker like having one’s brain in a blender. (Whereas Bill, if the rumors of autism are true, knows all too well.)
One big company that understands what hackers need is Microsoft. I once saw a recruiting ad for Microsoft with a big picture of a door. Work for us, the premise was, and we’ll give you a place to work where you can actually get work done.
And you know, Microsoft is remarkable among big companies in that they are able to develop software in house. Not well, perhaps, but well enough.
“Yeah, we all work in cubicles, but everyone works in a cubicle—up to and including the CEO!”
“The CEO? Does the CEO really work in a cubicle?”
“Well, he has a cubicle, but actually now that you mention it there’s this one conference room that he goes to for all his important meetings…”
Mmmm hmmm. A fairly common Silicon Valley phenomenon is the CEO who makes a big show of working from a cubicle just like the hoi polloi, although somehow there’s this one conference room that he tends to make his own (“Only when there’s something private to be discussed,” he’ll claim, but half the time when you walk by that conference room there’s your CEO, all by himself, talking on the phone to his golf buddy, with his Cole Haans up on the conference table).