(Joel Spolsky) Developers are Writing the Script for the Future

we are just little vegetables swimming in software soup

Software is popular because it has an agenda … it has something it’s trying to do.

If you write the software, you set the agenda.

Therefore, software developers are writing the script for the future.


Marc Andreessen: Why Software is Eating the World (Wall Street Journal)






Krupczak logo

Home

(CNBC) The Rise of Open Source Software


An excellent video released yesterday by CNBC on the history of open source, its current state and usage, and its contributors and funding models.








Toe in the Water: Examining Deep Learning and Data Science

Synopsis:

Powerful deep learning is poised to become much more accessible given the increased availability of cheap and powerful hardware designed for fast, highly parallel computations on neural nets.

Tesla NPU
NPUs inside Tesla’s Full Self-Driving chip

“Toe in the water”:

I’ve been dipping my toe in the water of machine learning, and I’ve created an account on the competitive data science site Kaggle (+) to start exploring and seeing what’s around.

The available data sets and competitions are certainly interesting in both technical implementations and scope.

Something I’ve also found immediately interesting about the current state of Kaggle and data science is the deep learning power now available using consumer video game graphics cards, or GPUs.

Anyone paying attention to cryptocurrencies over the past few years probably noticed something interesting: GPUs are much better at cryptocurrency “mining” (solving a large series of math problems to claim ownership of an unclaimed block of currency) than CPUs.

This is effectively because these “graphics” chips are designed to do many small math problems in parallel across many small discrete processors, whereas a “big and fast” CPU is generally limited to following one track of execution at a time.
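To get a feel for the difference (a CPU-side NumPy sketch, not actual GPU code), compare issuing a million small math problems one at a time against issuing them as a single data-parallel operation:

```python
import numpy as np
import time

x = np.arange(1_000_000, dtype=np.float64)

# One track of execution: a million small problems, one at a time.
t0 = time.perf_counter()
serial = [v * 2.0 + 1.0 for v in x]
t_serial = time.perf_counter() - t0

# The same million problems issued as one data-parallel operation.
t0 = time.perf_counter()
parallel = x * 2.0 + 1.0
t_parallel = time.perf_counter() - t0

print(f"one-at-a-time: {t_serial:.3f}s, data-parallel: {t_parallel:.3f}s")
```

Even on a CPU the batched form wins by a wide margin; a GPU takes the same idea much further by spreading the work across thousands of small cores.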

“floats” and GPU architecture:

In addition, GPUs have generally been designed with special hardware that enables faster calculations on decimal, or “floating point”, numbers, which are important for computer graphics.

Neural Nets:

By fortunate coincidence, doing many operations quickly and in parallel on floating point numbers also turns out to be great for working with neural nets (+).

A neural net illustration as seen in Google’s TensorFlow Playground

Generally, the idea behind the neural nets used for machine learning is that numerous virtual “neurons” are exposed to stimulus from a large labeled training set of data. As the neurons are exposed, a large amount of math is performed to “back-propagate” information gleaned from the observed stimuli through the various layers of the neural net as it learns. Through these calculations, the neural net is “trained” to recognize properties of data of a certain pattern, and it can begin to make powerful inferences when exposed to new stimuli.
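As a minimal from-scratch sketch of that training loop (a toy NumPy example using XOR as a stand-in labeled data set; real projects would use a framework like PyTorch or TensorFlow):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled training set: XOR (a stand-in for real data).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A 2 -> 8 -> 1 network of sigmoid "neurons".
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: expose the neurons to the stimuli.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: back-propagate the observed error through the layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent step: nudge every weight to reduce the error.
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # trained predictions approach [0, 1, 1, 0]
```

Every matrix multiply in that loop is exactly the kind of parallel floating point work GPUs excel at, which is why training speeds up so dramatically on them.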

Neural nets have proven extremely powerful in a wide variety of uses, and have already enabled advances in AI such as allowing Google’s DeepMind to beat the former world champion at Go, an achievement some thought wouldn’t happen for at least another ten years:

Hardware for Machine Learning:

A Kaggle user participating in data science competitions seems to have found a sweet spot for training neural nets on consumer hardware: the RTX 2070 with PyTorch. He describes the RTX 2070 as a great GPU for deep learning because of its high memory bandwidth, its tensor cores that accelerate the matrix math essential to machine learning, and its ability to run extremely fast, parallel computations on reduced-precision 16-bit floats instead of the normal 32 bits. Together, these properties let the card perform a large number of calculations on “floating point” neural net values extremely rapidly.
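The memory half of that trade-off is easy to illustrate even without a GPU (a NumPy sketch, not tensor-core code): a 16-bit float moves half the bytes of a 32-bit float, at the cost of losing precision below roughly three decimal digits:

```python
import numpy as np

a32 = np.random.default_rng(1).random((1024, 1024), dtype=np.float32)
a16 = a32.astype(np.float16)   # the same values at reduced precision

# Half the bytes per value, so half the memory traffic:
print(a16.nbytes / a32.nbytes)             # 0.5

# The price: near 1.0, representable float16 values are 2**-10 apart,
# so tiny increments are simply lost.
print(np.float16(1.0) + np.float16(1e-4))  # 1.0
```

For neural net weights and activations that imprecision is usually tolerable, which is why half-precision training can nearly double effective throughput on cards with fast fp16 hardware.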

A comparison of a newer RTX 2070 with my current graphics card, a GTX 980

The increased memory bandwidth, floating point performance (single precision performance), and ability to process a larger volume of computations using lower-precision 16-bit floats make it a powerful card for deep learning.

Examining the lineup of emerging dedicated deep learning hardware, such as Tesla’s neural processor, Intel’s Nervana, and Amazon’s Inferentia chip, it seems that going forward, extremely good memory and cache performance and fast, simple operations on floating point numbers will allow very powerful AI models to be trained and used.

Software, hardware, and “rapid prototyping” data models:

The aforementioned Kaggle user also seems to have found that the library and software support for NVIDIA GPUs is currently the best option around for machine learning.

Having a fast GPU allows for quicker “rapid prototyping” when training and iterating on data models.

In addition, he seems to prefer Facebook’s PyTorch over Google’s TensorFlow, which will probably shape my projects and how I self-educate in data science and machine learning going forward.

More resources:

The Hundred-Page Machine Learning Book

Hands-On Machine Learning with Scikit-Learn and TensorFlow

https://github.com/chenyuntc/pytorch-book








Book Club 1.5

Preface:

Previously, I gave a list of “dead tree” books that I’ve been taking a look at recently.

If websites or information snacking are more your thing, the following are a couple of neat little compendiums and tidbits:


The best in software writing I:

neilk.net/blog/2005/06/20/links-to-essays-in-best-software-writing-i/


The best in software writing II:

discuss.joelonsoftware.com:80/?bsw


Top ten (Joel on Software):

joelonsoftware.com/category/reading-lists/top-10


Usability testing, integrating user feedback into software design process:

gamespot.com/articles/the-science-of-playtesting

gdcvault.com/play/1566/Valve-s-Approach-to-Playtesting

The Joel Test: 12 steps to better code

Software testing:

Pushing decisions to the team (lowest possible level):


The “upside down” management pyramid
Devs, workers, etc. are the “leaves” at the top of the tree with the most pertinent information to tasks at hand
Managers act as facilitators who identify roadblocks, enhance collaboration, set goals, etc.
“move the furniture around”

The saddest thing about the Steve Jobs hagiography is all the young ‘incubator twerps’ strutting around Mountain View deliberately cultivating their worst personality traits because they imagine that’s what made Steve Jobs a design genius. Cum hoc ergo propter hoc, young twerp. Maybe try wearing a black turtleneck too.

For every Steve Jobs, there are a thousand leaders who learned to hire smart people and let them build great things in a nurturing environment of empowerment and it was AWESOME. That doesn’t mean lowering your standards. It doesn’t mean letting people do bad work. It means hiring smart people who get things done (+) — and then getting the hell out of the way.


Gaffney knew his top leaders were historically used to a top-down culture. So to facilitate the transition, he has spent a lot of time coaching the company’s management to help them shift toward a bottom-up leadership structure.

“I spend a lot of time with the folks at the top couple layers of the official org chart, just talking to them about what it means to be in service to the folks who are in service to our customers — so in service to them rather than in charge of them,” Gaffney says. 

Providing a model for what you want leadership to look like is important in helping people evolve their approaches and buy into the changes.  One way Gaffney offered this was by implementing a training program to help people examine different approaches to leading. He also decided to run the program personally.  “It’s a leadership development program that is based on reading about leaders in other situations and engaging in a group dialogue of how did those leaders approach the situation, and how did they model the kind of leadership that we’re looking for,” Gaffney says.

“One thing that we’ve done very proactively is to make sure that our club member is always front and center, even if the thing that we’re working on might seem so ‘back-officey’ that you don’t know how it could be connected to the member,” Gaffney says. “So we tell a lot of member stories. That’s a very important part of our culture, is to remind everyone why we’re here.” 

Sharing stories about your company helps employees to connect to your customers and your business in a more participative way, because it facilitates a more personal response.  “It just seems to work well though because it is a tool that lowers the barriers to having dialogue versus monologue, because people can tell you what parts of a story resonate with them, what parts they have questions about and what parts trouble them,” Gaffney says. “Storytelling just seems to be a medium that unlike PowerPoint, really draws people in.” 

Another way to motivate employee participation is to ask more questions. This helps draw out people who may be more reserved in bringing their ideas to the table.

“When you ask folks, they usually have things they want to tell you, but when you don’t ask they generally don’t want to bring them up,” Gaffney says. “It’s the rare individual that will proactively bring up something that they know could be improved. But when you ask them, most people respond to that invitation.” 

Gaffney now asks everyone in a leadership role at the company to double their question-to-statement ratio.


(Gen. James Mattis) afcea.org/content/military-needs-new-operational-paradigm

Fifth, training and education play a prominent—if not the predominant—role in command and control and the exercise of commander’s intent, no matter how rudimentary or sophisticated the C2 system. Training and education enable decentralized decision making down to the lowest possible level, and thus allow a reduction in the size of headquarters by orders of magnitude. This will speed decision making, reduce internal frictions and unleash subordinates.

Constant feedback loops are the key. Thus, a change in terminology is needed, from “command and control” to “command and feedback.” This is more than a rhetorical change; it is a change in how to think about and conduct operations.

Focus on customers, not competitors:

mycustomer.com/community/blogs/colin-shaw/bezos-obsess-over-customers-not-competitors

If you haven’t read Jeff Bezos’ latest letter to his shareholders, put it on your to-do list pronto. There are many takeaways from his letter, but one that stands out to me most is this: “Obsess over customers, not competitors.

True Customer Obsession

There are many ways to center a business. You can be competitor focused, you can be product focused, you can be technology focused, you can be business model focused, and there are more.

But in my view, obsessive customer focus is by far the most protective of Day 1 vitality.

Why? There are many advantages to a customer-centric approach, but here’s the big one: customers are  always beautifully, wonderfully dissatisfied, even when they report being happy and business is great.

Resist Proxies:

As companies get larger and more complex, there’s a tendency to manage to proxies. This comes in many shapes and sizes, and it’s dangerous, subtle, and very Day 2.

A common example is process as proxy. Good process serves you so you can serve customers. But if you’re not watchful, the process can become the thing. This can happen very easily in large organizations. The process becomes the proxy for the result you want. You stop looking at outcomes and just make sure you’re doing the process right. Gulp.

Another example: market research and customer surveys can become proxies for customers – something that’s especially dangerous when you’re inventing and designing products.

Good inventors and designers deeply understand their customer. They spend tremendous energy developing that intuition. They study and understand many anecdotes rather than only the averages you’ll find on surveys. They live with the design.


Embrace External Trends

The outside world can push you into Day 2 if you won’t or can’t embrace powerful trends quickly. If you fight them, you’re probably fighting the future. Embrace them and you have a tailwind. These big trends are not that hard to spot (they get talked and written about a lot), but they can be strangely hard for large organizations to embrace.

We’re in the middle of an obvious one right now: machine learning and artificial intelligence.

Over the past decades computers have broadly automated tasks that programmers could describe with clear rules and algorithms. Modern machine learning techniques now allow us to do the same for tasks where describing the precise rules is much harder.

Matt’s opinion: Kubernetes / software containerization appears to me to be one of these trends as well, having clearly navigated the transition from a ‘flavor of the month’ software toolkit into a mature software ecosystem and powerful community.

This ecosystem has democratized the software industry and enterprise IT by making simple, reliable, and heavily scalable infrastructure available to organizations of all sizes. It has become, and likely will continue to be, a staple in computing.

High Velocity Decisions

…To keep the energy and dynamism of Day 1, you have to somehow make high-quality, high-velocity decisions.

Second, most decisions should probably be made with somewhere around 70% of the information you wish you had. If you wait for 90%, in most cases, you’re probably being slow. Plus, either way, you need to be good at quickly recognizing and correcting bad decisions. If you’re good at course correcting, being wrong may be less costly than you think, whereas being slow is going to be expensive for sure.

Fourth, recognize true misalignment issues early and escalate them immediately. Sometimes teams have different objectives and fundamentally different views. They are not aligned. No amount of discussion, no number of meetings will resolve that deep misalignment. Without escalation, the default dispute resolution mechanism for this scenario is exhaustion. Whoever has more stamina carries the decision.

“You’ve worn me down” is an awful decision-making process. It’s slow and de-energizing. Go for quick escalation instead – it’s better.


Three management styles (Joel on Software)

The command and control management style

In life or death situations, the military needs to make sure that they can shout orders and soldiers will obey them even if the orders are suicidal. That means soldiers need to be programmed to be obedient in a way which is not really all that important for, say, a software company. 

In other words, the military uses Command and Control because it’s the only way to get 18 year olds to charge through a minefield, not because they think it’s the best management method for every situation.

The Econ 101 management style

Another big problem with Econ 101 management is the tendency for people to find local maxima. They’ll find some way to optimize for the specific thing you’re paying them, without actually achieving the thing you really want.


The identity management style

Fog Creek Lunches
“at Fog Creek we serve catered lunches for the whole team every day and eat together at one big table…”

That leaves a technique that I’m going to have to call The Identity Method. The goal here is to manage by making people identify with the goals you’re trying to achieve. That’s a lot trickier than the other methods, and it requires some serious interpersonal skills to pull off. But if you do it right, it works better than any other method.

(Shrinkwrap/SaaS) Set your priorities:

joelonsoftware.com/2005/10/12/set-your-priorities/

Shrinkwrap is the take-it-or-leave it model of software development. You develop software, wrap it in plastic, and customers either buy it, or they don’t. They don’t offer to buy it if you implement just one more feature. They don’t call you up and negotiate features. You can’t call up Microsoft and tell them, “Hey, I love that BAHTTEXT function you have in Excel for spelling out numbers in Thai, but I could really use an equivalent function for English. I’ll buy Excel if you implement that function.” Because if you did call up Microsoft here is what they would say to you: 

“Thank you for calling Microsoft. If you are calling with a designated 4-digit advertisement code, press 1. For technical support on all Microsoft products, press 2. For Microsoft presales product licensing or program information, press 3. If you know the person at Microsoft you wish to speak to, press 4. To repeat, press Star.”

The five competitive forces that shape strategy:

ibbusinessandmanagement.com/uploads/1/1/7/5/11758934/porters_five_forces_analysis_and_strategy.pdf

hbr.org/1979/03/how-competitive-forces-shape-strategy

Profitability of Selected U.S. Industries, average ROIC 1992–2006

But to understand industry competition and profitability in each of those three cases, one must analyze the industry’s underlying structure in terms of the five forces.

If the forces are intense, as they are in such industries as airlines, textiles, and hotels, almost no company earns attractive returns on investment.

If the forces are benign, as they are in industries such as software, soft drinks, and toiletries, many companies are profitable. Industry structure drives competition and profitability, not whether an industry produces a product or service, is emerging or mature, high-tech or low-tech, regulated or unregulated.


…even though rivalry is often fierce in commodity industries, it may not be the factor limiting profitability. Low returns in the photographic film industry, for instance, are the result of a superior substitute product—as Kodak and Fuji, the world’s leading producers of photographic film, learned with the advent of digital photography.

In such a situation, coping with the substitute product becomes the priority.

By understanding how the five competitive forces influence profitability in your industry, you can develop a strategy for enhancing your company’s long-term profits. Porter suggests the following:

Position your company where forces are the weakest (+)

Exploit changes in forces (+) (+)

Re-shape the forces in your favor (+)

Strategy Letter I: Ben and Jerry’s vs. Amazon

Building a company? You’ve got one very important decision to make, because it affects everything else you do. No matter what else you do, you absolutely must figure out which camp you’re in, and gear everything you do accordingly, or you’re going to have a disaster on your hands.

The decision? Whether to grow slowly, organically, and profitably, or whether to have a big bang with very fast growth and lots of capital.

The organic model is to start small, with limited goals, and slowly build a business over a long period of time. I’m going to call this the Ben and Jerry’s model, because Ben and Jerry’s fits this model pretty well.

The other model, popularly called “Get Big Fast” (a.k.a. “Land Grab”), requires you to raise a lot of capital, and work as quickly as possible to get big fast without concern for profitability. I’m going to call this the Amazon model, because Jeff Bezos, the founder of Amazon, has practically become the celebrity spokesmodel for Get Big Fast.

Ben and Jerry’s companies start on somebody’s credit card. In their early months and years, they have to use a business model that becomes profitable extremely quickly, which may not be the ultimate business model that they want to achieve. For example, you may want to become a giant ice cream company with $200,000,000 in annual sales, but for now, you’re going to have to settle for opening a little ice cream shop in Vermont, hope that it’s profitable, and, if it is, reinvest the profits to expand business steadily. The Ben and Jerry’s corporate history says they started with a $12,000 investment. ArsDigita says that they started with an $11,000 investment. These numbers sound like a typical MasterCard credit limit. Hmmm.

Amazon companies raise money practically as fast as anyone can spend it. There’s a reason for this. They are in a terrible rush. If they are in a business with no competitors and network effects, they better get big super-fast. Every day matters.

Getting big fast gives the impression (if not the reality) of being successful. When prospective employees see that you’re hiring 30 new people a week, they will feel like they are part of something big and exciting and successful which will IPO. They may not be as impressed by a “sleepy little company” with 12 employees and a dog, even if the sleepy company is profitable and is building a better long-term company.

A sleepy little company in Albuquerque

As a rule of thumb, you can make a nice place to work, or you can promise people they’ll get rich quick. But you have to do one of those, or you won’t be able to hire.

Still can’t decide? There are other things to consider. Think of your personal values. Would you rather have a company like Amazon or a company like Ben and Jerry’s? Read a couple of corporate histories – Amazon and Ben and Jerry’s for starters, even though they are blatant hagiographies, and see which one jibes more with your set of core values. Actually, an even better model for a Ben and Jerry’s company is Microsoft, and there are lots of histories of Microsoft. Microsoft was, in a sense, “lucky” to land the PC-DOS deal, but the company was profitable and growing all along, so they could have hung around indefinitely waiting for their big break.

Think of your risk/reward profile. Do you want to take a shot at being a billionaire by the time you’re 35, even if the chances of doing that make the lottery look like a good deal? Ben and Jerry’s companies are not going to do that for you.

Probably the worst thing you can do is to decide that you have to be an Amazon company, and then act like a Ben and Jerry’s company (while in denial all the time). Amazon companies absolutely must substitute cash for time whenever they can. You may think you’re smart and frugal by insisting on finding programmers who will work at market rates. But you’re not so smart, because that’s going to take you six months, not two months, and those 4 months might mean you miss the Christmas shopping season, so now it cost you a year, and probably made your whole business plan unviable.

Running a headquarters

The ideal way to run a headquarters is to have one man, preferably over 80, sitting in an office by himself. Anything else is pure frippery.

— Charlie Munger, Becoming Warren Buffett
(a bit tongue-in-cheek)

Net present value

When forced to choose between optimizing the appearance of our GAAP accounting and maximizing the present value of future cash flows, we’ll take the cash flows.

Jeff Bezos, 1997 letter to shareholders

hbr.org/2014/11/a-refresher-on-net-present-value

Most people know that money you have in hand now is more valuable than money you collect later on. That’s because you can use it to make more money by running a business, or buying something now and selling it later for more, or simply putting it in the bank and earning interest. Future money is also less valuable because inflation erodes its buying power. This is called the time value of money. But how exactly do you compare the value of money now with the value of money in the future? That is where net present value comes in.
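The calculation itself is simple (a sketch with made-up numbers): discount each future cash flow by (1 + rate) per period, then sum:

```python
def npv(rate, cashflows):
    """Net present value: cashflows[t] arrives t periods from now,
    discounted by (1 + rate) per period."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# $100 three years from now, at a 10% discount rate, is worth ~$75.13 today...
print(round(npv(0.10, [0, 0, 0, 100]), 2))    # 75.13

# ...so paying $80 today for that $100 payoff has negative NPV and
# destroys value at that discount rate.
print(round(npv(0.10, [-80, 0, 0, 100]), 2))  # -4.87
```

A positive NPV means the project beats simply earning the discount rate on your cash; a negative NPV means it doesn’t.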

Vertical and Horizontal Integration:

In the first gilded age, Rockefeller’s Standard Oil:

Standard was growing horizontally and vertically. It added its own pipelines, tank cars, and home delivery network. It kept oil prices low to stave off competitors, made its products affordable to the average household, and, to increase market penetration, sometimes sold below cost. It developed over 300 oil-based products from tar to paint to petroleum jelly to chewing gum. By the end of the 1870s, Standard was refining over 90% of the oil in the U.S.[60] Rockefeller had already become a millionaire ($1 million is equivalent to $26 million[38] in 2019 dollars).[61]

He instinctively realized that orderliness would only proceed from centralized control of large aggregations of plant and capital, with the one aim of an orderly flow of products from the producer to the consumer. That orderly, economic, efficient flow is what we now, many years later, call ‘vertical integration’. I do not know whether Mr. Rockefeller ever used the word ‘integration’. I only know he conceived the idea.

— A Standard Oil of Ohio successor of Rockefeller [54]


Carnegie Steel Company:

Carnegie made his fortune in the steel industry, controlling the most extensive integrated iron and steel operations ever owned by an individual in the United States. One of his two great innovations was in the cheap and efficient mass production of steel by adopting and adapting the Bessemer process, which allowed the high carbon content of pig iron to be burnt away in a controlled and rapid way during steel production. Steel prices dropped as a result, and Bessemer steel was rapidly adopted for rails; however, it was not suitable for buildings and bridges.[35]

The second was in his vertical integration of all suppliers of raw materials. In the late 1880s, Carnegie Steel was the largest manufacturer of pig iron, steel rails, and coke in the world, with a capacity to produce approximately 2,000 tons of pig metal per day. In 1883, Carnegie bought the rival Homestead Steel Works, which included an extensive plant served by tributary coal and iron fields, a 425-mile (684 km) long railway, and a line of lake steamships.[28] Carnegie combined his assets and those of his associates in 1892 with the launching of the Carnegie Steel Company.[36]

Vertical Integration:

In microeconomics and management, vertical integration is an arrangement in which the supply chain of a company is owned by that company. Usually each member of the supply chain produces a different product or (market-specific) service, and the products combine to satisfy a common need.

Vertical integration has also described management styles that bring large portions of the supply chain not only under a common ownership but also into one corporation (as in the 1920s when the Ford River Rouge Complex began making much of its own steel rather than buying it from suppliers).

forbes.com/sites/gregpetro/2017/08/02/amazons-acquisition-of-whole-foods-is-about-two-things-data-and-product

Second, and more interestingly, is that Whole Foods has a strong private label business with its 365 brand. Why is this important to Amazon? In case you haven’t noticed, Amazon is becoming more and more vertically integrated. It now runs eight private brand lines of fashion apparel, including Lark & Ro, Ella Moon and Mae, and this business has been growing rapidly. The online giant also offers private brand products for everything from batteries to baby wipes and diapers. Amazon is even developing its own content. Its “Manchester by the Sea” film was produced completely by Amazon and was a blockbuster hit last year.

The typical argument for vertical integration is that private brand product is higher margin than third-party branded product. That is true and is an important part of Amazon’s strategy. But even more important is the fact that private brand product represents differentiation. In a retail market where there is a “sea of sameness” and national brands can be found through nearly every channel, private and exclusive brands create a reason for the consumer to buy through Amazon as opposed to going elsewhere. If Amazon has the best shopping experience, the fastest delivery, the best prices, and now unique products, why would you shop anywhere else?

Horizontal Integration

Horizontal integration is the process of a company increasing production of goods or services at the same part of the supply chain. A company may do this via internal expansion, acquisition or merger

Facebook and Instagram

One of the most definitive examples of horizontal integration was Facebook’s acquisition of Instagram in 2012 for a reported $1 billion. Both Facebook and Instagram operated in the same industry (social media) and shared similar production stages in their photo-sharing services. Facebook sought to strengthen its position in the social sharing space and saw the acquisition of Instagram as an opportunity to grow its market share, reduce competition, and gain access to new audiences. Facebook realized all of these through its acquisition. Instagram is now owned by Facebook but still operates independently as its own social media platform.

Horizontal integration can allow companies to quickly expand their reach and expertise while reducing costs. (+)

Ecosystems are the new oil

packet.com/blog/ecosystems-are-the-new-oil/

hbr.org/1993/05/predators-and-prey-a-new-ecology-of-competition

Step outside the tech world (where we are comfortable using the word in a very narrow sense) and you’ll recall from primary school science that ecosystems are much bigger concepts. A series of highly interdependent elements, functioning together symbiotically to enable the lifecycle.

Much like the magnificently complex geological ecosystem that provides for life on our planet, end-to-end technology ecosystems are nuanced, crazy hard to create, and fragile. They require time, meticulous effort, and patience to create. When you’re collecting GitHub stars instead of mineral rights, you have to consistently add value, show accountability, and demonstrate stewardship. Otherwise those stars quickly lose their brightness.

Ways your company can support and sustain open source

via Chris Aniszcyk [CTO of Cloud Native Computing Foundation]

The success of open source continues to grow; surveys show that the majority of companies use some form of open source, 99% of enterprises see open source as important, and almost half of developers are contributing back.

It’s important to note that companies aren’t contributing to open source for purely altruistic reasons. Recent research from Harvard shows that open source-contributing companies capture up to 100% more productive value from open source than companies that do not contribute back.

Matt’s comments:

If you’re using open source software at your company, this software could be considered, in economic terms, a “complement” to your company’s product, where its availability, quality, and ecosystem directly bolster your product.

Here, Joel Spolsky writes on how smart companies in the past have strategized to further their business interests by supporting complements to their products, or otherwise “commoditizing their complements,” to great effect.

(Joel on Software) Hitting the high notes:

joelonsoftware.com/2005/07/25/hitting-the-high-notes/

So, why isn’t there room in the software industry for a low cost provider, someone who uses the cheapest programmers available? (Remind me to ask Quark how that whole fire-everybody-and-hire-low-cost-replacements plan is working.)

Here’s why: duplication of software is free. That means that the cost of programmers is spread out over all the copies of the software you sell. With software, you can improve quality without adding to the incremental cost of each unit sold.

…The same thing applies to the entertainment industry. It’s worth hiring Brad Pitt for your latest blockbuster movie, even though he demands a high salary, because that salary can be divided by all the millions of people who see the movie solely because Brad is so damn hot. …

Five Antonio Salieris won’t produce Mozart’s Requiem. Ever. Not if they work for 100 years. 

Five Jim Davis’s — creator of that unfunny cartoon cat, where 20% of the jokes are about how Monday sucks and the rest are about how much the cat likes lasagna (and those are the punchlines!) … five Jim Davis’s could spend the rest of their lives writing comedy and never, ever produce the Soup Nazi episode of Seinfeld.

Matt’s note: while Joel’s perspective here is a bit extreme, it raises an important point. I think it’s dangerous to start thinking this narrowly, though.

Mozarts are great, but individual brilliance often isn’t all it takes to build high-performing teams and businesses. Teams also need diversity, including different types of contributors.

In the book Peopleware, for instance, an example is given of a worker with a strange track record: the projects they were on were less likely to fail than average. They worked to make their projects “fun” for team members, and were able to rally people and get them interested in the work.

This type of contribution wouldn’t have been measurable with performance metrics (see Econ 101), but it was still a significant contribution to the team’s goal (see the identity management method).

(Joel Spolsky) finding great developers

What doesn’t happen, and I guarantee this, what never happens is that you say, “wow, this person is brilliant! We must have them!” In fact you can go through thousands of resumes, assuming you know how to read resumes, which is not easy, and I’ll get to that on Friday, but you can go through thousands of job applications and quite frankly never see a great software developer. Not a one. 

Here is why this happens. The great software developers, indeed, the best people in every field, are quite simply never on the market.

(Paul Graham) Great Hackers

paulgraham.com/gh.html

After software, the most important tool to a hacker is probably his office. Big companies think the function of office space is to express rank. But hackers use their offices for more than that: they use their office as a place to think in. And if you’re a technology company, their thoughts are your product. So making hackers work in a noisy, distracting environment is like having a paint factory where the air is full of soot.

Several friends mentioned hackers’ ability to concentrate– their ability, as one put it, to “tune out everything outside their own heads.” I’ve certainly noticed this. And I’ve heard several hackers say that after drinking even half a beer they can’t program at all. So maybe hacking does require some special ability to focus. Perhaps great hackers can load a large amount of context into their head, so that when they look at a line of code, they see not just that line but the whole program around it. John McPhee wrote that Bill Bradley’s success as a basketball player was due partly to his extraordinary peripheral vision…

This could explain the disconnect over cubicles. Maybe the people in charge of facilities, not having any concentration to shatter, have no idea that working in a cubicle feels to a hacker like having one’s brain in a blender. (Whereas Bill, if the rumors of autism are true, knows all too well.)

One big company that understands what hackers need is Microsoft. I once saw a recruiting ad for Microsoft with a big picture of a door. Work for us, the premise was, and we’ll give you a place to work where you can actually get work done.

And you know, Microsoft is remarkable among big companies in that they are able to develop software in house. Not well, perhaps, but well enough.

Companies like Cisco are proud that everyone there has a cubicle, even the CEO. But they’re not so advanced as they think; obviously they still view office space as a badge of rank. Note too that Cisco is famous for doing very little product development in house. They get new technology by buying the startups that created it– where presumably the hackers did have somewhere quiet to work.

In a low-tech society you don’t see much variation in productivity. If you have a tribe of nomads collecting sticks for a fire, how much more productive is the best stick gatherer going to be than the worst? A factor of two? Whereas when you hand people a complex tool like a computer, the variation in what they can do with it is enormous. 

That’s not a new idea. Fred Brooks wrote about it in 1974, and the study he quoted was published in 1968. But I think he underestimated the variation between programmers. He wrote about productivity in lines of code: the best programmers can solve a given problem in a tenth the time. But what if the problem isn’t given? In programming, as in many fields, the hard part isn’t solving problems, but deciding what problems to solve. Imagination is hard to measure, but in practice it dominates the kind of productivity that’s measured in lines of code. 

Productivity varies in any field, but there are few in which it varies so much. The variation between programmers is so great that it becomes a difference in kind. I don’t think this is something intrinsic to programming, though. In every field, technology magnifies differences in productivity. I think what’s happening in programming is just that we have a lot of technological leverage. But in every field the lever is getting longer, so the variation we see is something that more and more fields will see as time goes on. And the success of companies, and countries, will depend increasingly on how they deal with it.

Interesting

Along with good tools, hackers want interesting projects. What makes a project interesting? Well, obviously overtly sexy applications like stealth planes or special effects software would be interesting to work on. But any application can be interesting if it poses novel technical challenges. So it’s hard to predict which problems hackers will like, because some become interesting only when the people working on them discover a new kind of solution. Before ITA (who wrote the software inside Orbitz), the people working on airline fare searches probably thought it was one of the most boring applications imaginable. But ITA made it interesting by redefining the problem in a more ambitious way.

I think the same thing happened at Google. When Google was founded, the conventional wisdom among the so-called portals was that search was boring and unimportant. But the guys at Google didn’t think search was boring, and that’s why they do it so well.

This is an area where managers can make a difference. Like a parent saying to a child, I bet you can’t clean up your whole room in ten minutes, a good manager can sometimes redefine a problem as a more interesting one.

… Along with interesting problems, what good hackers like is other good hackers. Great hackers tend to clump together– sometimes spectacularly so, as at Xerox Parc. So you won’t attract good hackers in linear proportion to how good an environment you create for them. The tendency to clump means it’s more like the square of the environment. So it’s winner take all. At any given time, there are only about ten or twenty places where hackers most want to work, and if you aren’t one of them, you won’t just have fewer great hackers, you’ll have zero. 

Having great hackers is not, by itself, enough to make a company successful. It works well for Google and ITA, which are two of the hot spots right now, but it didn’t help Thinking Machines or Xerox. Sun had a good run for a while, but their business model is a down elevator. In that situation, even the best hackers can’t save you.

I think, though, that all other things being equal, a company that can attract great hackers will have a huge advantage. There are people who would disagree with this. When we were making the rounds of venture capital firms in the 1990s, several told us that software companies didn’t win by writing great software, but through brand, and dominating channels, and doing the right deals. 

They really seemed to believe this, and I think I know why. I think what a lot of VCs are looking for, at least unconsciously, is the next Microsoft. And of course if Microsoft is your model, you shouldn’t be looking for companies that hope to win by writing great software. But VCs are mistaken to look for the next Microsoft, because no startup can be the next Microsoft unless some other company is prepared to bend over at just the right moment and be the next IBM.

It’s a mistake to use Microsoft as a model, because their whole culture derives from that one lucky break. Microsoft is a bad data point. If you throw them out, you find that good products do tend to win in the market. What VCs should be looking for is the next Apple, or the next Google.  I think Bill Gates knows this. What worries him about Google is not the power of their brand, but the fact that they have better hackers.

Harvard Business Review – Stop Hiring for Culture Fit

Via Patty McCord, former Chief Talent Officer of Netflix (+)

Extra: NPR’s All Things Considered interviews Patty McCord: How the Architect of Netflix’s Innovative Culture Lost Her Job to the System


hbr.org/2018/01/how-to-hire

Finding the right people is also not a matter of “culture fit.” What most people really mean when they say someone is a good fit culturally is that he or she is someone they’d like to have a beer with. But people with all sorts of personalities can be great at the job you need done. This misguided hiring strategy can also contribute to a company’s lack of diversity, since very often the people we enjoy hanging out with have backgrounds much like our own.

Making great hires is about recognizing great matches—and often they’re not what you’d expect. Take Anthony Park. On paper he wasn’t a slam dunk for a Silicon Valley company. He was working at an Arizona bank, where he was a “programmer,” not a “software developer.” And he was a pretty buttoned up guy.

A few months later I sat in on a meeting of his team. Everyone was arguing until Anthony suddenly said, “Can I speak now?” The room went silent, because Anthony didn’t say much, but when he did speak, it was something really smart—something that would make us all wonder, Damn it, why didn’t I think of that? Now Anthony is a vice president. He’s proof that organizations can adapt to many people’s styles.

Once you’ve made an offer and hired someone, you need to keep assessing compensation. I learned this during a period when Netflix was losing people because of exorbitant offers from our competitors. One day I heard that Google had offered one of our folks almost twice his current pay, and I hit the roof. He was a really important guy, so his manager wanted to counter. I got into a heated e-mail exchange with his manager and a couple of VPs. I wrote, “Google shouldn’t decide the salaries for everybody just because they have more money than God!” We bickered for days. They kept telling me, “You don’t understand how good he is!” I was having none of it.

But I woke up one morning and thought, Oh, of course! No wonder Google wants him. They’re right! He had been working on some incredibly valuable personalization technology, and very few people in the world had his expertise. I realized that his work with us had given him a whole new market value. I fired off another e-mail: “I was wrong, and by the way, I went through the P&L, and we can double the salaries of everybody on this team.” That experience changed how we thought about compensation. We realized that for some jobs we were creating expertise and scarcity, and rigidly adhering to internal salary ranges could harm our best contributors, who could make more elsewhere. We decided we didn’t want a system in which people had to leave to be paid what they were worth. We also encouraged our employees to interview elsewhere regularly. That was the most reliable and efficient way to learn how competitive our pay was.

The New Stack (Architecture, Kubernetes, modern software architecture):

thenewstack.io/newsletter-archive

Data and business:

The Weekly Chartr

The Guardian email-newsletters

newsletters

The morning briefing is pretty good

The Wall Street Journal

newsletters

Glassdoor Economic Research

(analysis of economic and BLS data, economic insights based on data obtained from Glassdoor’s platform, and other analysis)

Glassdoor Economic Research

Seeking Alpha newsletter:



Previously:

https://matthew.krupczak.org/2019/11/10/matts-declassified-nerd-survival-guide-gear/



Krupczak logo

Home

Book Club I

Preface:

I’ve been flipping through (and sometimes even reading) a few books recently. Here’s what I’ve taken a look at:



Automate the Boring Stuff with Python 2nd edition:

I wrote 20 short programs in Python yesterday. It was wonderful. Perl, I'm leaving you.

I’ve worked a bit with Python in my college computer science classes for AI and simple scripting. I’ve found that Python’s simple syntax and powerful libraries have made it great for math-y computational stuff or puzzle-like problems such as in a college-level intro to AI course.

Python is good at all of this, but it’s really good for tackling small, disparate tasks too.

Python’s creator and former benevolent dictator for life (BDFL) Guido van Rossum created the language (among other things) with the intention that novice computer users should be able to program computers to do simple, everyday tasks in plain English. He laid out many related goals in a funding proposal to DARPA titled “Computer Programming for Everybody”.

Who’s it for:

Automate the Boring Stuff with Python (and its new 2nd edition) actualizes many of these goals by providing tools and examples for automating everyday tasks in Python that are useful to total programming novices and veterans alike. It’s the kind of book every company should keep a copy of, along with a “Python gal/guy” around who knows how to use it for streamlining processes with automation and reducing toil.
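To give a flavor of the sort of task the book targets, here’s a minimal sketch of my own (not an example from the book): zero-padding the numbers in filenames so that “report 2.txt” sorts before “report 10.txt”.

```python
import re
from pathlib import Path

def zero_pad(name: str, width: int = 3) -> str:
    """Pad every run of digits in a filename: 'report 7.txt' -> 'report 007.txt'."""
    return re.sub(r"\d+", lambda m: m.group().zfill(width), name)

def rename_all(folder: str) -> None:
    """Rename every file in a folder so its numbers are zero-padded."""
    for path in list(Path(folder).iterdir()):  # list() so renames don't disturb iteration
        padded = zero_pad(path.name)
        if padded != path.name:
            path.rename(path.with_name(padded))

print(zero_pad("report 7.txt"))  # report 007.txt
```

A dozen lines like these can replace an afternoon of hand-renaming, which is exactly the spirit of the book.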


Smart and Gets Things Done

Joel Spolsky writes a lot of things worth reading about the software industry on his website: joelonsoftware.com.

If you prefer reading something tangible and short, about the size of a little red book and made from dead trees you can hold in your hands, then Joel Spolsky’s “Smart and Gets Things Done” is a great book on how to find and keep good technical talent.

A coder and economist at heart, Spolsky brings up some interesting ideas, such as that great programmers are never on the market. He elaborates that obtaining skilled and diverse technical talent is important for software companies to be able to “hit the high notes” when solving difficult technical challenges.

Drawing on his own experience as a recruiter and business owner, he gives techniques for finding the best technical talent in the places they tend to congregate, explains the importance of internship programs for discovering and developing prospects ahead of competing employers, and shows how to “treat good talent like royalty” so they want to work for you.

In addition, Joel talks about the importance for businesses of building talent “pipelines” and communities to seek out and develop talent.

Who’s it for:

This book is great for business owners, managers, and recruiters intent on finding good technical talent. If you are a student or prospective employee yourself, this book may still be worth a read for you to have a better idea for what goes on from the business side of tech companies.


Peopleware: Productive Projects and Teams

Good software comes from two things: good work environments and great programmers. The book “Peopleware” comes out of rigorous study of what does and doesn’t enable good relations and productive output amongst people in a work environment.

Among the ideas presented in Peopleware: quiet, private environments greatly increase the happiness, creativity, and productivity of workers, and individuality and varying types of contributions from different employees can be recognized and encouraged by good management and organizational practices for greater overall output.

Who’s it for:

Managers and business owners can get a lot out of this book given the multitude of topics it covers. Prospective employees could also use this book to gain insight into differences in work environments.


Dale Carnegie: How to Win Friends and Influence People

Before there were memes on the internet, Monty Python’s flying circus delighted British television viewers with all manners of animated and live-action nonsense.

Before there was Buzzfeed to release such great articles as “18 photos of food that aren’t alive but have a whole lot of personality“, Dale Carnegie made a name for himself (and, in a quite literal way, ‘made his name’ to match that of the steel magnate) by publishing the inaugural self-help and applied-psychology book: How to Win Friends and Influence People.

While his name-changing shenanigans and folksy writing style may seem a bit quaint to current readers, Dale Carnegie was a student of his era and of his fellow man, and he practiced the principles he preached.

Dale Carnegie writes that the materials of his book were born out of a curriculum he developed while teaching a community class on “people skills” at the YMCA. He notes that at the time his income was tied to the number of people in his class: if he didn’t produce results, he wouldn’t be able to eat.

While his principles can be laid out bare as a handful of line-item bullet points on Wikipedia, the book itself is a living work detailing his own experience, history, and research working with everyone from engineers to 1920s housewives on their experiences with others.

While the book may have become a cliché in the business world, there’s no reason to think that status is undeserved.

Investor Warren Buffett famously doesn’t display his degree from the University of Nebraska in his office, but he does display his certificate of graduation from the “Dale Carnegie Course in Effective Speaking, Leadership Training, and the Art of Winning Friends and Influencing People” +

Who’s it for:

Yes.

Chief Customer Officer 2.0

Chief Customer Officer

Ol’ baldy (Jeff Bezos) has revealed the key to his company’s success:

“(T)he No. 1 thing that has made us successful by far is obsessive compulsive focus on the customer as opposed to obsession over the competitor”

mycustomer.com/community/blogs/colin-shaw/bezos-obsess-over-customers-not-competitors

So, you would think with the secret to world domination being revealed, companies would have learned to develop a deep customer intuition and integrate that tightly with their company, right?

Well, I guess they do? Sometimes? Maybe?

It seems my confusion is not unique. Some people have been at this game a bit longer, though, and some companies have even thought to add a position to the board: the “Chief Customer Officer”:

Because the CCO role is still so new, there is as yet no executive MBA program or even a Harvard Business Review treatise about becoming a CCO. Jeanne Bliss, who was the Chief Customer Officer for Lands’ End, Microsoft, Mazda, Coldwell Banker and Allstate Corporations has written a book called Chief Customer Officer that she wrote as a field guide, based on her twenty five years’ experience in the role.

via Wikipedia, the Chief Customer Officer

Who’s it for:

Those interested in becoming a Chief Customer Officer, or integrating customer relations more tightly into their business.

(I’ll be honest, I haven’t yet given this book as close of a look as the others, but it’s a cool new trend in software and business I think so: monkey see monkey do)

The Rust Programming Language

In a post Moore’s law world, Rust has emerged as a front runner for fast and safe systems programming.

While lower-level systems programming in C and C++ has previously been accessible only to the most headstrong, stubborn, and educated programmers around, Rust’s “friendly helpful compiler” and inherent “fail-safe” design are built around making it possible for more developers to write fast, safe, and readable low-level code.

Co-author Carol Nichols is heavily involved in the open source development and community around Rust and has delivered talks such as “Rust: A language for the next 40 years” that detail the problem ecosystem in which Rust exists and the strength of Rust’s positive and progressive open source community.


Who’s it for

People in industry interested in converting or moving portions of development on legacy C/C++ code to Rust, or those interested in learning an emerging and powerful language.

Shoshana Zuboff’s The Age of Surveillance Capitalism

History has plainly shown that capital tends to beget more capital. Zuboff’s “The Age of Surveillance Capitalism”, however, details a new order for the 21st century in which mass data collection and information systems allow leading tech companies to wield a supreme order of business, marketing, and personal-level targeted data intelligence that ensnares users and trounces the competition.

Whether it’s targeted advertising in search results or winning Trump the presidency, the development of this form of hidden, soft power this century is important for a wider audience to understand. The book certainly puts a somewhat negative spin on some aspects I consider morally ambiguous (rather than malicious) and has a somewhat professorial tone, but it could be one of those texts that, 50 years from now, will still be looked back on as a key to understanding developments in technology, business, and society in the 21st century.

Who’s it for:

Those interested in the state of “big tech” in a social and historical context.



Next:

Previously:

https://matthew.krupczak.org/2019/11/10/matts-declassified-nerd-survival-guide-gear/



Failure Tolerance with the Raft Algorithm (Kubernetes)

Let’s break! (said the production server)

link to visualizer

link to writeup on algorithm

Play with the algorithm (break stuff!) as implemented in etcd


Why?

Failure resiliency and “uptime” are two hallmarks of good sysops. Much of operations-level work can be boiled down to:

Raising the mean (average) time between failures (MTBF)

Lowering the mean (average) time to repair (MTTR)
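These two quantities combine into a single steady-state availability figure; here is a quick sketch (the numbers are illustrative, not from any real service):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state fraction of time a service is up:
    MTBF / (MTBF + MTTR), with both in the same units."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A service that fails once every 1,000 hours and takes 6 minutes (0.1 h)
# to repair is up ~99.99% of the time ("four nines"):
print(f"{availability(1000, 0.1):.4%}")
```

Raising MTBF or lowering MTTR both push this number toward 100%, which is why those two goals are the levers that matter.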

Software containerization and container orchestration with Kubernetes can be good at achieving both of these, thanks in part to the Raft algorithm for failure resiliency and to quick, automated resolution of failures.

The Raft algorithm used by etcd, the backing key/value store for Kubernetes, allows a cluster of several (often five or more) computers to dynamically form “consensus” even in a chaotic environment. A computer marks itself as a candidate to be elected leader if it has not had communication with a leader for a period of time, and once a leader is elected the group “rafts” together with consensus toward its common goal.
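As an illustration only (a toy sketch, not etcd’s actual implementation; the node names are made up), the heart of the idea is randomized election timeouts plus a majority quorum:

```python
import random

def quorum(cluster_size: int) -> int:
    """Minimum votes needed to win an election (a strict majority)."""
    return cluster_size // 2 + 1

def elect_leader(node_ids, rng=random):
    """Whichever node's randomized timeout fires first stands as candidate;
    in this happy-path toy, every reachable node votes for it."""
    timeouts = {n: rng.uniform(150, 300) for n in node_ids}  # ms, typical Raft range
    candidate = min(timeouts, key=timeouts.get)
    votes = len(node_ids)  # all nodes reachable in this toy scenario
    return candidate if votes >= quorum(len(node_ids)) else None

nodes = ["etcd-0", "etcd-1", "etcd-2", "etcd-3", "etcd-4"]
print(elect_leader(nodes))  # one of the five nodes
print(quorum(len(nodes)))   # 3: a 5-node cluster survives 2 failures
```

The randomized timeouts make it unlikely that two nodes become candidates at the same instant, and the quorum rule is why etcd clusters are sized with odd numbers: five nodes keep working with two down.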

Tikam02 DevOps-Guide Kubernetes

Tikam02 DevOps-Guide Kubernetes Advanced



Related:

Previous:



Ben and Jerry’s


Overflowing pint of ben and jerry's karma sutra ice cream

Joel on Software’s Strategy letter I: Ben and Jerry’s vs. Amazon


On “linked prosperity” with the Ben and Jerry’s business model:

From their earliest days at the gas station, Ben and Jerry had been committed to running the business in a way that gave back to the community. Strapped as they were for cash, their efforts usually took the form of free ice cream cones or low-budget celebratory events like the movie festival and Fall Down.

….

The motivation for giving back had always been genuine. At the same time, it was proving to be an effective marketing strategy. There was no doubt that our customers were more inclined to buy our ice cream and support our business because of how we, in turn, supported the community.

In describing all of this, we began to talk about the concept of “linked prosperity,” a term coined by Dave Barash, one of our managers. What it meant was that as the company grew and prospered, the benefits would accrue not just to the shareholders, but also to our employees and the community. Each constituency’s interests were intertwined with the others’.

To institutionalize the donations policy, we created the Ben & Jerry’s Foundation, a nonprofit organization that was set up with an “independent” board of directors, albeit one on which Jerry, the foundation’s president, and Jeff were two of the three members. Ben gave the foundation fifty thousand shares of his stock as an initial endowment. In addition, the company planned to make the foundation the primary recipient of its cash donations by giving them seven and a half percent of its pretax profits. The foundation, in turn, would give away the money via grants to nonprofit organizations.

-from Ben and Jerry’s: The Inside Scoop p. 237

The idea that a company has obligations to stake-holding groups other than its shareholders is a relatively new one. As we grappled with the practical implications of that fundamental premise, there were few examples in the business world to follow and learn from. It’s my hope that this book, by relating our company’s experience, will be of assistance to the growing number of businesses that are now moving in that direction.




More:



(Joel on Software) Advice for Computer Science Students


Dog with graduation cap
Good luck doggo for college students 🎓

1. Learn how to write before graduating.

2. Learn C before graduating.

3. Learn microeconomics before graduating.

4. Don’t blow off non-CS classes just because they’re boring.

5. Take programming-intensive courses.

6. Stop worrying about all the jobs going to India.

7. No matter what you do, get a good summer internship.

I would venture to add my own unwritten “number 8” for this list:

8. Learn your way around a Linux/Unix computer.

You can practice this easily on Mac, Windows, or with a rented server.


1 and 3 are areas I’ve been thinking about a lot recently.

Upon reflection, I really wish I had taken more time in high school and college for practicing writing and getting good feedback on how to improve. I don’t really think there’s any substitute for this kind of practice with back-and-forth feedback from an experienced writer.

Economics can be pretty easily digestible, though. The textbook learning is really important, but it can be fun to learn “economic thinking” in a more low-stakes way as well. NPR’s Planet Money podcast is a great example of this, including zany stories such as “the tale of the onion king”, in which a single trader was able to corner the market for onions one year. There’s also great news reporting through an “economic” lens of thinking, including practical stuff such as news on tech and business (if you’re interested in that too, I guess).



Update: This is the textbook for a microeconomics class I started earlier (but couldn’t complete due to scheduling conflicts):

Michael Parkin: Microeconomics (13th ed.)

ISBN: 978-0-13-474447-6

I think it really does a good job of bringing up economic concepts in very accessible and thought-provoking ways.

It also does a good job of instilling in the reader the process of “economic thinking”: framing many different types of issues as economic ones. For this alone, I think the contents of the book are of value to most readers.


More:



Paradigm Shift: A Tale of Two Cities

The 01 Machine city from The Animatrix

Previously, I’ve written on how easy Docker makes it to deploy standard “software in boxes” on a simple cloud server, saving a lot of the hassle that used to be needed for configuration and setup for running software.

You may be left wondering: what if, instead of small services, I’m trying to set up a full-scale application with many users? To answer this question we’ll zoom out, “think BIG”, and see what answers we can find.

In this article, we’ll examine the “paradigm shift” that has occurred in the software industry with containerized software and cloud computing. We’ll look at two technical teams that, with minimal resources, were able to build large, scalable applications by leveraging the deploy-ability provided by containers and open source software, scalability enabled by Kubernetes, and simplicity offered by cloud providers.

In addition, we’ll take a look at the direction the industry is headed for different types of cloud deployments, open source vs proprietary tools, and how to avoid vendor lock-in.

But first: a brief history of global shipping

In their official presentation slides, the Docker team likens the containerization of software to that of the containerization of cargo in global shipping. I’ve described before how nice it is that software containers are standard and easy to run, but today we’ll be looking at another important aspect of the container analogy: scalability.

WWII saw a level of industrialization, manufacturing, mobilization, and global shipping that had never before been seen. Toward the end of the war, the U.S. Army began experimenting with shipping containers that could be packed once and shipped on any mode of transportation. During the Korean War a few years later, the U.S. Army standardized on the CONNEX box system and was able to almost halve its end-to-end shipping time (later, during the Vietnam War, it would scale to over 200,000 CONNEX boxes by 1967).

Commercial and business use:

Malcolm McLean at railing
Malcolm McLean, the American businessman who popularized modern global container shipping

Seeing opportunity, American businessman Malcolm McLean secured a loan for $22 million in 1956 (almost $207 million in today’s dollars), based on the proven U.S. Army system, to buy and convert two large WWII oil tankers into container transport ships.

In 1956, most cargoes were loaded and unloaded by hand by longshoremen. Hand-loading a ship cost $5.86 a ton at that time. Using containers, it cost only 16 cents a ton to load a ship, a 36-fold savings.

Within three years of the first container ship running, McLean’s business was profitable and expanding. By submitting his container design as a standard to the International Standards Organization (ISO), McLean proved and popularized a new method of global shipping with standard, easy-to-handle shipping containers that is still in use today.

And now, software:

Relevancy?

Just as McLean’s containers removed the need for labor-intensive “loading and unloading” of varied cargo across multiple modes of shipping by longshoremen, Docker containers are attractive because they promise that software will run the same way in a laptop development environment as in a (potentially massively scaled) production deployment. The software container only has to be built, or “packed”, one time.

Commoditization

Cloud Native Computing Foundation Logo

I’ve written before about how the economics of software mean that open source back-end software infrastructure can be considered a complement to a front-end software product, and how smart companies commoditize their products’ complements:


McLean likely wanted to submit his container design to the International Standards Organization (ISO) because he had developed a product (container shipping infrastructure) and wanted to make his product’s complement (the container standard) a “commodity” so he could load or unload his ships at any port. For similar reasons, the Docker team submitted their standard container run-time containerd as a free resource to an emerging organization known as the Cloud Native Computing Foundation (CNCF).

Actors in the “software space”:

This also explains why companies such as Google, Facebook, etc. release and contribute to so many free and open source software (FOSS) projects. Their goal is to make back-end infrastructure resources a commodity; in other words, commoditization makes it trivial for them to develop user-facing products at low cost by having them backed by an ecosystem of free and open source software.

For such reasons, the Cloud Native Computing Foundation is emerging as a governing body working to make standardized cloud computing a commodity, as multiple actors in the software space contribute to and utilize extraordinarily powerful resources through free and open source software and cloud deployments.

This stands in contrast to the existing scheme, in which certain cloud vendors’ proprietary products have become de facto industry standards thanks to first-mover advantage.

Member companies such as Red Hat have been banking on the idea that cloud computing patterns will become ubiquitous, cheaper, and open source.

Case Studies of “Cloud Native Computing”:

Introduction:

M1 Abrams tank firing

In military science, there’s a concept known as “force multiplication“. Every battle needs troops on the ground in some shape or form, but the tools, environment, or conditions available to those troops can “multiply” their effectiveness on the outcome. For example, during the first Gulf War, United States forces had GPS as a force multiplier, allowing armored units to navigate through unmarked desert while their adversary was confined to marked roads and navigable terrain.

The rise of the internet and Software as a Service (SaaS) platforms could be considered one of the biggest “force multipliers” to become available to software companies in the past few decades. Before, if you wanted to sell shrink-wrap software, you needed to do customer research, develop your software, iron out the wrinkles until it was ready to ship, then wrap it in a box and get someone to sell it for you on a shelf. Take it or leave it.

Box being shrinkwrapped

With the internet, software can be provided:

  • Instantly and often
  • To more users
  • At a lower cost

In much the same way, we’ll examine through two case studies how the rise of cloud computing technologies has likewise acted as a “force multiplier” for businesses in the software industry by allowing small teams to build large, highly scalable applications with lower complexity.

Case study A: Robinhood

Robinhood on an Android phone (left) and iPhone (right)

With a team of just two DevOps engineers, Robinhood set out to create a full-fledged stock trading platform for day traders as a simple mobile app with a cloud infrastructure back-end. They built the platform in secret, then announced it on several online forums. By launch, they had a waiting list of over 1 million people ready to use the app.

It’s unclear whether or not the launch used Amazon’s container solutions. Either way, it’s clear that the scalable, standard, and easy-to-integrate solutions provided by cloud providers such as Amazon’s AWS make it possible for even a small development team to have a greatly outsized impact on a software market.

There is a risk, however, in tying your fortunes tightly to those of a giant like Amazon. Much like how you can go to Walmart to buy both your underwear and your groceries, Amazon has worked hard to make itself the “one-stop shop” ecosystem for cloud services, going so far as to mildly clash with commercial open source vendors by providing its own competing products. Using the AWS “one-stop shop” can work well for a company just starting out, but in the later stages of a business, highly coupled cloud costs can be a serious competitive disadvantage, as companies such as Lyft have noted.

Lessons from Robinhood:

  • A small team can have a huge impact by using emerging tools
  • It’s okay to do initial preparation in secret when no one’s operating yet in the market
  • Operating costs can be high when using the “one-stop shop”, but that can be acceptable if it saves time or money in other areas

Case Study B: Pokemon Go:

Pokemon GO on an iphone

To fully wrap back around to the main topic of scalability enabled by software containerization, container orchestration, and commodity open source software, we’ll now take a closer look at Niantic’s 2016 launch of Pokemon Go, running on Google’s Cloud Platform (GCP).

As Google director of Customer Reliability Engineering Luke Stone observes, “a picture is worth a thousand words” in this graph of activity during its launch:

Graph of pokemon GO launch cloud datastore transactions per second on GCP

With Google’s engineering team working as “insurance” in concert with Niantic’s during the huge launch of Pokemon Go with millions upon millions of players, Google’s open source container orchestration platform Kubernetes allowed the teams to scale the back-end out to absurd proportions on the fly.

In addition, Google’s and Niantic’s engineers changed both the container engine orchestrating the back-end deployment and the network load balancer while the game was live, a feat they compared to swapping out an airplane’s engine mid-flight.

Nightmare at 20000 feet, gremlin and man.
Don’t mind me here, just making sure little Timmy can catch Bulbasaur on his mom’s iPhone X

When the game’s launch extended to Japan, the number of users tripled from the launch in the US two weeks earlier.

All this was made possible by containerizing the back-end that serviced the app. The network load balancers sat in front of the servers as “traffic directors”, routing customers to multiple back-ends, while the container orchestration provided by Kubernetes made sure there were enough back-end container instances for customers to use and that those instances were healthy.
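As a rough sketch of what that orchestration looks like in practice (the names, image, and numbers below are hypothetical, not Niantic's actual configuration), a Kubernetes Deployment declares how many back-end container instances should exist and how to check that they are healthy:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: game-backend            # hypothetical service name
spec:
  replicas: 50                  # Kubernetes keeps this many instances running
  selector:
    matchLabels:
      app: game-backend
  template:
    metadata:
      labels:
        app: game-backend
    spec:
      containers:
      - name: server
        image: example.com/game-backend:1.0   # hypothetical container image
        livenessProbe:          # health check; failing pods are replaced automatically
          httpGet:
            path: /healthz
            port: 8080
```

Scaling "on the fly" then amounts to changing `replicas` (e.g. `kubectl scale deployment game-backend --replicas=500`) while a load-balancing Service routes traffic across whatever instances currently exist.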

Lessons from Pokemon Go:

  • Plan “big”, and buy “insurance” just in case
  • Set up your back-end so you can scale
  • If all goes well, keep going!

Future?

Hashicorp terraform logo

Tools such as HashiCorp’s open source Terraform seek to make it possible to coordinate deployments across multiple clouds. Through such a tool, the features of the individual clouds are made irrelevant in favor of a higher-level “infrastructure as code” abstraction. This can make it easier to shift or extend deployments from one cloud to another, giving companies options and flexibility as they expand without being tied to a single cloud’s environment.
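To illustrate the "infrastructure as code" abstraction (the AMI ID and resource names here are hypothetical), a Terraform configuration declares the desired infrastructure, and the tool works out which cloud API calls will make reality match:

```hcl
# Declare desired infrastructure; `terraform apply` reconciles reality to match it.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "backend" {
  ami           = "ami-0abcdef1234567890"  # hypothetical machine image ID
  instance_type = "t3.micro"
  count         = 3                        # three identical servers
}
```

The workflow (`terraform plan` to preview changes, then `terraform apply`) stays the same whether the provider block points at AWS, GCP, or Azure, which is what makes shifting a deployment between clouds tractable.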

In addition, the ecosystem that continues to evolve around container and container orchestration technologies is further cementing the value of these tools, suggesting that still more value could be leveraged from a stack implementing containerization as complementary technologies emerge.


Conclusion:

Cloud computing is hardly in its infancy, but there are technologies, governing bodies, and software companies seeking to make it ubiquitous, standard, and cheaper. In addition, the tools that have arisen as many actors operate in this space have created new opportunities for even small teams to have an enormous impact on a large number of customers.


Hawking My Projects II: 1 Million+ Installs with Freedoom for Android

Why I made Freedoom for Android (open source):

Ever since getting a PlayStation 2 for Christmas in the 2nd grade, I’ve been a fan of video games as a way to keep mentally active. In my opinion, a great game generally pairs some kind of core game system (e.g. a role-playing game with equipment and spells, or a first-person game with gun-play) with some kind of open-ended exploration system that feeds into the main game and rewards creativity and risk-taking.

In Half-Life 2 for instance, you could play most of the main game completely oblivious to your surroundings, but if you were ever in a pinch you could scrounge around and find resources in unexpected areas (pull a cinder-block out of a basket to lower down a health-pack via pulleys? Genius.)

Half life 2 cinderblock puzzle with a seesaw

Example of a game system that "feeds" the main gameplay, something that Freedoom for Android does as well.
A different puzzle, but you get the idea…

Games for the small screen:

So, in college, between lectures and homework, I found I was often too busy for games and would often just read or play with my phone between classes. I had tried mobile games in the past, but I think my problems with them can be summarized in two images:

revenue of mobile games 2018. Freedoom for Android makes no revenue :/

It technically could be monetized per the terms of the GPL, as long as the source code was made available to customers
Decent money, so you would expect decent games, right?
Graph outsized impact of 2-6 percent of "whale" user's impact on IAP value
Oh no. The 2-6% of “whales” are critical to how mobile games are designed.

Unfortunately, most mobile games operate on the freemium model instead of the paid model, since users aren’t as willing to pay up front for mobile games, and they heavily court the top 2-6% of “whale” players who are willing to spend lots of money on in-app purchases. This means the games are designed to be somewhat fun on the free tier, but not too fun; otherwise, no one would pay. There’s also generally a core gameplay system, but instead of an “open-ended exploration” system designed to make gameplay more varied, there’s generally an “add a small pain point to the gameplay that can be fixed with 💰💵 🤑” system. Granted, many of these games are fun and have a lot of passion put into them, but I have a strict rule that:

  • If something in my wallet affects
  • How well I can do in the game
  • I probably won’t have fun with the game

Alternatives?

So, my goals were to:

  1. Find game(s) playable on a mobile phone or a small, lightweight laptop
  2. That can be played for 5-15 minutes at a time at a minimum
  3. That are deep enough to be fun to play for longer if time permits

So, I found that I ended up liking a lot of older games, ranging from the early 90’s to the early 2000’s. Games for the Super NES generally have beautiful pixel art with colors that pop on a smartphone’s OLED screen, and PC games of the 90’s and early 2000’s had excellent core gameplay and design creativity, before advanced graphics made it too expensive for mid-sized studios to compete.

With platforms like RetroArch nowadays, I could even play games on my phone with a Bluetooth controller, and I could use scripts to sync the save files across all of my devices.

Doom!

John Romero (game designer) plays Doom E1M1:

The original Doom from 1993 would fall under my definition of a “great game” because it has:

  1. Great core gameplay system (fast and rewarding movement mechanics, varied and responsive enemies, etc.)
  2. Interesting, expansive, and varied levels with hidden “secrets” that reward creativity, exploration, and risk-taking

Open Source:

GNU logo

Also, thanks to the technical genius of John Carmack (+), the source code for the original Doom was released as “open source” for anyone to tinker with and improve. In the more than 25 years since its release, Doom has been a model example of an open source software community, with loyal and active members, multiple new and updated versions, and countless add-on levels and mods.

What’s great about open source software like Doom, Firefox, Linux, and Apache is that anyone can use, improve, or redistribute modified versions of these programs and look at the source code that controls how the software runs. Open source can also work well for business, both for vendors and for companies using the software.

Freedoom:

Freedoomguy, as used in the Freedoom for Android app icon

While the Doom engine was released as open source, the game asset files (including monsters, music, and game levels) were kept under copyright by id Software and were not legal to redistribute. If the Doom engine is the “peanut butter”, the Doom asset or WAD files could be considered the “jelly” needed for the full, delicious PB&J. Without free and open source WAD files, it is impossible to distribute a version of Doom that doesn’t require a purchased copy of the original game.

A community rose to the task and created Freedoom, which is licensed under a BSD-like license that allows modification and redistribution. It is also distinctly not Doom: the maps, music, levels, enemies, and weapons are original works, different from those of the original Doom, yet still compatible with fan-made add-ons and mods. The terms of the license also mean that it can be included or modified for free in Linux distributions, open source Doom engines, app stores, etc., as long as the license is included with the software.

Other common open source licenses, such as some versions of the GPL (i.e. the license under which Carmack released the Doom engine), require that source code be made available to customers upon request.


Freedoom for Android is built using GPLv2 components and BSD-like licensed components for the Freedoom wad files.
An example of a similar license, the “GPLv2” in the wild on Nintendo’s NES Classic, which is based on Linux and other open source software

Publishing Freedoom for Android on the Play Store:

Freedoom for Android as it appears on the Google Play store

The Fork:

An open source developer had created a fork of Doom designed to run on Android smartphones using the Native Development Kit (NDK) and published it as a paid app on the Play store, but it had been taken down by a copyright strike at the time I was working on this. Another developer had taken the publicly published code of its open source components (as required by the GPL) and repackaged it as an open source GZDoom for Android.

I decided to create what’s known in the software world as a fork of GZDoom for Android and called it “Freedoom for Android” (forks can occur in software for any number of reasons, such as when MariaDB forked from MySQL or Amazon forked Elasticsearch). While I didn’t strictly need to, I first got permission via email from both the GZDoom for Android developer and the Freedoom community to use their resources.

Building:

An example android studio page, as was used for building Freedoom for Android

It was a bit tricky to get the project to build on my computer, and I ran into some frustrating build errors in the compilation of the native C and C++ code built with the NDK. By digging through “tombstone” stack dumps and working with the “GZDoom for Android” developer over email, I was able to debug the error output of the low-level code and fix a problem in the game engine.

Once I had it building, I modified the code further to fix various bugs, such as a file-permissions error, problems with the user interface, and music not working in-game with the default settings.

Publishing:

Graph showing new user installations of Freedoom for Android before and after copyright strike and the release of bethesda's doom port
Install rate over the life of the app

Once I had built my app, I was able to publish Freedoom for Android to the Play store. I very quickly started receiving a growing number of users and installations.

Then, my app was taken down from the Play store by a copyright strike. I was able to appeal by submitting the emails I had received from the Freedoom community, emails from the GZDoom for Android developer, and the open source software license that came with the code, which proved I had the right to modify and redistribute it.

In addition, I removed any mention of the word “Doom” from the description, replacing it with “everyone’s favorite 1993 game”, and added a legal disclaimer at the bottom indicating that this game was not affiliated with id Software or Bethesda. This seemed to do the trick, and my app was re-published after review. I still believe the app was somehow hidden from trending pages and other areas of the Play store, however, and think this might explain the reduced install rate after the strike.

Competition:

How Freedoom for Android appears on the Play store

After I published Freedoom for Android on the Play store, the developer of my upstream project, Delta Touch, decided to continue development of his app and was able to fight his copyright strike and republish on the Play store (though he hasn’t yet published his updated code as required by the GPL, tsk, tsk).

[Edit: it may be that he’s publishing only the required GPL components such as the GZDoom engine, but not his full app packages or other modules which he formerly had as open source, which is understandable].

Bethesda (the current owner of the Doom franchise) decided to join in on the fun and release its own mobile port, but the initial launch was marred by online-only DRM and a 300 MB install, the result of using the Unity game engine in order to avoid using any GPL-licensed code (for whatever reason?).

Internationalization:

transifex project page for Freedoom for Android

By writing some GitHub integrations with a cloud translation service called Transifex, I was able to have my app translated into many different languages. Because my project was open source, Transifex allowed me to use their service for free. I translated the Play store listing pages, set up A/B testing for the translations, and then published an internationalized version on the Play store.

After the update, I immediately started getting more installs and more reviews in non-English languages. I got reviews ranging from a Haitian user with only ~250-300 MB of usable storage on their phone, who chastised me for the size of an app update (I deserved it), to reviews in Thai, Vietnamese, and a tribal dialect of Turkish.

Some of my translations weren’t great; what was great about Transifex, however, was that I could invite reviewers and native speakers of each language to improve the translations (even if they had no coding experience).

Bundling

sigil-small

Around the same time, John Romero released for free a “megawad” map pack, SI6IL, for the 25th anniversary of Doom. I emailed him asking for permission to distribute SI6IL.wad with Freedoom for Android, and I got a reply somewhere along the lines of:

Sure thing! As long as it’s the only wad of mine that you include 🙂

I also decided to include 10sectors.wad with the app. It’s a community-made map pack with small, action-packed levels that I figured would be fun to play in 5-15 minute sittings on a mobile device.

Future:

A user story "prototype" for the "Freedoom for Android" app
“User Stories” of new features

I really liked playing through 10sectors.wad on the phone, and I want to modify the app to make it easy to download and install new Doom levels, or any of the plethora of existing WADs on the idgames archive. The “killer app” for Freedoom for Android would likely be being able to download and play a small add-on level on your lunch break. To plan for writing this feature, I wrote three user stories and drew a quick mock-up, a common practice in Agile software development.

If you’re interested in contributing to this feature, this app, or anything else I’m interested in, feel free to check me out on GitHub.
