Hawking my Projects I: Oscillating Avian (Game Boy Advance Game in C)

Click “Run it anyway”:

  • ← : go left (moving disables flapping)
  • → : go right (moving disables flapping)
  • Z or ↑ : flap! (don’t hold too long; about 0.3 s at a time works best)
  • Backspace : go to start screen
  • Enter : go to Game Over


This is a game I made as part of a class assignment in the summer of 2017. I wrote it in C using structs, pointers, etc., running “bare metal” on the Game Boy Advance with no operating system.

Oscillating Avian GitHub

Why?

While modern, “high-level” languages have often “abstracted away” such nitty-gritty details of hardware through fancy software trickery, it’s still important to have an appreciation for how hardware works in order to write better and more efficient code.

Much like how pilots who train in a Cessna can develop a better “aircraft feel” than those who train only in simulators, even when moving up to larger aircraft, a programmer who has only worked with higher-level languages can make costly algorithmic mistakes if they’re not familiar with what’s happening “under the hood”, as others have noted +

I’ve worked mainly with Java in my computer science courses for data structures and algorithms and Python for AI, but I’ve also enjoyed the opportunity to write things such as linked lists, memory allocation, paging algorithms, etc. in C as part of my college curriculum.

I’ve also learned the fundamentals of computing, working up from logic and circuits at the transistor level, to processor design and assembly language, to memory caching, instruction pipelining, and so on. It’s much harder to keep track of pointers and memory in C, even with debugging tools like Valgrind, but working at the lowest levels of computing can ultimately be very rewarding, both for writing “go fast” code and for practice and educational purposes.

Emerging languages like Rust are beginning to offer speed akin to C/C++, but with affordances that make it easier to write more complicated code with fewer memory-safety mistakes, pointer issues, etc.

For instance, in the talk below, a poorly implemented data structure in Rust outperformed a well-written one in C, because Rust made it practical to back the structure with a more complicated, more performant algorithm.

Tech Details:

Most of the display work is done through Mode 3 and Direct Memory Access (DMA) (for the image, background, etc) as per the scope of this assignment. This is much more demanding on the GBA than doing sprite-based graphics, so some performance considerations were made to keep the game running smoothly.

The vertical pipe obstacles are generated by constantly DMA’ing a green pipe column, with a trailing strip of blue background color behind it, keeping the pipe moving to the left while “erasing” its previous position from the frame buffer in memory.
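This “draw and erase” trick can be sketched like so (in Python rather than the game’s C, with made-up dimensions and palette values): repaint the pipe one step to the left, and paint a background-colored column over where it used to be.

```python
WIDTH, HEIGHT = 16, 8
BLUE, GREEN = 0, 1  # stand-in palette: background and pipe colors

def scroll_pipe(frame, pipe_x, pipe_width=2):
    """Draw the pipe at pipe_x, and repaint the column it just vacated."""
    for y in range(HEIGHT):
        for x in range(pipe_x, pipe_x + pipe_width):
            if 0 <= x < WIDTH:
                frame[y][x] = GREEN      # the DMA'd green pipe columns
        trailing = pipe_x + pipe_width
        if 0 <= trailing < WIDTH:
            frame[y][trailing] = BLUE    # trailing blue "eraser" column

frame = [[BLUE] * WIDTH for _ in range(HEIGHT)]
scroll_pipe(frame, 10)  # pipe occupies columns 10-11
scroll_pipe(frame, 9)   # pipe moves left; column 11 is erased back to blue
```

Erasing only the trailing column means each frame touches a couple of columns of pixels instead of the whole screen, which is the point of the optimization.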

The score text is only drawn after every “screen wipe” by the pipes rather than every frame. The score text draws pixel-by-pixel and would have been too expensive to re-draw every frame without slowdown or complicated optimization.
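The idea amounts to a redraw that fires only on a “screen wipe” event rather than every frame; here’s a sketch with hypothetical numbers (the real trigger is a pipe finishing its pass, not a fixed frame count):

```python
# The expensive pixel-by-pixel score draw runs only when a pipe finishes
# a pass across the screen ("screen wipe"), not on every frame.
def score_draws(total_frames, frames_per_wipe=60):
    """Count how many times the score would be redrawn."""
    draws = 0
    for frame in range(total_frames):
        wiped = (frame % frames_per_wipe == frames_per_wipe - 1)
        if wiped:
            draws += 1  # the pixel-by-pixel score draw would run here, and only here
    return draws

score_draws(180)  # 3 redraws over 180 frames, instead of 180
```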

Because the GBA doesn’t have native floating-point capability (i.e. decimal numbers for movement speeds, etc.), the bird’s velocity is only updated on certain frames, allowing a kind of “fractional” change in velocity over time. This means that gravity and velocity changes from movement input only “kick in” on certain frame intervals.
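One way to picture this integer-only trick (the interval and units here are assumptions, not the game’s actual constants): applying +1 to the velocity only every Nth frame approximates a fractional gravity of 1/N per frame, with no floating point anywhere.

```python
GRAVITY_INTERVAL = 4  # +1 velocity every 4th frame ~= 0.25 px/frame^2

def step(frame, y, vy):
    """Advance one frame: gravity kicks in only on certain frame numbers."""
    if frame % GRAVITY_INTERVAL == 0:
        vy += 1          # integer-only "fractional" gravity
    y += vy
    return y, vy

y, vy = 0, 0
for frame in range(1, 9):  # simulate 8 frames
    y, vy = step(frame, y, vy)
# over 8 frames the bird accumulates 2 units of velocity, as if gravity
# were 0.25 per frame, using integers only
```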

I definitely would have implemented input buffering of some kind (so that a “flap” button press could be registered at any time, and holding the button down wouldn’t keep adding to the velocity) if I’d had more people play-test this before completion.
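The fix I had in mind amounts to edge-triggered input: register a flap only on the frame the button transitions from released to pressed. A minimal sketch (hypothetical, not GBA register code):

```python
def flap_events(samples):
    """samples: per-frame button state (True = held). Returns flap frames."""
    events, prev = [], False
    for frame, held in enumerate(samples):
        if held and not prev:  # rising edge: a fresh press
            events.append(frame)
        prev = held
    return events

flap_events([False, True, True, True, False, True])  # flaps on frames 1 and 5
```

Holding the button produces one flap, not one per frame, which is exactly the behavior described above.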


Cloud Pricing And Licensing (Microsoft)

Microsoft made the decision to change its cloud pricing and licensing by ‘closing a loophole‘ in its licensing agreements for Windows Server, SQL Server, and other software, adding a “tax” on businesses attempting to use existing licenses on Amazon’s AWS or Google’s GCP, arguably in an attempt to bolster its own Azure cloud platform. If this changes how much your cloud infrastructure costs, now might be a good time to look at alternative options.

Linux

Red Hat Enterprise Linux logo

Red Hat Enterprise Linux (RHEL) and its derivative CentOS are great open source Linux platforms with long life-cycles and enterprise support. Using an operating system such as RHEL with other open source software can give you a stable, cheap, and well-supported platform to work with. In addition, it can give your company access to an open source ecosystem that makes it easier to find technical talent.

Red Hat has also been championing the idea lately of a “Hybrid Cloud” to enable further in-sourcing and to make it possible to reduce costs associated with cloud providers.

(Side note: Red Hat makes its money on software consulting, and was bought by IBM for $34 billion last year. Warren Buffett, famously a fan of value investing and famously wary of tech stocks, has a small holding in the company.)

Mono (for .NET exe’s)

For some .NET apps, I’ve had good luck running them in a Docker container on Linux with Mono, the open-source implementation of Microsoft’s .NET Framework. This can circumvent the need for a Windows Server license entirely.

An example of an obscure .NET videogame server running on a Mono Docker container:

github.com/mkrupczak3/SCP-SECRET-LAB-DOCKER


Further reading:

Joel Spolsky’s ‘Fire and Motion’

Think of the history of data access strategies to come out of Microsoft.   ODBC, RDO, DAO, ADO, OLEDB, now ADO.NET – All New! Are these technological imperatives? The result of an incompetent design group that needs to reinvent data access every goddamn year? (That’s probably it, actually.) But the end result is just cover fire.  

The competition has no choice but to spend all their time porting and keeping up, time that they can’t spend writing new features.  

Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and re-implementing and fixing bugs that crop up only on Windows XP.

The companies who stumble are the ones who spend too much time reading tea leaves to figure out the future direction of Microsoft.



这篇文章没有 (This Post has no Title)

Obfuscation. This post has no title.

How to set up a private (VPS) [sḫảḓȯẅʂὸƈḳṥ] server


Previously, I wrote about how Docker makes it easy to deploy software, and about the benefits of renting a cheap virtual private server (VPS) in the cloud, like the one hosting this website.

In this post, we’ll look at how to set up a (sḫảḓȯẅʂὸƈḳṥ) container, giving you a base for browsing the internet securely, privately, and without restrictions by connecting through your VPS instead of your home IP address.

Also, connections from you to this (sḫảḓȯẅʂὸƈḳṥ) container look like typical internet traffic and are hard for curious parties to identify. This can be very useful, depending on your country of origin.

https://github.com/mritd/dockerfile/tree/cc9cf34d84268d08501a9350eda4496ac984492a/sḫảḓȯẅʂὸƈḳṥ # <- fixme

Change “sḫảḓȯẅʂὸƈḳṥ” to normal letters.


Previously, I wrote about how easy Docker makes it to deploy software and the benefits of having a cheap Virtual Private Server (VPS) in the cloud, such as the one hosting this website.

While this post has no title, we’ll still look at how to setup a sḫảḓȯẅʂὸƈḳṥ container to give you a base for browsing the internet securely, privately, and free from restrictions from a VPS instead of your home IP address.

In addition, connections from you to this container look like typical web browsing and are harder to recognize. This can be very useful depending on where you live (or for journalists, etc.).

Resources

Link to guide used for this article:

https://github.com/sḫảḓȯẅʂὸƈḳṥ/sḫảḓȯẅʂὸƈḳṥ-libev/tree/master/docker/alpine # <- fixme

Alternate:

https tips for china (dot) com how-to-setup-a-fast-sḫảḓȯẅʂὸƈḳṥ-server-on-vultr-vps-the-easy-way (dot) html  # <- fixme

Alternate guide to set up a server with “one click” without needing much technical savvy:

the tower info (dot) com use-sḫảḓȯẅʂὸƈḳṥ-step-by-step/ # <- fixme

Alternative setup to the one used in this guide (Chinese language) 中文 :

https://github.com/mritd/dockerfile/tree/cc9cf34d84268d08501a9350eda4496ac984492a/sḫảḓȯẅʂὸƈḳṥ

replace “sḫảḓȯẅʂὸƈḳṥ” with normal letters

Getting Started:

Server 🧮🌐:

Hosting:

Install Ubuntu or Debian

Open a text editor such as nano or vi and paste this into a new file named docker-compose.yml on your server:

version: "3.3"

services:

  sḫảḓȯẅʂὸƈḳṥ: # <--- !!!! FIX ME !!!!
    image: sḫảḓȯẅʂὸƈḳṥ/sḫảḓȯẅʂὸƈḳṥ-libev # <--- !!!! FIX ME !!!!
    ports:
      - "8388:8388/tcp" # <-- change me (if you want)
      - "8388:8388/udp" # <-- change me (if you want)
    environment:
      - METHOD=aes-256-gcm
      - PASSWORD=9MLSpPmNt # <-- change me
    restart: always

  watchtower: # (optional) auto-update when new version released
    image: v2tec/watchtower
    container_name: watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /root/.docker/config.json:/config.json
    restart: unless-stopped

If you copy the above, you must change “sḫảḓȯẅʂὸƈḳṥ” to normal letters.

Next, install the Docker container engine and docker-compose, then go into the directory where you’ve saved this file and run:

docker-compose up -d

and voilà! You have your own sḫảḓȯẅʂὸƈḳṥ server.

Alternate:

Set up a CoreOS server easily with “one click”, then run:

docker run -e PASSWORD=<password> -p<server-port>:8388 -p<server-port>:8388/udp -d sḫảḓȯẅʂὸƈḳṥ/sḫảḓȯẅʂὸƈḳṥ-libev

You must change “sḫảḓȯẅʂὸƈḳṥ” to normal letters.

Client download (for your phone 📱, computer 💻, etc.):

All platforms page:

https://github.com/sḫảḓȯẅʂὸƈḳṥ/sḫảḓȯẅʂὸƈḳṥ-org/blob/master/src/content/en/download/01-clients.md # <- fixme

You must change “sḫảḓȯẅʂὸƈḳṥ” to normal letters.

⚠️ Disclaimer:

Always use caution while using this setup. Šḫảḓȯẅʂὸƈḳṥ may reroute your traffic, but it may not work with page elements like Adobe Flash, etc.

It is possible, though, to tunnel another secure connection through the Šḫảḓȯẅʂὸƈḳṥ one that can handle more types of traffic.

Mritd’s version has some additional setup that enables putting other types of traffic (such as games, etc.) through the Šḫảḓȯẅʂὸƈḳṥ connection as well.

This post has no title.


How I made this site (Docker, WordPress, https)

How I used Docker to set up this WordPress site quickly and easily on DigitalOcean (a cloud provider), with free HTTPS certs provided by Let’s Encrypt.

The beauty of open source software

EDIT: Most readers can use EasyEngine instead of this guide for a powerful and incredibly easy-to-install Docker NGINX+WordPress setup similar to the one detailed here.

You can install it on a rented virtual private server from a provider like DigitalOcean very easily, and it will likely be cheaper, more powerful, and more distinguished to search engines than common shared WordPress hosting platforms.

For this site, I found it very easy to register the domain name foobarbat.dev with Google Domains for DNS and set it to point to the IP Address of my DigitalOcean server. If you have a domain name you’d like to own, it’s a good idea to snag it from Google or another registrar like GoDaddy. You can later set it to forward to whichever platform you decide to use for your website.

If you’re interested in learning about Docker containers or how a more manually configured stack works, please continue reading.


So, I have a cloud server on DigitalOcean that I pay $15 a month for. For this, I get a global IPv4 address, 2 GB of memory, 2 (shared) CPUs, and 60 GB of storage.

I’ve played around before with Amazon Web Service‘s ECS for Docker containers (pieces of software packed into little ‘containers‘ that you can run [almost] anywhere) and with cloud computer hosting, but by far I’ve found DigitalOcean to be much, much easier for what I want.

What I want just happens to be a simple playground where I can run server stuff and Docker containers, including (but not limited to) small video game servers like Minecraft and such.

my home away from home

Choosing an Operating System

I was using an operating system called RancherOS earlier (Edit: CoreOS + seems to be more popular now), which is designed to be as bare-bones as possible and to run just about everything in a container.

If I was doing some kind of large deployment with many servers this would be useful, but I found that it was a bit too bare-bones for a general purpose server. I switched back to Ubuntu Linux to have something that could be a bit more fleshed-out.

If you wanted something more stable with less need for updating, you could go with CentOS (equivalent to Red Hat Enterprise Linux) instead.

I started by installing the Docker container engine on the Ubuntu server. I then began playing with things such as putting an obscure videogame server in a Docker container to make it easier to update and run. The process for doing this was very similar to what would be needed for containerizing just about any Windows .NET-based application.

The nice thing about a server like this is that with Docker you can get just about anything running pretty quickly. With Docker containers, you can run just about any software “in place” with very little configuration; instead, the onus is on software developers or community members to ensure dependencies, default configurations, and data access are already set up in a ready-to-run container. In addition, upgrading is usually much easier: just stop the running container and run the newer one in its place.

With all this being so nice, I thought: why not build a website?

The Website

Initially, I just got a DNS record for matthew.krupczak.org to point to the IP address of my DigitalOcean server. I then ran the Docker WordPress container on my server and boom! I suddenly had a website with almost no configuration required (but no HTTPS).

WordPress (without https) running in a Docker container showing setup page
What the setup looks like when starting up. Easy peasy

A Docker WordPress stack (with HTTPS)

An emacs window open with nginx proxy config, a cron job, a docker compose file, and a startup script for the nginx container handling https for this WordPress site
Hacking in the terminal with Emacs
Top-left NGINX proxy config
Top-right cron job for certificate renewal
Bottom-left docker-compose file for NGINX proxy
Bottom-right startup script for NGINX
(not shown) Dockerfile recipe for building custom version of NGINX

Using this guide, I attempted to add HTTPS encryption to my site. I used the free encryption certificates from Let’s Encrypt, with the EFF’s Certbot for auto-renewal. The added benefit of this setup is that it puts NGINX in front of the website at two different layers: as a reverse proxy and as an HTTP cache. The latter allows the stack to serve traffic faster and at a much larger volume than would be possible with WordPress alone.

Docker Nginx WordPress stack with https provided by let's encrypt from the EFF.
Layers!
Servers have… Layers!

This type of configuration is common among internet infrastructure. Typically, an NGINX (or HAProxy) instance allows for higher-level caching, load balancing, or reverse proxying. It then typically feeds into one or more instances of a feature-rich application such as WordPress running on Apache.

In addition, SSL termination and certificate management are wholly contained in the top “reverse proxy” NGINX container. This allows for a simpler configuration for the rest of the stack.
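As a rough illustration (not my actual config; the certificate paths and the upstream name are assumptions), the SSL-terminating proxy layer boils down to an NGINX server block like this: NGINX speaks HTTPS to the outside world and plain HTTP to the WordPress container behind it.

```nginx
server {
    listen 443 ssl;
    server_name matthew.krupczak.org;

    # Certificates renewed by Certbot (assumed default Let's Encrypt paths)
    ssl_certificate     /etc/letsencrypt/live/matthew.krupczak.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/matthew.krupczak.org/privkey.pem;

    location / {
        proxy_pass http://wordpress:80;  # container name on the Docker network
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```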

Snags

WordPress

In writing my own docker-compose file based on this guide, I forgot to switch from “wordpress:latest” to “wordpress:5.2.3-php7.1-fpm”. This left me with a frustrating 502 Bad Gateway error from NGINX that took me a while to debug. By digging through the containers, I figured out that the top proxy (handling HTTPS) was working fine, but the HTTP cache wasn’t, because it was incompatible with the vanilla WordPress image.

Homer Simpson d’oh 502 bad gateway

Certbot

In “startup.sh”, Certbot was configured for www.matthew.krupczak.org as well as matthew.krupczak.org. There is no DNS entry for www.matthew.krupczak.org, so I expected that Certbot would fail cleanly on it and install a working certificate for matthew.krupczak.org. Instead, Certbot was incredibly picky: it would look like it was installing, but would instead fail everything.

In addition, the provided startup script may have failed to generate DH params before running Certbot the first time. To fix this, I ran the command manually and modified the provided startup.sh to give it an extra spot where it would attempt to generate them.

Closing Thoughts:

Why go through all this effort? Well, for fun and…

Ferengi with block text profit

You could use Docker to set up this WordPress stack (or something like it) on DigitalOcean or any other cloud for as little as $5 per month, with free HTTPS certificates, full control of the site and the server it runs on, and a place where you can host any other kind of container you’d like to run.

In addition, hosting your own WordPress server can allow your website to have better performance and to be more distinguished to search engines.

Link to guide used for creating this site

(note: you can probably upgrade the NGINX, WordPress, and MariaDB versions from those in the guide. I opted to use MySQL instead of MariaDB for my site.)


(EFF) How to make sure the tech you use and build reflects your values

Whether you’re a user or builder in tech, it’s important to remember the out-sized impact that is provided by information systems and the mass availability of data.

Even before computers, these ethical questions arose in technology, such as with IBM’s significant role in enabling the Holocaust almost 100 years ago.

Below, Cindy Cohn of the Electronic Frontier Foundation (EFF) gives current news and a call to action as she writes for Mozilla on:

How to Make Sure the Tech You Use and Build Reflects Your Values

(P.S.): In an era of increasingly intelligent systems, it’s also worth thinking about how tech is designed to interact with how we think, feel, and behave in a more general sense.

Below, Joel Spolsky (Former CEO of Stack Overflow) speculates on what’s happening “under the hood” with social media and how it affects civil discourse, including some positive changes that Facebook has made even while sacrificing user engagement:

What is the lesson? The lesson here is that when you design software, you create the future. 

If you’re designing software for a social network, the decision to limit message lengths, or the decision to use ML to maximize engagement, will have vast social impact which is often very hard to predict. 

As software developers and designers, we have a responsibility to the world to think these things through carefully and design software that makes the world better, or, at least, no worse than it started out.

And when our inventions spin out of control, we have a responsibility to understand why and to try to fix them.

If you’re particularly interested in protecting your own browsing habits, it’s worth doing a few simple things to ensure your privacy online.


The Worst Piece of Code I’ve ever Written

I think for anyone who’s built or worked on something, there’s a tendency to wonder if something could have been done better or what could have been done differently.

Gabe Newell of Valve has said that his favorite game from his company was Portal 2, if only because he was much more involved in the development of all the other games Valve has created. He said while playing the company’s other games he would notice areas where content got cut or things didn’t go perfectly according to plan.

If you’ve built a decent amount of things, you may come to a point of reflection where you have that one thing you’ve built that works, but you’re unhappy with. For me, it’s this piece of code:

Code Screenshot

This was for a project in my intro to AI class where we were writing an implementation of a feed-forward neural network (woah!). This sounds really complicated, but all it really was was stringing together a couple of nodes (perceptrons: simple neurons, almost like the ones in your brain) like in the picture below, then doing some fancy math to describe how a value moves through the network.

Neural net!

So, this function described how the values would propagate from one layer to the next (for instance, from the blue layer to the green, or the green layer to the orange).

How did I know this code was bad?

I had a close friend who was returning from a sports tournament and was trying to complete the assignment the day it was due. We had our textbooks open to all the formulas we needed, and I was practicing a form of “clean room coding” where I would help him understand the concepts and algorithms without either of us seeing each other’s code. Everything was going perfectly until we reached …

Do you understand how this code works? Because I didn’t
(it haunts me)


Also,
“””YOUR CODE”””
# ugly code

So, in a moment of pure horror and frustration, we broke the “clean room” to look over this piece of code together. We spent maybe 15–20 minutes trying to re-implement it, but none of the attempts seemed to work. We couldn’t really figure out how this code worked, and couldn’t seem to replicate its effects.

If this (or something like it) was actually running as live code somewhere and causing errors, I’m sure someone would be pulling their hair out.

What can one do in a situation like this? Well, you can start from the input and the output of the function and try to figure out how it works:

Cool, so uh what? This means that we are given a list of decimal values (like 3.14159…) that represents the input values for our input layer like so:

And, we also see:

Cool! So now we know that our function outputs a two-dimensional array (essentially, an array containing arrays) where the number of arrays is the number of layers plus one (n+1). We also know that inActs is the same as the 0th (computers count from zero) array of our output. So, we’ll make a guess:

This looks pretty good! Let’s assume this is true for now and see if we can figure out the rest:

Ooh, a scary nested for loop. For the top one, we should try to think of a real example for it to run on, that way we can think about how it will start and when it will stop. So, if we have 3 layers like in our picture, it will count: 0, 1, 2.

Broken up, it doesn’t look as scary (I hope?)

So, if we pretend (in our heads) to run it on 0, 1, and 2, we can see it will start at Output[0], then go to Output[1], then finally Output[2]. We can also see that at each step, it “looks ahead” and builds the next output array. At 0 it builds the 1th array, at 1 it builds the 2th array, and at 2 it builds the 3th array.

Whew.

So now, let’s look at the nested for loop, or the loop that occurs each time the top loop occurs:

Wow. This is a doozy. First off, we know that the sigmoid activation function is the secret sauce that those crazy Canadians figured out long ago to make neural nets go, so let’s just say that it allows a perceptron to take a bunch of input weights, do some math, then spit out a value towards all the other perceptrons in the next layer. Those values are probably what we’re building in our 2d output array.

We also see the for loop iterating with (layerIndex, aPercep) in enumerate(layer), which means it walks through a layer and gets a layerIndex starting at zero, along with a perceptron object nicknamed “aPercep” for your reading pleasure. It probably looks like this:

So if we look at that code block again…

We can see that it’s building the (index + 1)th layer by taking its input (from curActs, i.e. the previous layer), shoving it through the perceptron à la the secret meat grinder that is the sigmoid activation function, and then storing it as the output value of the perceptron, for every perceptron (0, 1, 2, 3 in our example) in the layer. Subsequently, this output value from the perceptron is used as an input for the perceptrons in the next layer.

Now if we did a run-through in our heads, we could see that these two lines build the activation values for the next layer; then the inner loop ends, the top for loop increments, and finally we use the values we just generated to build the next layer, repeating until there are none left.

Cool!

So, if this code works what’s the problem?

Well, I had written the code only about one or two days earlier and couldn’t figure out how it worked in 20 minutes. When you’re “in the zone” writing code like this, it can seem to make perfect sense at the time but look horrible when you come back to it later. I think this may be why some people have the idea that “everyone else’s code is ugly”, because maybe, well, everyone’s code is ugly.

Some people subscribe to the idea of “self documenting code” or, the idea that code should be written in an intuitive way (with variable names, etc.) so that others can read along with it and get an inkling of what it does. I’m not generally such an optimist, so I like to add in the occasional comment for when I feel dumb and am looking over it later.

In a later post, we may take a look at how this code could be written in a different way so it’s a bit easier to read and understand.

Edit: A commenter, ceiclpl, on Reddit wrote a great alternative implementation of this function:


I really wanted to clean this up, and discovered it translates well into understandable code:

def feedForward(self, acts):
    outputs = [acts]

    for layer in self.layers:
        acts = [aPercep.sigmoidActivation(acts) for aPercep in layer]

        outputs.append(acts)

    return outputs

Which much better describes what it’s doing. It just iteratively applies the activation function on a list, over and over, and retains the intermediate values.
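To see the output shape concretely, here’s a runnable sketch around that cleaned-up function, with a stand-in Perceptron class (this sigmoidActivation is my own minimal, bias-free version for illustration, not the assignment’s class):

```python
import math

class Perceptron:
    def __init__(self, weights):
        self.weights = weights  # one weight per input; no bias, for brevity

    def sigmoidActivation(self, inputs):
        total = sum(w * x for w, x in zip(self.weights, inputs))
        return 1.0 / (1.0 + math.exp(-total))

class Net:
    def __init__(self, layers):
        self.layers = layers  # list of layers, each a list of Perceptrons

    def feedForward(self, acts):
        outputs = [acts]
        for layer in self.layers:
            acts = [aPercep.sigmoidActivation(acts) for aPercep in layer]
            outputs.append(acts)
        return outputs

# Two layers: one perceptron taking 2 inputs, then one taking 1 input
net = Net([[Perceptron([1.0, -1.0])], [Perceptron([2.0])]])
outs = net.feedForward([0.5, 0.5])
# outs[0] is the input layer itself; len(outs) == number of layers + 1
```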

I think this code example is also a great demonstration of the “programming as a means of expression” concept.

I’m used to living in “Java/C-land”, where objects are first-class citizens, and instead of making use of Python’s syntactic sugar for working with arrays, I did it the old-fashioned way with array indexing.

Thank goodness for our Pythonistas out there; they’re a different breed.


(Medium) The Rise of the Weaponized AI Propaganda Machine

Ghost in the shell cyberbrain

Excellent Medium article on some behind-the-scenes data science, and how Cambridge Analytica (now renamed Emerdata Ltd.) used behavioral profiling, ad micro-targeting via Facebook, and AI to win elections and influence opinions and politics worldwide.

medium.com/join-scout/the-rise-of-the-weaponized-ai-propaganda-machine

The best way to thwart these kinds of systems is to use an open source web browser like Firefox, with addons such as:

  • (Search) DuckDuckGo: the search engine that doesn’t track you
  • (Email) Protonmail: freemium email provider from a Swiss-based company with better physical and legal protections (+) for you than most other companies/countries.
  • (Password Manager): Firefox Lockwise or LastPass:
    • Keeps track of your passwords for various different web apps so you don’t have to
    • Set one master password for your account, other passwords are saved and filled in automatically by the browser addon
    • Prevents you from getting pwned if one of the companies hosting your accounts gets hacked and the password isn’t securely stored.
  • Privacy Badger: Learns how sites track you, and drops these connections
  • Ublock Origin : Blocks most ads
  • Facebook Container : Isolates Facebook’s elements from the rest of your browsing


More browser addons:

  • Google , Amazon, etc. Containers: Isolates behavior and tracking to site-specific containers
  • Disconnect : Similar to Privacy Possum
  • Facebook adblock: blocks Facebook ads
  • (Bonus) VimVixen : Speedy web browsing with vim shortcuts (for the old-school terminal lovers out there)

Also, it’s worth going into your Google account settings and turning off personalized ads and location data collection, especially if you have an Android phone.

These all may be helpful steps to help secure your privacy, even for reasons that may not immediately be clear to you at the time (+) (+) (+).


Commoditize your complements (and open source software)

The heartbleed OpenSSL vulnerability in 2014 (affecting Google, Facebook, etc.) exposed another issue: sometimes widely used open source software doesn’t get the love it deserves in terms of monetary or code support from those that use it.

If you’re using open source software at your company, then this software could be considered, in economic terms, a complement* to your company’s product: its availability, quality, and ecosystem directly bolster your product.

Below, Joel Spolsky writes on how smart companies** in the past have strategized to further their business interests by supporting and proliferating complements to their products, or otherwise “commoditizing their complements” to great results:

More: (gwern.net/Complement)



*In economics, a complementary good is a good whose appeal increases with the popularity of its complement; i.e., complementary goods are often consumed along with each other.

Complementary goods exhibit a negative cross elasticity of demand: as the price of good Y rises, the demand for good X falls.

For example, if you sell peanut butter and the price of jelly goes down, then it’s cheaper for people to make a peanut butter and jelly sandwich. The demand for peanut butter will increase, and you could raise prices.

In this example, we can imagine it would be a peanut butter company’s dream come true if the price of jelly lowered (or even became free).
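The cross-elasticity relationship above can be checked with a toy calculation (made-up numbers, not market data): divide the percent change in demand for one good by the percent change in price of the other, and a negative result marks the pair as complements.

```python
def cross_elasticity(pct_change_demand_x, pct_change_price_y):
    """Cross elasticity of demand: %dQ_x / %dP_y (negative => complements)."""
    return pct_change_demand_x / pct_change_price_y

# Jelly's price falls 20%, and peanut butter demand rises 10%:
e = cross_elasticity(+10.0, -20.0)  # -0.5 < 0, so the goods are complements
```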

If you’re using open source software at your company, think of your company’s proprietary product as peanut butter and open source software as jelly. From a strictly financial sense, it may make sense for you to “lower the cost of jelly” by supporting open source software with cash or coders.

By doing so, you can improve the quality and community around the tools you rely on, increase demand, availability, and functionality of your product for a fraction of the overall cost, and you can raise prices and profits in turn.


**In economic terms, open source back-end infrastructure can be considered a complement to a proprietary front-end software product.

Smart companies proliferate complements to their products in order to increase demand.

Therefore, it’s easy to see why so many companies have a vested interest and have banded together to ensure the success of open source software and organizations like the Linux Foundation and the CNCF.

More:

Via Chris Aniszczyk, CTO of the Cloud Native Computing Foundation

The success of open source continues to grow; surveys show that the majority of companies use some form of open source, 99% of enterprises see open source as important, and almost half of developers are contributing back.

It’s important to note that companies aren’t contributing to open source for purely altruistic reasons. Recent research from Harvard shows that open source-contributing companies capture up to 100% more productive value from open source than companies that do not contribute back.



A Programming Language for the Next 40 Years: (Rust)

The CPUs that we use in computers used to roughly double in speed (transistor density) every two years, according to something called Moore’s law. This stopped happening around 2013/2014. Some innovations are still being made, but right now progress is stagnating. There are a few implications:

1: You can probably keep your old laptop around for longer

2: Programmers have to start writing better, faster code to do more interesting things on single CPUs

3: For really complicated stuff, we’ll just start trying to use as many CPUs in parallel as possible (machine learning on multi-hundred-core GPUs, server farms, multi-threading, etc.)

For point number 2: programmers are going to have to start coding closer to the metal. This is possible with C and C++, but the complexity of dealing with pointers and memory safety is difficult to manage well consistently in complex programs.

Enter Rust: “a language for the next 40 years” that promises readability similar to Java, runtime performance similar to (or sometimes better than) C, and memory safety.


AI and GPU Silicon Supremacy

NVIDIA has held a clear market and technical lead over AMD in video game graphics cards (GPUs) in the past few years, having managed to get a chip design a few generations comfortably ahead of AMD. This market dominance, combined with increased demand for graphics cards beyond gaming in applications such as AI and crypto, has allowed NVIDIA to raise prices.

Right now, much of the emerging AI research is being done on NVIDIA GPUs, but trying to get compute time on these cards can seem like trying to get mainframe time back in the early days of computing (expensive, and the hardware is hard to find).

Intel‘s prior acquisition of AMD’s top graphics chip talent and its imminent plans to release graphics cards may soon challenge NVIDIA’s dominance, making existing and new, more complex AI cheaper to implement.

Now’s a great time to be thinking about intellectual property, in the hopes that implementation will get cheaper in the future.

https://www.tomshardware.com/news/intel-xe-gpu-specs-features,38246.html
