My first family computer was the dual-disk kind. That meant two floppy disk drives – you inserted a DOS disk into one, and a disk for whatever program you wanted to run into the other. There was a tiny amount of hard-disk space, but that was reserved for more dynamic programs – it wasn’t big enough to hold DOS, and we never wasted it on static programs we already had on a disk.
Instead, my dad had a book with programs in it. Literally. The source code was printed on every page. He’d open up a text editor and painstakingly type every line of code so he could run simple programs.
I used it to make art programs. And password-generation programs. And once a naive attempt at a chatbot that returned scripted responses to text inputs – my first try at an artificial intelligence that was anything but.
I learned tech the hard way
I actually tried to teach myself C++ in middle school. The library had an old C manual, but I couldn’t figure out the difference between C and the “++” variety. I found better manuals at the local bookstore, but they still didn’t help me learn development. The only “free” C++ compiler was command-line only; the available GUI compilers were expensive, and my allowance didn’t come close to covering one.
I didn’t really learn programming until grad school. In my free time. By searching Google and Wikipedia, reading various blogs on the subject, and emailing a very patient friend in another state who ended up teaching me PHP remotely.
Then I went to a WordCamp that featured breakout sessions specifically for kids. I met a 10-year-old from Brazil who had attended, and been a key participant in, international hackathons. I met a 16-year-old at a high school career day who, even before graduating, had amassed more software development experience than I had at that point in my professional career.
Most professional events put only established speakers on the agenda. Some meetups I’ve attended have wrapped up with a migration to bars with strict adults-only policies. The keynote speaker at one event – and lead developer of one of the frameworks we were using – was only 19. He was a genius, but he couldn’t participate in any of our networking events because the venues carded at the door.
Who owns the future?
It’s evident that the technology community has a serious lack of diversity when it comes to age. Those under 21 are often forgotten or explicitly excluded. I’ve met teenagers and pre-teens with as much experience as – if not more than – some of the 30-, 40-, and 50-year-old developers I know. It’s a shame they’re not more involved in our community.
The youth of today aren’t just the ones who will inherit the infrastructure we build. They’re already bright enough to understand it – and optimize it. They’re learning to code by playing games and solving novel problems that their older colleagues can barely articulate. We just don’t always give them the chance professionally.
I met a brilliant 22-year-old who wanted to be a senior dev. He’d skipped college to launch an app instead, and it really took off. After selling it, he wanted a regular software job to level up his other skills and build a network. But at 22, no one would give him an interview for a senior role, or really any role. The expectation was college until 22, then at least seven years of professional experience before a senior position. Most teams wanted a formal CS degree for anything beyond entry level to begin with.
Sadly, I don’t know what happened to him. After a year of striking out, with offers for nothing but internships, he moved back to his parents’ house. His Twitter account is gone; his website and blog are abandoned. No one took him seriously because of his youth, and he opted out of the community entirely.
How many more times will others like him be told “you can’t do that” before they internalize it, begin to believe it, and doom us to a future of technological mediocrity?