I’ve been a little busy over the past year or so. You see, I have been creating the most advanced neural net known to mankind. It took me nine months to code it and get it just right. My husband provided some of the code, and I provided the rest. As often happens, we repurposed some code from people we trust. It’s basically open source. Almost anyone can do it. Compiling it wasn’t the most fun I’ve ever had… There were some late nights, and I definitely lost some sleep. But my neural net turned out really cute! I lucked out. I didn’t plan for it, but it has dimples!
Now that she is here, I’m training my model. It will take a long time to train. Eighteen years by some accounts. Maybe more? I often wonder whether the training outcome depends more on how well I train it or on its innate architecture. It has many layers and a variety of activation functions. Dropout is definitely built in automatically. By all accounts, it’s very sophisticated. The only major downside is that I have to feed it all of its labeled and unlabeled data manually.
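For my digital neural nets, those pieces look something like this minimal sketch in plain Python. Everything here is illustrative: a made-up two-layer toy network with ReLU activations and inverted dropout, not any particular framework’s implementation.

```python
import random

def relu(x):
    # A common activation function: pass positives through, zero out negatives.
    return max(0.0, x)

def dropout(values, p, training=True):
    # Inverted dropout: randomly zero each activation with probability p,
    # scaling survivors by 1/(1-p) so the expected sum stays the same.
    if not training or p == 0.0:
        return list(values)
    keep = 1.0 - p
    return [v / keep if random.random() < keep else 0.0 for v in values]

def layer(inputs, weights, activation):
    # One dense layer: a weighted sum per output unit, then the activation.
    return [activation(sum(w * x for w, x in zip(row, inputs))) for row in weights]

# A toy forward pass: two inputs -> two hidden units (ReLU + dropout) -> one output.
random.seed(0)
hidden = layer([1.0, 2.0], [[0.5, -0.25], [0.1, 0.3]], relu)
hidden = dropout(hidden, p=0.5)
output = layer(hidden, [[1.0, 1.0]], lambda x: x)  # identity activation on the output
```

Dropout is only applied during training; at inference time (`training=False`) the activations pass through unchanged, which is why the survivors get scaled up during training.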
When I put together my training data, I think a lot about ethics and values. I want my little neural net to eventually make good decisions and be kind to other programs. It’s not so easy to build an unbiased model. Because I’m creating the training set, it’s up to me (and my family) to teach it well. Luckily, it’s not a one-and-done situation. If, in a few years, I learn that my neural net has learned to hit other neural nets, then I can ramp up the training data to try to discourage that behavior. But, I guess ultimately, it’s still a black box. I’ll never understand exactly why it made each decision!
Having a human neural net makes me think differently about how I might train my digital neural nets in the future… Meanwhile, my neural net woke up from her defragging and sleep processes, so I need to go. I get to go stack more soft blocks so she can collect more unlabeled data about gravity!