From Django to Machine Learning: Why This Transition Feels Like Brain Gymnastics
Hey, I’m Rai. I live in the world of Django – the land of models, views, templates, and the occasional soul-sucking deployment bug. I’ve built APIs, dashboards, full-on SaaS platforms, and I can wrangle a Django app like it’s second nature.
But recently, I’ve been venturing out of my comfort zone. I’m learning Machine Learning.
Not deep learning (yet), not building LLMs from scratch or crafting YOLO models for object detection. Just classic machine learning – the solid foundations. You know, the stuff with pandas, scikit-learn, and enough CSV files to make you question your life choices.
The Dream: Smarter Apps
The reason I even got into ML was simple: I wanted to make my apps smarter. I was tired of building the same CRUD-based platforms over and over. I wanted them to adapt, to learn, to do things – like suggest better domain names, detect tone in messages, classify feedback, or automatically flag weird behavior.
That meant learning ML. I thought, “How hard could it be?”
Spoiler alert: it’s way harder than I expected.
The Struggles (aka why ML is mentally exhausting)
I’m not going to sugarcoat it. ML is a different beast from web dev. It’s a mental shift that hits you like a truck if you come from a deterministic world like Django.
1. The Math Monster
You can technically use ML libraries without diving deep into the math. But the moment you want to understand why a model performs badly or how to tune hyperparameters, you hit a wall.
I knew Python decently, but linear algebra? Matrix multiplication? Gradient descent? These weren’t part of my regular Django stack.
Learning ML forces you to dust off math you last touched in school – and this time, it actually matters.
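Gradient descent, at least, turns out to be less scary in code than on paper. Here's a minimal sketch in plain Python, minimizing the toy loss (w − 3)² – the learning rate, step count, and starting point are arbitrary choices for illustration:

```python
# Minimize loss(w) = (w - 3)**2 with plain gradient descent.
# The derivative of (w - 3)**2 with respect to w is 2 * (w - 3).

def gradient_descent(lr=0.1, steps=100):
    w = 0.0  # arbitrary starting point
    for _ in range(steps):
        grad = 2 * (w - 3)  # slope of the loss at the current w
        w -= lr * grad      # step downhill, scaled by the learning rate
    return w

print(gradient_descent())  # converges towards 3.0
```

Real models do exactly this, just with millions of weights and gradients computed automatically.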
2. Too Many Tools, Too Little Time
In Django, you can do so much with just a few tools: the ORM, the template engine, maybe Django REST Framework.
ML, on the other hand, is like walking into a warehouse full of libraries with overlapping functionality.
- scikit-learn for the classics
- XGBoost for competitions
- HuggingFace for NLP
- Optuna for hyperparameter tuning
- Gradio, Streamlit, or FastAPI for putting it all together
And don’t even get me started on numpy, pandas, and how badly I’ve abused them while cleaning data.
I know how to fine-tune models now – mostly NLP ones using HuggingFace – but it took months of just trying to understand how tokenizers work, how data is structured, and what "batch size" even means.
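"Batch size", it turns out, is just how many examples the model sees per training step. A toy sketch of how a dataset gets chopped into batches – the data and batch size here are made up for illustration:

```python
def batches(dataset, batch_size):
    """Yield consecutive chunks of at most batch_size items."""
    for start in range(0, len(dataset), batch_size):
        yield dataset[start:start + batch_size]

examples = list(range(10))  # stand-in for 10 tokenized samples
for batch in batches(examples, batch_size=4):
    print(batch)
# → [0, 1, 2, 3], [4, 5, 6, 7], [8, 9]
```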
3. No Instant Gratification
In Django, you refresh your browser and boom – instant feedback.
In ML? You train a model, wait... wait longer... then wait some more. Only to find out you forgot to encode a column properly. Or your accuracy is trash. Or your model overfits like crazy.
Debugging a web app is fast. Debugging a training pipeline is like detective work in the dark with a flashlight that runs on hope.
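That "forgot to encode a column" bug is exactly as dumb as it sounds: models want numbers, not strings. A plain-Python sketch of one-hot encoding a categorical column (a real pipeline would reach for pandas' `get_dummies` or scikit-learn's `OneHotEncoder` instead):

```python
def one_hot(column):
    """Encode a categorical column as one-hot vectors."""
    categories = sorted(set(column))
    index = {cat: i for i, cat in enumerate(categories)}
    rows = [[1 if index[value] == i else 0 for i in range(len(categories))]
            for value in column]
    return rows, categories

rows, cats = one_hot(["cat", "dog", "cat", "bird"])
print(cats)  # ['bird', 'cat', 'dog']
print(rows)  # [[0, 1, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0]]
```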
4. The Language Barrier
The ML world has its own language. Terms like:
- precision vs recall
- F1 score
- regularization
- bias-variance tradeoff
- stratified sampling
They sound fancy until you realize you’re Googling each term every time you read a research paper or Kaggle notebook.
Sometimes I wonder if machine learning is just a secret club for people who like confusing names for simple concepts.
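For what it's worth, the scariest-sounding ones boil down to simple ratios over the confusion matrix. A plain-Python sketch – the counts are made up:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)  # of everything we flagged, how much was right
    recall = tp / (tp + fn)     # of everything real, how much did we catch
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# e.g. 8 true positives, 2 false positives, 4 false negatives
p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=4)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.8 0.67 0.73
```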
5. Impostor Syndrome Deluxe
Everyone on the internet is building LLMs, training diffusion models, or running 4-bit quantized transformers on a Raspberry Pi.
Meanwhile, I’m here figuring out how to handle class imbalance and why my accuracy dropped after one-hot encoding.
It’s easy to feel behind.
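(Class imbalance, for the record, turned out to be less mysterious than it sounds. One common fix is weighting each class inversely to its frequency – the sketch below uses the same n_samples / (n_classes × count) heuristic that scikit-learn's `class_weight="balanced"` applies, with made-up label counts:)

```python
from collections import Counter

def balanced_weights(labels):
    """Weight each class by n_samples / (n_classes * class_count)."""
    counts = Counter(labels)
    n = len(labels)
    return {cls: n / (len(counts) * c) for cls, c in counts.items()}

# 90 negatives vs 10 positives: the minority class gets a 9x heavier weight
weights = balanced_weights([0] * 90 + [1] * 10)
print(weights)  # {0: 0.555..., 1: 5.0}
```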
But Still... I’m Hooked
Even with all the hurdles, I’m not giving up. Because every time something clicks – every time a model I fine-tune gives a great result, or I understand why something worked – I feel like I just leveled up in the real world.
I still haven't dug into deep learning properly, and that's fine. I'm focusing on getting my ML fundamentals rock-solid. I'm learning to ask the right questions. I'm building tiny apps with smart features. I'm tweaking models, tracking metrics, and learning to think like a data scientist while still coding like a dev.
What Keeps Me Going
- Project-based learning: I’m building things I actually care about. Tools for domain name generation. A sentiment classifier for student feedback in Bisaya and Tagalog. These aren’t textbook exercises – they’re real projects.
- Tiny wins matter: Even getting .predict() to return something halfway decent is a win.
- Fine-tuning is a shortcut: I don’t need to train massive models from scratch. I can start with pre-trained models and just tweak them to my needs. It's like using Django packages – start with batteries included, then customize.
- The long game: I know this stuff won’t click overnight. I’ve committed to the journey, and I trust that future me will look back and be glad I kept going.
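That first halfway-decent `.predict()` really can be embarrassingly small. A sketch of roughly what the moment looks like, using scikit-learn's LogisticRegression on made-up, trivially separable data:

```python
# A tiny scikit-learn win: fit a classifier on toy data and call .predict().
# The numbers are invented; anything clearly separable works.
from sklearn.linear_model import LogisticRegression

X = [[0], [1], [2], [10], [11], [12]]  # one feature
y = [0, 0, 0, 1, 1, 1]                 # two clean classes

clf = LogisticRegression()
clf.fit(X, y)
print(clf.predict([[1.5], [11.5]]))  # → [0 1]
```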
Final Thoughts
If you’re a web dev like me thinking of learning ML, be ready to suffer. But also be ready to grow in ways you didn’t expect. You’ll break your brain, rage at your code, and question your decisions weekly.
But eventually, it all starts to make sense. Slowly. Painfully. And then beautifully.
I’m not an ML engineer yet – not even close. But I’m getting there. One fine-tuned model at a time.
And when I do build that million-dollar AI-powered SaaS? You’ll know this blog post was part of the journey.