Deep Learning

TensorFlow projects across time-series forecasting, NLP transfer learning, and computer vision. Fewer notebooks, more end-to-end evals.

Aug 2023 – Sep 2023

BitPredict — N-BEATS time-series ensemble

Walked the model zoo end-to-end and landed on an ensemble with MAPE 2.55% on Bitcoin price.

A TensorFlow time-series forecasting project on Bitcoin price. Progressed from a naïve forecast baseline → Conv1D → LSTM → multivariate Conv1D → a from-scratch replication of the N-BEATS architecture (Oreshkin et al., "N-BEATS: Neural Basis Expansion Analysis for Interpretable Time Series Forecasting") → a stacked ensemble of all of the above. Final ensemble: MAE 566.77, RMSE 1072.96, MAPE 2.55%, MASE 0.996; the ensemble beat every individual model. Honest writeup: crypto prices are driven too heavily by exogenous factors for any forecasting model to "win" in an absolute sense, but the experimentation mapped the trade-offs between architectures cleanly.
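The evaluation loop behind those numbers can be sketched in plain NumPy; the metric names (MAE, RMSE, MAPE, MASE) come from the writeup above, but the function name, the averaging-based ensemble, and the sample prices are illustrative assumptions, not the repo's actual code.

```python
import numpy as np

def evaluate_forecast(y_true, y_pred):
    # Illustrative metric helper, not the repo's code
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mae = np.mean(np.abs(y_true - y_pred))
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0
    # MASE scales MAE by the MAE of a naïve "repeat yesterday" forecast,
    # so a value below 1 means the model beats that naïve baseline
    naive_mae = np.mean(np.abs(y_true[1:] - y_true[:-1]))
    mase = mae / naive_mae
    return {"mae": mae, "rmse": rmse, "mape": mape, "mase": mase}

# A stacked ensemble in its simplest form: average the members' forecasts
member_preds = np.array([
    [26100.0, 26400.0, 26900.0],  # e.g. a Conv1D forecast (made-up numbers)
    [25900.0, 26600.0, 27100.0],  # e.g. an LSTM forecast (made-up numbers)
])
ensemble_pred = member_preds.mean(axis=0)
print(evaluate_forecast([26000.0, 26500.0, 27000.0], ensemble_pred))
```

A MASE near 1.0 (the project reports 0.996) says the ensemble is only marginally better than the naïve forecast, which is exactly the "crypto is exogenous-factor-driven" caveat in the writeup.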

TensorFlow · N-BEATS · LSTM · Conv1D · Ensembles
Read on GitHub →
Jul 2023 – Aug 2023

SkimLit — NLP medical-text skimmer

Replicated the PubMed 200k RCT paper — token + char + positional embeddings hit 83% accuracy.

A TensorFlow NLP classifier that labels the sections of medical RCT abstracts (Background / Objective / Methods / Results / Conclusions) so long papers can be skimmed, replicating the PubMed 200k RCT paper. Progression: sklearn baseline (72% accuracy) → Conv1D (78.6%) → Universal Sentence Encoder transfer learning (71%) → token + character embeddings (73.4%) → final model combining token + character + positional embeddings (83.2% accuracy, F1 0.83), in the same ballpark as the original paper's results. A good case study in why embedding composition matters more than model size.
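The "tribrid" input idea is simple to sketch: alongside the learned token and character embeddings, each sentence gets positional features derived from its line number within the abstract. The feature sizes and the `one_hot_position` / `fuse` names below are illustrative assumptions, not the repo's code.

```python
import numpy as np

def one_hot_position(line_number, depth=15):
    # One-hot encode "which line of the abstract is this", clipped at `depth`
    # (depth=15 is an assumed cap, not the repo's actual value)
    vec = np.zeros(depth)
    vec[min(line_number, depth - 1)] = 1.0
    return vec

def fuse(token_emb, char_emb, line_number, total_lines):
    # Tribrid input: token embedding + char embedding + positional features
    return np.concatenate([
        token_emb,
        char_emb,
        one_hot_position(line_number),
        [line_number / max(total_lines, 1)],  # relative position in [0, 1]
    ])

# e.g. a 128-dim token embedding and a 32-dim char embedding (assumed sizes)
features = fuse(np.zeros(128), np.zeros(32), line_number=2, total_lines=10)
print(features.shape)  # 128 + 32 + 15 + 1 features
```

The positional features are what push accuracy from 73.4% to 83.2%: a "Conclusions" sentence reads much like a "Results" sentence, but it almost always sits at the end of the abstract.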

TensorFlow · Transfer Learning · USE · Embeddings · Conv1D
Read on GitHub →
Jun 2023 – Jul 2023

Food Vision — EfficientNet transfer learning

Food101 classifier — baseline → data augmentation → fine-tuning, ~77% val accuracy.

Computer vision classifier over the Food101 dataset, inspired by the original Food101 paper. Workflow: simple CNN baseline (overfit quickly) → add data augmentation (more input variance and lower raw accuracy, but better generalization) → EfficientNet feature extraction → unfreeze the top layers for fine-tuning. Final validation accuracy ~77%, with TensorBoard curves and confusion matrices tracking every step. Also tried EfficientNetB7 and documented that bigger wasn't better for this dataset: a useful "simple > complex" lesson captured in the writeup.
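The augmentation step can be sketched in plain NumPy; the project most likely used tf.keras preprocessing layers instead, so the specific transforms and jitter ranges here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(image):
    """Randomly flip and brightness-jitter one image with values in [0, 1]."""
    if rng.random() < 0.5:
        image = image[:, ::-1, :]          # random horizontal flip
    image = image * rng.uniform(0.9, 1.1)  # +/- 10% brightness jitter (assumed range)
    return np.clip(image, 0.0, 1.0)

# Augment a batch of 8 assumed 224x224 RGB images
batch = rng.uniform(0.0, 1.0, size=(8, 224, 224, 3))
augmented = np.stack([augment(img) for img in batch])
print(augmented.shape)
```

Because each transform preserves the label, augmentation trades a drop in raw training accuracy for the better generalization the writeup describes: the model sees a wider input distribution without any new labeled data.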

TensorFlow · EfficientNet · Transfer Learning · Data Augmentation · TensorBoard
Read on GitHub →