Tech Deployments
Senior AI Systems Engineer & Technical Project Lead specializing in production-grade AI and data infrastructure — from ingestion and orchestration to analytics, deployment, and automation at scale.
🧩 Data Infrastructure
- Built End-to-End Data Orchestration System – modular scheduler overseeing 100+ ETL and ML workflows with CI-style logging, monitoring, and failure recovery.
- Engineered Textflow ETL & Memory Architecture – JSONL/Parquet pipelines feeding FAISS-based retrieval layers; includes data validation, lineage tracking, and summarization.
- Deployed ChromaDB Knowledge Store – persistent embedding system with automated retraining, query APIs, and metadata versioning.
- Integrated Multimodal Data Stack – unified ingestion of audio, video, OCR, and structured text into reproducible, schema-validated datasets.
- Automated Financial Data Framework – OCR and parsing engine reconciling invoices and ledgers with full audit trails and analytic dashboards.
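The scheduler in the first bullet can be sketched as a small dependency-ordered runner with retries and CI-style status logging. This is a minimal illustration, not the production API; the names `Task` and `run_dag` are invented for the example.

```python
# Minimal sketch of a DAG-style workflow scheduler: run tasks in
# dependency order, retry failures, and log a status per attempt.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Task:
    name: str
    action: Callable[[], None]
    deps: List[str] = field(default_factory=list)
    retries: int = 2

def run_dag(tasks):
    """Execute tasks whose dependencies are satisfied; return a status log."""
    done, log = set(), []
    pending = {t.name: t for t in tasks}
    while pending:
        ready = [t for t in pending.values() if all(d in done for d in t.deps)]
        if not ready:
            raise RuntimeError(f"cycle or unmet dependency among: {sorted(pending)}")
        for t in ready:
            for attempt in range(t.retries + 1):
                try:
                    t.action()
                    log.append((t.name, "ok", attempt))
                    break
                except Exception:
                    if attempt == t.retries:
                        log.append((t.name, "failed", attempt))
            # The sketch marks even failed tasks done so independent work
            # continues; real recovery logic would be more involved.
            done.add(t.name)
            del pending[t.name]
    return log
```

A run of three chained tasks (`extract` → `transform` → `load`) executes them in dependency order regardless of the order they were registered in.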

Deployment: M.I. Journal
Quartz knowledge hub with tags, monthly logs, and semantic navigation.
🧠 Machine-Learning & Knowledge Systems
- Designed Entity & Semantic Graph Engine – NER→knowledge-graph pipelines powering cross-document analytics and search.
- Implemented Repository Intelligence Toolkit – CI scripts and GitHub-API analytics for dependency audits, code-health metrics, and workflow telemetry.
- Built Cloud-Linked Analytics Environments – integrated BigQuery and GCP storage for socio-economic modeling; automated data pulls and transformations.
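The NER → knowledge-graph step can be illustrated with a toy co-occurrence graph: entities extracted upstream (e.g. by an NER model) become nodes, and entities appearing in the same document get a weighted edge. The function names here are hypothetical, not the toolkit's API.

```python
# Illustrative entity-graph builder: weight an edge between two entities
# by how many documents mention both. Entity extraction is assumed done
# upstream by an NER stage.
from collections import Counter
from itertools import combinations

def build_graph(doc_entities):
    """doc_entities: {doc_id: [entity, ...]} -> {(a, b): co-occurrence count}."""
    edges = Counter()
    for ents in doc_entities.values():
        for a, b in combinations(sorted(set(ents)), 2):
            edges[(a, b)] += 1
    return edges

def neighbors(edges, entity):
    """Entities linked to `entity`, strongest co-occurrence first."""
    related = [((b if a == entity else a), w)
               for (a, b), w in edges.items() if entity in (a, b)]
    return sorted(related, key=lambda pair: -pair[1])
```

Cross-document search then reduces to ranking an entity's neighbors by edge weight.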
⚙️ Automation & Governance
- Developed Static-Site & API Deployment Pipelines – continuous Markdown-to-web builds and JSON API endpoints for analytical outputs.
- Architected AI Consulting & Automation Framework – productized internal automation stack into modular micro-services with paywall logic and dashboards.
- Maintained Control-Tower Documentation Layer – ADR and runbook system that records architecture decisions and operational lifecycles.
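The JSON-API side of the deployment pipeline can be sketched as a build step that writes each analytical output to a static `api/<name>.json` file a site host serves directly. Paths and the `publish_endpoints` name are illustrative assumptions, not the pipeline's actual interface.

```python
# Sketch of a static JSON-endpoint build step: every analytical output
# becomes a file under <site>/api/, ready for a static host to serve.
import json
from pathlib import Path

def publish_endpoints(outputs, site_dir):
    """outputs: {name: JSON-serializable payload}. Returns written paths."""
    api_dir = Path(site_dir) / "api"
    api_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for name, payload in outputs.items():
        path = api_dir / f"{name}.json"
        # Stable formatting keeps rebuilds diff-friendly in version control.
        path.write_text(json.dumps(payload, indent=2, sort_keys=True))
        written.append(path)
    return written
```

The same loop pattern applies to the Markdown-to-web half of the pipeline, with a Markdown-to-HTML converter in place of `json.dumps`.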
🤝 Collaboration & Learning Tools
- Created Cross-Team Collaboration Layer – CRM + CMS systems turning datasets and research outputs into accessible dashboards for non-technical partners.
- Auto-Generated Pedagogical Code Materials – notebooks and examples derived from live repositories, ensuring reproducible teaching and documentation.

Deployment: LDD-NBS Course Exercises
Notebook-based exercises for Data Lab at UBA, built for reproducible learning.
- Deployed Question-Answering System – an LLM-based app for internal knowledge retrieval and teaching assistance.

Deployment: Evaluar App
AI-assisted exercise distribution and feedback for departmental courses.
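The retrieval step behind an LLM question-answering app can be sketched with a keyword-overlap scorer standing in for the embedding lookup (FAISS/ChromaDB in the systems above): pick the stored passage most similar to the question and hand it to the model as context. All names here are illustrative.

```python
# Toy retrieval-augmented QA step: score passages by token overlap with
# the question and assemble a context-plus-question prompt for an LLM.
# A stand-in for an embedding-based vector search.
import re

def tokenize(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def best_context(question, passages):
    """Return the passage sharing the most tokens with the question."""
    q = tokenize(question)
    return max(passages, key=lambda p: len(q & tokenize(p)))

def build_prompt(question, passages):
    """Assemble the prompt an LLM would receive."""
    ctx = best_context(question, passages)
    return f"Context:\n{ctx}\n\nQuestion: {question}\nAnswer:"
```

Swapping `best_context` for a vector-store query upgrades this from keyword matching to semantic retrieval without changing the surrounding app.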
Core competencies: Python, SQL, ChromaDB, FAISS, Docker, Airflow-style orchestration, BigQuery, GitHub Actions, OCR pipelines, and data governance design.
Focus areas: automation reliability, semantic data architecture, and reproducible analytical ecosystems.