Projects
Each case study focuses on the same arc: the constraint, the system that was built, the technical decisions that shaped it, and the outcome it produced.
Bersama Bill ID: Face-Verified Bill Payment POC

Context
Artajasa needed a live banking demo for its annual Members Meeting, where the audience included banking clients and executives across Indonesia. Bersama Bill ID was scoped as a proof of concept for face-verified bill payment, and I owned the build from service design through demo delivery.
Challenge
The scope was to deliver a demo-ready, end-to-end proof of concept in about one month as the sole developer. That meant learning Go, Flutter, and Docker quickly while also building a recognition pipeline that was fast enough for a live demo, reliable enough for repeated face matching, and complete enough to feel like a real payment product rather than a disconnected prototype.
Process
I treated the project as a compressed production-style build: define the system boundaries first, choose an inference approach that could survive demo conditions, then integrate the mobile experience around the service layer.
Architecture & Stack Ramp-Up
The first phase focused on defining clear service boundaries and getting productive with the stack: Go for backend services, Flutter for the mobile client, and Docker for repeatable packaging and deployment.
Recognition Pipeline & Service Layer
The core system combined 4 Go and Python services covering authentication, facial recognition, payment orchestration, and embedding inference. Embeddings were served over gRPC and stored in PostgreSQL with pgvector, and the recognition layer was benchmarked across 8 models on 2,000 image pairs before the final approach was selected.
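The matching step this pipeline relies on can be sketched outside the database. The SQL and table name below are illustrative assumptions, not the actual schema; the NumPy function mirrors the quantity pgvector's cosine-distance operator (<=>) returns:

```python
import numpy as np

# Hypothetical sketch of the face-matching step. The table and column
# names in the SQL are illustrative, not taken from the real system.
MATCH_SQL = """
SELECT user_id, embedding <=> %(probe)s AS distance
FROM face_embeddings
ORDER BY embedding <=> %(probe)s
LIMIT 1;
"""

def cosine_distance(a, b) -> float:
    """The quantity pgvector's <=> operator computes: 1 - cosine similarity."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(probe, stored, threshold: float = 0.35) -> bool:
    # The threshold is illustrative; a real system would calibrate it
    # against benchmark pairs (here, the 2,000-pair evaluation set).
    return cosine_distance(probe, stored) < threshold
```

The design point is that the database does the nearest-neighbor search while the threshold decision stays in the service layer, where it can be tuned without schema changes.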
Mobile Frontend & Integration
The Flutter client was built around PIN plus face-based authentication, biller selection, single and bulk payment flows, balance top-up, transaction history, and JWT-backed session handling, then integrated with the backend APIs.
Deployment & Demo Readiness
The final stage focused on containerization, end-to-end testing, and hardening the demo flow so the proof of concept could be presented reliably at the event.
Results

The final deliverable was a working proof of concept that combined facial verification, bill payment flows, and a mobile client in one integrated system. It was demonstrated live at the Members Meeting to hundreds of banking executives without technical failure.
It brought together applied AI, backend systems, and product delivery in one compressed build, with clear ownership from architecture and model evaluation through integration and live presentation.
IS 2024 Event Operations Platform
Context
IS 2024 required a dedicated web platform to support the operations of HIMAFI ITB's physics-student orientation program for more than 100 participants. The system needed to centralize participant workflows and give organizers a reliable operational backend during a short preparation window, rather than relying on scattered spreadsheets and manual coordination.
Challenge
The platform had to be built from zero to production in a two-week sprint while covering task submission and scoring, attendance and leave handling, daily participant condition reporting, leaderboard updates, organizer dashboards, and Google Sheets synchronization for non-technical operations staff. The implementation also had to be practical to run during the event despite my having no prior Next.js experience at the time.
Process
The delivery process prioritized operational risk: get the participant flows stable first, then add the organizer-side controls and integrations that would reduce manual work during the event itself.
Architecture & Planning
The project began with core data modeling, role boundaries, and user-flow planning so participant actions, organizer workflows, and day-specific event controls could be implemented without rework.
Core Platform Delivery
The main participant and organizer workflows were built in a concentrated sprint, including task distribution, file submission, attendance and leave handling, leaderboard logic, and daily condition tracking on a stack using Next.js 14, tRPC, Prisma, PostgreSQL, NextAuth.js, and UploadThing.
Integration & Testing
The next stage focused on integrating organizer-side controls, syncing operational data into Google Sheets, and testing the application around the workflows that staff would use throughout the event.
Deployment & Handover
The platform was deployed with an admin-controlled, day-by-day CMS so organizers could manage attendance passwords, active event days, and operational status without code changes.
Solution
The final system combined participant task handling, committee scoring, attendance and leave management, leaderboard updates, and organizer operations in one full-stack application. The admin side also exposed a day-by-day CMS for passwords, event-day activation, and Google Sheets document mapping, so organizers could operate the system without code changes.
Results
The platform launched on schedule with the full set of participant and organizer workflows required for the event.
It supported a 100+ participant orientation program and reduced operations friction by syncing event data into Google Sheets for non-technical staff who still needed a spreadsheet-native workflow.
The result was not a static event website but a full operational system delivered under real time and coordination constraints.
Low-Resistivity Hydrocarbon Prediction from Well Logs
Context
At FTTM ITB, I worked on machine learning approaches for classifying hydrocarbon-bearing zones in low-resistivity well-log data. The work sat at the boundary between domain interpretation and ML engineering: derive features from petrophysics, handle class imbalance, and evaluate whether the pipeline generalizes across wells rather than only on a single dataset.
Challenge
The main challenge was to build a reliable experimental pipeline rather than only train a single model. Features had to remain geologically meaningful, hydrocarbon intervals were underrepresented, some wells lacked deep resistivity logs, and the workflow needed to generalize across wells.
Process
Feature Engineering & Interpretation
I engineered petrophysical features from raw logs, including shale volume, porosity indicators, neutron-density separation, gamma-ray texture, and porosity gradients, to create better analytical inputs for downstream modeling.
Model Benchmarking
I benchmarked 5 machine learning models across 4 experimental scenarios and evaluated more than 8 feature combinations together with 7 class-imbalance strategies, including SMOTE, ADASYN, and undersampling variants, instead of treating model choice as a one-shot decision.
Cross-Well Validation & Acceleration
I extended the work to cross-well prediction and GPU benchmarking, showing up to 92.0% accuracy on unseen wells and RAPIDS cuML training speedups up to 32.7x while maintaining comparable accuracy to CPU baselines.
Results

The strongest configuration reached 0.71 F1-score and 0.87 AUC-ROC, while SMOTE-based balancing improved hydrocarbon recall from 0.37 to 0.68. The resulting workflow combined petrophysical interpretation, predictive modeling, and scalability benchmarking in a single research pipeline.
This work also led to an endorsement to become a technician for the faculty HPC cluster, which reflected not only the modeling results but also the systems and compute aspect of the work.
Kuliah Kit: AI Study Material Analysis Platform

Context
Kuliah Kit was designed to help students review dense lecture materials more efficiently. The product centers on uploading class materials, turning them into structured study outputs, and surfacing what to review next instead of leaving students with a folder of disconnected files.
Challenge
The main challenge was to turn uploaded lecture files into structured, usable study material without adding friction to the student workflow. The system needed to support multiple academic file formats, generate consistent LLM outputs, handle math-heavy content, and track concept-level quiz performance over time without making the product feel like a slow AI wrapper around file upload.
Process
The product was structured around a straightforward loop: ingest materials, generate study outputs, measure quiz performance, and keep useful feedback and progress data in the system so the output improves from one study session to the next.
Material Ingestion
Users upload materials in 5 supported formats: PDF, PPT or PPTX, DOCX, JPG, and PNG. Files are stored in S3-compatible storage and prepared for downstream analysis.
Structured Analysis Pipeline
The backend sends uploaded content to Gemini and requests structured JSON output, including a summary, learning goals, practice suggestions, and a quiz tied to specific concepts, which keeps the response shape predictable for the frontend.
Study Workspace & Quiz Analytics
The material page presents summaries, learning goals, practice suggestions, and concept-tagged quizzes. Quiz results are aggregated to identify stronger and weaker concepts across attempts rather than treating every quiz as an isolated event.
Progress Tracking & Feedback
The product also keeps supporting data around the learning workflow, including bookmarks, streaks, quiz history, and structured feedback on generated outputs.
- Summary: A structured overview of the uploaded material.
- Learning Goals: 3-6 actionable bullet points to orient the user's focus.
- Quiz: Generated questions with explanations and concept tags for later analysis.
- Performance Metrics: Visualizing "Strengths" (concepts mastered) and "Weaknesses" (areas to review).
- Practice Suggestions: Concrete ways to apply the knowledge.
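The structured shape above can be enforced on the backend before a generation reaches the frontend. This is a minimal sketch with assumed field names; the real schema may differ:

```python
import json

# Hypothetical sketch: validate that a model response matches the
# structured shape the frontend expects. Field names are assumptions
# based on the output list above, not the actual product schema.
REQUIRED_FIELDS = {
    "summary": str,
    "learning_goals": list,       # 3-6 actionable bullet points
    "quiz": list,                 # questions with explanations and concept tags
    "practice_suggestions": list,
}

def validate_analysis(raw: str) -> dict:
    """Parse the model's JSON and fail fast if the shape is wrong,
    so a malformed generation never reaches the frontend."""
    data = json.loads(raw)
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"missing or malformed field: {field}")
    if not 3 <= len(data["learning_goals"]) <= 6:
        raise ValueError("learning_goals must contain 3-6 items")
    return data
```

Validating at the boundary is what keeps the response shape predictable for the frontend even when the model output drifts.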
Results

The delivered product combines multimodal material ingestion, structured AI outputs, concept-level quiz analysis, and math rendering in one study workflow. The dashboard is designed to make recent materials, bookmarks, and concept performance easy to review.
What makes the product interesting is the application layer around the model: ingestion, structured outputs, learning analytics, and feedback loops all had to work together for the experience to feel useful.
- Actionable Insights: The "Strengths & Weaknesses" analysis helps users identify which concepts to revisit next.
- Retention: The structured navigation and progress features keep materials, quiz attempts, and study history organized in one place.
PHIWIKI ITB: Subscription Learning Platform and Digital Distribution
Context
PHIWIKI had already built recognition through printed physics learning materials. The next step was to extend that model into digital delivery by connecting physical products, online courses, and recurring subscription access inside one platform instead of treating print and digital as separate businesses.
Challenge
The system had to support course delivery, premium access control, payment verification, referrals, and an operational admin workflow without fragmenting the user experience. It also needed to preserve PHIWIKI's print business by linking physical books to digital content rather than replacing them.
Process
The platform work was split between core learning delivery, monetization infrastructure, and the operational controls needed to run the product without constant developer involvement.
Platform Architecture
The core platform was built on Next.js 15, tRPC, Drizzle ORM, TypeScript, and PostgreSQL, with a structured course model covering chapters, sub-chapters, video lessons, lecture-note downloads, quizzes, and progress tracking.
Monetization and Access Control
A subscription pipeline was integrated with Mayar to handle payment verification, premium license activation, expiry-based access control, and a referral system that generates discount coupons and reward logic directly from the application.
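The expiry-based access check at the heart of this pipeline is simple to state. A minimal sketch with illustrative names; in the real system the expiry date would come from Mayar-verified payment records:

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical sketch: premium access holds only while a verified
# license has not expired. Names are illustrative, not the real schema.
def has_premium_access(license_expires_at: Optional[datetime], now: datetime) -> bool:
    return license_expires_at is not None and now < license_expires_at
```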
Print-to-Digital Integration
The digital rollout was designed to complement the existing book business through QR-linked learning flows, while the public website and structured course catalog expanded PHIWIKI's reach beyond the printed catalog.
Operational Governance
Role-based access was enforced with NextAuth.js v5 and tRPC middleware so public visitors, subscribers, members, and admins could use the same platform with different capabilities and minimal manual intervention.
Results
PHIWIKI moved beyond a print-only distribution model into a platform that supports digital subscriptions, structured course delivery, and learner progress tracking.
The referral and payment flows reduced manual admin work by tying discounts, purchases, and premium access to one operational system.
The project established a reusable foundation for PHIWIKI to sell courses, grow its public presence, and connect physical learning products to digital content.
The technical work mattered because it sat directly on top of the business model: subscriptions, access control, and content delivery all had to function as one product.
HIMAFI ITB Organizational Platform
Context
HIMAFI ITB needed a central digital presence for multiple departments and student-facing programs. The project scope was broader than a landing page: it needed content publishing, member operations, branded utility tooling, and a foundation that could interoperate with other HIMAFI platforms.
Challenge
The platform had to support open public content while still giving members and admins controlled creation privileges. It also needed to replace scattered third-party tools with in-house workflows for publishing, link sharing, and user management.
Process
The platform was designed as shared infrastructure for the organization: a public web presence on the surface, and a role-aware operating layer for members and admins behind it.
Core Site Delivery
The site was architected and launched from zero to production on a modern full-stack based on Next.js 15, tRPC, Drizzle ORM, TypeScript, and PostgreSQL, giving the organization a single official web home.
Publishing and Member Tooling
A News and Blog system was built with a rich-text workflow, draft and publish states, author attribution, and content-type separation so members could publish organizational updates without developer involvement.
Branded Infrastructure
A custom shortener with QR export and conflict-aware slug management replaced third-party link tools, while an admin user-management panel gave leadership direct control over role assignment.
Shared Access Model
The RBAC and authentication model was aligned with PHIWIKI under the shared himafiitb.com ecosystem so members could operate across platforms with consistent permissions and identity handling.
Results
HIMAFI gained an official website that centralizes public visibility, internal publishing, and member-facing utility tooling in one platform.
Members can publish news and blog content and create branded short links without developer assistance, while admins retain direct control over permissions.
The shared platform model with PHIWIKI created a more coherent subdomain ecosystem instead of isolated tools and duplicated access control logic.
The project became an exercise in platform ownership across content systems, access control, internal tooling, and cross-product integration.
G-Corp: BSG Leadership and R&D Execution
Context
G-Corp needed stronger R&D execution around BSG, especially a more scalable production workflow for its flagship exam-preparation books and a simple supporting layer for QR-linked tutorial delivery.
Challenge
The role had to bridge R&D coordination, tutorial-content preparation, and academic publishing while modernizing a book workflow that had previously depended on manual document tooling. The supporting web component also needed to stay narrow: reliable QR routing and hosted tutorial delivery for printed book content.
Process
The work ran on two parallel tracks: lead the BSG publishing operation into a scalable workflow, and set up the supporting QR-linked delivery infrastructure needed to publish tutorial content alongside the books.
Supporting QR Delivery Infrastructure
A lightweight supporting stack was built on Next.js 15, tRPC, Drizzle ORM, TypeScript, and PostgreSQL to provide a branded shortener, QR generation, and reliable routing for book-linked tutorial content.
Tutorial Media Hosting Workflow
A simple media-hosting workflow was added so teams could upload and manage tutorial video and image assets referenced by QR codes in the books, keeping the system focused on book-linked tutorial support.
BSG Production Migration
The BSG book workflow was migrated from Word-based editing into LaTeX, using a modular structure that could manage content compiled from three academic years of exam materials while improving consistency and reducing formatting overhead.
QR-Linked Tutorial Delivery
QR codes generated through the in-house shortener were embedded into the LaTeX source so each problem could point to hosted tutorial explanations, extending the printed books with tightly scoped digital support rather than a standalone web product.
Results
The BSG workflow now supports two complete B5-format books built from three years of exam material, with the 2025 editions used by 100+ incoming students.
The project combined R&D leadership, BSG publishing ownership, and a narrowly scoped QR-linked tutorial delivery system in one role.
Streak: Habit Tracking App
Context
Consistency is the engine of personal growth, yet maintaining new habits is notoriously difficult. Many people understand the theory but lack a practical tool to visualize their consistency and maintain momentum.
Challenge
The challenge was to build a habit tracking application that goes beyond simple checkboxes. It needed to educate users on the principles of behavioral change while providing a compelling visual representation of their "streaks" to gamify the process of self-improvement.
Process
Conceptualization
Researched habit formation psychology to design features that reinforce positive behavior. The core concept was built around the "streak" mechanic to leverage loss aversion as a motivator.
Implementation
Developed a responsive web application using Next.js. Key features included a dashboard for daily tracking, visual graphs of progress over time, and educational resources explaining the "why" behind habit formation.
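The core streak mechanic reduces to counting the consecutive run of completed days ending at the reference day. A minimal sketch, shown here in Python for illustration even though the app itself is built with Next.js; names are illustrative:

```python
from datetime import date, timedelta

# Sketch of the streak mechanic described above: given the set of days
# a habit was completed, count the consecutive run ending at `today`.
def current_streak(completed: set, today: date) -> int:
    streak = 0
    day = today
    # Walk backwards one day at a time until the first gap appears.
    while day in completed:
        streak += 1
        day -= timedelta(days=1)
    return streak
```

Breaking the streak resets this count to zero, which is exactly the loss-aversion lever the design leans on.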
Results
Streak serves as a digital companion for personal development. It successfully combines utility with educational content, helping users not just track what they do, but understand how to change their behavior effectively.
Python Well Log Analysis Instructor, Universitas Pattimura

Context
Universitas Pattimura (Unpatti) is a state university in Ambon, Maluku, Indonesia, with a strong tradition in Earth and petroleum sciences. Its Petroleum Engineering students are trained rigorously in geophysical methods, well logging theory, and reservoir evaluation, yet the day-to-day data workflows in the industry are increasingly driven by Python-based pipelines. Commercial software such as Petrel or Interactive Petrophysics handles analysis out of the box, but understanding how to build those tools from code gives engineers the flexibility to solve non-standard problems and automate repetitive work.
I was engaged to design and deliver a structured Python workshop series targeting this audience: domain experts in petrophysics with little to no programming background. The deliverable was a practical, end-to-end curriculum backed by runnable Jupyter Notebooks, progressing from the very first line of Python all the way to publication-quality well log visualizations and quantitative petrophysical interpretation on real, industry-format data.
Challenge
The core design challenge was bridging two independent bodies of knowledge that rarely share a common entry point. The students understood formation evaluation deeply: they knew that a high Gamma Ray reading indicated shale, that resistivity inversions suggested hydrocarbon presence, and that the Neutron-Density crossover was the fingerprint of gas. What they lacked was the vocabulary to express those concepts computationally.
The curriculum therefore had to satisfy three distinct constraints simultaneously. First, Python itself had to feel immediately purposeful: abstract language syntax was always introduced in the context of real log data, never as isolated exercises. Second, the progression had to be genuinely incremental: each module assumed only what had been taught in the previous one, so students who had never written a loop could still arrive, three sessions later, at a working Archie water saturation workflow. Third, the outputs had to look like tools a professional petrophysicist would trust: industry-standard axis conventions (depth increasing downward, NPHI reversed, ILD on logarithmic scale), color-coded multi-track figures, and interpretable interactive crossplots.
All of this had to be packaged inside self-contained Jupyter Notebooks that students could run, modify, and take away as their own working reference after the sessions.
Process
Module 1: Python Fundamentals for Data Work
Module 1 established the Python foundation, covering every primitive that later modules would rely on. Topics were sequenced to mirror how a practicing engineer thinks through a dataset: first understanding what kind of value something is (int, float, str, bool), then collecting multiple values (list), then deciding what to do with them (if/for), then packaging reusable logic (functions), and finally organizing structured records (dictionaries and classes).
Every example used geology-adjacent data (depth values, API unit numbers, formation names) so students were never context-switching between an abstract exercise and their domain. By the end of Module 1, students could define a function that accepts a list of Gamma Ray readings and returns a classification string, a task that immediately mapped to something they already understood conceptually.
Module 2: Well Log Data Analysis Pipeline
Module 2 was the technical core of the curriculum. It used the real Bonanza-1 well dataset, an industry-format LAS (Log ASCII Standard) file containing 11 log curves recorded from surface to total depth: DEPT, GR (Gamma Ray), CALI (Caliper), DT (Sonic Travel Time), MLL (Medium Laterolog Resistivity), ILD (Deep Induction Resistivity), NPHI (Neutron Porosity), RHOB (Bulk Density), SN (Short Normal Resistivity), SP (Spontaneous Potential), and TEMPERATURE.
Custom LAS Parser
Rather than using a pre-built library, students were taught to write a custom regex-based LAS parser from scratch. The parse_metadata() function taught file I/O, string parsing with re.match(), and the structure of LAS section headers (~WELL INFORMATION, ~CURVE INFORMATION, ~A data block). The companion read_bonanza_data() function then loaded the numeric log data into a Pandas DataFrame, replaced the -999 null sentinel values with NaN, and exposed a clean, analysis-ready table. This exercise demystified a format every petroleum engineer encounters daily but rarely inspects at the byte level.
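The regex idea behind a header-line parser can be sketched briefly. The actual parse_metadata() handled full section structure; this simplified pattern, an illustration rather than the workshop code, covers the common "MNEM.UNIT  DATA : DESCRIPTION" line shape:

```python
import re

# Simplified sketch of LAS header-line parsing. Real LAS headers follow
# "MNEM.UNIT  DATA : DESCRIPTION"; this pattern captures mnemonic,
# unit, and description, skipping any data field before the colon.
CURVE_LINE = re.compile(r"^\s*(\w+)\s*\.(\S*)\s+[^:]*:\s*(.*)$")

def parse_curve_line(line: str):
    """Return (mnemonic, unit, description) or None for non-matching lines."""
    m = CURVE_LINE.match(line)
    if not m:
        return None
    mnemonic, unit, description = m.groups()
    return mnemonic, unit, description.strip()
```

Section headers such as ~CURVE INFORMATION simply fail the match, which is how the parser knows where data lines begin and end.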
Comprehensive Statistical Exploration
Students computed descriptive statistics (depth range, sample count, sampling interval, min/max/mean per curve) before generating a full suite of diagnostic visualizations: a 2×3 grid of histograms with mean and median overlaid for the six most diagnostic curves (GR, RHOB, NPHI, ILD, DT, SP); a 10×10 Seaborn correlation heatmap across all numeric log curves, revealing the strong anticorrelation between NPHI and RHOB in porous zones; a 2×3 box plot grid for outlier detection; and a detailed data quality report quantifying missing-value percentages per curve.
Interactive Crossplots with Altair
Three linked interactive scatter plots were built using the Altair declarative visualization library: an RHOB vs. NPHI Neutron-Density crossplot (colored by GR, NPHI axis reversed per industry convention), a Resistivity vs. Sonic crossplot (ILD on log scale, DT reversed), and a GR vs. SP lithology-permeability indicator (colored by ILD on log scale). All three were rendered simultaneously with brushing selection, letting students explore zone-by-zone relationships interactively.
Multi-Track Well Log Visualization
The capstone of Module 2 was a pair of Matplotlib multi-track log plotting functions. The first, combo_plot_list(), used a fixed four-track layout: Track 1 overlaying GR (blue, solid), SP (green, dashed), and CALI (red, dotted); Track 2 showing ILD (red), MLL (black), and SN (blue, dotted) on a shared logarithmic scale; Track 3 overlaying RHOB (red) and NPHI (blue dashed, x-axis inverted); and Track 4 plotting DT (purple, axis reversed). Every curve used stacked twiny() axes with outward spine offsets to prevent label collisions, and a Savitzky-Golay smoothing pass (scipy savgol_filter, window 5, order 3) was available per-track for noisy curves.
The second function, combo_plot_dict(), extended this with a fully dictionary-driven configuration schema, letting students define an arbitrary number of tracks and curves each with its own label, color, linestyle, scale limits, log-scale flag, axis-inversion flag, and smoothing option without modifying the underlying function. This design reinforced the power of data-driven code and gave students a reusable, professional-grade plotting engine they could adapt to any well.
Module 3: Quantitative Petrophysical Interpretation
Module 3 applied the logging and visualization infrastructure from Module 2 to a synthetic Bonanza dataset (bonanza-synthetic.las), enabling reproducible petrophysical computation without the ambiguities of raw field data. Three interconnected workflows were covered.
Volume of Clay (VCL) from Gamma Ray
Students derived VCL two ways: a simple linear Gamma Ray Index (IGR = (GR − GR_clean) / (GR_clay − GR_clean)) and the non-linear Larionov formula for Tertiary rocks (VCL = 0.083 × (2^(3.7 × IGR) − 1)), clipped to [0, 1]. Comparing the two outputs on the same depth track made the practical difference between linear and non-linear clay indicators immediately visible.
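Both estimates translate directly from the formulas above. A minimal sketch; the clean/clay Gamma Ray endpoints here are illustrative values, not calibrated ones:

```python
# Sketch of the two clay-volume estimates. The GR endpoints are
# illustrative; in practice they are picked from clean-sand and
# shale baselines on the log itself.
def gamma_ray_index(gr: float, gr_clean: float = 20.0, gr_clay: float = 120.0) -> float:
    igr = (gr - gr_clean) / (gr_clay - gr_clean)
    return min(max(igr, 0.0), 1.0)          # clip to [0, 1]

def vcl_larionov_tertiary(igr: float) -> float:
    # Larionov (Tertiary rocks): VCL = 0.083 * (2^(3.7 * IGR) - 1)
    vcl = 0.083 * (2 ** (3.7 * igr) - 1)
    return min(max(vcl, 0.0), 1.0)
```

Note how the Larionov curve stays below the linear index for intermediate IGR values, which is the practical difference the depth-track comparison made visible.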
Effective Porosity (PHIE) from Density and Neutron
Density porosity was calculated from RHOB using a quartz matrix density of 2.65 g/cc and a fluid density of 1.0 g/cc. Effective porosity was then derived as the neutron-density average corrected downward by the clay volume fraction (PHIE = ((NPHI + PHID) / 2) × (1 − VCL)), with clean-zone enhancement and clipping to a realistic range of 0–0.4 v/v. Students traced each arithmetic step directly back to the formation evaluation theory they already knew.
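The porosity chain above maps one-to-one onto code. A minimal sketch using the stated matrix and fluid densities:

```python
# Sketch of the effective-porosity chain: density porosity from RHOB
# (quartz matrix 2.65 g/cc, fluid 1.0 g/cc), then the neutron-density
# average corrected by clay volume and clipped to 0-0.4 v/v.
RHO_MATRIX, RHO_FLUID = 2.65, 1.0

def density_porosity(rhob: float) -> float:
    return (RHO_MATRIX - rhob) / (RHO_MATRIX - RHO_FLUID)

def effective_porosity(nphi: float, rhob: float, vcl: float) -> float:
    phid = density_porosity(rhob)
    phie = ((nphi + phid) / 2.0) * (1.0 - vcl)
    return min(max(phie, 0.0), 0.4)         # clip to the realistic 0-0.4 range
```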
Water Saturation & Pickett Plot
The culminating exercise implemented Archie's equation, Sw = ((a × Rw) / (PHIE^m × Rt))^(1/n), where Rt is the deep resistivity (ILD).
Using calibrated parameters a = 1, m = 1.79, n = 1.78, and an Rw of 0.075 ohm·m, students computed a continuous water saturation curve across the entire well. To determine the appropriate Rw, they built an interactive Pickett Plot using Altair: a log-log scatter of deep resistivity (ILD) vs. effective porosity (PHIE), colored by VCL, with overlaid saturation isoline families at Sw = 100%, 80%, 60%, 40%, and 20% drawn from the linearized form of the Archie equation. The plot was filtered to VCL < 0.5 to exclude shale intervals and focused on a discrete reservoir zone (10,600–10,700 ft), giving students hands-on experience in the kind of targeted depth-window analysis used in daily petrophysical practice.
The resulting Sw_Archie column was appended back to the main DataFrame, completing a full formation evaluation pipeline from raw LAS file to interpreted reservoir quality entirely in open-source Python.
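The saturation step above can be sketched with the calibrated parameters quoted in the exercise; clipping Sw at 1.0 is a standard practical bound, assumed here:

```python
# Sketch of the Archie water-saturation computation with the
# calibrated parameters from the Pickett-plot exercise
# (a = 1, m = 1.79, n = 1.78, Rw = 0.075 ohm-m).
def archie_sw(rt: float, phie: float, a: float = 1.0, m: float = 1.79,
              n: float = 1.78, rw: float = 0.075) -> float:
    """Sw = ((a * Rw) / (phie**m * Rt)) ** (1/n), clipped at 100%."""
    sw = ((a * rw) / (phie ** m * rt)) ** (1.0 / n)
    return min(sw, 1.0)                     # saturation cannot exceed 100%
```

Applied per depth sample, this yields the continuous Sw_Archie column appended back to the DataFrame.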
Results & Impact

By the end of the three modules, students who had never written a Python program could:
Parse any industry-standard LAS file from scratch, extracting well metadata, curve mnemonics, units, and numeric log data into a clean Pandas DataFrame, without relying on a third-party LAS library.
Generate a full statistical audit of any well dataset: distribution histograms, a 10×10 correlation heatmap, box plot outlier detection, and a quantified missing-data quality report.
Produce professional, multi-track well log figures in Matplotlib that honor all industry visualization conventions (inverted depth axis, inverted NPHI scale, logarithmic resistivity tracks, Savitzky-Golay smoothing on request), suitable for inclusion in technical reports or presentations.
Build interactive Altair crossplots (Neutron-Density, Resistivity-Sonic, GR-SP) with brushing and tooltips, enabling rapid zone-level facies identification directly in a Jupyter Notebook.
Execute a complete quantitative petrophysical interpretation workflow: compute VCL via the Larionov equation, derive PHIE from the neutron-density combination, calibrate formation water resistivity using an interactive Pickett plot, and calculate continuous water saturation via Archie's equation, all with transparent, auditable Python code they wrote themselves.
The curriculum demonstrated that domain knowledge is a force-multiplier for learning to code: students who understood why a formula worked grasped how to implement it far faster than general-purpose programming students would. The three notebooks remain with Unpatti as a permanent, reusable teaching resource that any instructor can run against new well data.
Web3 Strategy Research: Digital Rupiah, RWA Tokenization, and DeFi

Context
While public discourse fixated on crypto speculation, institutional finance had quietly entered production. By 2024–2025, BlackRock had launched the BUIDL fund, a tokenized US Treasury vehicle on Ethereum that crossed $500M AUM within months of launch. JP Morgan had rebranded its Onyx division to Kinexys and was executing live intraday repo settlements and programmable cross-border payments via JPM Coin. ING and Commerzbank were running real-time securities lending on HQLAx's R3 Corda platform near T+0 settlement. These were not pilots or white papers; they were live, revenue-generating infrastructure.
Simultaneously, Indonesia was navigating its own strategic inflection point. Bank Indonesia had published the White Paper and Consultative Paper for Project Garuda, the national initiative to develop a Central Bank Digital Currency (CBDC), the Rupiah Digital, formally recognized under Undang-Undang No. 4 Tahun 2023 (UU P2SK) as one of three legal forms of the Rupiah. The CFX (Bursa Kripto) was fully operational as the regulated commodity exchange for crypto assets. And the MSME financing gap stood at a staggering US$234 billion (IDR ~3,600 trillion), with MSMEs receiving only 19% of total bank loans despite contributing 61% to GDP.
The strategic question was not "is Web3 real?"; that had been answered. The question was: where exactly does the value settle, how do institutions capture it, and what is the concrete entry playbook for Indonesia? This research engagement was designed to answer that question rigorously.
Challenge
The challenge was to reduce a technically dense subject into a set of quantified, decision-useful models that fit Indonesia's regulatory and market structure.
Signal vs. Noise
The Web3 space is saturated with speculative narratives. The research had to distinguish verified production deployments (BlackRock BUIDL, JP Morgan Kinexys, HQLAx) from whitepaper-stage projects and build all strategic recommendations exclusively on the former.
Technical-to-Business Translation
The technology stack (L1/L2/L3 protocols, smart contracts, atomic settlement, oracle networks, ZK proofs, IPFS) needed to be translated into precise, quantified banking implications (e.g., T+2 → T+0 settlement, 6.49% remittance cost → sub-1%) that a CFO or banking executive could evaluate directly.
Regulatory Context Navigation
Indonesia's regulatory framing (crypto as Commodity, not currency; CBDC as legal tender under UU P2SK; the permissioned DLT architecture of Project Garuda) required mapping global production benchmarks onto local constraints: identifying what transfers, what requires adaptation, and what is blocked by local regulatory structure.
Revenue Model Specificity
"Blockchain saves costs" is a generic narrative, and generic narratives are useless for a market entrant. The research required reverse-engineering exact revenue mechanics from live cases (basis points on AUM, issuance fee ranges, SaaS licensing tiers), all sourced from verified public filings and press releases.
Research Process
The research was organized into five workstreams covering infrastructure mapping, CBDC architecture, revenue-model analysis, product concepts, and Indonesian market-entry opportunities.
Technology Stack Mapping & Banking Implications
A hierarchical L1/L2/L3 framework was built from primary protocol documentation, mapping each layer to concrete banking implications: L1 as immutable settlement, L2 enabling Visa-level throughput at sub-$0.01/tx, and L3 application chains for KYC/AML-compliant institutional DeFi. Smart contracts, IPFS, Chainlink oracles, and consortium RBAC governance were analyzed as core infrastructure components.
Project Garuda / Digital Rupiah Architecture Analysis
Bank Indonesia's White Paper and Consultative Paper were analyzed in detail, covering the two-tier w-DR/r-DR architecture, the permissioned DLT platform (both R3 Corda and Hyperledger Besu cleared the 30 TPS benchmark, with ISO 20022 interoperability confirmed), the three-phase rollout roadmap, and the disintermediation risk the r-DR introduces for commercial banks.
B2B Revenue Model Deconstruction (RWA Tokenization)
Three production revenue models were reverse-engineered from live deployments: Yield Spread (BlackRock BUIDL: 0.50% on $500M+ AUM), Infrastructure & Issuance (Securitize: $50k–$500k setup + 10–100 bps admin), and Collateralization (Centrifuge: $600M+ financed, ~1% origination + NIM spread). The analysis concluded with a market-entry recommendation: B2B Treasury Management as a Service at a 0.25% referral fee on AUM, with no capital at risk.
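To make the fee mechanics concrete, the three models can be reduced to back-of-the-envelope arithmetic. The fee rates below come from the benchmarks cited above; the $100M AUM figure is purely hypothetical:

```python
def yield_spread_revenue(aum: float, spread_bps: float = 50) -> float:
    """Annual revenue under a BUIDL-style yield spread (50 bps = 0.50%)."""
    return aum * spread_bps / 10_000

def issuance_revenue(setup_fee: float, aum: float, admin_bps: float) -> float:
    """Securitize-style model: one-off setup fee plus an admin fee on AUM."""
    return setup_fee + aum * admin_bps / 10_000

def referral_revenue(aum: float, referral_bps: float = 25) -> float:
    """Treasury Management as a Service: 0.25% referral fee, no capital at risk."""
    return aum * referral_bps / 10_000

# Hypothetical $100M AUM book, mid-range fee assumptions
aum = 100_000_000
print(yield_spread_revenue(aum))           # 500,000 per year at 50 bps
print(issuance_revenue(250_000, aum, 50))  # setup fee plus admin on AUM
print(referral_revenue(aum))               # 250,000 per year at 25 bps
```

The spread between these outcomes is why the recommendation favored the referral model: roughly half the yield-spread revenue, at none of the balance-sheet risk.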
Five CBDC-Era Business Models for Indonesia
Five business model concepts were developed for the Project Garuda architecture, each with defined revenue mechanics and target segments: Programmable Treasury & Liquidity Management, Digital Securities & RWA Tokenization Platform, ISO 20022 Integration Bridges, Retail Smart Wallets, and CBDC RegTech Compliance Services.
Indonesia Market Opportunity: Three High-Leverage Verticals
Three verticals were identified where Web3 creates structural advantages Web2 cannot replicate, including an MSME Credit Bridge (US$234B gap addressed via on-chain e-factoring, benchmarked against Centrifuge and Goldfinch) and TKI Remittance Rails (6.49% → sub-1% via the Stellar/USDC model, with a "One-App Receiver" concept for GoPay/OVO/Dana).
Key Deliverables
The research produced a library of enterprise-grade strategic intelligence across multiple formats and verticals:
Project Garuda Strategic Analysis
Long-form analysis of Bank Indonesia's Digital Rupiah architecture, two-tier design, PoC findings (R3 Corda vs. Hyperledger Besu), three-phase roadmap, and strategic implications for the financial sector.
RWA Tokenization Revenue Playbook
Deconstruction of three B2B revenue models (Yield Spread, Infrastructure & Issuance, Collateralization) with real-world benchmarks, fee structures, comparative framework, and a market-entry recommendation (Treasury Management as a Service, 0.25% referral fee).
ISO 20022 Middleware Model
Technical architecture and business model for a legacy-to-CBDC integration layer, including message definition mapping, technical overview, and licensing revenue structure.
Programmable Treasury Model
Detailed model for conditional payment infrastructure and overnight liquidity optimization leveraging Project Garuda's 24/7 DLT and smart contract programmability.
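The core of the programmable-treasury concept is that a payment carries its own release condition. A minimal illustrative sketch follows; Project Garuda's actual smart-contract interface is not public, so every name and figure here is a stand-in:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ConditionalPayment:
    """Sketch of a programmable payment: funds release only when a
    predicate over observed ledger state evaluates true."""
    payer: str
    payee: str
    amount: float
    condition: Callable[[dict], bool]

    def settle(self, state: dict) -> str:
        if self.condition(state):
            return f"release {self.amount:,.0f} from {self.payer} to {self.payee}"
        return "hold"

# Illustrative overnight sweep: move idle cash to a money-market leg
# only if the end-of-day balance clears a liquidity buffer.
sweep = ConditionalPayment(
    payer="ops_account", payee="money_market", amount=5_000_000,
    condition=lambda s: s["eod_balance"] > s["liquidity_buffer"],
)
print(sweep.settle({"eod_balance": 12_000_000, "liquidity_buffer": 8_000_000}))
```

On a 24/7 DLT this rule can evaluate continuously rather than once per banking day, which is where the overnight liquidity optimization comes from.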
RegTech / CBDC Compliance Services Model
Architecture for real-time AML/CFT monitoring and automated Proof-of-Reserve auditing on the permissioned Rupiah Digital ledger.
TKI Remittance Corridor Study
Full feasibility analysis for the Malaysia-Indonesia TKI corridor: cost modeling (current 6.49% vs. projected sub-1%), settlement speed analysis (3-5 days vs. seconds), One-App Receiver UX design, and regulatory pathway.
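The headline cost claim of the corridor study reduces to simple arithmetic. Using the study's two rates (6.49% current average, a 1% ceiling for the projected rail) on a hypothetical transfer amount:

```python
def remittance_cost(amount_usd: float, fee_rate: float) -> float:
    """Total fee paid on a single remittance at a given all-in cost rate."""
    return amount_usd * fee_rate

# Hypothetical corridor transfer; rates from the feasibility analysis
amount = 300.0
current = remittance_cost(amount, 0.0649)  # current 6.49% average
target = remittance_cost(amount, 0.01)     # projected sub-1% rail, 1% ceiling
print(round(current, 2), round(target, 2), round(current - target, 2))
```

On a $300 transfer that is roughly $19.47 in fees today versus at most $3.00 on the projected rail, and the saving compounds across every monthly remittance a worker sends home.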
Executive Strategy Deck
A Marp-based slide deck titled "Web3 Banking: From Hype to High-Value Infrastructure": 15+ slides translating the full technical landscape into actionable banking strategy for a non-technical C-suite audience, with a production benchmark behind every claim.
Literature Note & Peer Review Framework
Structured synthesis of all primary sources with annotated citations, a formal literature note, and a peer-reviewer prompt framework for iterative quality control.
Results & Strategic Impact
Produced a strategic research library covering remittance, CBDC infrastructure, RWA tokenization, ISO 20022 middleware, and compliance-oriented business models, with each recommendation tied to explicit revenue or cost assumptions.
Identified several Indonesia-specific opportunities, including lower-cost remittance rails, MSME financing access through tokenized receivables, and treasury use cases for institutions evaluating blockchain-based infrastructure.
Developed a low-risk market-entry option around treasury management services, alongside more infrastructure-heavy models for issuance, settlement, and compliance.
Built a reusable analytical framework for evaluating new Web3 financial products against three dimensions: regulatory fit within Indonesia's commodity-classification regime, revenue model specificity (not just "cost savings" but exact fee mechanics), and implementation complexity relative to risk-adjusted return potential.
Synthesized the research into an executive-ready slide deck that distills the entire technical and strategic landscape into a clear narrative for non-technical banking executives, demonstrating the ability to operate across both deep technical analysis and high-level strategic communication.
This engagement demonstrated the capacity to conduct rigorous, primary-source research into a technically complex and rapidly evolving domain, synthesize it into actionable strategic intelligence, and communicate it credibly to both technical and executive audiences, all as an independent, self-directed initiative.
Teaching Introduction to Computation at ITB
Context
In the second semester of my second year at ITB, I served as a Practicum Assistant for the Introduction to Computation course, the required Python and algorithmic-thinking course taken by all first-year students. Over a 4-month semester (February to June 2024), I delivered structured weekly practicum sessions to cohorts of incoming students, covering the full progression from basic Python syntax through data analysis and visualization.
Challenge
The course serves a broad intake: students from every engineering and science discipline at ITB, many with no prior programming background. The task was to make algorithmic concepts legible to students who had not yet built any intuition for computational thinking, while maintaining the technical depth that the curriculum demanded. Each session had to scaffold progressively, so that conditional logic introduced in week one still felt natural when students encountered nested loops and array operations by week four.
What Was Taught
Five practicum modules were delivered across the semester, following a progression from foundational control flow through data engineering:
Conditional Branching & Iteration
The first two modules covered conditional branching and iterative algorithms, building the mental model for control flow before introducing more complex structures.
Array Manipulation & 2D Matrix Operations
Later modules introduced 1D array manipulation and 2D matrix operations. Applied problems included multi-variable cost-optimization models, factor-pair enumeration, digit-divisibility classification, and sliding-window algorithms: exercises designed to reinforce abstraction rather than mechanical syntax recall.
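A representative sliding-window exercise from these modules might look like the following (the actual assignment statements are not reproduced here; this is an illustrative problem of the same kind):

```python
def max_window_sum(values: list[int], k: int) -> int:
    """Largest sum of any contiguous window of length k, computed in O(n)
    by sliding the window instead of re-summing it at every position."""
    if k <= 0 or k > len(values):
        raise ValueError("window size must be between 1 and len(values)")
    window = sum(values[:k])  # sum of the first window
    best = window
    for i in range(k, len(values)):
        window += values[i] - values[i - k]  # slide right by one element
        best = max(best, window)
    return best

print(max_window_sum([2, 1, 5, 1, 3, 2], 3))  # 9, from the window 5+1+3
```

The point of the exercise is the abstraction step: students must see that each window differs from the previous one by exactly two elements, which turns an O(n·k) brute force into a single pass.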
Modular Function & Procedure Design
A dedicated module on modular function decomposition and code quality practices: inline documentation, consistent indentation, and reusable procedure design. The goal was to close the gap between working code and professionally maintainable code, establishing habits applicable to scientific computing and data engineering work beyond the course.
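The decomposition habit the module aimed for can be illustrated with a small example (the statistics task here is hypothetical, not a specific course exercise): each step gets its own named, documented procedure instead of one inline block of arithmetic.

```python
def mean(values: list[float]) -> float:
    """Arithmetic mean of a non-empty list of numbers."""
    return sum(values) / len(values)

def variance(values: list[float]) -> float:
    """Population variance, reusing mean() rather than inlining it."""
    m = mean(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def summarize(values: list[float]) -> dict:
    """Entry point: each statistic lives in its own reusable procedure."""
    return {"mean": mean(values), "variance": variance(values)}

print(summarize([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))
```

The decomposed version is no faster than a monolithic loop, but each piece can now be tested, documented, and reused independently, which was the maintainability point of the module.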
Capstone: Data Analysis & Visualization
The final module was a full data-engineering practicum using a real-world Brazilian housing rental dataset. Students worked through dataset ingestion and filtering, multi-column aggregation, statistical correlation analysis using Pandas, and chart generation (histograms, pie charts, scatter plots, and horizontal and stacked bar charts) using Matplotlib, connecting the algorithmic foundations from earlier modules to applied data workflows.
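The shape of that capstone workflow can be sketched with a toy table; the practicum used the real rental dataset, so the column names and values below are stand-ins:

```python
import pandas as pd

# Toy stand-in for the housing rental dataset used in the practicum
df = pd.DataFrame({
    "city": ["Sao Paulo", "Sao Paulo", "Campinas", "Campinas"],
    "area": [70, 120, 80, 150],     # square meters
    "rent": [2000, 4500, 1500, 3200],
})

# Filtering, multi-column aggregation, and correlation, as in the module
large = df[df["area"] >= 80]                               # filtering step
by_city = df.groupby("city")["rent"].agg(["mean", "max"])  # aggregation step
corr = df["area"].corr(df["rent"])                         # correlation step

print(by_city)
print(round(corr, 3))
```

From there the practicum moved to Matplotlib, feeding the same aggregates into histograms, scatter plots, and bar charts, so the charts students produced were backed by transformations they had written themselves.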
Outcome
Delivered 5 structured practicum sessions across a 4-month semester to cohorts of first-year ITB students with mixed programming backgrounds, covering the full curriculum progression from conditional logic through data analysis.
The applied problem sets (cost-optimization models, combinatorial enumeration, and sliding-window algorithms) were chosen to build transferable computational thinking, not just to pass specific exercises.
The capstone data analysis module connected practicum fundamentals to a complete end-to-end data workflow, bridging course content to the kind of analysis students would encounter in research projects and internships.
The role required consistent technical preparation each week, staying ahead of the module content, anticipating where students would get stuck, and adjusting explanations on the spot. The teaching experience directly sharpened my ability to explain systems at varying levels of abstraction.