format, flake update
All checks were successful
Build and Release Resume PDF / date-fetch (push) Successful in 3s
Check flake.lock / Check health of `flake.lock` (push) Successful in 15s
Check Nix flake / Perform Nix flake checks (push) Successful in 44s
Build and Release Resume PDF / build (push) Successful in 1m27s
resume.tex (26 changed lines)
@@ -155,25 +155,25 @@
{JPMorgan Chase}{Jersey City, NJ}
\resumeItemListStart
\resumeItem{Designed and deployed configurable data ingestion framework
using Iceberg CTAS and time-travel for zero-outage updates,
orchestrating 200+ refinement pipelines with automated data
reconciliation across four zones (OLTP, raw, trusted, refined)}
\resumeItem{Implemented PyArrow-based validation and dual-engine
architecture supporting on-prem (Starburst) and off-prem (Databricks)
reporting for 50+ downstream teams}
\resumeItem{Architected and implemented Apache Airflow orchestration
supporting 1,000+ tasks per DAG with templated configuration-driven
design, tiered pooling to prevent resource exhaustion, and automated
partition registration in Trino for large Hive tables}
\resumeItem{Led weekly office hours to help onboard new datasets and
trained 10 developers to operate and extend the framework across
multiple applications, reducing MTTR for incidents}
\resumeItem{Led Kubernetes resource optimization across 30+ services in
three applications, implementing best-effort QoS in dev and test
environments while tuning production resources, achieving \$50k
annual cost savings in reservations and usage}
\resumeItem{Created reusable Helm charts and a shared service layer
that enabled 4 platform teams to deploy and configure UI services
more consistently}
\resumeItemListEnd

@@ -185,10 +185,10 @@ more consistently}
{JPMorgan Chase}{Jersey City, NJ}
\resumeItemListStart
\resumeItem{Owned production support for 30 applications across
multiple teams, including deployment approvals, incident response,
root cause analysis, and post-mortems}
\resumeItem{Served as primary support engineer for a Hadoop-based data
lake platform spanning Tableau, Kubernetes, Cloud Foundry, Dremio,
and S3-compatible object storage}
\resumeItem{Served as the team expert on Linux, networking, and
Hadoop infrastructure supporting business-critical applications}

@@ -199,8 +199,8 @@ applications, improving alert coverage and observability consistency}
\resumeItem{Automated disaster recovery procedures for a subset of
production applications, reducing manual failover steps}
\resumeItem{Automated historical data reload workflows using backup
cluster for reprocessing and merge back to primary Hive datasets,
reducing 72 hours of manual effort to zero and enabling on-demand
backfill capabilities}
\resumeItemListEnd
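The first bullet in the diff names an Iceberg CTAS/time-travel pattern for zero-outage updates. As a hedged illustration of that general pattern, not the resume's actual framework, here is a minimal PySpark sketch; the catalog and table names ("lake", "trusted.orders", "refined.orders") are hypothetical, and an Iceberg-enabled Spark session is assumed.

# Minimal sketch of a zero-outage refresh with Apache Iceberg on Spark.
# Assumptions: pyspark with Iceberg extensions configured, and a catalog
# named "lake"; all table names are hypothetical examples.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-ctas-refresh").getOrCreate()

# CTAS-style atomic replacement: readers see either the old snapshot or
# the new one, never a partially written table (the zero-outage property).
spark.sql("""
    CREATE OR REPLACE TABLE lake.refined.orders
    USING iceberg
    AS SELECT * FROM lake.trusted.orders
""")

# Iceberg retains prior snapshots, so reconciliation can compare states
# via time-travel, e.g.:
#   SELECT * FROM lake.refined.orders VERSION AS OF <snapshot_id>
spark.sql(
    "SELECT snapshot_id, committed_at FROM lake.refined.orders.snapshots"
).show()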
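The PyArrow bullet implies schema and data-quality checks at ingestion time. A generic sketch of that kind of validation follows; the expected schema, column names, and rules are hypothetical illustrations, not the checks from the actual framework.

# Generic sketch of PyArrow-based validation: enforce an expected schema
# and simple data-quality rules before a batch is promoted. The schema
# and column names are hypothetical examples.
import datetime as dt

import pyarrow as pa
import pyarrow.compute as pc

EXPECTED = pa.schema([
    ("order_id", pa.int64()),
    ("amount", pa.float64()),
    ("load_date", pa.date32()),
])

def validate(table: pa.Table) -> None:
    # Schema check: names, types, and order must match exactly.
    if not table.schema.equals(EXPECTED):
        raise ValueError(f"schema mismatch:\n{table.schema}\nvs\n{EXPECTED}")
    # Example rule: the key column must contain no nulls.
    if table.column("order_id").null_count > 0:
        raise ValueError("null order_id values found")
    # Example reconciliation-style rule: amounts must be non-negative.
    if not pc.all(pc.greater_equal(table.column("amount"), 0.0)).as_py():
        raise ValueError("negative amounts found")

validate(pa.table({
    "order_id": pa.array([1, 2], pa.int64()),
    "amount": pa.array([10.0, 5.5], pa.float64()),
    "load_date": pa.array([dt.date(2024, 1, 1), dt.date(2024, 1, 2)], pa.date32()),
}))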
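The Airflow bullet mentions tiered pooling to prevent resource exhaustion. Below is a minimal sketch of the idea, assuming Airflow 2.4+ and pools named "tier_heavy" and "tier_light" created ahead of time (via the UI or "airflow pools set"); the DAG and task names are hypothetical.

# Sketch of tiered pooling in Apache Airflow: heavy tasks compete for a
# small pool while cheap tasks use a larger one, so a burst of heavy work
# cannot exhaust worker slots. Pool names are hypothetical and must exist
# already (e.g. `airflow pools set tier_heavy 4 "heavy work"`).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

with DAG(
    dag_id="tiered_pool_demo",
    start_date=datetime(2024, 1, 1),
    schedule=None,   # Airflow 2.4+ keyword; manual-trigger only
    catchup=False,
) as dag:
    for i in range(10):
        PythonOperator(
            task_id=f"heavy_{i}",
            python_callable=lambda: print("heavy work"),
            pool="tier_heavy",  # e.g. 4 slots: caps concurrent heavy tasks
        )
        PythonOperator(
            task_id=f"light_{i}",
            python_callable=lambda: print("light work"),
            pool="tier_light",  # e.g. 32 slots: cheap tasks flow freely
        )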