This ATS system is a demo of my automated job-search and scoring pipeline; it evaluates only my own profile against each posting.
Anti-ATS Evaluator v1.3
Automated ATS analysis and scoring system.
8383 jobs evaluated
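Each posting below carries a 0–100 fit score and a coarse match tier. A minimal sketch of the score-to-tier mapping, with cutoffs inferred from the scores visible in this export (the pipeline's actual thresholds are an assumption):

```python
def match_tier(score: int) -> str:
    """Map a 0-100 fit score to the coarse tier shown on each card.

    Cutoffs are inferred from this export (90 -> EXCELLENT, 75-85 -> STRONG,
    60-73 -> GOOD, 52-61 -> WEAK, 0-30 -> POOR); the real pipeline's
    thresholds may differ.
    """
    if score >= 90:
        return "EXCELLENT MATCH"
    if score >= 75:
        return "STRONG MATCH"
    if score >= 62:
        return "GOOD MATCH"
    if score >= 40:
        return "WEAK MATCH"
    return "POOR MATCH"
```

Note that the visible data is not perfectly separable (a 60 shows as GOOD while a 61 shows as GOOD and a 59 as WEAK in different entries), so treat the boundary values as approximate.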
70
Engenheiro(a) de IA - Automação e Chatbots
[EA] MeuCashCard
São Paulo, São Paulo, Brazil
GOOD MATCH
[ANALYSIS]
**MEDIUM**
[gemini-3.1-pro-preview] The candidate has excellent backend engineering (Node.js, Postgres, SOLID) and LLM orchestration (RAG, prompt engineering, agents) skills. However, they lack explicit production experience building multichannel chatbots (WhatsApp API, RCS), which is a core requirement. The resume should be reworked to emphasize the conversational and RAG components of their existing AI scoring and scraping platforms.
**Strengths:** Node.js / PostgreSQL, RAG & LLM Orchestration, Agentic Workflows
**Missing Required:** Real-world experience building chatbots in production
Missing:
C# / .NET, WhatsApp Business API / Evolution API
#4382691528 · 03-08-26 17:12
60
Consultor de Hiperautomação Sênior
[EA] ITEAM
São Paulo, São Paulo, Brazil
GOOD MATCH
[ANALYSIS]
**LOW**
[gemini-3.1-pro-preview] This is a Senior Hyperautomation Consulting role strictly demanding UiPath expertise, a legacy enterprise RPA tool the candidate does not use. While the candidate has the Python, SQL, and AI skills, the lack of the primary software platform is a blocker.
**Strengths:** AI applied to automation, Python / SQL, Fluent English
**Critical Gaps:** UiPath Experience
**Missing Required:** Solid UiPath experience
Missing:
Enterprise Consulting Experience
#4378766360 · 03-08-26 17:11
75
Engenheiro de Automação e Soluções Digitais (BPMN, IA & Low-Code)
[EA] Ativa Serviços
Uberaba, Minas Gerais, Brazil
STRONG MATCH
[ANALYSIS]
**HIGH**
[gemini-3.1-pro-preview] The candidate excels at translating business logic into code, heavily utilizing Python, APIs, SQL, and Advanced Excel, which perfectly aligns with the core responsibilities. The only gap is specific enterprise BPMN Low-Code platform experience. The ATS might flag the lack of a strict CS/IT degree, but the Mechatronics Engineering background should suffice for human reviewers.
**Strengths:** Translating Business Logic to Code, Python & API Structuring, Advanced Excel & SQL
Missing:
Specific BPMN Platforms
#4380550175 · 03-08-26 17:11
90
Analista de Automações e Inteligência Artificial
[EA] Ignição Digital
Brasília, Federal District, Brazil
EXCELLENT MATCH
[ANALYSIS]
**TOP**
[gemini-3.1-pro-preview] The candidate is technically overqualified but perfectly suited for the responsibilities. Monitoring logs, fixing n8n/Make pipelines, documenting flows, and acting as a technical liaison are directly aligned with the candidate's recent freelance and startup work. The focus on autonomy and problem-solving is also a strong fit.
**Strengths:** n8n / APIs / Scripting, System Documentation, Autonomous Problem Solving
Missing:
Hotmart, ActiveCampaign
#4375290093 · 03-08-26 17:11
70
Engenheiro(a) de IA - Automação e Chatbots
[EA] MeuCashCard
São Paulo, São Paulo, Brazil
GOOD MATCH
[ANALYSIS]
**MEDIUM**
[gemini-3.1-pro-preview] The candidate has excellent backend engineering (Node.js, Postgres, SOLID) and LLM orchestration (RAG, prompt engineering, agents) skills. However, they lack explicit production experience building multichannel chatbots (WhatsApp API, RCS), which is a core requirement. The resume should be reworked to emphasize the conversational and RAG components of their existing AI scoring and scraping platforms.
**Strengths:** Node.js / PostgreSQL, RAG & LLM Orchestration, Agentic Workflows
**Missing Required:** Real-world experience building chatbots in production
Missing:
C# / .NET, WhatsApp Business API / Evolution API
#4382903081 · 03-08-26 17:10
85
AI Agent Specialist
[EA] Builder Lead Converter
São Paulo, São Paulo, Brazil
STRONG MATCH
[ANALYSIS]
**HIGH**
[gemini-3.1-pro-preview] The candidate's ability to build multi-step workflows, handle LLMs, and explain technical concepts to non-technical stakeholders makes this a great fit. The only missing pieces are specific CRM tools (Go High Level) and WordPress, but the candidate's custom orchestrator and custom CMS experience demonstrate they can easily adapt. The resume should highlight the custom CMS built in Go as proof of web integration capabilities.
**Strengths:** LLMs & Prompt Engineering, n8n / Make / API Integration, Stakeholder Communication
Missing:
Go High Level, WordPress / Headless CMS
#4380818228 · 03-08-26 17:09
30
FBS Power Automate & Copilot Engineer
[EA] Capgemini
Brazil
POOR MATCH
[ANALYSIS]
**LOW**
[gemini-3.1-pro-preview] This role requires deep expertise in the Microsoft ecosystem (Copilot Studio, Power Automate, SharePoint, Purview). The candidate operates almost entirely in open-source, Linux, and custom code environments (Go, Python, AWS, Node.js). This is a heavy enterprise IT governance role that clashes with the candidate's skill set and culture.
**Strengths:** Prompt Engineering, API logic
**Critical Gaps:** Microsoft Copilot Studio, Power Automate, Microsoft 365 Architecture
**Missing Required:** Hands-on experience with Copilot Studio and Power Automate
Missing:
SharePoint, Microsoft Purview
#4378131228 · 03-08-26 17:09
70
Engenheiro(a) de IA - Automação e Chatbots
[EA] MeuCashCard
São Paulo, São Paulo, Brazil
GOOD MATCH
[ANALYSIS]
**MEDIUM**
[gemini-3.1-pro-preview] The candidate has excellent backend engineering (Node.js, Postgres, SOLID) and LLM orchestration (RAG, prompt engineering, agents) skills. However, they lack explicit production experience building multichannel chatbots (WhatsApp API, RCS), which is a core requirement. The resume should be reworked to emphasize the conversational and RAG components of their existing AI scoring and scraping platforms.
**Strengths:** Node.js / PostgreSQL, RAG & LLM Orchestration, Agentic Workflows
**Missing Required:** Real-world experience building chatbots in production
Missing:
C# / .NET, WhatsApp Business API / Evolution API
#4382696224 · 03-08-26 17:09
70
Especialista de Inteligência Artificial
[EA] Almeida Junior Shopping Centers
São José, Santa Catarina, Brazil
GOOD MATCH
[ANALYSIS]
**HIGH**
[Copilot: GPT5.1] The candidate strongly matches the AI + automation focus with recent hands-on work building LLM-based scoring pipelines, scrapers, and orchestration, and has solid SQL, PostgreSQL, and data quality experience. However, the JD requires strong Power BI skills and explicit dashboarding/modeling in that tool, which is absent from the resume and acts as the primary ATS limiter despite adjacent experience with data reporting and analytics. To improve, the candidate should add a concrete Power BI project (even self-initiated), explicitly list Power BI under skills, and better highlight ML/GenAI use cases tied to business outcomes.
**Strengths:** GenAI and LLM pipeline design with real production metrics, SQL and data modeling experience in ERP and e-commerce contexts, Automation of data workflows using tools like n8n and custom orchestrators in Go/Node.js
**Missing Required:** Power BI expertise for dashboards and semantic layers
Missing:
Scrum, Kanban, traditional machine learning frameworks beyond LLMs, Power BI-specific dashboard design patterns and semantic modeling
#4378397459 · 03-08-26 16:53
60
Script and PowerShell Developer 1
[EA] EY
São Paulo, São Paulo, Brazil
GOOD MATCH
[ANALYSIS]
**LOW**
[Copilot: GPT5.1] The candidate aligns with Python, SQL, data modeling, and finance domain experience, which are central to the role, and has some AWS fundamentals plus strong automation and data-pipeline exposure. However, the role is explicitly centered on AWS Glue, Spark, and Redshift as core tools in a large consulting environment, and these specific skills are absent from the resume, which will hurt ATS ranking in a very crowded Big Four pipeline. To improve, the candidate should add at least a small AWS project mentioning Glue or an equivalent ETL on AWS, highlight any data warehousing work, and make the Data Engineer angle more explicit in titles and bullet points.
**Strengths:** Python and SQL used to build real trading and automation systems, Finance and capital-markets adjacent experience bridging business and tech, Data quality and governance work in ERP environments
**Missing Required:** AWS Glue in production, Apache Spark on AWS (e.g., EMR or Glue jobs), Amazon Redshift as primary analytical data warehouse, explicit Data Engineer experience on large-scale AWS environments
Missing:
Scala, hands-on data warehousing design specifically on Amazon Redshift, experience working on multi-team enterprise data projects
#4357716015 · 03-08-26 16:52
58
Engenheiro de Dados
[EA] Construtora Metrocasa
Greater São Paulo Area
WEAK MATCH
[ANALYSIS]
**MEDIUM**
[Copilot: GPT5.1] The candidate matches Python, SQL/PostgreSQL, Docker and CI/CD, and has real experience designing data-centric systems and analytics pipelines. However, the role is deeply tied to Azure (Databricks, Data Lake, DevOps), Spark/PySpark and Power BI, none of which appear in the resume, so ATS will view several core requirements as missing. To improve, the candidate should add at least one concrete Azure or cloud data project (even self-hosted but framed with similar concepts), explicitly mention any BI/dashboard work and call out familiarity with Spark-like paradigms where applicable.
**Strengths:** Python and SQL used for real-time trading systems and operational automation, PostgreSQL and relational data modeling experience in production systems, Hands-on CI/CD, Docker and Linux operations for deployed services
**Missing Required:** Azure Databricks, Azure Data Lake, Azure DevOps for CI/CD, Apache Spark / PySpark, strong Power BI skills
Missing:
Oracle database experience, formal Medallion / lakehouse architecture design, experience with non-relational databases such as MongoDB, hands-on Power BI dashboard development
#4379256611 · 03-08-26 16:52
61
Data Engineer (Medior Analyst)
[EA] Whirlpool Corporation
São Paulo, São Paulo, Brazil
GOOD MATCH
[ANALYSIS]
**MEDIUM**
[Copilot: GPT5-mini] Candidate has solid Python, SQL, pipeline and ETL experience and demonstrated analytics-minded work (ERP, pipelines), but lacks explicit GCP Dataflow/Pub/Sub/BigQuery/Cloud Composer experience called out as minimums. Score reflects transferable pipeline skills but penalizes missing platform-specific GCP items. Improve by listing any GCP/BigQuery work, or completing a short BigQuery/Dataflow/Cloud Composer project and surfacing it prominently.
**Strengths:** Python for data processing, Data pipeline design and ETL experience, ERP and operational data domain expertise
**Missing Required:** Hands-on experience with Dataflow, Pub/Sub, and BigQuery
Missing:
Dataflow (GCP), Pub/Sub, BigQuery (partitioning/clustering/cost-aware design), Cloud Composer (Airflow)
#4376428209 · 03-08-26 16:52
18
Lead Spark Data Engineer
[EA] Fusemachines
Brasília, Federal District, Brazil
POOR MATCH
[ANALYSIS]
**LOW**
[Copilot: GPT5-mini] This Lead Spark Data Engineer role demands deep Azure/Databricks/Spark internals and Java expertise (ANTLR/DSLs) that the candidate does not demonstrate; the gap is domain- and language-specific and critical. The score gives credit for Python/SQL and general data engineering ability but heavily penalizes the missing platform and language requirements. Recommend skipping unless the candidate has undisclosed evidence of Spark internals, Databricks, Java, and Azure experience, or is willing to acquire certified experience.
**Strengths:** Python and SQL competence, Architectural and systems design experience, Production pipeline and optimization thinking
**Critical Gaps:** No demonstrated Spark internals / Databricks expertise, No expert-level Java / ANTLR experience
**Missing Required:** Azure Databricks and deep Spark experience, Expert-level Java
Missing:
Azure ecosystem + Databricks (deep), Java (expert-level) and ANTLR / custom DSL experience, Deep Apache Spark internals (Catalyst, Logical Plans), Databricks / Delta Lake optimization
#4374132433 · 03-08-26 16:52
75
Senior/Principal Data Engineer
[EA] Sigma Software Group
Brasília, Federal District, Brazil
STRONG MATCH
[ANALYSIS]
**HIGH**
[Copilot: GPT5-mini] Candidate aligns well with the Senior/Principal Data Engineer profile: strong Python, data platform building, ETL/ELT experience, and a proven track record delivering early-stage platforms and analytics products. The only gaps are missing explicit Snowflake/dbt/Airflow mentions, which are commonly expected but adjacent and inferable. To improve, call out any dbt, Airflow, Snowflake, or equivalent experience, plus examples of platform v1→production delivery.
**Strengths:** Python-based data engineering and platform delivery, Experience designing/delivering production data platforms, AI/ML pipeline and RAG experience (useful for AI-driven SaaS)
Missing:
Snowflake / dbt (explicit), Airflow / Dagster / Prefect (explicit), Formalized data observability tooling (explicit)
#4380908363 · 03-08-26 16:51
0
Lead Data Engineer
[EA] Perform
São Paulo, São Paulo, Brazil
POOR MATCH
[ANALYSIS]
**LOW**
[Copilot: GPT5-mini] This role is hybrid, requiring in-office presence (three days/week) and PST hours; the candidate is Brazil-based and remote-only, which violates the geographic/onsite requirements and triggers a hard filter. Because the job is not remote-friendly in practice, the ATS and legal restrictions would auto-reject. No amount of resume tweaking solves the onsite/hours requirement short of relocation.
#4376785795 · 03-08-26 16:50
68
Senior Data Engineer (Advanced English)
[EA] Rooby HR
São Paulo, São Paulo, Brazil
GOOD MATCH
[ANALYSIS]
**MEDIUM**
[Copilot: GPT5-mini] Candidate has strong platform and pipeline experience (Python, SQL, governance) and can map business requirements into technical solutions, but lacks the advanced degree and explicit Data Vault certification requested by the JD. The score reflects a good data engineering fit with penalties for the specified Data Vault and degree preferences. Improve by highlighting any Data Vault work or formal training, or by substituting documented Data Vault project examples or certifications.
**Strengths:** Python and data pipeline experience, Data governance, profiling and lineage experience, Ability to design production-grade transformation pipelines
**Missing Required:** Data Vault certification / mandatory expertise
Missing:
Data Vault framework (certified/mandatory), Advanced degree in CS/Data Analytics (preferred/required)
#4379875384 · 03-08-26 16:50
56
Senior Data Engineer (Tableau, Snowflake, Python)
[EA] Modus Create
Porto Alegre, Rio Grande do Sul, Brazil
WEAK MATCH
[ANALYSIS]
**MEDIUM**
[Copilot: GPT5-mini] Candidate has strong Python, pipeline, and AI/data experience that map well to the role, but lacks explicit Snowflake and Tableau experience which are core JD items. Score reflects solid transferable skills but deductions for missing Snowflake/Tableau. Improve by highlighting any Snowflake/BI work or publishing a Tableau / Snowflake proof-of-work sample.
**Strengths:** Python for data engineering, Experience building production data pipelines, AI/LLM evaluation and pipeline knowledge (bonus)
**Missing Required:** Strong proficiency in Snowflake (not shown)
Missing:
Snowflake, Tableau, Machine learning tooling integrated with Snowflake (explicit)
#4372902753 · 03-08-26 16:49
56
ANALISTA SR ENGENHEIRO DE DADOS
[EA] Braskem
Greater São Paulo Area
WEAK MATCH
[ANALYSIS]
**LOW**
[Copilot: GPT5.1] The candidate aligns with Python, SQL, data quality, governance, and strong finance/supply chain domain knowledge, all relevant to a financial data engineering team. Yet this senior role is explicitly centered on Azure (Databricks, Microsoft Fabric), PySpark, and MLOps in corporate environments, and none of those tools or environments appear in the resume, which will significantly depress ATS scoring relative to more traditional data engineering profiles. To improve, the candidate would need to showcase at least one Azure/Databricks-style project, explicitly mention experience with distributed data processing, and frame past work more clearly as data engineering in financial contexts.
**Strengths:** Data quality and governance work in ERP/TOTVS leading to measurable cost reductions, Solid Python and SQL background used in operational and analytical systems, Finance and operations domain understanding, especially around KPIs and forecasting
**Missing Required:** Azure Databricks, Microsoft Fabric, Apache Spark / PySpark, experience with Data Lakes and Data Warehouses on Azure
Missing:
MLOps tooling such as MLflow or similar frameworks, Power Platform (Power BI, Power Automate, Power Apps), experience documenting complex data architectures in enterprise standards
#4381074311 · 03-08-26 16:49
62
Engenheiro de Dados
[EA] FCamara
São Paulo, São Paulo, Brazil
GOOD MATCH
[ANALYSIS]
**HIGH**
[Copilot: GPT5.1] The candidate matches many conceptual and practical aspects of this role: strong Python and SQL, experience designing and operating data/AI pipelines, knowledge of RAG and vector search concepts, Docker, CI/CD, and multi-cloud fundamentals. However, the JD expects deep experience with Spark/PySpark, Airflow or Data Factory, specific NoSQL stores, Kubernetes, and MLOps platforms like MLflow/Kubeflow/Vertex, none of which appear in the resume, which will limit ATS ranking for a senior consulting position. To improve, the candidate should add explicit containerization/orchestration achievements, highlight any NoSQL/vector DB usage, and consider a small public project using Airflow or Spark to insert those keywords credibly.
**Strengths:** Hands-on LLM and RAG experience with multiple providers (OpenAI, Gemini, Claude, HuggingFace), Solid SQL/PostgreSQL background plus data modeling and pipeline design, Docker, CI/CD and Linux skills that fit well with modern AI data stacks
**Missing Required:** Apache Spark / PySpark, orchestration tools like Apache Airflow or Azure Data Factory, NoSQL databases (MongoDB, Redis, Cassandra), Kubernetes for container orchestration
Missing:
Vector databases such as Qdrant, MLOps platforms (MLflow, Kubeflow, Vertex AI), experience managing large distributed data clusters beyond single-node environments
#4381106440 · 03-08-26 16:49
59
Senior Data Developer, Brazil
[EA] CI&T
Brazil
WEAK MATCH
[ANALYSIS]
**LOW**
[Copilot: GPT5.1] The candidate aligns well with advanced Python for data/IA solutions, CI/CD, Git and data quality/governance, and has real experience designing and running pipelines that feed LLM-based systems. Yet this senior role is heavily built around the AWS data stack (Glue, EMR, Lambda, Kinesis, Step Functions, S3), infrastructure as code (Terraform/CloudFormation), Medallion-style architectures and Athena, none of which appear in the resume beyond basic AWS fundamentals. To improve, the candidate would need to demonstrate at least one serious AWS data project with named services, add IaC experience (even small Terraform demos), and describe pipelines in language closer to Lake/Lakehouse architectures.
**Strengths:** Advanced Python used for real production-like systems and automation, Demonstrated focus on data quality and governance from ERP through AI pipelines, Strong CI/CD and Git experience enabling reliable deployment workflows
**Missing Required:** AWS Glue for ETL/ELT, AWS EMR for big-data processing, AWS Lambda and Step Functions for orchestrated data workflows, AWS Kinesis for streaming, Terraform or CloudFormation for infrastructure as code, Medallion data architecture and Amazon Athena for querying
Missing:
PySpark and other distributed processing frameworks, data mesh implementation experience, data observability using Datadog or CloudWatch
#4381103758 · 03-08-26 16:48
52
Engenheiro(a) de Dados - Sênior
[EA] XP Inc.
São Paulo, São Paulo, Brazil
WEAK MATCH
[ANALYSIS]
**LOW**
[Copilot: GPT5.1] The candidate brings strong SQL/data modeling, Python, and experience with data quality, along with financial and operational domain knowledge relevant to a banking AI context. However, this senior role is explicitly Databricks-centric (Spark, Delta, Lakehouse) with ADF/Airflow, banking-grade governance/compliance, and optimization of Databricks performance and costs, none of which are visible in the resume. To improve, the candidate would need to add explicit exposure to Databricks or Spark-like environments, mention any workflow orchestration tools, and frame previous work more clearly around governance/compliance patterns.
**Strengths:** Solid SQL and data modeling background with ERP and trading use cases, Experience with data quality and governance in operational systems, Ability to translate complex financial and operational needs into technical data solutions
**Missing Required:** Databricks (Spark, Delta, Lakehouse), Azure Data Factory or Apache Airflow, deep data governance and compliance experience in banking
Missing:
feature store design and management, Databricks performance and cost optimization, experience working with wholesale banking product datasets
#4382326924 · 03-08-26 16:47
70
Consultor de Engenharia de Dados - Híbrido
[EA] Accenture Brasil
Greater Rio de Janeiro
GOOD MATCH
[ANALYSIS]
**TOP**
[Copilot: GPT5.1] The candidate matches Python and SQL, ETL/ELT and automation, has solid experience in industrial/operational environments (Electrolux, foodservice, e-commerce), and a strong track record in data quality and integration between ERP and operations. The biggest gaps are Spark, specific OT/IT tools like PI System/SCADA, and direct oil & gas/offshore exposure, but the underlying pattern of integrating operational data for analytics and AI is clearly present. To improve, the candidate should explicitly connect past work to OT/IT integration concepts, add at least a Spark-style project, and emphasize CI/CD/DataOps practices in the resume.
**Strengths:** Python and SQL used to build real automation and analytics in industrial-style environments, Deep ERP and operational data experience, including data quality and governance work, Experience designing and running end-to-end pipelines that feed advanced analytics/AI solutions
**Missing Required:** Apache Spark, integration with PI System or SCADA, hands-on projects in oil & gas or offshore operations
Missing:
formal DataOps tooling and practices, experience building high-throughput streaming data pipelines, explicit cloud platform experience dedicated to industrial data workloads
#4378884886 · 03-08-26 16:47
73
Data Engineer I, LATAM CARPOOL
[EA] Amazon
Greater São Paulo Area
GOOD MATCH
[ANALYSIS]
**HIGH**
[Copilot: GPT5-mini] This LATAM Data Engineer role aligns well: it requires regional experience (MX/BR), ETL/warehousing, and Python/SQL skills that the candidate has; the JD is junior-to-mid and many requirements map directly. Score benefits from local/regional fit and production pipeline experience. Improve by emphasizing Brazil retail/operations integrations, ETL scale, and any conversational-AI or agentic data work relevant to the role.
**Strengths:** Python and SQL pipeline experience, Brazil market and operations domain knowledge, Experience building AI-ready data warehouses and conversational pipelines
Missing:
Specific big-data tooling (Hadoop / Spark / EMR listed as preferred), Explicit experience with large-scale Hadoop ecosystem (preferred)
#4375030378 · 03-08-26 16:47
53
Data Engineer
[EA] Pride Global
São Paulo, São Paulo, Brazil
WEAK MATCH
[ANALYSIS]
**MEDIUM**
[Copilot: GPT5-mini] Candidate has strong Python, SQL, and dimensional modeling aptitude but lacks explicit Microsoft Fabric experience which the JD prefers; many Fabric skills are adjacent to other clouds so they are partially inferable. Score indicates reasonable match for data modeling and pipeline work but penalizes the platform-specific Fabric experience gap. Improve by demonstrating Fabric or equivalent Azure Synapse/Databricks projects or describing medallion/bronze-silver-gold pipelines in detail.
**Strengths:** Python and SQL for ETL/ELT, Dimensional modeling and medallion architecture understanding, Experience integrating APIs and building fallback scraping pipelines
Missing:
Microsoft Fabric (lakehouse, pipelines, notebooks), PySpark / Spark experience (explicit), Fabric-specific governance and notebooks
#4378522043 · 03-08-26 16:46
28
Senior Healthcare Data Engineer with AI & Cloud Focus - 100% REMOTE
[EA] ITDS
Brazil
POOR MATCH
[ANALYSIS]
**LOW**
[Copilot: GPT5-mini] This senior healthcare role requires deep, domain-specific healthcare data experience (claims, ICD/CPT, HIPAA compliance) and multi-TB pipeline experience that the candidate does not demonstrate. The score reflects general data engineering strengths with major deductions for the mandatory healthcare domain and scale gaps. Recommend skipping unless the candidate can document significant healthcare-specific ETL/claims work and large-scale pipeline ownership.
**Strengths:** Python and ETL pipeline experience, Data profiling and validation practices, Operationalization and monitoring experience
**Missing Required:** 5+ years healthcare data engineering experience, Hands-on Databricks / large-scale data lake + healthcare domain expertise
Missing:
Healthcare data domain (claims, ICD/CPT, NPI, PHI handling), Large-scale ETL/ELT on multi-billion-row datasets, HIPAA / healthcare compliance experience
#4381257666 · 03-08-26 16:46
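Every card above exposes the same fields, which can be modeled as a small record type. This is an illustrative sketch of the export's visible schema, not the pipeline's internal one; all field names are assumptions:

```python
from dataclasses import dataclass, field


@dataclass
class JobEvaluation:
    """One scored posting as displayed in this export (illustrative schema)."""
    job_id: str            # e.g. "4382691528"
    role: str              # posted job title
    company: str
    location: str
    score: int             # 0-100 fit score
    match: str             # POOR / WEAK / GOOD / STRONG / EXCELLENT MATCH
    confidence: str        # analysis confidence: LOW / MEDIUM / HIGH / TOP
    model: str             # evaluator model, e.g. "gemini-3.1-pro-preview"
    summary: str           # free-text analysis
    strengths: list[str] = field(default_factory=list)
    missing_required: list[str] = field(default_factory=list)
    missing_nice_to_have: list[str] = field(default_factory=list)
```

Mutable defaults use `default_factory` so each record gets its own lists rather than sharing one.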
| Score | Role | Company | Location | Analysis | ID | Date ▼ |
|---|---|---|---|---|---|---|
|
70
|
Engenheiro(A) De Ia - Automação E Chatbots
View_Position
→
|
[EA] MeuCashCard
|
São Paulo, São Paulo, Brazil |
GOOD MATCH▼
[ANALYSIS_REPORT]
**MEDIUM**
[gemini-3.1-pro-preview] The candidate has excellent backend engineering (Node.js, Postgres, SOLID) and LLM orchestration (RAG, prompt engineering, agents) skills. However, they lack explicit production experience building multicanal chatbots (WhatsApp API, RCS) which is a core requirement. The resume must be modified to emphasize the conversational and RAG components of their existing AI scoring and scraping platforms.
**Strengths:** Node.js / PostgreSQL, RAG & LLM Orchestration, Agentic Workflows
**Missing Required:** Experiência real construindo chatbots em produção
Missing_Assets:
C# / .NET, WhatsApp Business API / Evolution API
|
#4382691528 | 03-08-26 17:12 |
|
60
|
Consultor de Hiperautomação Sênior
View_Position
→
|
[EA] ITEAM
|
São Paulo, São Paulo, Brazil |
GOOD MATCH▼
[ANALYSIS_REPORT]
**LOW**
[gemini-3.1-pro-preview] This is a Senior Hyperautomation Consulting role strictly demanding UiPath expertise, a legacy enterprise RPA tool the candidate does not use. While the candidate has the Python, SQL, and AI skills, the lack of the primary software platform is a blocker.
**Strengths:** AI applied to automation, Python / SQL, Fluent English
**Critical Gaps:** UiPath Experience
**Missing Required:** Experiência sólida com UiPath
Missing_Assets:
Enterprise Consulting Experience
|
#4378766360 | 03-08-26 17:11 |
|
75
|
Engenheiro de Automação e Soluções Digitais (BPMN, IA & Low-Code)
View_Position
→
|
[EA] Ativa Serviços
|
Uberaba, Minas Gerais, Brazil |
STRONG MATCH▼
[ANALYSIS_REPORT]
**HIGH**
[gemini-3.1-pro-preview] The candidate excels at translating business logic into code, heavily utilizing Python, APIs, SQL, and Advanced Excel, which perfectly aligns with the core responsibilities. The only gap is specific enterprise BPMN Low-Code platform experience. The ATS might flag the lack of a strict CS/IT degree, but the Mechatronics Engineering background should suffice for human reviewers.
**Strengths:** Translating Business Logic to Code, Python & API Structuring, Advanced Excel & SQL
Missing_Assets:
Specific BPMN Platforms
|
#4380550175 | 03-08-26 17:11 |
|
90
|
Analista de Automações e Inteligência Artificial
View_Position
→
|
[EA] Ignição Digital
|
Brasília, Federal District, Brazil |
EXCELLENT MATCH▼
[ANALYSIS_REPORT]
**TOP**
[gemini-3.1-pro-preview] The candidate is technically overqualified but perfectly suited for the responsibilities. Monitoring logs, fixing n8n/Make pipelines, documenting flows, and acting as a technical liaison are directly aligned with his recent freelance and startup work. The focus on autonomy and problem-solving strongly matches the candidate's profile.
**Strengths:** n8n / APIs / Scripting, System Documentation, Autonomous Problem Solving
Missing_Assets:
Hotmart, ActiveCampaign
|
#4375290093 | 03-08-26 17:11 |
|
70
|
Engenheiro(A) De Ia - Automação E Chatbots
View_Position
→
|
[EA] MeuCashCard
|
São Paulo, São Paulo, Brazil |
GOOD MATCH▼
[ANALYSIS_REPORT]
**MEDIUM**
[gemini-3.1-pro-preview] The candidate has excellent backend engineering (Node.js, Postgres, SOLID) and LLM orchestration (RAG, prompt engineering, agents) skills. However, they lack explicit production experience building multicanal chatbots (WhatsApp API, RCS) which is a core requirement. The resume must be modified to emphasize the conversational and RAG components of their existing AI scoring and scraping platforms.
**Strengths:** Node.js / PostgreSQL, RAG & LLM Orchestration, Agentic Workflows
**Missing Required:** Experiência real construindo chatbots em produção
Missing_Assets:
C# / .NET, WhatsApp Business API / Evolution API
|
#4382903081 | 03-08-26 17:10 |
|
85
|
AI Agent Specialist
View_Position
→
|
[EA] Builder Lead Converter
|
São Paulo, São Paulo, Brazil |
STRONG MATCH▼
[ANALYSIS_REPORT]
**HIGH**
[gemini-3.1-pro-preview] The candidate's ability to build multi-step workflows, handle LLMs, and explain technical concepts to non-technical stakeholders makes this a great fit. The only missing pieces are specific CRM tools (Go High Level) and WordPress, but the candidate's custom orchestrator and custom CMS experience demonstrate they can easily adapt. The resume should highlight the custom CMS built in Go as proof of web integration capabilities.
**Strengths:** LLMs & Prompt Engineering, n8n / Make / API Integration, Stakeholder Communication
Missing_Assets:
Go High Level, WordPress / Headless CMS
|
#4380818228 | 03-08-26 17:09 |
|
30
|
FBS Power Automate & Copilot Engineer
View_Position
→
|
[EA] Capgemini
|
Brazil |
POOR MATCH▼
[ANALYSIS_REPORT]
**LOW**
[gemini-3.1-pro-preview] This role requires deep expertise in the Microsoft ecosystem (Copilot Studio, Power Automate, SharePoint, Purview). The candidate operates almost entirely in open-source, Linux, and custom code environments (Go, Python, AWS, Node.js). This is a heavy enterprise IT governance role that clashes with the candidate's skill set and culture.
**Strengths:** Prompt Engineering, API logic
**Critical Gaps:** Microsoft Copilot Studio, Power Automate, Microsoft 365 Architecture
**Missing Required:** Experiência prática com Copilot Studio, Power Automate
Missing_Assets:
SharePoint, Microsoft Purview
|
#4378131228 | 03-08-26 17:09 |
|
70
|
Engenheiro(A) De Ia - Automação E Chatbots
View_Position
→
|
[EA] MeuCashCard
|
São Paulo, São Paulo, Brazil |
GOOD MATCH▼
[ANALYSIS_REPORT]
**MEDIUM**
[gemini-3.1-pro-preview] The candidate has excellent backend engineering (Node.js, Postgres, SOLID) and LLM orchestration (RAG, prompt engineering, agents) skills. However, they lack explicit production experience building multicanal chatbots (WhatsApp API, RCS) which is a core requirement. The resume must be modified to emphasize the conversational and RAG components of their existing AI scoring and scraping platforms.
**Strengths:** Node.js / PostgreSQL, RAG & LLM Orchestration, Agentic Workflows
**Missing Required:** Experiência real construindo chatbots em produção
Missing_Assets:
C# / .NET, WhatsApp Business API / Evolution API
#4382696224 · 03-08-26 17:09
70
Especialista de Inteligência Artificial
[EA] Almeida Junior Shopping Centers
São José, Santa Catarina, Brazil
View
→
GOOD MATCH▼
[ANALYSIS]
**HIGH**
[Copilot: GPT5.1] The candidate strongly matches the IA + automation focus with recent hands-on work building LLM-based scoring pipelines, scrapers, and orchestration, and has solid SQL, PostgreSQL and data quality experience. However, the JD requires strong Power BI and explicit dashboarding/modeling in that tool, which is not present in the resume and acts as a primary ATS limiter despite adjacent experience with data reporting and analytics. To improve, the candidate should add a concrete Power BI project (even self-initiated), explicitly list Power BI under skills, and better highlight ML/GenAI use cases tied to business outcomes.
**Strengths:** GenAI and LLM pipeline design with real production metrics, SQL and data modeling experience in ERP and e-commerce contexts, Automation of data workflows using tools like n8n and custom orchestrators in Go/Node.js
**Missing Required:** Power BI expertise for dashboards and semantic layers
Missing:
Scrum, Kanban, traditional machine learning frameworks beyond LLMs, Power BI-specific dashboard design patterns and semantic modeling
#4378397459 · 03-08-26 16:53
60
Script and PowerShell Developer 1
[EA] EY
São Paulo, São Paulo, Brazil
View
→
GOOD MATCH▼
[ANALYSIS]
**LOW**
[Copilot: GPT5.1] The candidate aligns with Python, SQL, data modeling and finance domain experience, which are central to the role, and has some AWS fundamentals plus strong automation and data-pipeline exposure. However, the role is explicitly centered on AWS Glue, Spark and Redshift as core tools in a large consulting environment, and these specific skills are absent in the resume, which will hurt ATS ranking in a very crowded Big Four pipeline. To improve, the candidate should add at least a small AWS project mentioning Glue or an equivalent ETL on AWS, highlight any data warehousing work, and make the Data Engineer angle more explicit in titles and bullet points.
**Strengths:** Python and SQL used to build real trading and automation systems, Finance and capital-markets adjacent experience bridging business and tech, Data quality and governance work in ERP environments
**Missing Required:** AWS Glue in production, Apache Spark on AWS (e.g., EMR or Glue jobs), Amazon Redshift as primary analytical data warehouse, explicit Data Engineer experience on large-scale AWS environments
Missing:
Scala, hands-on data warehousing design specifically on Amazon Redshift, experience working on multi-team enterprise data projects
#4357716015 · 03-08-26 16:52
58
Engenheiro de Dados
[EA] Construtora Metrocasa
Greater São Paulo Area
View
→
WEAK MATCH▼
[ANALYSIS]
**MEDIUM**
[Copilot: GPT5.1] The candidate matches Python, SQL/PostgreSQL, Docker and CI/CD, and has real experience designing data-centric systems and analytics pipelines. However, the role is deeply tied to Azure (Databricks, Data Lake, DevOps), Spark/PySpark and Power BI, none of which appear in the resume, so ATS will view several core requirements as missing. To improve, the candidate should add at least one concrete Azure or cloud data project (even self-hosted but framed with similar concepts), explicitly mention any BI/dashboard work and call out familiarity with Spark-like paradigms where applicable.
**Strengths:** Python and SQL used for real-time trading systems and operational automation, PostgreSQL and relational data modeling experience in production systems, Hands-on CI/CD, Docker and Linux operations for deployed services
**Missing Required:** Azure Databricks, Azure Data Lake, Azure DevOps for CI/CD, Apache Spark / PySpark, strong Power BI skills
Missing:
Oracle database experience, formal Medallion / lakehouse architecture design, experience with non-relational databases such as MongoDB, hands-on Power BI dashboard development
#4379256611 · 03-08-26 16:52
61
Data Engineer (Medior Analyst)
[EA] Whirlpool Corporation
São Paulo, São Paulo, Brazil
View
→
GOOD MATCH▼
[ANALYSIS]
**MEDIUM**
[Copilot: GPT5-mini] Candidate has solid Python, SQL, pipeline and ETL experience and demonstrated analytics-minded work (ERP, pipelines), but lacks explicit GCP Dataflow/Pub/Sub/BigQuery/Cloud Composer experience called out as minimums. Score reflects transferable pipeline skills but penalizes missing platform-specific GCP items. Improve by listing any GCP/BigQuery work, or completing a short BigQuery/Dataflow/Cloud Composer project and surfacing it prominently.
**Strengths:** Python for data processing, Data pipeline design and ETL experience, ERP and operational data domain expertise
**Missing Required:** Hands-on experience with Dataflow, Pub/Sub, and BigQuery
Missing:
Dataflow (GCP), Pub/Sub, BigQuery (partitioning/clustering/cost-aware design), Cloud Composer (Airflow)
#4376428209 · 03-08-26 16:52
18
Lead Spark Data Engineer
[EA] Fusemachines
Brasília, Federal District, Brazil
View
→
POOR MATCH▼
[ANALYSIS]
**LOW**
[Copilot: GPT5-mini] This Lead Spark Data Engineer role demands deep Azure/Databricks/Spark internals and Java expertise (ANTLR/DSLs), which the candidate does not demonstrate; the gap is domain- and language-specific and critical. Score gives credit for Python/SQL and data engineering ability but heavily penalizes the missing platform and language requirements. Recommend skipping unless the candidate has undisclosed prior evidence of Spark internals, Databricks, Java, and Azure experience, or is willing to acquire certified experience.
**Strengths:** Python and SQL competence, Architectural and systems design experience, Production pipeline and optimization thinking
**Critical Gaps:** No demonstrated Spark internals / Databricks expertise, No expert-level Java / ANTLR experience
**Missing Required:** Azure Databricks and deep Spark experience, Expert-level Java
Missing:
Azure ecosystem + Databricks (deep), Java (expert-level) and ANTLR / custom DSL experience, Deep Apache Spark internals (Catalyst, Logical Plans), Databricks / Delta Lake optimization
#4374132433 · 03-08-26 16:52
75
Senior/Principal Data Engineer
[EA] Sigma Software Group
Brasília, Federal District, Brazil
View
→
STRONG MATCH▼
[ANALYSIS]
**HIGH**
[Copilot: GPT5-mini] Candidate aligns well with the Senior/Principal Data Engineer profile: strong Python, data platform building, ETL/ELT experience and a proven track record delivering early-stage platforms and analytics products. Minor gaps are the missing explicit Snowflake/dbt/Airflow mentions, which are commonly expected but adjacent and inferrable. To improve, call out any dbt, Airflow, Snowflake or equivalent experience and give examples of platform v1→production delivery.
**Strengths:** Python-based data engineering and platform delivery, Experience designing/delivering production data platforms, AI/ML pipeline and RAG experience (useful for AI-driven SaaS)
Missing:
Snowflake / dbt (explicit), Airflow / Dagster / Prefect (explicit), Formalized data observability tooling (explicit)
#4380908363 · 03-08-26 16:51
0
Lead Data Engineer
[EA] Perform
São Paulo, São Paulo, Brazil
View
→
POOR MATCH▼
[ANALYSIS]
**LOW**
[Copilot: GPT5-mini] This role is hybrid and requires in-office presence (three days/week) and PST hours; the candidate is Brazil-based and remote-only, which violates the geographic/onsite requirements and triggers a hard filter. Because the job is not remote-friendly in practice, ATS/legal restrictions would auto-reject. No amount of resume tweaking solves the onsite/hours requirement short of relocation.
#4376785795 · 03-08-26 16:50
68
Senior Data Engineer (Advanced English)
[EA] Rooby HR
São Paulo, São Paulo, Brazil
View
→
GOOD MATCH▼
[ANALYSIS]
**MEDIUM**
[Copilot: GPT5-mini] Candidate has strong platform and pipeline experience (Python, SQL, governance) and can map business requirements into technical solutions, but lacks the advanced degree and explicit Data Vault certification requested by the JD. Score reflects a good data engineering fit with penalties for the specified Data Vault and degree preferences. Improve by highlighting any Data Vault work or formal training, or by substituting documented Data Vault project examples or certifications.
**Strengths:** Python and data pipeline experience, Data governance, profiling and lineage experience, Ability to design production-grade transformation pipelines
**Missing Required:** Data Vault certification / mandatory expertise
Missing:
Data Vault framework (certified/mandatory), Advanced degree in CS/Data Analytics (preferred/required)
#4379875384 · 03-08-26 16:50
56
Senior Data Engineer (Tableau, Snowflake, Python)
[EA] Modus Create
Porto Alegre, Rio Grande do Sul, Brazil
View
→
WEAK MATCH▼
[ANALYSIS]
**MEDIUM**
[Copilot: GPT5-mini] Candidate has strong Python, pipeline, and AI/data experience that map well to the role, but lacks explicit Snowflake and Tableau experience which are core JD items. Score reflects solid transferable skills but deductions for missing Snowflake/Tableau. Improve by highlighting any Snowflake/BI work or publishing a Tableau / Snowflake proof-of-work sample.
**Strengths:** Python for data engineering, Experience building production data pipelines, AI/LLM evaluation and pipeline knowledge (bonus)
**Missing Required:** Strong proficiency in Snowflake (not shown)
Missing:
Snowflake, Tableau, Machine learning tooling integrated with Snowflake (explicit)
#4372902753 · 03-08-26 16:49
56
ANALISTA SR ENGENHEIRO DE DADOS
[EA] Braskem
Greater São Paulo Area
View
→
WEAK MATCH▼
[ANALYSIS]
**LOW**
[Copilot: GPT5.1] The candidate aligns with Python, SQL, data quality, governance and strong finance/supply chain domain knowledge, which are all relevant to a financial data engineering team. Yet this senior role is explicitly centered on Azure (Databricks, Microsoft Fabric), PySpark and MLOps in corporate environments, and none of those tools or environments appear in the resume, which will significantly depress ATS scoring relative to more traditional data engineering profiles. To improve, the candidate would need to showcase at least one Azure/Databricks-style project, explicitly mention experience with distributed data processing, and frame past work more clearly as data engineering in financial contexts.
**Strengths:** Data quality and governance work in ERP/TOTVS leading to measurable cost reductions, Solid Python and SQL background used in operational and analytical systems, Finance and operations domain understanding, especially around KPIs and forecasting
**Missing Required:** Azure Databricks, Microsoft Fabric, Apache Spark / PySpark, experience with Data Lakes and Data Warehouses on Azure
Missing:
MLOps tooling such as MLflow or similar frameworks, Power Platform (Power BI, Power Automate, Power Apps), experience documenting complex data architectures in enterprise standards
#4381074311 · 03-08-26 16:49
62
Engenheiro de Dados
[EA] FCamara
São Paulo, São Paulo, Brazil
View
→
GOOD MATCH▼
[ANALYSIS]
**HIGH**
[Copilot: GPT5.1] The candidate matches many conceptual and practical aspects of this role: strong Python and SQL, experience designing and operating data/IA pipelines, knowledge of RAG and vector search concepts, Docker, CI/CD and multi-cloud fundamentals. However, the JD expects deep experience with Spark/PySpark, Airflow or Data Factory, specific NoSQL stores, Kubernetes and MLOps platforms like MLflow/Kubeflow/Vertex, none of which appear in the resume, which will limit ATS ranking for a senior consulting position. To improve, the candidate should add explicit containerization/orchestration achievements, highlight any NoSQL/vector DB usage, and consider a small public project using Airflow or Spark to insert those keywords credibly.
**Strengths:** Hands-on LLM and RAG experience with multiple providers (OpenAI, Gemini, Claude, HuggingFace), Solid SQL/PostgreSQL background plus data modeling and pipeline design, Docker, CI/CD and Linux skills that fit well with modern IA data stacks
**Missing Required:** Apache Spark / PySpark, orchestration tools like Apache Airflow or Azure Data Factory, NoSQL databases (MongoDB, Redis, Cassandra), Kubernetes for container orchestration
Missing:
Vector databases such as Qdrant, MLOps platforms (MLflow, Kubeflow, Vertex AI), experience managing large distributed data clusters beyond single-node environments
#4381106440 · 03-08-26 16:49
59
Senior Data Developer, Brazil
[EA] CI&T
Brazil
View
→
WEAK MATCH▼
[ANALYSIS]
**LOW**
[Copilot: GPT5.1] The candidate aligns well with advanced Python for data/IA solutions, CI/CD, Git and data quality/governance, and has real experience designing and running pipelines that feed LLM-based systems. Yet this senior role is heavily built around the AWS data stack (Glue, EMR, Lambda, Kinesis, Step Functions, S3), infrastructure as code (Terraform/CloudFormation), Medallion-style architectures and Athena, none of which appear in the resume beyond basic AWS fundamentals. To improve, the candidate would need to demonstrate at least one serious AWS data project with named services, add IaC experience (even small Terraform demos), and describe pipelines in language closer to Lake/Lakehouse architectures.
**Strengths:** Advanced Python used for real production-like systems and automation, Demonstrated focus on data quality and governance from ERP through IA pipelines, Strong CI/CD and Git experience enabling reliable deployment workflows
**Missing Required:** AWS Glue for ETL/ELT, AWS EMR for big-data processing, AWS Lambda and Step Functions for orchestrated data workflows, AWS Kinesis for streaming, Terraform or CloudFormation for infrastructure as code, Medallion data architecture and Amazon Athena for querying
Missing:
PySpark and other distributed processing frameworks, data mesh implementation experience, data observability using Datadog or CloudWatch
#4381103758 · 03-08-26 16:48
52
Engenheiro(a) de Dados - Sênior
[EA] XP Inc.
São Paulo, São Paulo, Brazil
View
→
WEAK MATCH▼
[ANALYSIS]
**LOW**
[Copilot: GPT5.1] The candidate brings strong SQL and data modeling, Python and experience with data quality, along with financial and operational domain knowledge that is relevant for a banking IA context. However, this senior role is explicitly Databricks-centric (Spark, Delta, Lakehouse) with ADF/Airflow, banking-grade governance/compliance and optimization of Databricks performance/costs, none of which are visible in the resume. To improve, the candidate would need to add explicit exposure to Databricks or Spark-like environments, mention any workflow orchestration tools, and frame previous work more clearly around governance/compliance patterns.
**Strengths:** Solid SQL and data modeling background with ERP and trading use cases, Experience with data quality and governance in operational systems, Ability to translate complex financial and operational needs into technical data solutions
**Missing Required:** Databricks (Spark, Delta, Lakehouse), Azure Data Factory or Apache Airflow, deep data governance and compliance experience in banking
Missing:
feature store design and management, Databricks performance and cost optimization, experience working with wholesale banking product datasets
#4382326924 · 03-08-26 16:47
70
Consultor de Engenharia de Dados - Híbrido
[EA] Accenture Brasil
Greater Rio de Janeiro
View
→
GOOD MATCH▼
[ANALYSIS]
**TOP**
[Copilot: GPT5.1] The candidate matches Python and SQL, ETL/ELT and automation, has solid experience in industrial/operational environments (Electrolux, foodservice, e-commerce) and a strong track record in data quality and integration between ERP and operations. The biggest gaps are Spark, specific OT/IT tools like PI System/SCADA and direct oil & gas/offshore exposure, but the underlying pattern of integrating operational data for analytics and IA is clearly present. To improve, the candidate should explicitly connect past work to OT/IT integration concepts, add at least a Spark-style project, and emphasize CI/CD/DataOps practices in the resume.
**Strengths:** Python and SQL used to build real automation and analytics in industrial-style environments, Deep ERP and operational data experience, including data quality and governance work, Experience designing and running end-to-end pipelines that feed advanced analytics/IA solutions
**Missing Required:** Apache Spark, integration with PI System or SCADA, hands-on projects in oil & gas or offshore operations
Missing:
formal DataOps tooling and practices, experience building high-throughput streaming data pipelines, explicit cloud platform experience dedicated to industrial data workloads
#4378884886 · 03-08-26 16:47
73
Data Engineer I, LATAM CARPOOL
[EA] Amazon
Greater São Paulo Area
View
→
GOOD MATCH▼
[ANALYSIS]
**HIGH**
[Copilot: GPT5-mini] This LATAM Data Engineer role aligns well: it requires regional experience (MX/BR), ETL/warehousing, and Python/SQL skills that the candidate has; the JD is junior-to-mid and many requirements map directly. Score benefits from local/regional fit and production pipeline experience. Improve by emphasizing Brazil retail/operations integrations, ETL scale, and any conversational-AI or agentic data work relevant to the role.
**Strengths:** Python and SQL pipeline experience, Brazil market and operations domain knowledge, Experience building AI-ready data warehouses and conversational pipelines
Missing:
Specific big-data tooling (Hadoop / Spark / EMR listed as preferred), Explicit experience with large-scale Hadoop ecosystem (preferred)
#4375030378 · 03-08-26 16:47
53
Data Engineer
[EA] Pride Global
São Paulo, São Paulo, Brazil
View
→
WEAK MATCH▼
[ANALYSIS]
**MEDIUM**
[Copilot: GPT5-mini] Candidate has strong Python, SQL, and dimensional modeling aptitude but lacks explicit Microsoft Fabric experience, which the JD prefers; many Fabric skills are adjacent to other clouds, so they are partially inferable. Score indicates a reasonable match for data modeling and pipeline work but penalizes the platform-specific Fabric experience gap. Improve by demonstrating Fabric or equivalent Azure Synapse/Databricks projects, or by describing medallion/bronze-silver-gold pipelines in detail.
**Strengths:** Python and SQL for ETL/ELT, Dimensional modeling and medallion architecture understanding, Experience integrating APIs and building fallback scraping pipelines
Missing:
Microsoft Fabric (lakehouse, pipelines, notebooks), PySpark / Spark experience (explicit), Fabric-specific governance and notebooks
#4378522043 · 03-08-26 16:46
28
Senior Healthcare Data Engineer with AI & Cloud Focus - 100% REMOTE
[EA] ITDS
Brazil
View
→
POOR MATCH▼
[ANALYSIS]
**LOW**
[Copilot: GPT5-mini] This senior healthcare role requires deep, domain-specific healthcare data experience (claims, ICD/CPT, HIPAA compliance) and multi-TB pipeline experience which the candidate does not demonstrate. Score reflects general data engineering strengths but major deductions for the mandatory healthcare domain and scale experience gap. Recommend skipping unless the candidate can document significant healthcare-specific ETL/claims work and large-scale pipeline ownership.
**Strengths:** Python and ETL pipeline experience, Data profiling and validation practices, Operationalization and monitoring experience
**Missing Required:** 5+ years healthcare data engineering experience, Hands-on Databricks / large-scale data lake + healthcare domain expertise
Missing:
Healthcare data domain (claims, ICD/CPT, NPI, PHI handling), Large-scale ETL/ELT on multi-billion-row datasets, HIPAA / healthcare compliance experience
#4381257666 · 03-08-26 16:46
Page 36 / 336