
How I Sped Up a UNICEF Research Project with AI Custom GPTs for Data Analysis and Synthesis

Over a three-month engagement with a UNICEF-aligned research team, I used two custom-built GPTs and a spreadsheet-first architecture to dramatically increase the speed and rigour of a child poverty study in Zambia.
The project focused on female-headed households (FHHs) in two of the country's poorest provinces — Mongu (Western) and Kasama (Northern) — analysing 120 narrative case studies and converting them into structured, query-ready data. We moved from a manual baseline of 60–90 minutes per case to a steady-state throughput of four cases per hour, saving approximately 120 analyst hours in total. The AI system not only coded and standardised data across ten thematic areas, but also enabled non-technical report writers to query the database in plain English and receive traceable tables, summaries, and formulas.
This case study walks through the end-to-end setup: schema definition, custom GPT design, database structuring in Excel/Google Sheets, training materials for writers, and the measurable impact of these tools.
Key Takeaways
Cut processing time to ~15 min/case, saving ~120 analyst hours.
Schema-first + spreadsheet-first setup (10 themes) standardised narrative data.
Two custom GPTs: one for case coding (with quotes), one for reporting (tables + formulas).
Full auditability: every figure traces to case ID, quote, and cell range.
Empowered non-technical writers to self-serve insights in plain English.
Reusable, scalable pattern with QA checks; next steps include dashboards and confidence scores.

Project Overview
Problem: Manual analysis of rich qualitative data (case studies) was slow, inconsistent, and hard to audit.
Goal: Build an AI-powered pipeline that increases throughput, enforces thematic consistency, and enables fast, auditable reporting.
Scope: 120 qualitative case studies across FHHs in two provinces.
Deliverables:
Thematic schema with ten categories
Structured database (Excel & Sheets)
AI Analysis GPT for case coding
AI Reporting GPT for synthesis and querying
Formula library for trend detection
Training for non-technical staff

Why Female-Headed Households?
Female-headed households (FHHs) in rural Zambia often face layered vulnerabilities: unstable income, care burdens, and reduced access to critical services. UNICEF wanted a deeper understanding of how these dynamics contribute to child poverty in Mongu (Western) and Kasama (Northern).
Narrative data had already been collected in the form of in-depth case studies — but turning those stories into usable data was the bottleneck.

Objectives
Speed: Reduce time spent per case from over an hour to under 20 minutes.
Standardisation: Ensure all data aligns with a fixed schema across ten core themes.
Self-Serve Insights: Let report writers pull accurate numbers and narratives without analyst support.
Auditability: Ensure that every value in the final report can be traced back to both the source excerpt and cell range.

The Ten Themes That Structured the Study
To make qualitative analysis scalable and meaningful, I developed a thematic schema with ten categories:
FHH Assets
FHH Income
Livelihood
Health & Nutrition
Education & Child Protection
Water & Sanitation
Housing, Power & Information
Internal Factors
External Support
Coping & Mitigation Strategies
Each theme had 8–15 specific fields, from "Savings" to "Top 1st Expense" to "Perception of Single Women in Community."

The AI Data Analysis Workflow
Step 1: GPT #1 — Case Analysis GPT
This GPT was configured to:
Read a single narrative case
Apply the schema to extract structured data into a JSON object
Quote the supporting sentence for each value
Use null for missing data
Flag ambiguous items
Example Output:
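The exact structure depended on the schema, but the sketch below shows the shape of a typical output. The case ID, field names, and quotes here are illustrative, not real project data:

```
// Illustrative sketch only — case ID, fields, and quotes are hypothetical
{
  "case_id": "FHH-042",
  "province": "Kasama",
  "fhh_income": {
    "value": "seasonal vegetable sales",
    "quote": "She earns money selling vegetables at the market in the rainy season.",
    "flag": null
  },
  "savings": {
    "value": null,
    "quote": null,
    "flag": "not mentioned in narrative"
  }
}
```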
Step 2: Export to Structured Sheets
The JSON outputs were flattened into a CSV and loaded into Google Sheets and Excel. Why? Because the stakeholders were already comfortable in spreadsheets, and transparency mattered.
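As a rough sketch of the flattening step, each JSON object became one spreadsheet row, with a value column and a paired quote column per field (these headers are hypothetical):

```
case_id,province,fhh_income_value,fhh_income_quote,savings_value,savings_quote
FHH-042,Kasama,seasonal vegetable sales,"She earns money selling vegetables at the market in the rainy season.",null,null
```

Keeping the quote beside the value is what makes every cell auditable later.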
Step 3: Formulas for Trend Detection
I built a suite of pre-wired formulas to quickly:
Join row-level child data
Create frequency-ranked lists ("Top 3 expenses")
Compare Kasama vs Mongu
Filter by conditions ("FHHs with ≤2 meals per day and no clinic access") — see the sketch after this list
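As an example of the last pattern, a conditional count like that can be a single COUNTIFS — the sheet name and column letters below are assumptions, so adjust them to your layout:

```
=COUNTIFS(Data!E:E, "<=2", Data!F:F, "No")
```

This counts rows where meals per day (column E) is at most 2 and clinic access (column F) is "No".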
Step 4: GPT #2 — Reporting GPT
The second GPT worked on top of the spreadsheet. It could:
Receive a natural language prompt ("Show top stress factors in Kasama")
Pull data from the sheet
Return a narrative summary, a table, and the formulas used
Example Response:
"Top stress factors in Kasama were food insecurity (18), illness (12), and lack of income (9)."
Formula used:
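The project formula isn't reproduced here, but a minimal Google Sheets equivalent — assuming a Data sheet with Province in column B and Stress Factor in column F — would be:

```
=QUERY(Data!B:F, "select F, count(F) where B = 'Kasama' and F is not null group by F order by count(F) desc limit 3", 1)
```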

Results
📈 Time Saved
Manual = 60–90 min × 120 = 120–180 hours
AI-assisted = ~15 min × 120 = 30 hours
Time saved: ~120 hours
💡 Insights Standardised
Thematic schema ensured consistency across all case analyses
Quotes tied to every data point — no guesswork
🧠 Smarter Report Writing
Writers could ask: "What's the most common household expense in Mongu?"
The GPT returned not just a paragraph but also the formula and the range
🔍 Auditable Data
Any claim in the final report could be backtracked to a case ID, quote, and cell

Training Non-Technical Writers
Two compact training sessions were delivered:
Session 1 — Reading the Data: schema overview, how to interpret null values, and how to check a case for completeness.
Session 2 — Asking GPT for Help: how to prompt ("Ask → Check → Paste"), how to verify the returned formula, and how to add traceable tables directly to reports.

Governance & Quality Control
Flags for Outliers: Unusual ages, household sizes, or missing fields flagged automatically
Schema Enforcement: Each case checked against expected structure before upload
Reconciliation: Ensured data splits (Kasama/Mongu) added up to All — a sample check is sketched after this list
Quotes = Trust: Every non-null value had a source quote
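A minimal version of the reconciliation check, assuming Province lives in Data!B:B under a header row, might look like:

```
=COUNTIF(Data!B:B, "Kasama") + COUNTIF(Data!B:B, "Mongu") = COUNTA(Data!B2:B)
```

It returns TRUE only when the two provincial counts account for every case row.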
Limitations & Next Steps
Visualisation: Future iterations should add dashboards
Language Diversity: Add local-language prompt variants
Delta Reports: Track changes in the dataset over time
Confidence Scores: Surface coder confidence in ambiguous cases
Takeaways for Other Organisations
Start with a schema-first mindset — it’s the backbone of traceability
Use GPTs to scale tedious manual work, but never skip human QA
Embrace spreadsheets if your users already do — AI doesn’t mean you have to move to Python
Give non-technical teams tools they actually want to use
Demand both answers and the method behind them
Want This Setup?
If your team is sitting on a pile of narrative interviews or case studies and drowning in deadlines, I can help. You’ll get:
Schema design
Custom GPTs (analysis + reporting)
Spreadsheet database with formulas
Training for your non-technical staff
How many cases are you dealing with? What do you need to find out?
Send over an anonymised sample and I’ll design a plan that fits.

Bonus: Formula Snippets You Can Steal
Google Sheets — Frequency Table (Label, Count)
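One way to build this, assuming your labels sit in Data!A2:A:

```
=QUERY(Data!A2:A, "select A, count(A) where A is not null group by A order by count(A) desc label A 'Label', count(A) 'Count'", 0)
```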
Excel — Cross-Sheet Lookup
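A sketch using XLOOKUP (Excel 365/2021), assuming case IDs in column A of both sheets and the value you want in Cases column D; the INDEX/MATCH variant works in older Excel:

```
=XLOOKUP($A2, Cases!$A:$A, Cases!$D:$D, "null")
=INDEX(Cases!$D:$D, MATCH($A2, Cases!$A:$A, 0))
```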
Google Sheets — Join Child Rows
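Assuming a Children sheet keyed by case ID in column A, with the child-level value in column C, TEXTJOIN plus FILTER rolls the matching rows up into one cell (IFERROR blanks the cell when no children match):

```
=IFERROR(TEXTJOIN(", ", TRUE, FILTER(Children!C:C, Children!A:A = A2)), "")
```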
Final Thought
This case study wasn’t just about speed. It was about making qualitative data analysis auditable, reproducible, and accessible. When UNICEF asked for insights into child poverty among female-headed households, we delivered data that wasn’t just fast — it was defensible. And that, ultimately, builds better policy.
If you're ready to level up your data game, let's talk. Book an onboarding call today!
Frequently Asked Questions
1. Can this process be adapted for other qualitative research projects?
Absolutely. The approach is schema-first and data-agnostic, making it suitable for interviews, focus groups, and field notes across sectors.
2. Does this require programming or Python knowledge?
No. The entire pipeline runs in Google Sheets or Excel, making it accessible to non-technical users. The GPTs are pre-configured to work with this setup.
3. How do you prevent AI from "hallucinating" data?
The analysis GPT only codes when it finds direct quotes in the case text. Missing values are marked null, and all non-null entries are tied to a cited sentence.
4. Can this scale to thousands of cases?
Yes. While some manual QA is still needed for edge cases, the core pipeline (especially the Reporting GPT) scales linearly with the dataset size.