Interview – Building the Brain (With ActionBoard’s Chief Architect)
Rafa Rayeeda Rahmaani
  • Research
  • 20 July 2025
  • 10 min read

How do you design an AI system that’s part project manager, part data guru? To find out, I interviewed Jarjis Imam, the chief architect of ActionBoard’s intelligence layer and the mastermind behind ActionGraph and the Dual Retrieval System. Jarjis Imam is a former AWS and Capital One AI/ML cloud architect with a decade of experience in cloud, AI, and knowledge management. In this candid discussion, he reveals the challenges and eureka moments encountered while creating the “brain” of ActionBoard. For anyone fascinated by the engineering that enables true intelligence, this peek behind the curtain is illuminating.


Q: Creating something like ActionGraph sounds complex. What inspired the idea, and what were the biggest hurdles in building it?
A: “The inspiration came from our own frustration. Early on, we tried using off-the-shelf project management APIs with an AI agent, and it was painfully obvious the AI was ‘blind’ without context. We realized we needed a rich model of all project knowledge to give the AI eyes and ears. That naturally led us to knowledge graphs. The concept of ActionGraph clicked when we thought: what if every action item was a living object that knows its relationships? Once we visualized tasks and goals as a connected graph, we knew this was the way to go.

As for hurdles – oh, there were many. First, data modeling: we had to decide what types of nodes and links exist. Projects can be fluid, so we needed a schema flexible enough to handle, say, a task that belongs to multiple teams or a goal that evolves over time. We ended up iterating dozens of times. At one point, we had “risk” as a separate node type with links to tasks – it made the graph smart about risk propagation, but it also added too much complexity for the initial version. We pulled back and implemented risk tracking in a simpler way, planning to expand it later.

Another challenge was performance. Graph queries can get slow as the network grows. We knew if a user has, say, 10 projects with 500 tasks each, plus relationships, we’re talking thousands of nodes and edges – and queries that traverse many hops. We solved a lot of that by pre-computing certain indexes, basically caching the heavy calculations like critical path or centrality scores. Also, choosing the right graph database tech underneath was critical. We actually benchmarked a few and even considered building our own. In the end, we chose one that is optimized for dynamic updates, since our data changes in real time. Honestly, one of the hardest parts was getting people (even internally) to understand the power of the graph. In early demos, someone would say, ‘Can’t we do this with a relational DB and some joins?’ We had to show scenarios where the graph shines – like multi-level dependency chains or recommendation of similar past projects – to really drive the point home. Once they saw those, the skepticism dropped.”
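
To make the kind of flexible schema Jarjis describes a little more concrete, here is a minimal sketch of nodes and typed edges (the type names are my own illustration, not ActionBoard’s actual data model). The point is that a task belonging to two teams is simply two edges, with no schema change required.

```python
from dataclasses import dataclass, field

# Hypothetical node and edge types -- not ActionBoard's real schema.
NODE_TYPES = {"Project", "Goal", "Task", "Team", "Person"}
EDGE_TYPES = {"BELONGS_TO", "DEPENDS_ON", "CONTRIBUTES_TO", "OWNED_BY"}

@dataclass
class Node:
    id: str
    type: str
    props: dict = field(default_factory=dict)

@dataclass
class Edge:
    src: str   # id of the source node
    dst: str   # id of the destination node
    type: str

# A task that belongs to multiple teams is just two BELONGS_TO edges:
nodes = [
    Node("t1", "Task", {"title": "Ship audit log"}),
    Node("teamA", "Team"),
    Node("teamB", "Team"),
]
edges = [
    Edge("t1", "teamA", "BELONGS_TO"),
    Edge("t1", "teamB", "BELONGS_TO"),
]
```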

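The pre-computation he mentions for heavy graph calculations could look roughly like the sketch below: cache the critical path and centrality scores and recompute them only when the graph changes. networkx and the crude invalidation scheme here are stand-ins for illustration; the production system runs on a dedicated graph database.

```python
import networkx as nx

G = nx.DiGraph()
G.add_edge("design", "build", weight=3)   # weight = task duration in days
G.add_edge("build", "test", weight=5)
G.add_edge("test", "release", weight=2)

_cache = {}

def cached_metrics(graph):
    """Recompute expensive metrics only when the graph has changed."""
    # Crude invalidation: key on graph size (a real system would version the graph).
    key = (graph.number_of_nodes(), graph.number_of_edges())
    if key not in _cache:
        _cache[key] = {
            "critical_path": nx.dag_longest_path(graph, weight="weight"),
            "centrality": nx.betweenness_centrality(graph),
        }
    return _cache[key]

print(cached_metrics(G)["critical_path"])  # ['design', 'build', 'test', 'release']
```
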
It’s intriguing to hear that even convincing people of the approach was a task. Often, groundbreaking ideas face inertia simply because they’re new. Jarjis Imam’s persistence in demonstrating the value through scenarios clearly paid off.


Q: We’ve talked about the dual retrieval system in the blog. How did that come about, and what were its development challenges?


A: “The dual retrieval system was born out of necessity. During beta tests, we found instances where the AI would give an answer that was technically correct per the structured data, but it missed context that was sitting in a meeting note or comment. One user asked: ‘Is the April project on track?’ The structured data said yes (all tasks green), but the user had left a comment about a potential scope change brewing (which wasn’t marked as a task yet). The AI missed that and gave a falsely rosy answer. The user rightly said, ‘Your AI should’ve known we’re deliberating a scope change.’ That was an ‘aha’ moment – we needed to feed all relevant info, not just the sanitized fields, to the AI.

The implementation was tricky. Marrying a graph database with a vector search engine is like making two very different systems dance. One is precise and deterministic, the other is fuzzy and probabilistic. We had to ensure they talk to each other. The fusion module was one of the hardest design decisions I’ve wrestled with in years. We joked it’s like a mini-judge that has to decide which source to trust when they differ. For example, if the graph says a task is 100% done but some document suggests it might be reopened, what does the AI say? We established rules and confidence scoring to handle these cases. Over time (and many edge cases), it’s gotten quite robust.

We also had to optimize for speed. Initially, doing both searches in parallel was a bit slow – vector searches through hundreds of documents can be heavy. We implemented some clever tricks: for instance, use the graph context to narrow the vector search space. If you ask about Project X, we first use the graph to identify just the documents likely linked to Project X (like attachments, or comments by project members during project timeframe), and only vector-search those, not everything. That optimization cut the retrieval time dramatically without losing relevant info.

One fun anecdote: during development, we accidentally created a feedback loop where the AI would inject structured info into a generated summary and then vector-index its own summary, which then confused the next query. It started to mildly hallucinate because it was getting its own prior answers as part of the input. Catching and fixing that was important – we now isolate sources so the AI doesn’t confuse its synthesized text with user-provided text. That was a lesson in careful system design to maintain source integrity.”
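
A toy version of the “mini-judge” fusion idea, to make it concrete: each retrieval path returns a claim with a confidence score, and a simple rule decides whether to answer from the structured data alone or to surface a conflicting document as a caveat. The field names and scores are my own assumptions, not ActionBoard’s code.

```python
def fuse(graph_claim, doc_claims):
    """Decide what to assert when structured and unstructured sources disagree.

    graph_claim: {"status": str, "confidence": float} from the graph query
    doc_claims:  [{"text": str, "confidence": float}, ...] from vector search
    """
    strongest_doc = max(doc_claims, key=lambda c: c["confidence"], default=None)
    if strongest_doc and strongest_doc["confidence"] > graph_claim["confidence"]:
        # A document strongly qualifies the structured answer:
        # surface both rather than silently picking a winner.
        return {"answer": graph_claim["status"], "caveat": strongest_doc["text"]}
    return {"answer": graph_claim["status"], "caveat": None}

print(fuse(
    {"status": "Task marked 100% done", "confidence": 0.7},
    [{"text": "Retro notes suggest the task may be reopened", "confidence": 0.85}],
))
```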

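The graph-scoped vector search he describes might be approximated like this: a graph hop first collects the documents attached to Project X, then cosine similarity is computed only over that candidate set. The HAS_DOCUMENT edge type, the embeddings, and the helper names are assumptions made for the sake of the sketch.

```python
import numpy as np

def docs_linked_to(project_id, graph_edges):
    """Graph step: collect documents attached to the project."""
    return {dst for (src, dst, etype) in graph_edges
            if src == project_id and etype == "HAS_DOCUMENT"}

def scoped_vector_search(query_vec, doc_vectors, candidate_ids, top_k=3):
    """Vector step: rank only the candidate documents by cosine similarity."""
    scored = []
    for doc_id in candidate_ids:
        v = doc_vectors[doc_id]
        sim = float(np.dot(query_vec, v) /
                    (np.linalg.norm(query_vec) * np.linalg.norm(v)))
        scored.append((sim, doc_id))
    return sorted(scored, reverse=True)[:top_k]

edges = [("projX", "doc1", "HAS_DOCUMENT"),
         ("projX", "doc2", "HAS_DOCUMENT"),
         ("projY", "doc3", "HAS_DOCUMENT")]
vectors = {d: np.random.rand(8) for d in ("doc1", "doc2", "doc3")}
query = np.random.rand(8)
print(scoped_vector_search(query, vectors, docs_linked_to("projX", edges)))
```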

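As for the feedback-loop fix, one simple way to keep an assistant from re-ingesting its own summaries is to tag every chunk with its provenance and refuse to index machine-generated text. A toy version of that rule, with made-up source labels:

```python
# Only user-provided content is allowed into the retrieval index.
USER_SOURCES = {"comment", "meeting_note", "attachment", "task_description"}

def indexable(chunk):
    return chunk["source"] in USER_SOURCES

chunks = [
    {"text": "Client asked about a possible scope change", "source": "comment"},
    {"text": "AI summary: project is on track",            "source": "ai_summary"},
]
index = [c for c in chunks if indexable(c)]   # the AI's own summary is excluded
print([c["text"] for c in index])
```
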
It’s fascinating (and a bit humorous) to imagine the AI confusing itself with its prior output. This shows how complex such a system is – you have to account for all sorts of recursive quirks. The solution of using graph context to narrow vector search highlights the elegance of combining methods: they improve each other.


Q: From an outcomes perspective, what improvements has this dual approach delivered compared to early versions?


A: “Quantitatively, we saw a big boost in answer accuracy and user trust. We measure something we call ‘resolution rate’ – basically, how often the AI’s response fully addresses the user’s query without needing follow-up. Before dual retrieval, our resolution rate was around 60%. After, it jumped to about 85%. Users stopped catching the AI missing obvious things. We also track errors like incorrect status reporting or missed risks; those went down significantly. It’s like the AI went from being a decent intern to a pretty reliable assistant.

One of the coolest outcomes is how this system reduced what we call the nag factor. In earlier versions, users would often double-check the AI by manually looking at the board or notes (“nagging” the system, not fully trusting it). Now, we see much less of that behavior in usage data. They trust the answer given, which is a huge success for us. It means people feel the AI is looking at everything they would have looked at.

A specific example: during a pilot, one manager told us, ‘I asked ActionBoard about our release Actionlist, and it reminded me of a compliance checklist Action item that I had completely forgotten about in my personal notes. That alone saved me from a potential last-minute scramble.’ Stories like that are direct payoffs of dual retrieval – the AI caught something buried in notes while also considering the main task list. Hearing a seasoned manager say the AI saved his bacon, that’s gold.”


Those metrics are impressive. An 85% first-try resolution rate is quite high in the realm of AI assistants. It shows the dual system isn’t just a neat theoretical idea; it’s delivering tangible reliability gains. And user trust is the ultimate prize – it means folks truly consider the AI a partner now.


Q: Looking forward, where do you see ActionGraph and the retrieval system evolving? Any exciting features or improvements on the horizon?


A: “Oh, we have a packed roadmap! One area is predictive simulation. Now that we have this rich ActionGraph, we want to let the AI simulate “what-if” scenarios more proactively. We’re developing a feature where the AI will periodically run simulations (like Monte Carlo simulations on the graph with varying task durations) and tell you the probability of hitting your deadline or budget. We think that’ll give project managers a superpower – foresight with confidence levels.

On the retrieval side, we’re exploring incorporating external knowledge. Imagine ActionBoard not only looking at your internal data, but also pulling in industry benchmarks or best practices if relevant. Say you’re running a cybersecurity project – the system might fetch a known compliance checklist from a trusted database to make sure you’re not missing any standard tasks. We’d do this carefully, with sources you approve. It’s like extending the knowledge graph with universal Action nodes when needed.

We’re also working on more natural interactions. Right now, users mostly ask explicit questions. We want the AI to eventually participate more in conversations – perhaps even join a team chat to surface info at the right moment. For example, during a Slack discussion, the bot could chime in, ‘Hey, I notice you’re discussing changing scope – here are 3 tasks in the plan that would be affected.’ That kind of real-time assist is on our wishlist, and the dual retrieval foundation is there; it’s about integration and UI now.

From a geekier angle, we’re keeping an eye on advances in large language models and graph AI. There’s research on directly integrating graph data into language models (some projects like KBERT). If we can further fuse those, the line between structured and unstructured might blur even more, which could unlock an even more fluid intelligence. But we’ll always keep one thing constant: the commitment to grounded, context-aware AI. More features are great, but not at the expense of the trust we’ve built.”
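
For readers curious what the Monte Carlo “what-if” simulations mentioned above might look like, here is a stripped-down sketch: sample each task’s duration from a range, compute the critical-path finish time per trial, and report the fraction of trials that hit the deadline. The tasks, duration ranges, and deadline are invented, and networkx stands in for the real graph store.

```python
import random
import networkx as nx

# Invented numbers: (min, max) duration in days for each task.
tasks = {"design": (2, 5), "build": (4, 9), "test": (1, 4)}
deps = [("design", "build"), ("build", "test")]
DEADLINE, TRIALS = 14, 10_000

G = nx.DiGraph(deps)

def project_length(durations):
    """Critical-path finish time for one sampled set of task durations."""
    finish = {}
    for node in nx.topological_sort(G):
        start = max((finish[p] for p in G.predecessors(node)), default=0.0)
        finish[node] = start + durations[node]
    return max(finish.values())

hits = sum(
    project_length({t: random.uniform(lo, hi) for t, (lo, hi) in tasks.items()})
    <= DEADLINE
    for _ in range(TRIALS)
)
print(f"P(hit deadline) ≈ {hits / TRIALS:.0%}")
```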


Jarjis’s lookahead points to a hope for AI as a stress reducer and knowledge crystallizer – something that helps lower-performing individuals work at an expert level and unlock their creative potential. The technology needs to be shaped to improve human life, not to build barriers to growth, which is why the team focuses on observing how users build a better way of working and living with AI that is genuinely meaningful.


As we wrapped up the interview, I asked Jarjis Imam one final question: what’s the one thing he wants users to know about ActionBoard’s intelligence?

He replied, “That it was built to learn and improve with you. The more you use it, the more you can execute actions in your world. We poured our own intelligence into it so that it can amplify yours.” I think that sentiment beautifully captures the essence of ActionBoard. In our final blog of this series, we’ll step back and consider the big picture – what systems like ActionBoard mean for the future of work and how we envision the partnership between humans and AI unfolding in the years to come.


🔮 Ready to turn ideas into action?

Join thousands of users transforming the way they work with AI-powered boards. Whether you’re leading a team or organizing your life – ActionBoard helps you move forward with clarity.

Build Your First Board (Free)
GenAi