My AI-native dev environment
Lately, I have been in the groove with AI-native engineering tools and iterating on my dev environment. My main motivation is hands-on research for a startup concept: arriving at a compelling CX for it as I refine the idea through customer interviews.
For full transparency: I coded heavily in C++ for several years after graduation and am very familiar with cloud infra, but after I became a PM leader, I went many years without writing any serious application code. Despite the hiatus, I now find myself a more efficient and productive coder than I was at the peak of my undergrad/early SWE career!
In my current setup, I use:
Languages: Python, TypeScript
Tools: Cursor IDE with Claude 3.7 Sonnet, Vercel V0, GitHub, CLI
Assistants: ChatGPT (including deep research), Gemini for Google Cloud (especially for GCP infra)
Managed services and database: Firebase, Cloud Run, Firestore (NoSQL)
Model APIs: OpenAI APIs
Evals: OpenAI Evals, Ragas
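To show how the storage piece of this stack fits together, here is a minimal sketch of writing a document to Firestore via the Firebase Admin SDK. It assumes Application Default Credentials are configured; the collection and field names are hypothetical, purely for illustration.

```python
import firebase_admin
from firebase_admin import firestore

# Initialize with Application Default Credentials (works on Cloud Run,
# or locally after `gcloud auth application-default login`).
firebase_admin.initialize_app()
db = firestore.client()

# Hypothetical collection and fields, just to show the write path.
db.collection("interviews").document("acme-2025-01").set({
    "customer": "Acme Co",
    "top_pain_point": "onboarding takes too long",
    "status": "reviewed",
})
```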
Choices and learnings:
I stayed away from agentic frameworks, as they are confusing and seemed like overhead. Using Python libraries and model APIs directly, along with the simple workflow and agentic patterns Anthropic published in Dec'24, has accommodated almost all the logic I have needed so far.
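As an example of what "no framework" means in practice, here is a minimal sketch of the prompt-chaining pattern using the OpenAI Python SDK directly. The model name, gate condition, and interview task are illustrative assumptions, not from my actual codebase.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def llm(prompt: str) -> str:
    """One direct model call; no framework in between."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def summarize_interview(transcript: str) -> str:
    """Prompt chaining: each step's output feeds the next,
    with a simple programmatic gate between steps."""
    notes = llm(f"Extract the key pain points from this interview:\n{transcript}")
    if "pain point" not in notes.lower():  # crude gate; tune per use case
        notes = llm(f"List the pain points explicitly, one per line:\n{transcript}")
    return llm(f"Write a 3-sentence summary for a product brief:\n{notes}")
```

The whole "orchestration layer" is plain Python control flow, which keeps it easy to read, test, and step through in a debugger.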
Once I have thought through the agentic/workflow logic I want to build, I start from the closest pattern in a credible repo I can find (ChatGPT is able to help here).
I use Cursor (heavily) for sample-codebase explanation, syntax error correction, formatting, imports, and local testing. Recently, I also started using Agent mode to draft end-to-end logic, including data models and functions, complete with retry/error handling.
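For a flavor of the retry/error handling I mean, here is a minimal hand-written sketch (not Cursor's actual output) around a direct OpenAI call. The model name, exception set, and backoff schedule are assumptions to tune per use case.

```python
import time
from openai import OpenAI, APIError, RateLimitError

client = OpenAI()

def call_with_retries(prompt: str, max_attempts: int = 3) -> str:
    """Retry transient API failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
                timeout=30,  # per-request timeout in seconds
            )
            return resp.choices[0].message.content
        except (RateLimitError, APIError):
            if attempt == max_attempts:
                raise  # surface the error after the last attempt
            time.sleep(2 ** attempt)  # back off: 2s, 4s, ...
```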
I iterate on a frontend/UX concept directly in Vercel V0 and get to a reasonable-looking first draft, usually within a few prompts. From there, I prefer downloading the code and iterating on it within the Cursor IDE.
Debugging deployment issues with Cursor has been counterproductive; I found myself going in circles. Since I am on GCP infra, I have found Gemini for Google Cloud (available across multiple surfaces, e.g., the Cloud console and Firebase console) to be directionally more helpful in resolving issues faster.
Observations:
While not as useful to me given my flow and current goals, I was curious about engineering collaboration features. I found the PR summary and automatic code review features to be better with Copilot, given its deeper integration with the GitHub platform.
I have found all AI-for-code tools to be lacking in cloud-native architecture and system design. Moreover, the IDE as a UX for system design thinking and decisions is full of friction points: missing context from sources other than the codebase, and no good way to iterate, share, and collaborate on designs. I have tried tools other than Cursor (like Copilot and Augment Code) for system design and found the results to be mostly the same. This is not surprising, since the underlying models are trained specifically for code completion/generation/explanation/review tasks.