
Vibe coding with Gemini CLI

For some time I have wanted to build a data explorer for the Valkey cache, and this past long weekend afforded me some time to start the project. Instead of building the application from scratch, I decided to give "vibe coding" a try. After all, there is a lot of excitement around it right now, and I wanted to see for myself how effective it could be.

I fired up gemini-cli in an empty directory and gave it simple instructions for what I wanted to accomplish: a simple key-value explorer for open source Valkey that runs locally on my machine, can load and save CSV data files, and offers the ability to change keys and values interactively. I asked for the app to be built in Python, using the Tcl/Tk libraries for the graphical user interface. I chose Tcl/Tk for a couple of reasons: it is available on multiple platforms, and it has been a challenge to use on macOS since Apple deprecated its bundled support for it.
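To give a sense of the shape the generated application took, here is a minimal sketch of its core. This is my own reconstruction rather than the generated code, and it assumes the valkey-py client package (pip install valkey) and a local Valkey server listening on the default port 6379.

import tkinter as tk

import valkey  # assumption: the valkey-py client package


class Explorer(tk.Tk):
    """Minimal key-value explorer: keys on the left, value on the right."""

    def __init__(self):
        super().__init__()
        self.title("Valkey Explorer")
        # Assumes a local Valkey server on the default port.
        self.client = valkey.Valkey(host="localhost", port=6379,
                                    decode_responses=True)
        self.keys = tk.Listbox(self, width=40)
        self.keys.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)
        self.value = tk.Text(self, width=40, height=10)
        self.value.pack(side=tk.RIGHT, fill=tk.BOTH, expand=True)
        self.keys.bind("<<ListboxSelect>>", self.show_value)
        self.refresh()

    def refresh(self):
        """Reload the key list from the server."""
        self.keys.delete(0, tk.END)
        for key in sorted(self.client.scan_iter(match="*")):
            self.keys.insert(tk.END, key)

    def show_value(self, _event):
        """Display the value of the currently selected key."""
        selection = self.keys.curselection()
        if not selection:
            return
        key = self.keys.get(selection[0])
        self.value.delete("1.0", tk.END)
        self.value.insert(tk.END, self.client.get(key) or "")


if __name__ == "__main__":
    Explorer().mainloop()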

The CLI started cranking: it created the directory structure for the application and asked for permission to download and install the relevant Python libraries. After ten minutes of simple questions and answers, iterations on the interface, and testing that every change produced the correct UI and results, the application was complete. I was impressed with the results: the application did what I wanted it to do, and the code was clear and well documented, similar to professionally written code.
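The CSV load and save features were equally straightforward in Python. A hedged sketch of what that part might look like, assuming two-column (key, value) files and the client object from the sketch above:

import csv
from tkinter import filedialog


def load_csv(client):
    """Read key,value rows from a user-chosen CSV file into Valkey."""
    path = filedialog.askopenfilename(filetypes=[("CSV files", "*.csv")])
    if not path:
        return
    with open(path, newline="") as f:
        for key, value in csv.reader(f):  # assumes exactly two columns per row
            client.set(key, value)


def save_csv(client):
    """Write every key and its value to a user-chosen CSV file."""
    path = filedialog.asksaveasfilename(defaultextension=".csv")
    if not path:
        return
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for key in client.scan_iter(match="*"):
            writer.writerow([key, client.get(key)])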

Encouraged by the outcome, I decided to repeat the experiment, this time using a native language. I chose C++, a language I am very familiar with, and one whose challenges for building a cross-platform GUI application I know well. The same cycle repeated as in the Python exercise, but this time it did not go as smoothly or as quickly.

The CLI chose SFML as the cross-platform library for the GUI, and built the directory structure and CMake files for the project. It then started to generate the code for the UI. Along the way it hit multiple compilation errors, mostly due to type mismatches and incorrect function argument order. What was fascinating was that the CLI discovered errors in the library documentation: by examining the header files, it uncovered the argument mismatches and fixed them. Even so, it took many iterations to resolve these errors.

Then it hit another wall generating the dialog boxes for loading and saving the key-value pairs in CSV format. It downloaded the requisite font files and an open source dialog-box library compatible with SFML, but it struggled with font registration and with integrating that library with SFML. After a few failed attempts, I gave it a hint: try a different UI library, wxWidgets.

The CLI cleaned up the directory structure, regenerated the CMake files, and proceeded iteratively through the build-test cycle until the application was complete. It experienced some hiccups along the way with type mismatches, type casting, and ID generation, but the end result was a fully functional application that satisfied my requirements, with clear, documented code and minimal intervention from me beyond a few very minor code changes. The exercise took longer than the Python one, but it still took far less time than if I had built everything from scratch myself.

The exercise and the results got me very excited about the potential of vibe coding. Imagine how many people have an idea for an application but don't pursue it because they lack programming expertise or the ability to create supporting assets such as graphical user interfaces, sound, video, or other creative material. With a generative AI coding tool, they can now pursue these ideas and, with minimal programming knowledge, build a prototype they can experiment with and use to refine them. Programming skills, creative assets, and the toil of boilerplate code would no longer be barriers.

Or imagine a seasoned developer who wants to accelerate building a prototype, reduce the toil of learning technologies they are not very familiar with, or speed through the tedious parts of the task.

The potential is boundless, and so is the productivity upside!

