The Git Log Never Lies

What I Actually Look At Before I Read a Single Line of Code

Most hackathon judging advice talks about rubrics and presentation skills. Nobody talks about the forensic layer – the signals hiding in plain sight that tell you more about a project than any demo ever could.

I recently judged a batch of hackathon projects – ten submissions, all built in 72 hours, all solving variations of the same problem. Every single one had a polished README. Most had live demos. A few had impressive UI animations that would make any audience clap.

But before I read a single line of application code, I already had a strong sense of which projects were built with real engineering discipline and which were assembled in a sprint of copy-paste. Not because of intuition. Because the metadata told me everything.

The git log. The package manifest. The dependency list. The file names that should not exist. These artifacts don’t lie. They can’t. They weren’t written to impress a judge – they were left behind by the process of building the thing. And once you learn to read them, you can’t unsee what they reveal.

The Commit History Is a Timeline of Honesty

The first thing I check is git log --oneline. Not the code. Not the README. The commit history.

A project with fifteen commits that narrate a story – init project, add auth, wire up RAG retrieval, fix hydration mismatch, integrate Gemini with fallback – is a project built by someone who was thinking as they worked. You can see the decisions. You can see where they hit a wall, backed up, and tried something different. That’s engineering.

A project with two commits – Initial commit and Published your App – tells a different story. Six thousand lines of code appeared in a single commit. That doesn’t mean the code is bad. But it does mean I have no evidence of iteration, no evidence of debugging, no evidence that a human sat with a problem and worked through it. The entire development process is invisible.

In the batch I judged, the correlation was nearly perfect. Projects with more iterative commits had better architecture, better error handling, and fewer broken code paths. Projects with one or two commits had unused dependencies, leftover template names, and features that existed in the README but not in the code.

The commit history doesn’t tell you if the code is good. It tells you if the process was real.
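The check itself takes seconds to script. Here is a minimal sketch of the kind of heuristic I mean: the commit-count threshold and the list of "generic" subjects are my own hypothetical choices, not a standard, and this assumes git is installed and the path is a repository.

```python
import subprocess

def commit_summary(repo_path="."):
    """Return the repo's one-line commit subjects, newest first.

    Thin wrapper around `git log --oneline`.
    """
    out = subprocess.run(
        ["git", "log", "--oneline"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def looks_iterative(subjects, min_commits=10):
    """Crude heuristic: enough commits, and not all of them generic.

    Each subject line looks like "abc1234 add auth"; we drop the hash
    and compare the message against a small blocklist of filler subjects.
    """
    generic = {"initial commit", "update", "wip", "published your app"}
    meaningful = [
        s for s in subjects
        if s.split(" ", 1)[-1].strip().lower() not in generic
    ]
    return len(subjects) >= min_commits and len(meaningful) >= min_commits // 2
```

Feeding `commit_summary()` into `looks_iterative()` gives a rough first read on a repository. It is a triage tool, not a verdict; nothing replaces actually reading the log.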

The Package Manifest Remembers What the Developer Forgot

Every project has a package.json or pyproject.toml or equivalent. Most judges skip it. I read it before the source code.

One project I reviewed had a package.json with the name field still set to “react-example”. The default template name. Nobody changed it. That one string told me this project was likely scaffolded from a starter template, and the developer never went back to clean up. It’s a small thing. But small things compound.

In the same batch, another project listed @google/genai as a dependency. I searched the entire source code. It was never imported. Never used. It acted as a ghost dependency – probably suggested by an AI assistant during setup, accepted without question, and never removed. The project worked fine without it. But its presence told me something about how carefully the codebase was maintained.
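Spotting a ghost dependency does not require special tooling. A hedged sketch of the check I ran by hand: the regex only catches `import ... from` and `require()` forms, so treat it as a starting point rather than a linter.

```python
import json
import re

def ghost_dependencies(package_json_text, source_texts):
    """Return declared dependencies that no source file ever imports.

    Heuristic: look for each package name inside an import/require
    statement. Scoped packages like @google/genai are matched literally
    via re.escape. Side-effect imports (import "pkg") are not covered.
    """
    manifest = json.loads(package_json_text)
    ghosts = []
    for dep in manifest.get("dependencies", {}):
        pattern = re.compile(
            r"""(?:import\s.*from\s+|require\()\s*['"]""" + re.escape(dep)
        )
        if not any(pattern.search(src) for src in source_texts):
            ghosts.append(dep)
    return ghosts
```

Run against every source file in a project, this surfaces exactly the kind of never-imported package described above in a few seconds.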

Compare that with a project where the pyproject.toml had pinned dependency versions, proper classifiers, a [project.scripts] entry point, dev dependencies separated cleanly, and ruff configured for linting. That manifest was written by someone who intended to ship software, not just demo it.
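For contrast, the shape of that well-kept manifest looked roughly like this. The names and versions here are illustrative placeholders, not the actual project's:

```toml
[project]
name = "log-analyzer"              # hypothetical project name
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "httpx==0.27.0",               # pinned, not open-ended
    "pydantic==2.7.1",
]
classifiers = [
    "Development Status :: 3 - Alpha",
    "Programming Language :: Python :: 3.11",
]

[project.scripts]
log-analyzer = "log_analyzer.cli:main"   # real CLI entry point

[project.optional-dependencies]
dev = ["pytest==8.2.0", "ruff==0.4.4"]   # dev tools kept out of runtime deps

[tool.ruff]
line-length = 100
```

Every section here answers a question a future maintainer would ask. That is what intent looks like in a manifest.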

The manifest is the project’s birth certificate. It tells you whether the developer was building something real or assembling something presentable.

The Files That Should Not Exist

This is the one that surprises people.

In multiple projects, I found files like CLAUDE.md or AGENTS.md sitting in the repository root. These are configuration files for AI coding assistants – instructions that tell the AI how to behave when generating code for this project. Their presence isn’t inherently negative. AI-assisted development is the reality of current software engineering, and I have no issue with it.

But leaving them in the submitted repository is a signal. It says: the developer did not review what was in their repo before submitting. They didn’t walk through the file tree. They didn’t ask themselves what a judge would see. It’s the equivalent of submitting a report with the editor’s tracked changes still visible.

In one project, I found a replace.js file – a script that bulk-converted light-theme CSS classes to dark-theme classes across every source file. That script told a clear story. The UI was generated in light mode, probably by an AI, and then a find-and-replace pass converted it to dark mode afterward. The output looked fine. But the process was visible to anyone who looked.

I also found a project that claimed to detect security vulnerabilities – with its API key hardcoded in plain text on line 13 of the server file, committed to a public GitHub repository. The irony was hard to miss: if the developer had run their own tool on their own codebase, it would have flagged the key.
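Catching that kind of leak is exactly the check the project's own tool should have run. A deliberately tiny sketch of the idea: real scanners such as gitleaks or trufflehog use hundreds of rules plus entropy analysis, and the patterns below are just a few common public key shapes.

```python
import re

# A handful of well-known key shapes. This is a sketch, not a scanner.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                    # OpenAI-style keys
    re.compile(r"AIza[0-9A-Za-z_\-]{35}"),                 # Google API keys
    re.compile(r"""API_KEY\s*=\s*['"][^'"]{16,}['"]"""),   # generic hardcode
]

def find_hardcoded_secrets(text):
    """Return (line_number, line) pairs that match a secret pattern."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits
```

Fifteen lines of Python would have caught the hardcoded key before it reached a public repository, line number and all.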

Tests Are Not Optional – They Are a Worldview

There is a clean dividing line in hackathon submissions. On one side are projects with automated tests. On the other side are projects that call manual scripts “tests.”

One project claimed 141 tests. I checked. There were actually 141 test functions across 15 test files, with proper fixtures, mocked API calls, edge case coverage, and CLI integration tests. That number was real. Another project had a file called test.js that contained a single fetch() call to the local server with zero assertions. That’s not a test. That’s a health check someone ran manually once.
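The difference is easy to show side by side. A sketch of what separates the two, with hypothetical function names; the real projects used pytest fixtures and mocked API calls, which I have simplified to bare asserts here.

```python
# The fake version: exercises a code path, verifies nothing.
# If the server returned garbage, this "test" would still pass.
def fake_test(fetch_status):
    fetch_status("/health")   # no assertion; any response "passes"

# A toy function under test: map an HTTP status code to a category.
def classify_status(code):
    if 200 <= code < 300:
        return "ok"
    if 400 <= code < 500:
        return "client_error"
    return "other"

# The real version: states expectations and fails loudly when one breaks.
def test_classify_status():
    assert classify_status(200) == "ok"
    assert classify_status(404) == "client_error"
    assert classify_status(503) == "other"
```

The fake version can only ever tell you the code ran. The real version tells you the code is right, and keeps telling you every time it runs.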

A third project had a test file that imported a module called configLoader, which did not exist anywhere in the codebase. The tests would crash if you ran them. They were written to look complete but were never actually executed.

In my experience managing engineering teams, the presence or absence of tests tells you something deeper than code quality. It tells you whether the developer believes their code should be verified, or whether they believe their code is correct because they wrote it. That distinction matters far beyond hackathons.

What the README Promises vs What the Code Delivers

A README is marketing. The codebase is the product. The gap between them is where credibility lives.

One project I evaluated branded itself as “AI-powered” and claimed features like “Semantic Analysis” and “Hallucination Detection.” I searched the entire codebase for any AI integration – API calls, model imports, inference pipelines. There were none. The analysis engine was entirely regex-based pattern matching. Useful, but not AI. The README was writing checks the code couldn’t cash.

Another project had a button labeled “Execute Fix” on every diagnostic card. Clicking it triggered a satisfying three-step animation – INIT_REPAIR... PATCHING_DATA... FIXED_SUCCESS. But the underlying code was just a setTimeout that updated a UI state variable after three seconds. Nothing was fixed. Nothing was even attempted. The animation was the feature.

The best project in the batch had the opposite problem – it under-promised. Its README was simple and accurate. Every feature it listed was implemented. The demo matched the description. And critically, when things were missing, the README said so. There was a Roadmap section that honestly listed what wasn’t built yet. That honesty was worth more to me than any polished animation.

The Pattern I Keep Seeing

After evaluating all ten projects, a pattern became clear. The strongest submissions shared a set of traits that had nothing to do with the rubric categories. They had iterative commit histories. Clean dependency manifests. Automated tests that actually ran. Error handling that degraded gracefully instead of crashing. README claims that matched the code. And a clear absence of leftover scaffolding artifacts.

The weakest submissions also shared traits. Single-commit histories. Unused dependencies. No tests or broken tests. Features that existed only in the UI layer with no backend logic. And a constant gap between what was described and what was built.

None of these signals are in any judging rubric I’ve seen. No hackathon asks judges to check git log or read package.json. But they should. Because these artifacts tell you something the demo cannot: whether the project was engineered or assembled.

What This Means Beyond Hackathons

I manage engineering teams. I review code daily. And the exact same signals that separate strong hackathon projects from weak ones separate strong production codebases from fragile ones.

A developer who leaves unused dependencies in production will leave unused feature flags too. A developer who doesn’t run their own tests will not catch regressions before they ship. A developer who writes a README that doesn’t match the code will write documentation that misleads the next team member who reads it.

The reverse is also true. A developer who takes the time to clean their manifest, write meaningful commit messages, and delete scaffolding artifacts is showing the same discipline that produces reliable software. These habits don’t switch on and off between a hackathon and a day job. They are either part of how someone works, or they are not.

That hackathon showed me something I won’t forget. The projects that respected the invisible parts of software development – the parts no judge was supposed to see – were the same projects that delivered the best visible results.

The git log never lies. You just have to read it.

The best code isn’t the code that impresses at first glance. It’s the code that still holds up when you look at everything around it.
