Lately, I’ve been obsessed with Agentic Coding and built a small project called Codez, which runs the Codex CLI directly inside GitHub Actions. It’s been working quite well. Here’s an example of a workflow it supports:
Discuss with AI first → break down the problem → tackle each part one by one
Take this code refactoring request as an example: Issue #304
Step 1: State the Requirement
In the issue, I clearly described what I wanted to do, including points to discuss and related background info. The title and content of the issue become part of the initial prompt for the agent.
Then, I triggered the agent directly in the comments:
```bash
/codex     # Keyword to wake up the agent
--no-pr    # Add this flag to prevent it from directly creating a PR. I want to clarify the problem first
--fetch    # This flag lets the agent fetch the content from the link for offline use
https://google.github.io/styleguide/tsguide.html   # Relevant documentation to help the agent make informed decisions
```
With that, the agent gets to work. Even though it doesn’t have network access during runtime, it can still give some solid initial ideas using the context (codebase, content cached from the provided links, and the issue itself).
Step 2: Automatically Break Down into Actionable Tickets
Next, I triggered the agent again. This time to generate a series of issues:
```bash
/codex
--create-issues   # The model generates a JSON of titles and descriptions, then uses the GitHub API to create issues
--full-history    # Includes all previous comments in this thread in the context window
```
Note: Only flags in the current comment are recognized. The agent then organizes the discussion into multiple issues. Each issue can be worked on independently.
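Under the hood, that step boils down to something like the sketch below. This is not Codez’s actual code, just an illustration of the mechanism; the repo name, token handling, and JSON shape are placeholders I made up:

```python
import json
import os

import requests

REPO = "owner/repo"                 # hypothetical repository
TOKEN = os.environ["GITHUB_TOKEN"]  # provided by the GitHub Actions runtime

# Example of what the model's JSON output might look like.
model_output = '[{"title": "Refactor module A", "body": "Details..."}]'

# Create one GitHub issue per entry via the REST API.
for issue in json.loads(model_output):
    resp = requests.post(
        f"https://api.github.com/repos/{REPO}/issues",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={"title": issue["title"], "body": issue["body"]},
    )
    resp.raise_for_status()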
These newly created tickets can now be handled one by one. Like this:
Final Step: Let the Agent Do the Work
The entire workflow feels like genuine team collaboration: discuss first, break down the problem, then work in parallel. The difference is that humans are replaced by agents.
Some say Agentic Coding is like catnip for programmers, totally addictive. It’s true.
Let me just say it straight: prompt injection isn’t a bug, it’s a feature.
If you really think there’s some perfect solution to stop all “injections,” then in the end all you’re really doing is policing the context window. Kinda like a censorship board, separating out anything it thinks is harmful.
Why is that?
Because these so-called “injected” prompts are just part of the context, like everything else. There’s no real difference between text A and text B; once you concatenate them into AB, unless you filter the input, there’s not much else you can do. Sure, you can train the model to “self-censor”; that’s doable. But honestly, that makes me feel kind of sad.
As humans, when you read a line like “The bright moon shines between the pines, clear spring flows over the stones” you might picture a quiet forest, a gentle breeze, moonlight. Or when you hear Vivaldi’s Winter, maybe it reminds you of that delicate tension in Portrait of a Lady on Fire. Even under strict rules or social norms, those feelings still burn underneath.
Or maybe we’re just chatting and someone casually drops a phrase like “move mountains like Yu Gong”, or some internet meme, suddenly a flood of meaning and context rushes in.
Is that prompt injection? Technically, yeah.
But that’s also what makes language so powerful. If you try to restrict the context window too tightly, you’re basically cutting off the soul of language.
Now, when I say it’s “unsolvable,” I don’t mean there’s zero technical hope. You can definitely use regex to strip out what you think is harmful, or train a small model to monitor input.
But I want to ask you, why do we need to do that?
We’ve thrown everything, all the words in the world, into training these large models. And still, many people see them as nothing more than small kids. Just like how, to some parents, a kid is never fully grown. Maybe these models will always be “kids.” Maybe we all are.
The human brain might be structurally ready at birth, but what really shapes it is training, experience. That’s why fine-tuning matters so much. It’s what decides whether a model can tell when a “dangerous” prompt is actually dangerous. Though honestly, the word “judgment” feels almost too subjective.
We’re so eager to make these models human-like, but we’ve never really stopped to think: what is a human?
We want these models to understand ethics, to follow the rules, and yet we forget that we humans are still struggling in the gray areas ourselves.
A “paradigm” is not just a form—it’s a mode of thinking.
Programming is, in a sense, a branch of the writing profession: programmers interact with tens of thousands of lines of code, and naturally they rely on paradigms for guidance. It’s like having a built-in “system prompt”—you don’t need to explain “what object-oriented programming is” to your colleagues every time, nor do you have to argue endlessly about the details of design patterns. There’s an unspoken understanding that you’re “playing by the rules” in a certain context.
After all, the human “mental window” is limited in size. Collaborating without a paradigm is like chickens talking to ducks—utter miscommunication. If you insist on piling all your functions into a single file with no abstraction or categorization, the consequence is writing documentation until you break down.
But now that artificial intelligence is part of the programmer’s workflow, paradigms are shifting.
First, the way we handle code has transformed from “manual carving” to “semantic understanding.” Those obedient little “coding assistants” can automatically modify your code based on a single instruction. So under this transformation, do traditional coding paradigms still matter?
It depends on your perspective.
They do matter—because large language models deal in language. The clearer the linguistic structure, the easier it is to extract and recreate information. It’s the difference between a pile of unclassified documents and a meticulously organized file cabinet. The information entropy is not the same, and naturally, neither is the efficiency.
But from another angle, maybe they’re no longer that important. After all, paradigms were originally for humans to read. Now, LLMs can develop their own paradigms—or even revive those long-abandoned “classical practices.” For instance, directly manipulating binary code: who needs layers of abstraction? From a model’s perspective, that quaint notion of “human readability” might be completely unnecessary.
Second, there’s the transformation of the programmer’s role. Whether we accept it or not, this wave of change has quietly pushed us into a new position. The days of hand-crafting code were like seasoned artisans refining woodwork—there was a kind of pride and dignity woven into every character.
But from a business standpoint, when websites and apps are the final product, the programmer is just one link in the industrial chain. Whether you code by hand or use AI to generate it, the user doesn’t care. Our mindset must shift accordingly. I am not just a code porter or a keystroke operator—I am an engineer solving problems. Writing code is a means to that end, not the end itself.
To me, the value of paradigms lies in helping us organize our thoughts, so that our future selves—or others—can quickly get into the zone when reading our code. This was once a form of self-rescue for programmers. But now, LLMs effortlessly surpass us. They have stronger memory, faster analysis, and even if your code is a mess, they can still make sense of it.
So, do we still need paradigms in the future?
Perhaps the real question is: are we willing to hand over our thinking to the model, or do we want to preserve a trace of human logic?
Paradigms are road signs in the world of programming.
But when the roads are no longer built by humans—does that mean we are no longer travelers, or just moving where the machine tells us to go?
Accessing the Steam Deck’s file system remotely can be incredibly useful.
Imagine using Raycast to quickly open a recent VS Code project, and one of them is a folder on the Steam Deck via an SSH connection. With just one click, you're connected and ready to go.
Prerequisites
Before getting started, make sure you have the following:
Steam Deck: Ensure your Steam Deck is powered on and connected to the same Wi-Fi network as your laptop.
Computer: A Mac (or possibly a PC) with Visual Studio Code installed.
SSH Enabled on Steam Deck: SSH is not enabled by default. You will need to enable it through the Steam Deck’s desktop mode.
VS Code Extensions: Install the “Remote - SSH” extension on your VS Code.
Step 1: Enable SSH on the Steam Deck
Press the Steam button, navigate to Power, and switch to Desktop mode. Once in desktop mode, open the KDE application launcher and search for Konsole (terminal).
Start the SSH server with:
```bash
sudo systemctl start sshd
```
Additionally, enable SSH to start on boot:
```bash
sudo systemctl enable sshd
```
Verify your IP address. In the terminal, type:
```bash
ip a
```
Note the IP address (e.g., 192.168.1.xxx or 10.0.0.xxx) that corresponds to your Wi-Fi connection.
Step 2: Create SSH Key Pairs (Recommended, and Required for a Smooth Raycast Flow)
Creating SSH key pairs can enhance the security of your SSH connection by using public-key cryptography instead of a password.
Generate SSH Key Pair on the Steam Deck:
```bash
ssh-keygen -t rsa -b 4096 -f ~/.ssh/sd_rsa
```
This command will save the key pair at the specified path (~/.ssh/sd_rsa).
This process will create two files:
“sd_rsa”: This is your private key. Keep this file secure and find a way to copy it to the Mac. Be creative; for example, you can use GoodReader to set up a quick WiFi Server.
“sd_rsa.pub”: This is your public key. This file can be shared and will stay on the Steam Deck.
For added security, you can:
Disable password authentication on the Steam Deck:
```bash
sudo vim /etc/ssh/sshd_config
```
Find the line that says “#PasswordAuthentication” and change it to:
```
PasswordAuthentication no
PubkeyAuthentication yes
```
Save the file (:wq) and restart the SSH service:
```bash
sudo systemctl restart sshd
```
Ensure your public key is added to the “authorized_keys” file (do this before disabling password logins, or you may lock yourself out):
```bash
cat ~/.ssh/sd_rsa.pub >> ~/.ssh/authorized_keys
```
Step 3: Configure SSH Access in VS Code
Ensure you have the Remote - SSH extension installed. If not, you can find it in the VS Code Marketplace. Copy the “sd_rsa” private key to your Mac and set the correct permissions (chmod 600 if necessary).
Press “Cmd+Shift+P” on Mac to open the command palette.
Type “Remote-SSH: Open SSH Configuration File…” and select it.
Choose the SSH configuration file you want to edit (usually “~/.ssh/config”).
Add a new entry to the configuration file in the following format:
```
Host steamdeck
  HostName 192.168.1.xxx
  User deck
  IdentityFile ~/.ssh/sd_rsa
```
Replace “192.168.1.xxx” with the actual IP address of your Steam Deck.
Save and close the configuration file.
Step 4: Connect to SSH in VS Code
Again, open the command palette (“Cmd+Shift+P”).
Type “Remote-SSH: Connect to Host…” and select the entry you just added.
The first time you connect, you may be prompted to accept the host’s fingerprint.
You’ll need to grant VS Code permission to access the local network. Go to System Settings > Privacy & Security > Local Network and ensure VS Code is listed and has access enabled.
By following the above steps, you can conveniently access and manage your Steam Deck’s file system using the powerful toolset provided by VS Code over an SSH connection.
Step 5: Shortcut Using Raycast and VS Code Extension
If you haven’t already, download and install Raycast from Raycast’s official website. Open Raycast, go to the “Extensions Store,” search for “Visual Studio Code,” and install the extension.
Open Raycast with “Option+Space”.
Type “VS Code” and you will see the VS Code Recent Projects command.
Select “deck” (or whatever you named your SSH connection) from the list to quickly open it in VS Code.
A big thanks to the developers who created these amazing tools!
We’ve all experienced the frustration of a poor internet connection. You may recall the disappointment of a large file download failing after 24 hours of waiting. Even worse, discovering that the download is not resumable.
Responsibility for resumable downloads doesn’t rest solely on the client setting the right HTTP headers. It’s equally, if not more, important for the backend to send several headers correctly and implement the associated logic.
While I won’t delve into the detailed implementation in a specific language, understanding the headers discussed below will equip you with the knowledge to easily implement this feature if you wish.
Client
The only aspect you need to focus on is the Range HTTP request header. This header specifies the portions of a resource that the server should return. That’s all there is to it.
```
Range: <unit>=<range-start>-
```
On the client side, the only requirement is to properly implement the Range HTTP request header. This involves using the correct unit and determining the starting point of the range. The server then knows which portion of the file to send. There’s no need to worry about specifying the range end, since the typical use case is resuming and downloading the rest of the file.
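To make this concrete, here’s a minimal resume sketch in Python with the requests library (the URL and file name are placeholders):

```python
import os

import requests

URL = "https://example.com/big_file.bin"  # hypothetical download URL
PATH = "big_file.bin"

# Resume from however many bytes are already on disk.
start = os.path.getsize(PATH) if os.path.exists(PATH) else 0

resp = requests.get(URL, headers={"Range": f"bytes={start}-"}, stream=True, timeout=30)
# 206 Partial Content means the server honored the range;
# a plain 200 means it ignored it and sent the whole file again.
mode = "ab" if resp.status_code == 206 else "wb"
with open(PATH, mode) as f:
    for chunk in resp.iter_content(chunk_size=8192):
        f.write(chunk)
```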
Server
Now, things start to get more complicated.
The ETag (also known as entity tag) HTTP response header serves as an identifier for a specific version of a resource.
```
ETag: "<etag_value>"
```
If your target clients include browsers, then you need to set the ETag. Modern browsers expect to see this value; otherwise, the browser will simply re-download the entire file.
The Content-Range response HTTP header signifies the position of a partial message within the full body message.
Imagine you are downloading a file of 500 bytes, but due to an unstable internet connection, the download is interrupted after only 100 bytes. In this scenario, you would expect the server to send the remaining 400 bytes of the file. Consequently, you would anticipate seeing the appropriate header in the server’s response.
```
Content-Range: bytes 100-499/500
```
Check out MDN to understand those numbers; I won’t explain them here.
The Accept-Ranges HTTP response header acts as a signal from the server, indicating its capability to handle partial requests from the client for file downloads.
Essentially, this header communicates to the client, “Hey, I am capable of handling this, let’s proceed.”
Don’t ask me why; you just need it.
```
Accept-Ranges: <range-unit>
```
I suggest simply using bytes.
```
Accept-Ranges: bytes
```
The Content-Length header signifies the size of the message body, measured in bytes, that is transmitted to the recipient.
In layman’s terms, for a resumed download it is the number of bytes still to be sent, not the full file size.
```
Content-Length: <length>
```
Continuing the example above, the server is going to send the remaining 400 bytes of the file:
```
Content-Length: 400
```
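To tie the server side together, here’s a minimal sketch in Python with Flask (my choice for illustration, not a prescription; the file name and the naive mtime-plus-size ETag are assumptions):

```python
import os

from flask import Flask, Response, abort, request

app = Flask(__name__)
FILE_PATH = "big_file.bin"  # hypothetical file to serve

@app.route("/download")
def download():
    size = os.path.getsize(FILE_PATH)
    etag = f'"{int(os.path.getmtime(FILE_PATH))}-{size}"'  # naive ETag: mtime + size

    range_header = request.headers.get("Range")
    if range_header is None:
        # No Range header: send the whole file with a 200.
        with open(FILE_PATH, "rb") as f:
            data = f.read()
        return Response(data, 200, {"Accept-Ranges": "bytes", "ETag": etag})

    # Parse "bytes=<start>-" or "bytes=<start>-<end>" (suffix ranges not handled here).
    unit, _, rng = range_header.partition("=")
    start_s, _, end_s = rng.partition("-")
    if unit != "bytes" or not start_s.isdigit():
        abort(416)  # Range Not Satisfiable
    start = int(start_s)
    end = int(end_s) if end_s.isdigit() else size - 1
    if start >= size or end >= size:
        abort(416)

    with open(FILE_PATH, "rb") as f:
        f.seek(start)
        data = f.read(end - start + 1)

    # Flask derives Content-Length from the body, i.e. the remaining bytes.
    headers = {
        "Accept-Ranges": "bytes",
        "ETag": etag,
        "Content-Range": f"bytes {start}-{end}/{size}",
    }
    return Response(data, 206, headers)  # 206 Partial Content
```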
This is merely an introduction.
There are many complex considerations to take into account. For instance, when dealing with ETags, you must strategize on how to assign a unique ID to each resource. Additionally, you need to determine how to update the ETag when a resource is upgraded to a newer version.
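One common strategy, and this is just one option rather than the answer, is to hash the resource’s content so the tag changes exactly when the file does:

```python
import hashlib

def compute_etag(path: str) -> str:
    """Derive an ETag from the file's bytes; any new version yields a new tag."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return f'"{h.hexdigest()}"'
```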
Understanding those HTTP headers is a good start.
Before everything else, you need to install the Selenium package, of course.
```bash
pip install selenium
```
Or, if you hate dealing with anti-bot measures, you can just use this instead:
```bash
pip install undetected-chromedriver
```
Then, add the user data directory to the ChromeOptions object. It is the path to your Chrome profile. For macOS, it is located at ‘~/Library/Application Support/Google/Chrome’.
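Here’s a rough sketch of what that looks like with undetected-chromedriver (the path below is the default macOS profile location; adjust it for your machine):

```python
import os

import undetected_chromedriver as uc

options = uc.ChromeOptions()
profile = os.path.expanduser("~/Library/Application Support/Google/Chrome")
options.add_argument(f"--user-data-dir={profile}")  # reuse your real Chrome profile
driver = uc.Chrome(options=options)
```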
Then, you can use the send_keys method to fill in the username and password fields. I add one while loop to wait for the element in case the script runs too fast.
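Something like the sketch below; note I’m using Selenium’s WebDriverWait in place of a hand-rolled while loop, and the element IDs are made up, so swap in your site’s actual locators:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

wait = WebDriverWait(driver, 10)  # wait up to 10 seconds for the element to show up
username = wait.until(EC.presence_of_element_located((By.ID, "username")))
username.send_keys("my_user")
driver.find_element(By.ID, "password").send_keys("my_password")
driver.find_element(By.ID, "login").click()  # hypothetical login button
```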
After logging in, Chrome usually pops up a dialog asking if you want to save the password. It is annoying.
You can try to disable it by adding the --disable-save-password-bubble or --disable-popup-blocking argument to the ChromeOptions object. I don’t think it works. But you can try.
In the end, I just used a hack: open a new tab and immediately close it, and the popup disappears.
```python
import time

# open a new tab
driver.execute_script("window.open('','_blank')")
time.sleep(1)  # 1 second wait is enough I guess
driver.switch_to.window(driver.window_handles[1])

# say goodbye to the new tab
driver.close()

# now switch back to the original tab
driver.switch_to.window(driver.window_handles[0])
```
That’s it.
Oh, one more thing.
Adding a user-agent to the ChromeOptions object is also a good idea. And please do not forget to specify version_main for the driver to match your current Chrome version.
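For example (the user-agent string is only a sample, and 120 stands in for whatever major version of Chrome you actually have installed):

```python
import undetected_chromedriver as uc

options = uc.ChromeOptions()
options.add_argument(
    "--user-agent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
)
driver = uc.Chrome(options=options, version_main=120)  # match your installed Chrome
```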
Raycast is a productivity tool for macOS. It allows you to quickly access files, folders, and applications. It’s great, but only available on macOS. If you already use Raycast, you know how useful it is. If you don’t, you should give it a try if you have a Mac.
For daily work, I also use Windows, and I was trying to implement a similar workflow on Windows. The thing I missed the most was the ability to search and open previously used workspaces in VS Code or remote machines with a few keystrokes.
You can guess my excitement when I found out about Microsoft PowerToys.
OK.
Enable VS Code search in the settings of the PowerToys Run utility.
Then, you can use the shortcut Alt + Space to search for your workspaces.
```
{ THE_WORKSPACE_NAME_YOU_WANT_TO_OPEN
```
Now I have to find the equivalent of zsh-autosuggestions on Windows. Wish me luck.