A “paradigm” is not just a form—it’s a mode of thinking.

Programmers, as a branch of the writing profession, interact with tens of thousands of lines of code, and naturally they rely on paradigms for guidance. It’s like having a built-in “system prompt”—you don’t need to explain “what object-oriented programming is” to your colleagues every time, nor do you have to argue endlessly about the details of design patterns. There’s an unspoken understanding that you’re “playing by the rules” in a given context.

After all, the human “mental window” is limited in size. Collaborating without a paradigm is like chickens talking to ducks—utter miscommunication. If you insist on piling all your functions into a single file with no abstraction or categorization, the consequence is writing documentation until you break down.

But now that artificial intelligence is part of the programmer’s workflow, paradigms are shifting.

First, the way we handle code has transformed from “manual carving” to “semantic understanding.” Those obedient little “coding assistants” can automatically modify your code based on a single instruction. So under this transformation, do traditional coding paradigms still matter?

It depends on your perspective.

They do matter—because large language models deal in language. The clearer the linguistic structure, the easier it is to extract and recreate information. It’s the difference between a pile of unclassified documents and a meticulously organized file cabinet. The information entropy is not the same, and naturally, neither is the efficiency.

But from another angle, maybe they’re no longer that important. After all, paradigms were originally for humans to read. Now, LLMs can develop their own paradigms—or even revive those long-abandoned “classical practices.” For instance, directly manipulating binary code: who needs layers of abstraction? From a model’s perspective, that quaint notion of “human readability” might be completely unnecessary.

Second, there’s the transformation of the programmer’s role. Whether we accept it or not, this wave of change has quietly pushed us into a new position. The days of hand-crafting code were like seasoned artisans refining woodwork—there was a kind of pride and dignity woven into every character.

But from a business standpoint, when websites and apps are the final product, the programmer is just one link in the industrial chain. Whether you code by hand or use AI to generate it, the user doesn’t care. Our mindset must shift accordingly. I am not just a code porter or a keystroke operator—I am an engineer solving problems. Writing code is a means to that end, not the end itself.

To me, the value of paradigms lies in helping us organize our thoughts, so that our future selves—or others—can quickly get into the zone when reading our code. This was once a form of self-rescue for programmers. But now, LLMs effortlessly surpass us. They have stronger memory, faster analysis, and even if your code is a mess, they can still make sense of it.

So, do we still need paradigms in the future?

Perhaps the real question is: are we willing to hand over our thinking to the model, or do we want to preserve a trace of human logic?

Paradigms are road signs in the world of programming.

But when the roads are no longer built by humans—does that mean we are no longer travelers, or just moving where the machine tells us to go?


Accessing the Steam Deck’s file system remotely can be incredibly useful.

Imagine using Raycast to quickly open a recent VS Code project, where one of those projects is a folder on the Steam Deck reached over an SSH connection. With just one click, you're connected and ready to go.

Prerequisites

Before getting started, make sure you have the following:

  • Steam Deck: Ensure your Steam Deck is powered on and connected to the same Wi-Fi network as your laptop.
  • Computer: A Mac (or possibly a PC) with Visual Studio Code installed.
  • SSH Enabled on Steam Deck: SSH is not enabled by default. You will need to enable it through the Steam Deck’s desktop mode.
  • VS Code Extensions: Install the “Remote - SSH” extension on your VS Code.

Step 1: Enable SSH on the Steam Deck

Press the Steam button, navigate to Power, and switch to Desktop mode. Once in desktop mode, open the KDE application launcher and search for Konsole (terminal).

  • Start the SSH server with:
    sudo systemctl start sshd
  • Additionally, enable SSH to start on boot:
    sudo systemctl enable sshd
  • Verify your IP address. In the terminal, type:
    ip a
    Note the IP address (e.g., 192.168.1.xxx or 10.0.0.xxx) that corresponds to your Wi-Fi connection.

Step 2: Set Up SSH Key Authentication

Creating SSH key pairs can enhance the security of your SSH connection by using public-key cryptography instead of a password.

  • Generate SSH Key Pair on the Steam Deck:
    ssh-keygen -t rsa -b 4096 -f ~/.ssh/sd_rsa
    This command saves the key pair under the specified path (~/.ssh/sd_rsa for the private key and ~/.ssh/sd_rsa.pub for the public key).

This process will create two files:

  • “sd_rsa”: This is your private key. Keep this file secure and find a way to copy it to the Mac. Be creative; for example, you can use GoodReader to set up a quick WiFi Server.
  • “sd_rsa.pub”: This is your public key. This file can be shared and will stay on the Steam Deck.

For added security, you can:

  • First, ensure your public key is added to the “authorized_keys” file (do this before disabling password authentication, or you may lock yourself out):
    cat ~/.ssh/sd_rsa.pub >> ~/.ssh/authorized_keys
  • Disable password authentication on the Steam Deck:
    sudo vim /etc/ssh/sshd_config
  • Find the line that says “#PasswordAuthentication” and change it to:
    PasswordAuthentication no
    PubkeyAuthentication yes
  • Save the file (:wq) and restart the SSH service:
    sudo systemctl restart sshd

Step 3: Configure SSH Access in VS Code

Ensure you have the Remote - SSH extension installed. If not, you can find it in the VS Code Marketplace. Copy the “sd_rsa” private key to your Mac and set the correct permissions (chmod 600 if necessary).

  • Press “Cmd+Shift+P” on Mac to open the command palette.
  • Type “Remote-SSH: Open SSH Configuration File…” and select it.
  • Choose the SSH configuration file you want to edit (usually “~/.ssh/config”).
  • Add a new entry to the configuration file in the following format:
    Host steamdeck
        HostName 192.168.1.xxx
        User deck
        IdentityFile ~/.ssh/sd_rsa
  • Replace “192.168.1.xxx” with the actual IP address of your Steam Deck.
  • Save and close the configuration file.

Step 4: Connect to SSH in VS Code

  • Again, open the command palette (“Cmd+Shift+P”).
  • Type “Remote-SSH: Connect to Host…” and select the entry you just added.
  • The first time you connect, you may be prompted to accept the host’s fingerprint.
  • You’ll need to grant VS Code permission to access the local network. Go to System Settings > Privacy & Security > Local Network and ensure VS Code is listed and has access enabled.

By following the above steps, you can conveniently access and manage your Steam Deck’s file system using the powerful toolset provided by VS Code over an SSH connection.

Step 5: Shortcut Using Raycast and VS Code Extension

If you haven’t already, download and install Raycast from Raycast’s official website. Open Raycast, go to the “Extensions Store”, search for “Visual Studio Code”, and install the extension.

  • Open Raycast with “Option+Space”.
  • Type “VS Code” and you will see the VS Code Recent Projects command.
  • Select “steamdeck” (or whatever you named your SSH connection) from the list to quickly open it in VS Code.

A big thanks to the developers who created these amazing tools!

References

SSH-KEYGEN General Commands Manual
https://man.openbsd.org/ssh-keygen

Remote SSH with Visual Studio Code
https://code.visualstudio.com/blogs/2019/07/25/remote-ssh

Remote SSH: Tips and Tricks
https://code.visualstudio.com/blogs/2019/10/03/remote-ssh-tips-and-tricks

We’ve all experienced the frustration of a poor internet connection. You may recall the disappointment of a large file download failing after 24 hours of waiting. Even worse, discovering that the download is not resumable.

Responsibility for resumable downloads doesn’t solely rest on the client side with the correct setting of HTTP headers. It’s equally, if not more, important for the backend to correctly enable several headers and implement the associated logic.

While I won’t delve into the detailed implementation in a specific language, understanding the headers discussed below will equip you with the knowledge to easily implement this feature if you wish.

Client

The only aspect you need to focus on is the Range HTTP request header. This header specifies the portions of a resource that the server should return. That’s all there is to it.

Range: <unit>=<range-start>-

On the client side, the only requirement is to properly implement the Range HTTP request header. This involves using the correct unit and determining the starting point of the range. The server then knows which portion of the file to send. There’s no need to worry about specifying the range end, as the typical use case involves resuming and downloading the entire file.

Server

Now, things start to get more complicated.

The ETag (also known as entity tag) HTTP response header serves as an identifier for a specific version of a resource.

ETag: "<etag_value>"

If your target client includes a browser, then you need to set the ETag. Modern browsers expect to see this value; otherwise, the browser will simply retry downloading the entire file again.

The Content-Range response HTTP header signifies the position of a partial message within the full body message.

Content-Range: <unit> <range-start>-<range-end>/<size>

Imagine you are downloading a file of 500 bytes, but due to an unstable internet connection, the download is interrupted after only 100 bytes. In this scenario, you would expect the server to send the remaining 400 bytes of the file. Consequently, you would anticipate seeing the appropriate header in the server’s response.

Content-Range: bytes 100-499/500

Check out MDN if you want the full details; in short, bytes 100-499/500 means bytes 100 through 499 (zero-indexed, inclusive) of a 500-byte resource.

The Accept-Ranges HTTP response header acts as a signal from the server, indicating its capability to handle partial requests from the client for file downloads.

Essentially, this header communicates to the client, “Hey, I am capable of handling this, let’s proceed.”

Don’t ask me why; you just need it.

Accept-Ranges: <range-unit>

I suggest simply using bytes.

Accept-Ranges: bytes

The Content-Length header signifies the size of the message body, measured in bytes, that is transmitted to the recipient.

In layman’s terms, it is the number of bytes of the remaining file that the server is about to send.

Content-Length: <length>

Let’s continue with the same example mentioned above: the server is going to send the remaining 400 bytes of the file.

Content-Length: 400
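
The headers above are enough to sketch the server-side logic. Here is a minimal sketch in Python with a hypothetical helper name; it only parses an open-ended bytes range and derives the response headers, while a real server would also need to stream the right byte span and answer 416 for unsatisfiable ranges.

```python
def range_response_headers(range_header, file_size):
    """Parse a 'Range: bytes=<start>-' header and build the partial-response
    headers discussed above. Returns None if the header is absent or malformed,
    meaning the server should fall back to a normal 200 response.
    Suffix ranges (bytes=-N) are deliberately not handled in this sketch."""
    if not range_header or not range_header.startswith("bytes="):
        return None
    spec = range_header[len("bytes="):]
    start_str, _, end_str = spec.partition("-")
    try:
        start = int(start_str)
        end = int(end_str) if end_str else file_size - 1  # open-ended range
    except ValueError:
        return None
    if start > end or end >= file_size:
        return None  # a real server should answer 416 Range Not Satisfiable
    return {
        "Accept-Ranges": "bytes",
        "Content-Range": f"bytes {start}-{end}/{file_size}",
        "Content-Length": str(end - start + 1),
    }

# The example from the text: 100 bytes already downloaded out of 500
headers = range_response_headers("bytes=100-", 500)
print(headers["Content-Range"])   # bytes 100-499/500
print(headers["Content-Length"])  # 400
```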

This is merely an introduction.

There are many complex considerations to take into account. For instance, when dealing with ETags, you must strategize on how to assign a unique ID to each resource. Additionally, you need to determine how to update the ETag when a resource is upgraded to a newer version.
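
For instance, one common strategy (roughly what some servers do by default) is to derive the ETag from attributes that change whenever the resource changes, such as its size and last-modified time. A minimal sketch, with a hypothetical helper name:

```python
import hashlib

def make_etag(size, mtime):
    """Derive an ETag from a file's size and last-modified timestamp.
    Any change to the file yields a new tag, which tells clients their
    partially downloaded copy is stale and cannot be resumed."""
    raw = f"{size}-{int(mtime)}".encode()
    return '"' + hashlib.md5(raw).hexdigest() + '"'

old_tag = make_etag(500, 1700000000)
new_tag = make_etag(501, 1700000100)  # resource was updated
print(old_tag != new_tag)  # True: clients must restart, not resume
```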

Understanding those HTTP headers is a good start.

Before everything else, you need to install the Selenium package, of course.

pip install selenium

Or, if you hate dealing with anti-bot measures, you can just use this:

pip install undetected-chromedriver

Then, add the user data directory to the ChromeOptions object. It is the path to your Chrome profile. For macOS, it is located at ‘~/Library/Application Support/Google/Chrome’.

import undetected_chromedriver as uc

options = uc.ChromeOptions()
options.add_argument("--user-data-dir=Path_to_your_Chrome_profile")
driver = uc.Chrome(options=options)

driver.get('https://www.example.com')

The --user-data-dir argument is kind of cheating because it allows you to bypass the login process without actually logging in.

Cookie is your friend.

But sometimes, you need to handle the login process, for instance, you have to switch between multiple accounts.

First of all, take care of your credentials. Use an .env file.

1
2
3
4
5
6
7
import os
from dotenv import load_dotenv

load_dotenv()

USERNAME = os.getenv('USERNAME_ENV_VAR')
PASSWORD = os.getenv('PASSWORD_ENV_VAR')

Then, you can use the send_keys method to fill in the username and password fields. I added a while loop to wait for the element in case the script runs too fast.

import time
from selenium.webdriver.common.by import By

while True:
    try:
        driver.find_element(by=By.ID, value="username").send_keys(USERNAME)
        break
    except Exception:
        time.sleep(1)

driver.find_element(by=By.ID, value="password").send_keys(PASSWORD)
driver.find_element(by=By.ID, value="submit").click()
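
The retry idea in that loop can be factored into a small helper. Selenium itself ships WebDriverWait for exactly this job; the sketch below is plain Python with hypothetical names, just to show the pattern:

```python
import time

def wait_for(action, timeout=10, interval=1):
    """Retry `action` until it stops raising, or give up after `timeout`
    seconds. Same idea as the while/try loop above, but reusable for any
    flaky element lookup."""
    deadline = time.time() + timeout
    while True:
        try:
            return action()
        except Exception:
            if time.time() >= deadline:
                raise
            time.sleep(interval)

# usage sketch against a hypothetical page:
# wait_for(lambda: driver.find_element(by=By.ID, value="username")).send_keys(USERNAME)
```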

After logging in, Chrome usually pops up a dialog asking if you want to save the password. It is annoying.

You can try to disable it by adding the --disable-save-password-bubble or --disable-popup-blocking argument to the ChromeOptions object. I don’t think it works. But you can try.

In the end, I just used a hack: open a new tab and immediately close it, and the popup goes away.

# open a new tab
driver.execute_script("window.open('','_blank')")

time.sleep(1) # 1 second wait is enough I guess
driver.switch_to.window(driver.window_handles[1])

# say goodbye to the new tab
driver.close()

# now switch back to the original tab
driver.switch_to.window(driver.window_handles[0])

That’s it.

Oh, one more thing.

Adding a user-agent to the ChromeOptions object is also a good idea. And please do not forget to specify version_main for the driver so that it matches your current Chrome version.

Raycast is a productivity tool for macOS. It allows you to quickly access files, folders, and applications. It’s great, but only available on macOS. If you already use Raycast, you know how useful it is. If you don’t, you should give it a try if you have a Mac.

For daily work, I also use Windows, and I was trying to implement a similar workflow on Windows. The thing I missed the most was the ability to search and open previously used workspaces in VS Code or remote machines with a few keystrokes.

You can guess my excitement when I found out about Microsoft PowerToys.

OK.

Enable VS Code search in the settings for the PowerToys Run utility.

Then, you can use the shortcut Alt + Space to search for your workspaces.

{ THE_WORKSPACE_NAME_YOU_WANT_TO_OPEN

Now I have to find the equivalent of zsh-autosuggestions on Windows. Wish me luck.


The Problem

  • Microsoft only provides the Intel version of the Windows 11 ISO file.
  • For the Windows 11 Insider Preview, the ARM version is provided only in VHDX format.

Workaround

Luckily, we can get the ESD file and then convert it into an ISO file that can be used in VMware.

Steps

Go to the website of Parallels to download their Mac app. Alternatively, you can get the DMG link from the Homebrew API.

The link looks like this. After downloading, double-click the DMG file, but don’t install Parallels. You just need to mount the DMG file.

Then open the terminal and run the following commands:

sudo ditto /Volumes/Parallels\ Desktop\ 19/Parallels\ Desktop.app/Contents/MacOS/prl_esd2iso /usr/local/bin/prl_esd2iso
sudo ditto /Volumes/Parallels\ Desktop\ 19/Parallels\ Desktop.app/Contents/Frameworks/libwimlib.1.dylib /usr/local/lib/libwimlib.1.dylib

We can thank Parallels for providing these amazing tools later. Unmount and delete the DMG file.

To figure out the download link for the Windows 11 ESD file:

cd ~/Downloads/ && curl -L "https://go.microsoft.com/fwlink?linkid=2156292" -o products_Win11.cab && tar -xf products_Win11.cab products.xml && cat products.xml | grep ".*_CLIENTCONSUMER_RET_A64FRE_en-us.esd" | sed -e s/"<FileName>"//g -e s/"<\/FileName>"//g -e s/\ //g -e s/"<FilePath>"//g -e s/"<\/FilePath>"//g -e s/\ //g | head -n 2

By the way, I assume your current working directory is ~/Downloads/. If not, please change it accordingly.
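
If the shell pipeline looks cryptic, it is only pulling the matching <FileName> and <FilePath> entries out of products.xml. The same extraction in Python, run on a toy two-entry snippet (the real file is much larger, and the URLs here are made up):

```python
import re

# toy stand-in for products.xml; the real file contains many more entries
sample = """
<FileName>22621.1702_CLIENTCONSUMER_RET_A64FRE_en-us.esd</FileName>
<FilePath>http://dl.example.com/22621.1702_CLIENTCONSUMER_RET_A64FRE_en-us.esd</FilePath>
<FileName>22621.1702_CLIENTCONSUMER_RET_X64FRE_en-us.esd</FileName>
<FilePath>http://dl.example.com/22621.1702_CLIENTCONSUMER_RET_X64FRE_en-us.esd</FilePath>
"""

# collect the text of every FileName / FilePath element
values = re.findall(r"<File(?:Name|Path)>(.*?)</File(?:Name|Path)>", sample)

# keep only the ARM64 (A64FRE) English consumer build, like the grep does
arm = [v for v in values if "_CLIENTCONSUMER_RET_A64FRE_en-us" in v]
print(arm)  # the ARM64 filename and its download URL
```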

Use curl to download the ESD file:

curl http://dl.delivery.mp.microsoft.com/filestreamingservice/files/f16733c5-e9f8-4613-9fe6-d331c8dd6e28/22621.1702.230505-1222.ni_release_svc_refresh_CLIENTCONSUMER_RET_A64FRE_en-us.esd --output win11.esd

Convert the ESD file into an ISO file:

prl_esd2iso ~/Downloads/win11.esd ~/Downloads/win11.iso

Now you can insert the ISO file into VMware Fusion, which is free to use with a personal license. You can find the license key after you register/log in on their website.

Install VMware Fusion with Homebrew. Yes, you need to have Homebrew installed, but I guess you’ve already done that.

brew install --cask vmware-fusion

If you run into the chown issue like:

chown /Applications/VMware Fusion.app: Operation not permitted

Please double-check that Full Disk Access is granted to Terminal.app in System Settings > Privacy & Security.

Drag and drop the Windows 11 ISO file into VMware. You can go with UEFI and the default values for the rest of the settings.

Pay attention to the message on the screen: if it says “press any key to continue”, don’t wait. You only have five seconds to hit a key, so be fast. I won’t talk about the basic steps of installing Windows 11; I trust you can install the operating system with the GUI.

When you reach the internet-connection setup step, press Shift + Fn + F10 to open the Command Prompt. Input:

OOBE\BYPASSNRO

The setup will restart automatically, and this time you can choose the option “I don’t have internet” (yup, you actually don’t) and then “Continue with limited setup”. If everything goes well, you will end up at the Windows 11 desktop.

Run PowerShell as Administrator and type:

Set-ExecutionPolicy RemoteSigned

Insert the VMware Tools CD image into the virtual machine. Run the setup script with PowerShell.

In case you want to set the Execution Policy back:

Set-ExecutionPolicy Restricted

If VMware Tools installed successfully, the internet connection will work inside the virtual machine. Adjust settings as you wish. For example, set the display resolution to 2880 x 1800 and Scale to 200%.

A fully operational Windows 11 on Mac is all yours.

Enjoy.

You must get the same headache as I do when AWS sends an email saying that this month you have yet another bill for unknown active resources on AWS.

But wait, where are they? How can I find them?

I have been asking myself these questions time and time again. Now I have finally found a simple way to deal with it.

  • Open AWS Resource Groups: https://console.aws.amazon.com/resource-groups/
  • In the navigation pane, on the left side of the screen, choose Tag Editor.
  • For Regions, choose All regions.
  • For Resource types, choose All supported resource types.
  • Choose Search resources.

Then, you will see all the resources that are still active in your account.

You need to terminate them one by one.

Good luck!

Introduction

BBR (“Bottleneck Bandwidth and Round-trip propagation time”) aims to improve network performance and reduce latency. BBR estimates the available network bandwidth and the round-trip time (RTT) to adjust the TCP sending rate dynamically, reducing queuing delays and packet loss.

Prerequisites

Check if your Linux kernel version is 4.9 or higher.

uname -r

Congestion Control Status

sysctl net.ipv4.tcp_available_congestion_control

If you see net.ipv4.tcp_available_congestion_control = bbr cubic reno, then BBR is available on your system. To see which algorithm is actually in use, query net.ipv4.tcp_congestion_control instead; it should report bbr after we enable it below.

Enable BBR

echo "net.core.default_qdisc=fq" | sudo tee -a /etc/sysctl.conf
echo "net.ipv4.tcp_congestion_control=bbr" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

The first line enables Fair Queueing (FQ), which is a network scheduler that improves network performance by reducing latency and jitter. The second line enables BBR.

The last line reloads the configuration file for the changes to take effect.

References

Optimizing HTTP/2 prioritization with BBR and tcp_notsent_lowat:
https://blog.cloudflare.com/http-2-prioritization-with-nginx/

TCP BBR congestion control comes to GCP – your Internet just got faster:
https://cloud.google.com/blog/products/networking/tcp-bbr-congestion-control-comes-to-gcp-your-internet-just-got-faster

The ftplib module is included in Python’s batteries and can be used to implement the client side of the FTP protocol. It’s compact and easy to use, but it lacks a user-friendly progress display. For example, during a long connection to upload or download a large file to or from an FTP server, the terminal tells you nothing about the progress of the transfer.

Don’t panic.

Luckily, we have Rich, a Python library for showing rich text (with color and style) in the terminal. In particular, it can display continuously updated information about the progress of long-running tasks, file copies, and so on. That is perfect for file transfers with an FTP server.

The main challenge is that Progress in rich.progress has to be updated every time you want to refresh the UI, and we have to synchronize that with the actual progress of the FTP file transfer.

OK, show me the code.

First, make sure you have the rich library installed.

Then, double-check that you have these dependencies imported:

import time
from ftplib import FTP_TLS
from rich import print
from rich.progress import Progress, SpinnerColumn, TotalFileSizeColumn, TransferSpeedColumn, TimeElapsedColumn

The implementation is straightforward with help from the callback in FTP.retrbinary(). The callback function is called for each block of data received, and that is when we take the chance to update and render the progress display.
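
To see why a callback is all we need, here is roughly the shape of what retrbinary() does internally. This is a simplified stand-in, not ftplib’s actual code: the real method reads the data connection block by block and hands each block to your function. Note the last block is usually smaller than the default 8192-byte block size.

```python
def fake_retrbinary(blocks, callback):
    """Simplified stand-in for FTP.retrbinary(): iterate over received
    blocks and invoke the callback once per block."""
    for block in blocks:
        callback(block)

received = []
fake_retrbinary([b"a" * 8192, b"b" * 8192, b"c" * 100], received.append)
print(len(received))            # 3 callback invocations
print(sum(map(len, received)))  # 16484 bytes total
```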

Here is an example of downloading from an FTP server.

def download_from_ftp(file_path):
    # ftplib.FTP_TLS() connects to the FTP server with username and password
    ftp = FTP_TLS(host=FTP_HOST, user=FTP_USER, passwd=FTP_PASSWD)

    # securing the data connection requires the user to explicitly ask for it by calling prot_p()
    ftp.prot_p()

    # prepare a file object on the local machine to write data into
    f = open(file_path, 'wb')

    # initialize the ftp_progress with file_size, the ftp connection and the file object
    # you may need to work out how to get the actual file size
    # Hint: FTP.dir() produces a directory listing as returned by the LIST command
    tracker = ftp_progress(file_size, ftp, f)

    # the trick to update the rich progress display is the callback function in retrbinary()
    ftp.retrbinary('RETR example_file.zip', callback=tracker.handle)

    # stop the progress display and also close the file object
    tracker.stop()

    # send a QUIT command to the server and close the connection
    ftp.quit()

If you go through the comments I wrote for the function above, the class below should be fairly self-explanatory. handle() is where we reflect the changes on each iteration, yes, in the callback.

One thing you should be aware of is that FTP uses two separate TCP connections: one to carry commands and the other to transfer data. So in the case of a long file transfer, you need to talk to the command channel once in a while to keep it connected. The NOOP command is designed for this: it prevents the client from being automatically disconnected by the server for being idle.

class ftp_progress:
    def __init__(self, file_size, ftp, f):
        self.file_size = file_size
        self.ftp = ftp
        self.f = f
        self.size_written = 0
        self.time = time.time()
        self.progress = Progress(
            SpinnerColumn(),
            *Progress.get_default_columns(),
            TotalFileSizeColumn(),
            TransferSpeedColumn(),
            TimeElapsedColumn(),
        )
        self.task_download = self.progress.add_task("[red]Download...", total=self.file_size)
        self.progress.start()

    def stop(self):
        self.progress.stop()
        self.f.close()

    def handle(self, data):
        self.f.write(data)
        # use the actual chunk size; the last block is usually smaller
        # than the default 8192-byte block size
        self.size_written += len(data)
        self.progress.update(self.task_download, advance=len(data))

        # keep the FTP control connection alive
        if time.time() - self.time > 60:
            self.time = time.time()
            self.ftp.putcmd('NOOP')
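
About the file-size hint in the comments above: FTP.dir() produces Unix-style LIST lines, where the size is typically the fifth field. A minimal parser is sketched below; the LIST format varies by server, so treat the field position as an assumption (many servers also support the simpler SIZE command, exposed as FTP.size() in ftplib).

```python
def size_from_list_line(line):
    """Extract the size field from a typical Unix-style LIST line.
    Assumed format: perms links owner group size month day time name."""
    fields = line.split()
    return int(fields[4])

line = "-rw-r--r--   1 ftp      ftp      10485760 May 05 12:22 example_file.zip"
print(size_from_list_line(line))  # 10485760
```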

As a final note, be careful with passing by reference in Python. If you don’t close or keep FTP connections with the server correctly, strange things (not the TV show) could happen.

And stay away from nested callbacks, always.

Ref:

ftplib — FTP protocol client
https://docs.python.org/3/library/ftplib.html

Rich’s documentation
https://rich.readthedocs.io/en/stable/index.html

Progress Display (Rich)
https://rich.readthedocs.io/en/stable/progress.html
