How I contributed to LC-Rating
While solving programming problems, I found that a good editorial solution matters a lot: it helps me understand the problem better and learn more effectively.
After some searching, I found that 0x3f explains problems very well. 0x3f also recommends this site, which tracks which problems he has written solutions for and hosts the grouped problem lists he created. The only drawback is that I solve problems on the global LeetCode site, while all of his problem links point to the Chinese site, which made me decide to add a button for switching between the two.
This was the first time I fully reviewed someone else's project. My approach was to read it in reverse, from deployment down to component design, which helped me sort out the logic. The project is very similar to UTLeetcoder, which I had worked on before: both are serverless projects with CI/CD handled by GitHub Actions.
Add button
A quick look showed that the project has a hooks folder and that no reducer is used anywhere. So I decided to add a site hook that tracks which site is currently selected, and then added a button to the relevant interface to toggle that state.
The specific changes are as follows. However, I found that when a page contains too many URLs, switching becomes very slow, so I closed that pull request.
Add Python script
While working through the problems, I noticed that nobody seemed to be maintaining the problem lists. After some digging, I found that list maintenance depends on the scripts under /lc-maker/js/: someone has to copy a script by hand, open the corresponding problem list, run the script in the browser console, and then process the resulting data manually.
During my research, I found that the repo owner had already wrapped most of the relevant LeetCode API calls in a single file, which was undoubtedly good news. The data source the project uses to update the problem lists is LeetCode discussion posts. So I added a new GraphQL operation and created a new script called 0x3f_discuss.py.
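To make the flow concrete, here is a minimal sketch of what such an operation could look like. This is not the code from the repo: the endpoint usage is standard, but the query name and field names are assumptions I made for illustration and may not match LeetCode's actual GraphQL schema or the project's existing API wrapper.

```python
# Hypothetical sketch: fetch a discussion post's markdown over LeetCode's
# GraphQL endpoint. Query name and fields are placeholders for illustration.
import requests

GRAPHQL_URL = "https://leetcode.cn/graphql"  # Chinese site, where 0x3f posts

# Assumed operation: look up a discussion topic by id and return its body.
DISCUSS_QUERY = """
query discussTopic($topicId: Int!) {
  discussTopic(id: $topicId) {
    title
    post {
      content
    }
  }
}
"""

def fetch_discussion_markdown(topic_id: int) -> str:
    """Run the GraphQL operation and return the post's markdown content."""
    resp = requests.post(
        GRAPHQL_URL,
        json={"query": DISCUSS_QUERY, "variables": {"topicId": topic_id}},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]["discussTopic"]["post"]["content"]
```

In the actual contribution, the new operation presumably sits alongside the other calls in the existing API wrapper file rather than being defined inline like this.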
The script first hard-codes the existing discussion posts and their titles, and defines the data classes needed for the corresponding page display. It then runs the GraphQL operation to fetch the markdown content of each post, processes that data, and updates the corresponding data.
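The sketch below shows the general shape of that structure. The discussion ids, titles, field names, and parsing logic are placeholders of my own for illustration, not the actual contents of 0x3f_discuss.py.

```python
# Rough structural sketch: hard-coded discussion metadata, data classes for
# the page, and a naive markdown parser. All values here are illustrative.
import re
from dataclasses import dataclass, field

# Hard-coded discussion posts maintained by 0x3f (ids and titles are placeholders).
DISCUSSIONS = {
    123456: "Sliding Window (fixed/variable length)",
    234567: "Binary Search (templates and variants)",
}

@dataclass
class ProblemEntry:
    """One row of a grouped problem list, as the page expects to render it."""
    title: str
    slug: str
    difficulty: str = ""

@dataclass
class DiscussionList:
    """A grouped problem list parsed from one discussion post."""
    topic_id: int
    title: str
    problems: list[ProblemEntry] = field(default_factory=list)

def parse_markdown(topic_id: int, title: str, markdown: str) -> DiscussionList:
    """Naive parser: pull links like [Title](https://leetcode.cn/problems/slug/)."""
    result = DiscussionList(topic_id=topic_id, title=title)
    pattern = r"\[([^\]]+)\]\(https://leetcode\.cn/problems/([\w-]+)/?\)"
    for name, slug in re.findall(pattern, markdown):
        result.problems.append(ProblemEntry(title=name, slug=slug))
    return result
```

Hard-coding the id-to-title mapping keeps the script simple: supporting a new grouped list only requires appending one entry to the dictionary.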
Originally I planned to write a GitHub workflow to run the script automatically, but I realized I did not have the necessary permissions and could not test whether it worked, so I put that idea on hold.