A script which recursively walks over a specific post and
automatically replies to (and downvotes) comments by a particular
author.
Have you ever attempted to have a regular decent conversation on
reddit like any normal human being would, only to be confronted by
a troll, a moron, or a jerk? Maybe this person is just genuinely
evil. Maybe this person just won’t let you have the last word?
It’s probably because their mother didn’t teach them good manners.
This script could enable you to repetitively shut this person down
without having to spare them even a single thought.
This is a mostly untested script. I only wrote it to end a certain
conversation. It did a fine job at that, and it kept me from having
to waste any more time replying to that thread. I fed the script
a reply that made it quite clear that the person that I was replying
to was deficient in their mental capacities and that I had written
this bot to automatically reply to anything they wrote in my thread.
It worked perfectly. After a few rounds of interacting with my
reply bot, that person finally realized that they would never have
the last word. They moved on and dug a little deeper into their
troll hole. Good riddance.
Usage
First install the packages using pip or whatever. You can use a virtualenv if you want, of course.
Run the program with the following command:
python3 reddit_auto_reply.py
Currently you have to configure the script by hand, filling in the necessary details by editing certain variables directly in the source. You should be able to tell which variables need to be edited by reading the code. If you want to know more about which variables to edit or how to do so, create an issue and let me know. I’m willing to put a small amount of work into this to make it better and more usable.
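For orientation, the variables you would typically fill in look something like the following. This is a minimal, hypothetical sketch using PRAW; the variable names, reply text, and structure here are illustrative and may not match the actual script.

```python
import praw

# Hypothetical configuration -- check the actual variable names in the script
CLIENT_ID = "your-app-client-id"
CLIENT_SECRET = "your-app-client-secret"
USERNAME = "your-reddit-username"
PASSWORD = "your-reddit-password"
POST_ID = "abc123"                  # id of the post to walk over
TARGET_AUTHOR = "some_username"     # author whose comments get replied to
REPLY_TEXT = "This is an automated reply."

reddit = praw.Reddit(
    client_id=CLIENT_ID,
    client_secret=CLIENT_SECRET,
    username=USERNAME,
    password=PASSWORD,
    user_agent="reddit_auto_reply",
)

submission = reddit.submission(id=POST_ID)
submission.comments.replace_more(limit=None)   # expand the full comment tree
for comment in submission.comments.list():     # walk every comment recursively
    if comment.author and comment.author.name == TARGET_AUTHOR:
        comment.downvote()
        comment.reply(REPLY_TEXT)
```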
An Expected Goals (xG) model that predicts the probability of scoring based on StatsBomb data. The project uses machine learning techniques to analyze the factors that most influence shot effectiveness in football. The applied models (Logistic Regression, Random Forest, XGBoost), combined with a Beta calibration step, create a highly accurate predictive tool. The analysis results confirm the crucial role of shot geometry and defender influence on goal-scoring probability.
🎯 Motivation
Expected Goals (xG) is one of the most important metrics used in modern football analytics. It allows for evaluating shot quality regardless of whether a shot resulted in a goal. In this project, I built my own xG model to better understand the factors affecting shot effectiveness and to create a tool that can be used for match analysis and player evaluation.
📋 Data
The data used comes from StatsBomb’s open dataset from the 2015/2016 season for five top European leagues:
Premier League (England)
La Liga (Spain)
Bundesliga (Germany)
Serie A (Italy)
Ligue 1 (France)
The data contains detailed information about each shot, including position on the pitch, shot type, circumstances of the shot, and positioning of other players at the moment of the shot.
Note: The repository does not include data files by default. You need to run the data_collector.ipynb notebook first to download the data from StatsBomb’s open dataset.
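To make the modeling pipeline concrete, here is a minimal sketch of the kind of geometric features and calibrated classifier described above. The file name and column names (`shots.csv`, `x`, `y`, `is_goal`) are illustrative, not the project's actual schema, and scikit-learn's sigmoid calibration is used here as a stand-in for the Beta calibration used in the project.

```python
import numpy as np
import pandas as pd
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative schema: x, y in StatsBomb pitch coordinates (goal centre at x=120, y=40)
shots = pd.read_csv("shots.csv")  # hypothetical file produced from the downloaded data

# Shot geometry: distance to the goal centre and opening angle between the posts
dx = 120 - shots["x"]
dy = 40 - shots["y"]
shots["distance"] = np.sqrt(dx**2 + dy**2)
shots["angle"] = np.arctan2(7.32 * dx, dx**2 + dy**2 - (7.32 / 2) ** 2)

X = shots[["distance", "angle"]]
y = shots["is_goal"]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

# Base model plus probability calibration (sigmoid here; the project uses Beta calibration)
model = CalibratedClassifierCV(RandomForestClassifier(n_estimators=200), method="sigmoid", cv=5)
model.fit(X_train, y_train)
xg = model.predict_proba(X_test)[:, 1]  # predicted goal probability per shot
```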
WARNING: there have been reports of apps being rejected when Reachability is used in a framework. The only solution to this so far is to rename the class.
Reachability
This is a drop-in replacement for Apple’s Reachability class. It is ARC-compatible, and it uses the new GCD methods to notify of network interface changes.
In addition to the standard NSNotification, it supports the use of blocks for when the network becomes reachable and unreachable.
Finally, you can specify whether a WWAN connection is considered “reachable”.
DO NOT OPEN BUGS UNTIL YOU HAVE TESTED ON DEVICE
BEFORE YOU OPEN A BUG ABOUT iOS6/iOS5 build errors, use Tag 3.2 or 3.1 as they support assign types
Requirements
Once you have added the .h/m files to your project, simply:
Go to the Project->TARGETS->Build Phases->Link Binary With Libraries.
Press the plus in the lower left of the list.
Add SystemConfiguration.framework.
Boom, you’re done.
Examples
Block Example
This sample uses blocks to notify when the interface state has changed. The blocks will be called on a BACKGROUND THREAD, so you need to dispatch UI updates onto the main thread.
In Objective-C
// Allocate a reachability object
Reachability* reach = [Reachability reachabilityWithHostname:@"www.google.com"];
// Set the blocks
reach.reachableBlock = ^(Reachability*reach)
{
    // keep in mind this is called on a background thread
    // and if you are updating the UI it needs to happen
    // on the main thread, like this:
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"REACHABLE!");
    });
};
reach.unreachableBlock = ^(Reachability*reach)
{
NSLog(@"UNREACHABLE!");
};
// Start the notifier, which will cause the reachability object to retain itself!
[reach startNotifier];
In Swift 3
import Reachability
var reach: Reachability?

func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
    // Allocate a reachability object
    self.reach = Reachability.forInternetConnection()

    // Set the blocks
    self.reach!.reachableBlock = { (reach: Reachability?) -> Void in
        // keep in mind this is called on a background thread
        // and if you are updating the UI it needs to happen
        // on the main thread, like this:
        DispatchQueue.main.async {
            print("REACHABLE!")
        }
    }

    self.reach!.unreachableBlock = { (reach: Reachability?) -> Void in
        print("UNREACHABLE!")
    }

    self.reach!.startNotifier()
    return true
}
NSNotification Example
This sample will use NSNotifications to notify when the interface has changed. They will be delivered on the MAIN THREAD, so you can do UI updates from within the function.
In addition, it asks the Reachability object to consider the WWAN (3G/EDGE/CDMA) as a non-reachable connection (you might use this if you are writing a video streaming app, for example, to save the user’s data plan).
In Objective-C
// Allocate a reachability object
Reachability* reach = [Reachability reachabilityWithHostname:@"www.google.com"];
// Tell the reachability that we DON'T want to be reachable on 3G/EDGE/CDMA
reach.reachableOnWWAN = NO;
// Here we set up a NSNotification observer. The Reachability that caused the notification
// is passed in the object parameter
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(reachabilityChanged:)
                                             name:kReachabilityChangedNotification
                                           object:nil];
[reach startNotifier];
In Swift 3
import Reachability
var reach: Reachability?

func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
    // Allocate a reachability object
    self.reach = Reachability.forInternetConnection()

    // Tell the reachability that we DON'T want to be reachable on 3G/EDGE/CDMA
    self.reach!.reachableOnWWAN = false

    // Here we set up a NSNotification observer. The Reachability that caused the notification
    // is passed in the object parameter
    NotificationCenter.default.addObserver(self,
                                           selector: #selector(reachabilityChanged),
                                           name: NSNotification.Name.reachabilityChanged,
                                           object: nil)

    self.reach!.startNotifier()
    return true
}

func reachabilityChanged(notification: NSNotification) {
    if self.reach!.isReachableViaWiFi() || self.reach!.isReachableViaWWAN() {
        print("Service available!!!")
    } else {
        print("No service available!!!")
    }
}
val applicationContext: Application
val application: Application
val activityList: List<Activity>
val topActivity: Activity
val topActivityOrNull: Activity?
val topFragmentActivityOrNull: FragmentActivity?
val topActivityOrApplication: Context
Documentation is available on JitPack for the master branch.
Javadoc documentation can be generated by running the following command:
./gradlew javadoc
The HTML documentation will be placed in ./target/site/apidocs/index.html
HMAC+HKDF Authentication
HMAC+HKDF Authentication is an authentication method that ensures the request is not tampered with in transit. This provides resilience not only against network-layer manipulation, but also against man-in-the-middle attacks.
At a high level, an HMAC signature is created based upon the raw request body, the HTTP method, the URI (with query parameters, if present), and the current date. In addition to ensuring the request cannot be manipulated in transit, it also ensures that the request is timeboxed, effectively preventing replay attacks.
The library itself is made available by importing the following struct:
Supporting APIs will return a payload containing, at minimum, the following information.
This string can be used in the Authorization Header
Date Header
The Version 1 HMAC header requires an additional X-Date header. The X-Date header can be retrieved by calling auth.getDateString()
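To illustrate the overall flow described above (HKDF key derivation, an HMAC over the request components, and a date header to time-box the signature), here is a rough Python sketch. The string-to-sign layout, header format, and HKDF parameters are assumptions for demonstration only; they are not this library's actual wire format.

```python
import hashlib
import hmac
from datetime import datetime, timezone

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def sign_request(ikm: bytes, method: str, uri: str, body: str) -> dict:
    # Derive a signing key from the Initial Key Material (IKM); parameters are illustrative
    hkdf = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"illustrative-context")
    signing_key = hkdf.derive(ikm)

    # Time-box the signature with the current date so old requests cannot be replayed
    date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")

    # Illustrative string-to-sign: method, URI (with query string), date, and a hash of the body
    body_hash = hashlib.sha256(body.encode()).hexdigest()
    string_to_sign = "\n".join([method.upper(), uri, date, body_hash])

    signature = hmac.new(signing_key, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return {"Authorization": f"HMAC {signature}", "X-Date": date}


headers = sign_request(b"ikm-from-server", "POST", "/api/v1/resource?limit=10", '{"foo":"bar"}')
```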
Encrypted Requests & Responses
This library enables clients to establish a trusted, encrypted session on top of a TLS layer, while simultaneously (and independently) providing the ability to authenticate and identify a client via HMAC+HKDF-style authentication.
The rationale for this functionality includes but is not limited to:
Need to ensure confidentiality of the Initial Key Material (IKM) provided by the server for HMAC+HKDF authentication
Need to ensure confidentiality of user submitted credentials to the API for authentication
The primary reason you may want to establish an encrypted session with the API itself is to ensure the confidentiality of the IKM and to prevent data leakage over untrusted networks, so that information is not exposed in a Cloudflare-style incident (or any man-in-the-middle attack). Encrypted sessions enable you to utilize a service like Cloudflare, should a memory leak occur again, with confidence that the IKM and other secure data would not be exposed.
To encrypt, decrypt, sign, and verify messages, you’ll need to be able to generate the appropriate keys. Internally, this library uses lazysodium-java to perform all necessary cryptography functions, though any libsodium implementation for Java would suffice.
Encryption Keys
Encryption uses a sodium crypto box. A keypair can be generated as follows when using lazy-sodium.
import com.ncryptf.Request;
import com.ncryptf.exceptions.*;

import java.util.Base64;

// Arbitrary string payload
String payload = "{\"foo\":\"bar\"}";

try {
    // 32 byte secret and public key. Extract from kp.get...().getAsBytes(), or another libsodium method
    // Signature should be the signature private key previously agreed upon with the sender
    // If you're using a `Token` object, this should be the `.signature` property
    Request request = new Request(secretKeyBytes, signingSecretKeyBytes /* token.signature */);

    // Cipher now contains the encrypted data
    byte[] cipher = request.encrypt(payload, remotePublicKey);

    // Send as encrypted request body
    String b64Body = Base64.getEncoder().encodeToString(cipher);

    // Do your http request here
} catch (EncryptionFailedException e) {
    // Handle encryption errors here
}
Note that you need to have a pre-bootstrapped public key to encrypt data. For the v1 API, this is typically returned by /api/v1/server/otk.
Decrypting Responses
Responses from the server can be decrypted as follows:
import com.ncryptf.Response;
import com.ncryptf.exceptions.*;

import java.util.Base64;

try {
    // Grab the raw response from the server
    byte[] responseFromServer = Base64.getDecoder().decode("<HTTP-Response-Body>");

    Response response = new Response(clientSecretKey);
    String decrypted = response.decrypt(responseFromServer, remotePublicKey);
} catch (InvalidChecksumException e) {
    // Checksum is not valid. Request body was tampered with
} catch (InvalidSignatureException e) {
    // Signature verification failed
} catch (DecryptionFailedException e) {
    // Decryption failed. This may be an issue with the provided nonce, or keypair being used
}
V2 Encrypted Payload
Version 2 works identically to the version 1 payload, with the exception that all components needed to decrypt the message are bundled within the payload itself, rather than broken out into separate headers. This alleviates developer concerns with needing to manage multiple headers.
The version 2 payload is described as follows. Each component is concatenated together.
42 libft is the first project of the common core. This project makes the student recreate some standard C library functions and some additional functions that will be useful throughout the cursus.
If you’re from 42 and you just started libft, I highly recommend that you use this repository more as a support and develop your own functions and tests. If you need help, you can send me a message on any of my socials.
Standard C Library

| Function | Description | Status | Francinette |
| --- | --- | --- | --- |
| ft_isalpha | Checks if the char received is a letter | ✔️ | ✔️ |
| ft_isdigit | Checks if the char received is a number | ✔️ | ✔️ |
| ft_isalnum | Checks if the char received is alphanumeric | ✔️ | ✔️ |
| ft_isascii | Checks if the char received is an ASCII char | ✔️ | ✔️ |
| ft_isprint | Checks if the char received is printable | ✔️ | ✔️ |
| ft_strlen | Returns the length of the string received | ✔️ | ✔️ |
| ft_memset | Fills a block of memory with a particular value | ✔️ | ✔️ |
| ft_bzero | Erases a block of memory by setting its bytes to zero | ✔️ | ✔️ |
| ft_memcpy | Copies the values of x bytes from source to the destination | ✔️ | ✔️ |
| ft_memmove | Copies the values of x bytes from source to the destination (handles overlapping memory areas) | ✔️ | ✔️ |
| ft_strlcpy | Copies from src to dest and returns the length of the string copied | ✔️ | ✔️ |
| ft_strlcat | Concatenates dest with src and returns the length of the concatenated string | ✔️ | ✔️ |
| ft_toupper | Converts the lowercase char received into uppercase | ✔️ | ✔️ |
| ft_tolower | Converts the uppercase char received into lowercase | ✔️ | ✔️ |
| ft_strchr | Returns the first occurrence of char in the string | ✔️ | ✔️ |
| ft_strrchr | Returns the last occurrence of char in the string | ✔️ | ✔️ |
| ft_strncmp | Compares the given strings up to n characters | ✔️ | ✔️ |
| ft_memchr | Searches the first x bytes of a block of memory for the first occurrence of the value received | ✔️ | ✔️ |
| ft_memcmp | Compares the first x bytes of the memory areas str1 and str2 | ✔️ | ✔️ |
| ft_strnstr | Returns the first occurrence of the little string in the big string | ✔️ | ✔️ |
| ft_atoi | Converts the string received to its int value | ✔️ | ✔️ |
| ft_calloc | Allocates a memory block with the size received and initializes it to zero | ✔️ | ✔️ |
| ft_strdup | Duplicates the string received into a newly allocated string | ✔️ | ✔️ |
Additional functions

| Function | Description | Status | Francinette |
| --- | --- | --- | --- |
| ft_substr | Returns an allocated string that starts at the index received | ✔️ | ✔️ |
| ft_strjoin | Returns a new allocated string which is the result of the concatenation of both strings received | ✔️ | ✔️ |
| ft_strtrim | Returns a copy of the string received with the given characters trimmed from the beginning and end of the string | ✔️ | ✔️ |
| ft_split | Returns an array of strings obtained by splitting the string at the character sent | ✔️ | ✔️ |
| ft_itoa | Converts the int value received to its string representation | ✔️ | ✔️ |
| ft_strmapi | Applies the function received to each letter of the string received, creating a new allocated string with the changes | ✔️ | ✔️ |
| ft_striteri | Applies the function received to each letter of the string received and replaces the string received with the changes | ✔️ | ✔️ |
| ft_putchar_fd | Outputs the char received to the given file descriptor | ✔️ | ✔️ |
| ft_putstr_fd | Outputs the string received to the given file descriptor | ✔️ | ✔️ |
| ft_putendl_fd | Outputs the string received to the given file descriptor, followed by a newline | ✔️ | ✔️ |
| ft_putnbr_fd | Outputs the number received to the given file descriptor | ✔️ | ✔️ |
Bonus functions

| Function | Description | Status | Francinette |
| --- | --- | --- | --- |
| ft_lstnew | Creates and returns a new allocated node for a linked list | ✔️ | ✔️ |
| ft_lstadd_front | Adds the node received to the beginning of a linked list | ✔️ | ✔️ |
| ft_lstsize | Returns the number of nodes in a linked list | ✔️ | ✔️ |
| ft_lstlast | Returns the last node of a linked list | ✔️ | ✔️ |
| ft_lstadd_back | Adds the node received to the end of a linked list | ✔️ | ✔️ |
| ft_lstdelone | Receives a node, deletes the contents of its variables and frees the node | ✔️ | ✔️ |
| ft_lstclear | Deletes and frees the given node and every successor of that node | ✔️ | ✔️ |
| ft_lstiter | Applies the function received to the content of each node in the linked list | ✔️ | ✔️ |
| ft_lstmap | Applies the function received to the content of each node and creates a new linked list from the results | | |
deb http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
Update the package sources
apt update
Install OpenCV-4.5.0
The OpenCV-4.5.0 source link is below. Download the zip package, extract it, and place it in the host's /home directory, which is the container's /workspace directory.
cp {ultralytics}/yolov5/para.wts {tensorrt}/
cd {tensorrt}/
mkdir images # and put some images in it
# update CLASS_NUM in yololayer.h if your model is trained on a custom dataset
# you can also update INPUT_H, INPUT_W in yololayer.h, and update NET(s/m/l/x) in trt_infer.cpp
make
./trt_infer
# result images will be generated in present dir
As part of the Duckietown class taught at ETH Zurich (Fall 2023), we, a group of three master's students (Benjamin Dupont, Yanni Kechriotis, and Samuel Montorfani), worked on a small final project and presented it to the other students. We implemented an intersection navigation pipeline for the Duckiebots (small autonomous differential-drive robots equipped with an NVIDIA Jetson Nano) to enable them to drive through intersections in the Duckietown road-like environment.
The pipeline consists of:
Perception: Detect intersections and other Duckiebots in the environment.
Decision Making: Decide which way to go and whether it is safe to proceed based on the detections. This includes applying a decision-making stack to determine priority and right of way.
Control: Steer the Duckiebot through the intersection.
The pipeline is implemented in Python and uses the ROS framework to communicate with the Duckiebot and other nodes in the system.
Project Overview
Scope
Detect intersections in Duckietown.
Detect other Duckiebots in the intersection.
Decide whether to stop, go, or turn based on other agents, using LED colors for communication.
Navigate the intersection by turning left, right, or going straight, depending on the intersection options.
Apply a decision-making stack to determine priority and right of way.
Assumptions
All sensors on the Duckiebots are assumed to be fully functional. The intersections are expected to be of standard size, with standard markings that are clearly visible, and without any obstructions such as buildings. Additionally, the Duckiebots are assumed to be of standard size and shape.
Finally, the code for the lane following is given by the instructors as it is part of the Duckietown software stack.
Challenges
The project faces several challenges that could lead to failure. One major challenge is the presence of multiple Duckiebots at an intersection, which can create symmetry issues and complicate decision-making. Delayed decision-making can also pose a risk, as it may lead to collisions or traffic jams. The limited field of view of the Duckiebots can hinder their ability to detect other robots and obstacles in time. LED detection issues can further complicate communication between Duckiebots. Additionally, random component failures can disrupt the navigation process. To mitigate these risks, we implemented a robust priority system and strategies to improve field of view, such as detecting Duckiebots while approaching intersections and turning in place to get a better view. We also assume that there is always a Duckiebot on the left and make random decisions after a certain time to prevent deadlocks at intersections.
Implementation details and results
The implementation of our intersection navigation project involved creating custom classes and functions to handle various tasks such as intersection detection, decision making, and control. The Duckiebot starts by following the lane and uses its camera to detect intersections by identifying red markers. Upon detecting an intersection, it stops and randomly chooses an action (straight, left, or right) based on the intersection type. The Duckiebot then signals its intended action using LEDs and checks for other Duckiebots at the intersection using a custom-trained YOLOv5 object detection model. This model provided reliable detection of other Duckiebots, which was crucial for the priority decision-making process. The Duckiebot follows standard traffic rules to determine right-of-way and uses motor encoders to execute the chosen action through the intersection.
Perception
The perception module is responsible for detecting intersections and other Duckiebots in the environment. We used the Duckietown lane-following code to detect intersections based on the presence of red markers. The intersection detection algorithm was implemented using OpenCV to identify the red markers and determine the intersection type (T-intersection or 4-way intersection) and the possible options for the Duckiebot to navigate to.

We also trained a custom YOLOv5 object detection model to detect other Duckiebots at the intersection. The model was trained on a dataset of Duckiebot images and achieved high accuracy in detecting Duckiebots in various orientations and lighting conditions. The alternative was to use the LEDs on the Duckiebots to communicate with each other, but we decided to use the object detection model for more reliable results, as the LED strength could vary depending on the lighting conditions, and on most robots only one LED was working. We then ran LED detection inside the bounding box of each detected Duckiebot to determine the color of the LED and the direction the Duckiebot was going to take. This information was used in the decision-making module to determine the Duckiebot’s next action.

To determine where the other Duckiebots were, we used their bounding boxes in camera pixel coordinates to infer their position relative to our Duckiebot. This information was used in the decision-making module to determine the Duckiebot’s priority and right of way.
“+” Intersection detection
YOLO v5 Detection
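A simplified sketch of this perception step is shown below. The weights file name, confidence threshold, and HSV colour ranges are illustrative placeholders; the actual ROS node code, topics, and thresholds we used differ.

```python
import cv2
import numpy as np
import torch

# Load a custom-trained YOLOv5 Duckiebot detector (weights path is illustrative)
model = torch.hub.load("ultralytics/yolov5", "custom", path="duckiebot_yolov5.pt")


def detect_duckiebots(frame_bgr):
    """Return (bounding box, dominant LED colour) pairs for each detected Duckiebot."""
    results = model(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    detections = []
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        if conf < 0.5:
            continue
        box = (int(x1), int(y1), int(x2), int(y2))
        roi = frame_bgr[box[1]:box[3], box[0]:box[2]]
        detections.append((box, led_colour(roi)))
    return detections


def led_colour(roi_bgr):
    """Classify the dominant LED colour inside a bounding box (illustrative HSV ranges)."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    masks = {
        "red": cv2.inRange(hsv, (0, 120, 120), (10, 255, 255)),
        "green": cv2.inRange(hsv, (45, 120, 120), (75, 255, 255)),
        "blue": cv2.inRange(hsv, (100, 120, 120), (130, 255, 255)),
    }
    return max(masks, key=lambda c: int(np.count_nonzero(masks[c])))


def relative_position(box, image_width):
    """Roughly infer where the other Duckiebot is from its bounding-box centre (illustrative)."""
    cx = (box[0] + box[2]) / 2
    if cx < image_width / 3:
        return "left"
    if cx > 2 * image_width / 3:
        return "right"
    return "opposite"
```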
Decision Making
The decision-making module is responsible for determining the Duckiebot’s next action based on the detected intersections and other Duckiebots. Once the different options were detected, the Duckiebot randomly chose an action (straight, left, or right) based on the intersection type.
We implemented a priority system to handle multiple Duckiebots at an intersection and ensure safe navigation. The priority system assigns right of way based on the Duckiebot’s position relative to the other Duckiebots. The Duckiebot signals its intended action using LEDs to communicate with other Duckiebots and avoid collisions; this is used in complex cases where right of way alone is not sufficient. In the simplest case, the Duckiebot just waits at the stop until the Duckiebot to its right has passed. At a 4-way intersection, it signals its intention to go straight, left, or right using the LEDs; at a T-intersection, it signals its intention to go straight or turn. The decision-making module also includes a tie-breaking mechanism to resolve conflicts when multiple Duckiebots have the same priority: in these cases, the Duckiebot randomly chooses an action to prevent deadlocks and ensure smooth traffic flow. The module was implemented using a combination of if-else statements and priority rules to determine the Duckiebot’s next action based on the detected intersections and other Duckiebots. The priority system was designed to handle various scenarios and ensure safe and efficient navigation through intersections; however, it was not fully completed and tested during the project, as time was limited.
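The sketch below illustrates the flavour of these priority rules. The function names, the simplified right-of-way check, and the timeout value are stand-ins for our actual decision-making node, not its real implementation.

```python
import random
import time


def choose_action(options):
    """Randomly pick one of the actions allowed by the detected intersection type."""
    return random.choice(options)  # e.g. ["straight", "left", "right"] at a 4-way


def may_proceed(other_duckiebots, wait_start, timeout_s=10.0):
    """Simplified right of way: yield to a Duckiebot on the right, break ties after a timeout."""
    if not other_duckiebots:
        return True
    if any(bot["position"] == "right" for bot in other_duckiebots):
        # A Duckiebot to the right has priority; keep waiting until the timeout
        if time.time() - wait_start < timeout_s:
            return False
    # Tie-break (or timeout) with a random decision to avoid deadlocks
    return random.random() < 0.5
```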
Control
Due to the limited time available for the project, we couldn’t implement a full estimation and control pipeline for the Duckiebots. Instead, we decided to opt for a brute force approach by calculating the inputs needed to achieve the desired action using open loop control. This was sufficient in most cases, and the lane following module was able to take over just at the end of the intersection to compensate for potential small errors and go back on track.
Additionally, to mitigate the effect of misalignment of the Duckiebot when approaching the intersection, we added a small alignment step before the intersection, where the Duckiebot would turn in place to get a better view of the intersection and align itself with the lanes. By matching the detected intersection against a template, we were able to ensure the Duckiebot was straight when scanning the intersection, improving both the detection accuracy and the intersection navigation itself thanks to a more standardized starting pose.
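The open-loop manoeuvres were essentially of the following form: fixed wheel commands applied for a precomputed duration. The speeds, durations, and the `send_wheel_command` callback below are placeholders for illustration, not the values or interface we actually used.

```python
import time

# Placeholder open-loop profiles: (left wheel speed, right wheel speed, duration in seconds)
MANOEUVRES = {
    "straight": (0.4, 0.4, 2.0),
    "left": (0.2, 0.5, 2.5),     # wider arc through the intersection
    "right": (0.5, 0.2, 1.5),    # tighter arc
}


def execute_manoeuvre(action, send_wheel_command):
    """Drive through the intersection open loop, then hand back to lane following."""
    v_left, v_right, duration = MANOEUVRES[action]
    t_start = time.time()
    while time.time() - t_start < duration:
        send_wheel_command(v_left, v_right)
        time.sleep(0.05)          # ~20 Hz command rate
    send_wheel_command(0.0, 0.0)  # stop; lane following takes over from here
```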
Results
In terms of results, our systematic evaluation showed an intersection detection accuracy of approximately 90%, a turn completion rate of around 85%, and a Duckiebot detection accuracy of about 95%. However, we encountered some challenges, with crashes occurring about 10% of the time and off-road occurrences happening roughly 40% of the time, often due to camera delays, motor issues, or other hardware problems. These problems also arose due to the code running on our own laptops rather than the Duckiebot itself, which could have affected the real-time performance. Despite these challenges, our project demonstrated a successful implementation of intersection navigation for Duckiebots, and we received very positive feedback from our peers during the final presentation.
Demonstration Videos
You can watch a demonstration of the intersection navigation system in action with the following GIFs:
Single Duckiebot navigating through an intersection
Two Duckiebots navigating through an intersection
Three Duckiebots navigating through an intersection
For the full videos, with realistic performance, you can look in the folder /videos in the repository.
Note: As discussed in the challenges section, the videos show the code running on our laptops rather than on the Duckiebots themselves, which could have affected real-time performance. This affected the controls sent to the Duckiebots and the camera feed, leading to some crashes and off-road occurrences.
Additionally, the videos also show the sometimes inaccurate lane-following code, which was out of scope for this project and was provided by the instructors, as stated in our assumptions.
Conclusion and Future Work
In conclusion, our project successfully implemented an intersection navigation system for Duckiebots, achieving high accuracy in intersection detection and Duckiebot recognition. Despite hardware and software integration challenges, we demonstrated the feasibility of autonomous intersection navigation in Duckietown. The project met our initial goals, although the combined execution of actions revealed areas for improvement, particularly in handling delays and hardware reliability.
For future work, several extensions could enhance the Duckiebots’ capabilities. Developing a more robust tie-breaking mechanism for four-way intersections and ensuring the system can handle non-compliant or emergency Duckiebots would improve reliability. Implementing traffic light-controlled intersections and enabling multiple Duckiebots to navigate intersections simultaneously with minimal constraints on traffic density would significantly advance the system’s complexity and utility. Better integration of the code into the component framework would streamline development and debugging processes.
Achieving these improvements would require substantial effort, particularly in enhancing hardware reliability and refining the software framework. Despite the challenges, the potential advancements would unlock new skills for the Duckiebots, making them more versatile and capable in complex environments. Given the limited time we had for this project, we would have liked to have more time to work on these aspects as the schedule was quite tight.
Overall, we are satisfied with our project’s outcomes and the learning experience it provided. The insights gained will inform future developments and contribute to the broader field of autonomous robotics.
Design Document
The design document for the project can be found in the /design_document folder. It contains a PDF exported from the Word document that we filled in throughout our work, outlining the design choices, implementation details, and challenges faced during the project.