Category: Blog

  • 42_libft

    42 libft


    42 libft is the first project of the common core. It has the student recreate a set of standard C library functions, plus some additional functions that will be useful throughout the cursus.

    If you’re from 42 and you just started libft, I highly recommend using this repository only as support and developing your own functions and tests. If you need help, you can send me a message on any of my socials.

    Standard C Library

    | Function | Description | Status | Francinette |
    | --- | --- | --- | --- |
    | ft_isalpha | Checks if the char received is a letter | ✔️ | ✔️ |
    | ft_isdigit | Checks if the char received is a digit | ✔️ | ✔️ |
    | ft_isalnum | Checks if the char received is alphanumeric | ✔️ | ✔️ |
    | ft_isascii | Checks if the char received is an ASCII char | ✔️ | ✔️ |
    | ft_isprint | Checks if the char received is printable | ✔️ | ✔️ |
    | ft_strlen | Returns the length of the string received | ✔️ | ✔️ |
    | ft_memset | Fills a block of memory with a particular value | ✔️ | ✔️ |
    | ft_bzero | Erases the data in a given block of memory | ✔️ | ✔️ |
    | ft_memcpy | Copies n bytes from source to destination | ✔️ | ✔️ |
    | ft_memmove | Copies n bytes from source to destination, handling overlapping memory | ✔️ | ✔️ |
    | ft_strlcpy | Copies from src to dest and returns the length of the source string | ✔️ | ✔️ |
    | ft_strlcat | Appends src to dest and returns the total length of the string it tried to create | ✔️ | ✔️ |
    | ft_toupper | Converts the lowercase char received to uppercase | ✔️ | ✔️ |
    | ft_tolower | Converts the uppercase char received to lowercase | ✔️ | ✔️ |
    | ft_strchr | Returns the first occurrence of a char in the string | ✔️ | ✔️ |
    | ft_strrchr | Returns the last occurrence of a char in the string | ✔️ | ✔️ |
    | ft_strncmp | Compares the given strings up to n characters | ✔️ | ✔️ |
    | ft_memchr | Searches the first n bytes of a memory block for the first occurrence of the value received | ✔️ | ✔️ |
    | ft_memcmp | Compares the first n bytes of the memory areas str1 and str2 | ✔️ | ✔️ |
    | ft_strnstr | Returns the first occurrence of the little string in the big string | ✔️ | ✔️ |
    | ft_atoi | Converts the string received to its int value | ✔️ | ✔️ |
    | ft_calloc | Allocates a memory block of the size received and initializes it to zero | ✔️ | ✔️ |
    | ft_strdup | Duplicates the string received into a newly allocated string | ✔️ | ✔️ |

    Additional functions

    | Function | Description | Status | Francinette |
    | --- | --- | --- | --- |
    | ft_substr | Returns an allocated substring of the string received, starting at the given index | ✔️ | ✔️ |
    | ft_strjoin | Returns a new allocated string that is the concatenation of the two strings received | ✔️ | ✔️ |
    | ft_strtrim | Returns a copy of the string received with the given characters removed from its beginning and end | ✔️ | ✔️ |
    | ft_split | Splits the string at the given character and returns the resulting array of strings | ✔️ | ✔️ |
    | ft_itoa | Converts the int value received to its string representation | ✔️ | ✔️ |
    | ft_strmapi | Applies the function received to each character of the string, creating a new allocated string with the changes | ✔️ | ✔️ |
    | ft_striteri | Applies the function received to each character of the string, modifying the string in place | ✔️ | ✔️ |
    | ft_putchar_fd | Writes the char received to the given file descriptor | ✔️ | ✔️ |
    | ft_putstr_fd | Writes the string received to the given file descriptor | ✔️ | ✔️ |
    | ft_putendl_fd | Writes the string received to the given file descriptor, followed by a newline | ✔️ | ✔️ |
    | ft_putnbr_fd | Writes the number received to the given file descriptor | ✔️ | ✔️ |

    Bonus functions

    | Function | Description | Status | Francinette |
    | --- | --- | --- | --- |
    | ft_lstnew | Creates and returns a new allocated node for a linked list | ✔️ | ✔️ |
    | ft_lstadd_front | Adds the node received to the beginning of a linked list | ✔️ | ✔️ |
    | ft_lstsize | Returns the number of nodes in a linked list | ✔️ | ✔️ |
    | ft_lstlast | Returns the last node of a linked list | ✔️ | ✔️ |
    | ft_lstadd_back | Adds the node received to the end of a linked list | ✔️ | ✔️ |
    | ft_lstdelone | Receives a node, deletes its content, and frees the node | ✔️ | ✔️ |
    | ft_lstclear | Deletes and frees the given node and every successor of that node | ✔️ | ✔️ |
    | ft_lstiter | Applies the function received to the content of every node in the list | ✔️ | ✔️ |
    | ft_lstmap | Applies the function received to the content of every node and creates a new linked list from the results | ✔️ | ✔️ |
  • TensorRT-v8-YOLOv5-v5.0

    TensorRT v8.2 Accelerated Deployment of YOLOv5-v5.0

    Project Introduction

    • Builds the YOLO network with the native TensorRT API and converts the PyTorch model into a serialized .plan file to accelerate inference;
    • Based on TensorRT 8.2.4; see the Environment Setup section below for the exact environment;
    • Mainly references the tensorrtx project, but the author has made extensive changes according to their own coding habits;
    • Link to the variant of this project without CUDA-accelerated image preprocessing: no_cuda_preproc

    Project Features

    • The table below compares this project with the YOLOv5-v5.0 implementation in tensorrtx. This is not a claim that either is better; some choices simply fit the author’s personal habits.

    | # | tensorrtx | This project | Notes |
    | --- | --- | --- | --- |
    | 1 | Implicit batch | Explicit batch | The biggest difference; many differences in the code stem from this |
    | 2 | Detect plugin inherits from IPluginV2IOExt | Detect plugin inherits from IPluginV2DynamicExt | |
    | 3 | Detect plugin is compiled into a shared library | Detect plugin is compiled directly into the final executable | |
    | 4 | Asynchronous inference (context.enqueue) | Synchronous inference (context.executeV2) | In the author’s tests the speed is identical, and the synchronous form is simpler |
    | 5 | For INT8 quantization, images are converted to tensors with OpenCV’s dnn module | For INT8 quantization, images are converted to tensors with a custom method | |
    | 6 | Preprocessing implemented in C++ with OpenCV | Preprocessing accelerated with CUDA kernels | Versions after v5.0 also have this; these are two different implementations |

    Beyond the points above there are many other differences in the code that are not listed one by one.

    Inference Speed

    • GPU: GeForce RTX 2080 Ti

    | FP32 | FP16 | INT8 |
    | --- | --- | --- |
    | 6 ms | 3 ms | 3 ms |

    Note: the inference time of this project includes preprocessing, the forward pass, and postprocessing, while the tensorrtx project only measures the forward pass.

    Environment Setup

    Host Machine Environment

    • Ubuntu 16.04
    • GPU: GeForce RTX 2080 Ti
    • docker, nvidia-docker

    Pull the Base Image

    docker pull nvcr.io/nvidia/tensorrt:22.04-py3
    • The versions inside this image are as follows:

    | CUDA | cuDNN | TensorRT | Python |
    | --- | --- | --- | --- |
    | 11.6.2 | 8.4.0.27 | 8.2.4.2 | 3.8.10 |

    Install Other Libraries

    1. Create the Docker container

      docker run -it --gpus device=0 --shm-size 32G -v /home:/workspace nvcr.io/nvidia/tensorrt:22.04-py3 bash

      Here -v /home:/workspace mounts the host’s /home directory into the container to make exchanging files easier; any other directory can be used as well.

      • Switch the container’s apt sources to a domestic (Chinese) mirror

      cd /etc/apt
      rm sources.list
      vim sources.list
      • Copy the following content into the file sources.list

      deb http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
      deb http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
      deb http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
      deb http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
      deb http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
      deb-src http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
      deb-src http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
      deb-src http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
      deb-src http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
      deb-src http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
      • Update the package lists
      apt update
    2. Install OpenCV 4.5.0

      • The OpenCV 4.5.0 source code is linked below; download the zip, extract it, and put it in the host’s /home directory, i.e. the container’s /workspace directory
      https://github.com/opencv/opencv
      • All of the following operations are performed inside the container

      # Install dependencies
      apt install build-essential
      apt install libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
      apt install libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libdc1394-22-dev
      # Build and install OpenCV
      cd /workspace/opencv-4.5.0
      mkdir build
      cd build
      cmake -D CMAKE_INSTALL_PREFIX=/usr/local -D CMAKE_BUILD_TYPE=Release -D OPENCV_GENERATE_PKGCONFIG=ON -D OPENCV_ENABLE_NONFREE=True ..
      make -j6
      make install

    Run the Project

    1. Generate the .wts file
    • Overview: copy this project’s pth2wts.py into the official yolov5-v5.0 directory, run python pth2wts.py there, and a para.wts file will be produced
    • The detailed steps are as follows (a sketch of the .wts text format appears after step 2 below)

    git clone -b v5.0 https://github.com/ultralytics/yolov5.git
    git clone https://github.com/emptysoal/yolov5-v5.0_tensorrt-v8.2.git
    # download https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5s.pt
    cp {tensorrt}/pth2wts.py {ultralytics}/yolov5
    cd {ultralytics}/yolov5
    python pth2wts.py
    # a file 'para.wts' will be generated.
    2. Build the .plan serialized file and run inference
    • Overview: copy the para.wts file generated in the previous step into this project’s directory, then run make followed by ./trt_infer in this project
    • The detailed steps are as follows

    cp {ultralytics}/yolov5/para.wts {tensorrt}/
    cd {tensorrt}/
    mkdir images  # and put some images in it
    # update CLASS_NUM in yololayer.h if your model is trained on custom dataset
    # you can also update INPUT_H, INPUT_W in yololayer.h, update NET(s/m/l/x) in trt_infer.cpp
    make
    ./trt_infer
    # result images will be generated in present dir
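
    For reference, the .wts file here is presumably the tensorrtx-style plain-text format: the first line holds the number of tensors, then one line per tensor with its name, element count, and hex-encoded float32 values. A minimal Python sketch of such a conversion is shown below; it only illustrates that format, and the pth2wts.py shipped with this repository may differ in its details.

      import struct

      import torch

      def save_wts(model: torch.nn.Module, path: str = "para.wts") -> None:
          # Write weights in the tensorrtx-style .wts layout:
          # line 1: number of tensors; then "<name> <count> <hex float32 ...>" per tensor.
          state = model.state_dict()
          with open(path, "w") as f:
              f.write(f"{len(state)}\n")
              for name, tensor in state.items():
                  values = tensor.reshape(-1).cpu().numpy()
                  f.write(f"{name} {len(values)}")
                  for v in values:
                      f.write(" " + struct.pack(">f", float(v)).hex())
                  f.write("\n")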


  • Intersection-navigation-for-Duckietown

    Intersection Navigation for Duckietown

    As part of the Duckietown class taught at ETH Zurich (Fall 2023), we worked on a small final project and presented it to other students as a group of three master students: Benjamin Dupont, Yanni Kechriotis, and Samuel Montorfani. We implemented an intersection navigation pipeline for the Duckiebots (small autonomous differential drive robots equipped with Nvidia Jetson Nano) to enable them to drive through intersections in the Duckietown road-like environment.

    The pipeline consists of:

    1. Perception: Detect intersections and other Duckiebots in the environment.
    2. Decision Making: Decide which way to go and whether it is safe to proceed based on the detections. This includes applying a decision-making stack to determine priority and right of way.
    3. Control: Steer the Duckiebot through the intersection.

    The pipeline is implemented in Python and uses the ROS framework to communicate with the Duckiebot and other nodes in the system.

    Intersection Navigation

    Project Overview

    Scope

    • Detect intersections in Duckietown.
    • Detect other Duckiebots in the intersection.
    • Decide whether to stop, go, or turn based on other agents, using LED colors for communication.
    • Navigate the intersection by turning left, right, or going straight, depending on the intersection options.
    • Apply a decision-making stack to determine priority and right of way.

    Assumptions

    All sensors on the Duckiebots are assumed to be fully functional. The intersections are expected to be of standard size, with standard markings that are clearly visible, and without any obstructions such as buildings. Additionally, the Duckiebots are assumed to be of standard size and shape. Finally, the code for the lane following is given by the instructors as it is part of the Duckietown software stack.

    Challenges

    The project faces several challenges that could lead to failure. One major challenge is the presence of multiple Duckiebots at an intersection, which can create symmetry issues and complicate decision-making. Delayed decision-making can also pose a risk, as it may lead to collisions or traffic jams. The limited field of view of the Duckiebots can hinder their ability to detect other robots and obstacles in time. LED detection issues can further complicate communication between Duckiebots. Additionally, random component failures can disrupt the navigation process. To mitigate these risks, we implemented a robust priority system and strategies to improve field of view, such as detecting Duckiebots while approaching intersections and turning in place to get a better view. We also assume that there is always a Duckiebot on the left and make random decisions after a certain time to prevent deadlocks at intersections.

    Implementation details and results

    The implementation of our intersection navigation project involved creating custom classes and functions to handle various tasks such as intersection detection, decision making, and control. The Duckiebot starts by following the lane and uses its camera to detect intersections by identifying red markers. Upon detecting an intersection, it stops and randomly chooses an action (straight, left, or right) based on the intersection type. The Duckiebot then signals its intended action using LEDs and checks for other Duckiebots at the intersection using a custom-trained YOLOv5 object detection model. This model provided reliable detection of other Duckiebots, which was crucial for the priority decision-making process. The Duckiebot follows standard traffic rules to determine right-of-way and uses motor encoders to execute the chosen action through the intersection.

    Perception

    The perception module is responsible for detecting intersections and other Duckiebots in the environment. We used the Duckietown lane following code to detect intersections based on the presence of red markers. The intersection detection algorithm was implemented using OpenCV to identify the red markers and determine the intersection type (T-intersection or 4-way intersection) and the possible options for the Duckiebot to navigate to. We also trained a custom YOLOv5 object detection model to detect other Duckiebots at the intersection. The model was trained on a dataset of Duckiebot images and achieved high accuracy in detecting Duckiebots in various orientations and lighting conditions. The alternative was to use the LEDs on the Duckiebots to communicate with each other, but we opted for the object detection model for more reliable results, as the LED strength could vary depending on the lighting conditions, and on most robots only one LED was working. We then ran the LED detection in the bounding box of each detected Duckiebot to determine the color of its LED and the direction it was going to take. This information was used in the decision-making module to determine the Duckiebot’s next action. To determine the positions of the other Duckiebots, we used their bounding boxes in camera pixel coordinates to infer their position relative to our Duckiebot. This information was used in the decision-making module to determine the Duckiebot’s priority and right of way.
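
    As an illustration of the red-marker check described above, a minimal OpenCV sketch is shown below. The HSV thresholds, the region of interest, and the area threshold are assumptions for the sketch, not the exact values used in the project.

      import cv2
      import numpy as np

      def red_marker_visible(bgr_image: np.ndarray, min_area: int = 500) -> bool:
          """Rough check for the red stop markers that indicate an intersection."""
          hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
          # Red wraps around the hue axis, so combine two hue ranges (placeholder thresholds).
          mask_low = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255))
          mask_high = cv2.inRange(hsv, (170, 80, 80), (180, 255, 255))
          mask = cv2.bitwise_or(mask_low, mask_high)
          # Only keep the lower half of the frame, where the stop line appears.
          mask[: mask.shape[0] // 2, :] = 0
          return int(cv2.countNonZero(mask)) > min_area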

    Plus Intersection
    “+” Intersection detection

    YOLO Detection
    YOLO v5 Detection

    Decision Making

    The decision-making module is responsible for determining the Duckiebot’s next action based on the detected intersections and other Duckiebots. Once the available options were detected, the Duckiebot randomly chose an action (straight, left, or right) based on the intersection type. We implemented a priority system to handle multiple Duckiebots at an intersection and ensure safe navigation. The priority system assigns right-of-way based on the Duckiebot’s position relative to the other Duckiebots. The Duckiebot signals its intended action using LEDs to communicate with other Duckiebots and avoid collisions; this is used in complex cases where right of way alone is not sufficient. In the simplest case, the Duckiebot just stays at the stop until the Duckiebot to its right has passed. At a 4-way intersection it signals its intention to go straight, left, or right using the LEDs; at a T-intersection it signals its intention to go straight or turn. The decision-making module also includes a tie-breaking mechanism to resolve conflicts when multiple Duckiebots have the same priority: in these cases, the Duckiebot randomly chooses an action to prevent deadlocks and ensure smooth traffic flow. The module was implemented using a combination of if-else statements and priority rules to determine the Duckiebot’s next action based on the detected intersections and other Duckiebots. The priority system was designed to handle various scenarios and ensure safe and efficient navigation through intersections; however, it was not fully completed and tested during the project, as time was limited.
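
    A simplified sketch of this kind of priority logic is shown below. The position labels, the waiting threshold, and the tie-breaking rule are illustrative assumptions, not the project’s exact decision stack.

      import random

      def decide_action(available_turns, other_bots, wait_time, max_wait=10.0):
          """Pick a turn and decide whether it is safe to proceed.

          available_turns: e.g. ["straight", "left", "right"], from intersection detection
          other_bots: list of dicts such as {"position": "right", "intention": "straight"}
          wait_time: seconds already spent waiting at the stop line
          """
          intended = random.choice(available_turns)
          # Standard right-of-way: yield to a Duckiebot approaching from the right.
          must_yield = any(bot["position"] == "right" for bot in other_bots)
          if must_yield and wait_time < max_wait:
              return intended, "wait"
          if must_yield:
              # Tie-break after waiting too long, to avoid deadlocks at symmetric intersections.
              return intended, "go" if random.random() < 0.5 else "wait"
          return intended, "go"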

    Control

    Due to the limited time available for the project, we couldn’t implement a full estimation and control pipeline for the Duckiebots. Instead, we opted for a brute-force approach, calculating the inputs needed to achieve the desired action using open-loop control. This was sufficient in most cases, and the lane following module was able to take over at the end of the intersection to compensate for small errors and get back on track. Additionally, to mitigate the effect of misalignment of the Duckiebot when approaching the intersection, we added a small alignment step before the intersection, where the Duckiebot would turn in place to get a better view of the intersection and align itself with the lanes. By matching the detected intersection against a template, we could ensure the Duckiebot was straight when scanning the intersection, which improved not only the detection accuracy but also the intersection navigation itself thanks to a more standardized starting pose.
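
    For example, an open-loop maneuver of the sort described above could be commanded roughly as follows. The wheel speeds, durations, and the send_wheel_command callback are placeholders, not the project’s calibrated values or actual ROS interface.

      import time

      # Placeholder open-loop commands: (left wheel speed, right wheel speed, duration in seconds)
      MANEUVERS = {
          "straight": (0.4, 0.4, 2.0),
          "left": (0.2, 0.5, 2.5),
          "right": (0.5, 0.2, 1.5),
      }

      def drive_through_intersection(action, send_wheel_command):
          """send_wheel_command(left, right) is assumed to publish to the wheel driver."""
          left, right, duration = MANEUVERS[action]
          start = time.time()
          while time.time() - start < duration:
              send_wheel_command(left, right)
              time.sleep(0.05)
          send_wheel_command(0.0, 0.0)  # stop; lane following takes over from here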

    Results

    In terms of results, our systematic evaluation showed an intersection detection accuracy of approximately 90%, a turn completion rate of around 85%, and a Duckiebot detection accuracy of about 95%. However, we encountered some challenges, with crashes occurring about 10% of the time and off-road occurrences happening roughly 40% of the time, often due to camera delays, motor issues, or other hardware problems. These problems also arose due to the code running on our own laptops rather than the Duckiebot itself, which could have affected the real-time performance. Despite these challenges, our project demonstrated a successful implementation of intersection navigation for Duckiebots, and we received very positive feedback from our peers during the final presentation.

    Demonstration Videos

    You can watch a demonstration of the intersection navigation system in action with the following GIFs:

    Single Duckiebot Navigation
    Single Duckiebot navigating through an intersection

    Two Duckiebots Navigation
    Two Duckiebots navigating through an intersection

    Three Duckiebots Navigation
    Three Duckiebots navigating through an intersection

    For the full videos, with realistic performance, you can look in the folder /videos in the repository.

    Note: As discussed in the challenges section, the videos were recorded with the code running on our laptops rather than on the Duckiebots themselves, which could have affected the real-time performance. This affected the controls sent to the Duckiebots and the camera feed, leading to some crashes and off-road occurrences. Additionally, the videos also show the sometimes inaccurate lane-following code, which was out of scope and provided by the instructors, as noted in the project assumptions.

    Conclusion and Future Work

    In conclusion, our project successfully implemented an intersection navigation system for Duckiebots, achieving high accuracy in intersection detection and Duckiebot recognition. Despite hardware and software integration challenges, we demonstrated the feasibility of autonomous intersection navigation in Duckietown. The project met our initial goals, although the combined execution of actions revealed areas for improvement, particularly in handling delays and hardware reliability.

    For future work, several extensions could enhance the Duckiebots’ capabilities. Developing a more robust tie-breaking mechanism for four-way intersections and ensuring the system can handle non-compliant or emergency Duckiebots would improve reliability. Implementing traffic light-controlled intersections and enabling multiple Duckiebots to navigate intersections simultaneously with minimal constraints on traffic density would significantly advance the system’s complexity and utility. Better integration of the code into the component framework would streamline development and debugging processes.

    Achieving these improvements would require substantial effort, particularly in enhancing hardware reliability and refining the software framework. Despite the challenges, the potential advancements would unlock new skills for the Duckiebots, making them more versatile and capable in complex environments. Given the limited time we had for this project, we would have liked to have more time to work on these aspects as the schedule was quite tight.

    Overall, we are satisfied with our project’s outcomes and the learning experience it provided. The insights gained will inform future developments and contribute to the broader field of autonomous robotics.

    Design Document

    The design document for the project can be found in the /design_document folder. It contains a pdf document exported from the word document that we filled in throughout our work, outlining the design choices, implementation details, and challenges faced during the project.

  • rclone-rc

    rclone-rc

    A fully type-safe TypeScript API client for Rclone’s Remote Control (RC) interface, powered by @ts-rest and Zod.

    Tested with Rclone v1.70.0

    ⚠️ Work in Progress

    This library is currently under active development. Check out the current status for a list of implemented commands.

    Consider contributing if you need a specific command:

    1. Check src/api/index.ts for current implementation
    2. Add your needed command following the same pattern
    3. Open a Pull Request

    ✨ Features

    • 🔒 Fully Type-Safe: End-to-end type safety for all API calls, including async operations
    • 📄 OpenAPI Support: Generated spec for integration with any language/client
    • 🧩 Framework Agnostic: Works with any fetch client
    • 🚀 Async Operations: First-class support for Rclone’s async operations
    • ✅ Runtime Validation: Uses Zod to validate types at runtime
    • 💪 HTTP Status Handling: Error responses handled through typed status codes

    Installation

    # Using npm
    npm install rclone-rc
    
    # Using yarn
    yarn add rclone-rc
    
    # Using pnpm
    pnpm add rclone-rc

    Usage

    Basic Client

    import { createClient } from 'rclone-rc';
    
    const api = createClient({
      baseUrl: 'http://localhost:5572',
      username: 'your-username', // Optional if running with --rc-no-auth
      password: 'your-password', // Optional if running with --rc-no-auth
    });
    
    try {
      // Get rclone version with typed response
      const { status, body } = await api.version();
    
      if (status === 200) {
        console.log('Rclone version:', body.version); // typed
      } else if (status === 500) {
        console.log('Error:', body.error); // also typed
      }
    
      // List files with type-safe parameters and response
      const files = await api.list({
        body: { fs: 'remote:path', remote: '' }
      });
    
      if (files.status === 200) {
        console.log('Files:', files.body.list);
      }
    } catch (error) {
      // Only network errors will throw exceptions
      console.error('Network error:', error);
    }

    Error Handling

    This library handles errors in two ways:

    1. HTTP Status Errors: Returned as typed responses with appropriate status codes
    2. Network Errors: Thrown as exceptions when server is unreachable

    Async Operations

    For long-running operations:

    import { createClient, createAsyncClient } from 'rclone-rc';
    
    const api = createClient({ baseUrl: 'http://localhost:5572' });
    const asyncApi = createAsyncClient({ baseUrl: 'http://localhost:5572' });
    
    try {
      // Start async job
      const job = await asyncApi.list({
        body: {
          fs: 'remote:path',
          remote: '',
          _async: true, // You need to pass this flag to the async client
        }
      });
    
      // Access job ID and check status
      const jobId = job.body.jobid;
      // Check job status using the non-async client
      const status = await api.jobStatus({ body: { jobid: jobId } });
    
      if (status.status === 200 && status.body.finished) {
        console.log('Job output:', status.body.output);
      }
    } catch (error) {
      console.error('Network error:', error);
    }

    Runtime Type Validation

    Zod validates both request and response types at runtime:

    • Request validation: Parameters, body, and query are validated before sending
    • Response validation: Can be disabled with validateResponse: false in client options

      const api = createClient({
        baseUrl: 'http://localhost:5572',
        validateResponse: false, // true by default
      });

    OpenAPI Integration

    Generate an OpenAPI specification for use with other languages and tools:

    import { generateOpenApi } from '@ts-rest/open-api';
    import { rcloneContract } from 'rclone-rc';
    
    const openApiDocument = generateOpenApi(rcloneContract, {
      info: { title: 'Rclone RC API', version: '1.0.0' }
    });

    Access the raw OpenAPI specifications at:

    Development

    pnpm install     # Install dependencies
    pnpm build       # Build the project
    pnpm test        # Run tests
    pnpm lint        # Lint code
    pnpm format      # Format code
    pnpm openapi     # Generate OpenAPI spec

    Requirements

    • Node.js 18+
    • TypeScript 5.0+

    License

    MIT


  • FAST_Anime_VSR

    FAST Anime VSRR (Video Super-Resolution and Restoration)

    This repository is dedicated to enhancing the Super-Resolution (SR) inference process for Anime videos by fully harnessing the potential of your GPU. It is built upon the foundations of Real-CuGAN (https://github.com/bilibili/ailab/blob/main/Real-CUGAN/README_EN.md) and Real-ESRGAN (https://github.com/xinntao/Real-ESRGAN).

    I’ve implemented the SR process using TensorRT, incorporating a custom frame division algorithm designed to accelerate it. This algorithm includes a video redundancy jump mechanism, akin to video compression Inter-Prediction, and a momentum mechanism.

    Additionally, I’ve employed FFMPEG to decode the video at a reduced frames-per-second (FPS) rate, facilitating faster processing with an almost imperceptible drop in quality. To further optimize performance, I’ve utilized both multiprocessing and multithreading techniques to fully utilize all available computational resources.

    For a more detailed understanding of the implementation and algorithms used, I invite you to refer to this presentation slide: https://docs.google.com/presentation/d/1Gxux9MdWxwpnT4nDZln8Ip_MeqalrkBesX34FVupm2A/edit#slide=id.p.

    On my desktop 3060 Ti, it can process 480P Anime video input in real time (Real-CUGAN), which means that as soon as you finish watching one Anime video, the next Anime Super-Resolution (SR) video is already processed and ready for you to continue watching with a single click.

    Currently, this repository supports Real-CUGAN (official) and a shallow Real-ESRGAN (the 6-block Anime image version of RRDB-Net provided by Real-ESRGAN).

      
    My ultimate goal is to directly utilize decoding information from the video codec, as in this paper (https://arxiv.org/abs/1603.08968), which is why the word “FAST” appears in the name. Though this repository can already process in real time, it will be continuously maintained and developed.

    If you like this repository, you’re welcome to give it a star. Feel free to report any problems to me.
      

    Visual Improvement (Real-CUGAN)

    Before:
    compare1

    After 2X scaling:
    compare2   
      

    Model supported now:

    1. Real-CUGAN: The original model weight provided by BiliBili (from https://github.com/bilibili/ailab/tree/main)
    2. Real-ESRGAN: Using the Anime version of RRDB with 6 blocks (the full model has 23 blocks) (from https://github.com/xinntao/Real-ESRGAN/blob/master/docs/model_zoo.md#for-anime-images--illustrations)
    3. VCISR: A model I trained with my upcoming paper methods using Anime training datasets (https://github.com/Kiteretsu77/VCISR-official)

    Supported Devices and Python Version:

    1. Nvidia GPU with Cuda (Tested: 2060 Super, 3060Ti, 3090Ti, 4090)
    2. Tested on Python 3.10   
        

    Installation (Linux – Ubuntu):

    Skip steps 3 and 4 if you don’t want TensorRT, but it can increase the speed a lot and save a lot of GPU memory.

    1. Install CUDA. The following is how I install:

      • My Nvidia Driver in Ubuntu is installed by Software & Updates of Ubuntu (Nvidia server driver 525), and the cuda version in nvidia-smi is 12.0 in default, which is the driver API.
      • Next, I install CUDA from the official website (https://developer.nvidia.com/cuda-12-0-0-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=runfile_local). I install version 12.0 because the runtime API should not be newer than the driver API CUDA version (12.0 in nvidia-smi). I use runfile (local) because it is the easiest option.
        During the installation, leave the driver entry ([] Driver [] 525.65.01) unchecked to avoid installing a second driver.
      • After finishing the CUDA installation, we need to add its paths to the environment
            gedit ~/.bashrc
            // Add the following two at the end of the popped up file (The path may be different, please double check)
            export PATH=/usr/local/cuda-12.0/bin${PATH:+:${PATH}}
            export LD_LIBRARY_PATH=/usr/local/cuda-12.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
            // Save the file and execute the following in the terminal
            source ~/.bashrc
      • You should be able to run “nvcc --version” to confirm that CUDA is fully installed.
    2. Install CuDNN. The following is how I install:

    3. Install tensorrt

      • Download TensorRT 8.6 from https://developer.nvidia.com/nvidia-tensorrt-8x-download (the CUDA 12.0 Tar package is preferred)
      • Follow https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-tar to install the required wheels.
        Step 4 on that page looks like the following (don’t forget to replace YOUR_USERNAME with your home directory):
            gedit ~/.bashrc
            // Add the following at the end of the popped up file (The path may be different, please double check)
            export LD_LIBRARY_PATH=/home/YOUR_USERNAME/TensorRT-8.6.1.6/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
            // Save the file and execute the following in the terminal
            source ~/.bashrc
      • After finishing these steps, you should be able to “import tensorrt” in Python (start a new terminal to run this)
    4. Install torch2trt (Don’t directly use pip install torch2trt)

    5. Install basic libraries for python

          pip install -r requirements.txt

    Installation (Windows):

    Skip steps 3 and 4 if you don’t want TensorRT, but it can increase the speed a lot and save a lot of GPU memory.

    1. Install CUDA (e.g. https://developer.nvidia.com/cuda-downloads?)

    2. Install cuDNN (moving bin\ & include\ & lib\ is enough for now)

    3. Install tensorrt (Don’t directly use python install)

    4. Install torch2trt (Don’t directly use pip install torch2trt)

    5. Install basic libraries for python

          pip install -r requirements.txt

    Run (Inference):

    1. Adjust config.py to set up your settings. Usually, editing just the Frequently Edited Setting part is enough. Please follow the instructions there.

      • Edit process_num, full_model_num, and nt to match your GPU’s computation power (a hedged sketch of these settings appears after this list).
      • The input (inp_path) can be a single video or a folder containing several videos (the video format can vary as long as it is supported by ffmpeg); the output is in mp4 format by default.
    2. Run

           python main.py
      • The original cunet weight should be downloaded automatically, and the TensorRT-transformed weight should be generated automatically based on the input video’s height and width.
      • If this is the first time you transform a TensorRT weight, you may need to wait a while for the program to generate it.
      • If the input source has any external subtitles, they will also be extracted automatically and merged back into the processed video at the end.
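
    A hedged sketch of what the frequently edited part of config.py might contain is shown below. The names inp_path, process_num, full_model_num, and nt come from the steps above; the example values and comments are placeholders, so check the real config.py for the actual defaults and meanings.

      # Frequently Edited Setting (illustrative values only; see config.py for the real defaults)
      inp_path = "input_videos/"   # a single video file or a folder containing several videos
      process_num = 2              # tune to your GPU's computation power
      full_model_num = 1           # tune to your GPU's computation power
      nt = 2                       # tune to your GPU's computation power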
          

    Future Works:

    1. Debug use without TensorRT && when full_model_num=0
    2. MultiGPU inference support
    3. Provide PSNR && Visual Quality report in README.md
    4. Provide all repositories in English.
    5. Record a video on how to install TensorRT from scratch.   
        

    Disclaimer:

    1. The sample image under tensorrt_weight_generator is included just for faster implementation; I do not hold the copyright to it. All rights are reserved to the original owners.
    2. My code is developed from Real-CUGAN github repository (https://github.com/bilibili/ailab/tree/main/Real-CUGAN)
  • chaos-dispensary

    chaos-dispensary Actions Status

    A web service that dispenses random numbers built from

    Base Docker Image

    Debian bullseye-slim (x64)

    Get the image from Docker Hub or build it yourself

    docker pull fullaxx/chaos-dispensary
    docker build -t="fullaxx/chaos-dispensary" github.com/Fullaxx/chaos-dispensary
    

    Configuration Options

    Adjust chaos2redis to pin long_spin() and time_spin() to the same thread
    Default: long_spin() and time_spin() will each spin their own thread

    -e SAVEACORE=1
    

    Adjust chaos2redis to acquire 6 blocks of chaos per thread before transmutation
    Default: 4 blocks of chaos per thread

    -e CHAOS=6
    

    Adjust chaos2redis to use 2 hashing cores for chaos transmutation
    Default: 1 hashing core

    -e CORES=2
    

    Adjust chaos2redis to keep 25 lists of 999999 random numbers in redis
    Default: 10 lists of 100000 random numbers each

    -e LISTS=25 -e LSIZE=999999
    

    Launch chaos-dispensary docker container

    Run chaos-dispensary binding to 172.17.0.1:80 using default configuration

    docker run -d -p 172.17.0.1:80:8080 fullaxx/chaos-dispensary
    

    Run chaos-dispensary binding to 172.17.0.1:80 using a conservative configuration

    docker run -d -e SAVEACORE=1 -e CHAOS=2 -p 172.17.0.1:80:8080 fullaxx/chaos-dispensary
    

    Run chaos-dispensary binding to 172.17.0.1:80 using a multi-core configuration

    docker run -d -e CORES=4 -p 172.17.0.1:80:8080 fullaxx/chaos-dispensary
    

    Using curl to retrieve random numbers

    By default the output will be a space delimited string of numbers.
    If the header Accept: application/json is sent, the output will be json.
    Get 1 number from the dispensary:

    curl http://172.17.0.1:8080/chaos/1
    curl -H "Accept: application/json" http://172.17.0.1:8080/chaos/1
    

    Get 10 numbers from the dispensary:

    curl http://172.17.0.1:8080/chaos/10
    curl -H "Accept: application/json" http://172.17.0.1:8080/chaos/10
    

    Get 99999 numbers from the dispensary:

    curl http://172.17.0.1:8080/chaos/99999
    curl -H "Accept: application/json" http://172.17.0.1:8080/chaos/99999
    

    Using curl to check status

    The status node consists of two values.
    Chaos/s is the number of chaos pouches being processed per second.
    Numbers/s is the number of random numbers being generated per second.

    curl http://172.17.0.1:8080/status/
    curl -H "Accept: application/json" http://172.17.0.1:8080/status/
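
    The same endpoints can also be called from any HTTP client. For example, a small Python sketch using the requests library, assuming the service is reachable at the address used in the curl examples above:

      import requests

      BASE = "http://172.17.0.1:8080"

      # Plain-text response: a space-delimited string of numbers
      numbers = requests.get(f"{BASE}/chaos/10").text.split()
      print(numbers)

      # JSON responses when the Accept header is set
      chaos_json = requests.get(f"{BASE}/chaos/10", headers={"Accept": "application/json"}).json()
      status_json = requests.get(f"{BASE}/status/", headers={"Accept": "application/json"}).json()
      print(chaos_json, status_json)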
    


  • ask

    ask

    The ask program prompts the user and accepts a single-key response.

    Usage:

    ask [options] “prompt string” responses

    The options can appear anywhere on the command line.

    | Options | Description |
    | --- | --- |
    | -c, -C, --case-sensitive | Case-sensitive response matching |
    | -h, -H, --help | Show this help screen |


    The “prompt string” and responses are positional parameters. They must be in order.

    | Positional Parameter | Description |
    | --- | --- |
    | “prompt string” | The prompt string the user sees |
    | responses | The characters the user is allowed to press to answer the prompt (Not a comma-separated list.) |


    Return value:

    | System Exit Code | Meaning |
    | --- | --- |
    | 0 | The user entered a response that was not in the allowed responses. |
    | 1-125 | The index of the user’s choice in the responses. (The first response is 1, the next is 2, and so on.) |
    | 126 | The user pressed Enter without making a choice |
    | 127 | The user pressed Escape |

    System Exit Code: In an sh-compatible shell, you check the $? variable.
    In a batch file you check the ERRORLEVEL.
    Other shells may be different.

    Usage Notes:

    • The user must press Enter after pressing a key.
    • The response is not case-sensitive by default. Use -c if case-sensitive mode is necessary.
    • If the user presses more than one key, the first key will be used. The user can use the keyboard to edit their response.
    • The escape sequences \, \a, \n, \r, and \t are allowed in the prompt string.

    Example:

    ask “** Answer [Y]es, [N]o, or [M]aybe: ” YNM

    The example displays the following prompt, and reads the user’s response:

    ** Answer [Y]es, [N]o, or [M]aybe:

    The example returns:

    • Exit code 1 if the user pressed y or Y.
    • Exit code 2 if the user pressed n or N.
    • Exit code 3 if the user pressed m or M.
    • Exit code 0 if the user pressed a key that was not y, Y, n, N, m, or M.
    • Exit code 126 if the user pressed Enter without pressing a key.
    • Exit code 127 if the user pressed Escape.


    Programming Notes

    You can include the ask_funcs.h header and link the ask_funcs.o module to your own C/C++ programs. This provides your programs the same functionality used by the ask program.

    It uses standard C library functions for I/O. It uses the standard putchar() function to print the prompt string to stdout, and the standard getchar() function to read the response character from stdin.

    Functions

    The ask_funcs module provides the following public functions:

    void set_default_options();
    void set_case_sensitive_mode(int value);
    int ask(char *prompt, char *response_list);
    

    void set_default_options();

    • The set_default_options function simply sets the default case_sensitive mode option to OFF (0).

    void set_case_sensitive_mode(int value);

    • The set_case_sensitive_mode function lets you turn the case sensitive mode ON (non-zero) or OFF (0).

    int ask(char *prompt, char *responses);

    • The ask function lets you prompt the user, specifying the prompt string and the response characters, and then receive the response code.
    • The return value from the ask function is the same as the system exit codes described for the ask program.


  • Mitten

    Mitten

    Mitten is a Python script designed to monitor GitHub repositories for new commits and send notifications to a specified Discord channel. The script leverages the GitHub API to fetch commit information and Discord Webhooks to post notifications.

    Features

    • Fetches commits from specified GitHub repositories.
    • Sends commit notifications to Discord with detailed commit information.
    • Ability to mention specified roles in commit notifications.
    • Supports selecting specific branches from each repository.
    • Logs commit information locally to avoid duplicate notifications.
    • Fetches commits pushed since the last runtime of the script, ensuring that commits pushed during downtime are still fetched in the next run.
    • Configurable through environment variables.

    Requirements

    • Python 3.7+
    • requests library
    • python-dotenv library

    Configuration

    Create a ‘.env‘ file in the same directory as the script with the following variables:

    • REPOS: A comma-separated list of repositories to monitor. You can also optionally specify a branch for each repo by adding ‘:branch_name’ (e.g., ‘owner/repo1,owner/repo1:dev_branch,owner/repo2‘). See the parsing sketch after this list.
    • DISCORD_WEBHOOK_URL: The Discord webhook URL where notifications will be sent.
    • GITHUB_TOKEN: (Optional but highly recommended) Your GitHub API token to avoid rate limiting. Learn more about creating a personal access token here.
    • CHECK_INTERVAL: The interval (in seconds) at which the script checks for new commits. Make sure this value exceeds the number of repos to monitor.
    • DISCORD_EMBED_COLOR: (Optional) The color of the commit embeds sent to Discord. The color must be provided in hexadecimal format using the prefix ‘0x’ (e.g., ‘0xffffff’).
    • ROLES_TO_MENTION: (Optional) The role IDs (NOT role name, but the corresponding 19 digit role ID) to mention in Discord when a new commit is detected. Separate each role ID with a comma. You can also ping @everyone by simply setting this to ‘@everyone’.
    • WEBHOOKS_ON_REPO_INIT: Choose whether to send a message to Discord whenever a new repository is initialized.
    • PREFER_AUTHOR_IN_TITLE: Preference for title style in commit messages. If set to True, the commit author’s username and avatar will be used in the title of the embed. If set to False, the repo name and the repo owner’s avatar will be used.
    • TEST_WEBHOOK_CONNECTION: Send a test message to Discord when the script is started.
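
    For reference, one way entries with the optional ‘:branch_name’ suffix could be parsed is sketched below. This is only an illustration of the REPOS format, not Mitten’s internal code.

      def parse_repo_entries(repos_value: str):
          """Split 'owner/repo' or 'owner/repo:branch' entries from the REPOS variable."""
          entries = []
          for item in repos_value.split(","):
              item = item.strip()
              if not item:
                  continue
              repo, _, branch = item.partition(":")
              entries.append((repo, branch or None))  # None means the repo's default branch
          return entries

      print(parse_repo_entries("owner/repo1,owner/repo1:dev_branch,owner/repo2"))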

    Installation

    1. Clone the repository:

      git clone https://github.com/joobert/mitten.git
      cd mitten
    2. Install dependencies:

      pip install -r requirements.txt
    3. Create a .env file with the following content:

      REPOS=owner/repo1,owner/repo1:dev_branch,owner/repo2,owner/repo3
      DISCORD_WEBHOOK_URL=your_webhook_url
      GITHUB_TOKEN=your_github_token
      CHECK_INTERVAL=60
      DISCORD_EMBED_COLOR=
      ROLES_TO_MENTION=
      WEBHOOKS_ON_REPO_INIT=True
      PREFER_AUTHOR_IN_TITLE=False
      TEST_WEBHOOK_CONNECTION=False
    4. Run the script:

      python mitten.py

    (Optional) Running with Docker

    Ensure you have both Docker and Docker Compose installed on your machine.

    1. Clone the repository:

      git clone https://github.com/joobert/mitten.git
      cd mitten
    2. Create a .env file with the following content:

      REPOS=owner/repo1,owner/repo1:dev_branch,owner/repo2,owner/repo3
      DISCORD_WEBHOOK_URL=your_webhook_url
      GITHUB_TOKEN=your_github_token
      CHECK_INTERVAL=60
      DISCORD_EMBED_COLOR=
      ROLES_TO_MENTION=
      WEBHOOKS_ON_REPO_INIT=True
      PREFER_AUTHOR_IN_TITLE=False
      TEST_WEBHOOK_CONNECTION=False
    3. Create empty commit_log.json and mitten_logs.txt files:

      touch commit_log.json mitten_logs.txt
    4. Start the service with Docker Compose:

      docker compose up -d

    Important Notes

    • Initial Run: On the first run (and for each subsequent repository added down the line), Mitten will initialize each repository by fetching its entire commit history to avoid spamming notifications and fetch commits pushed during the script’s downtime on the next run. This process can be API heavy and time-consuming for large repositories, but only needs to be done once per repository.

    • GitHub Token: It is highly recommended to set a GitHub API token to avoid API rate limiting issues. Without the token, you will be limited to 60 requests per hour, which might not be sufficient for monitoring multiple repositories, nor sufficient for the initial run of a large repository. Setting the token increases this limit significantly (5000 requests per hour) ensuring you won’t run into issues.

    • Logging: Mitten creates and logs commit information locally in a file named ‘commit_log.json‘ to ensure that no duplicate notifications are sent. The script also saves its runtime logs to a file named ‘mitten_logs.txt‘. Both of these should be kept in the same directory as the script.

    Contributing

    Contributions are welcome! Please feel free to submit a Pull Request or open an Issue.

    License

    MIT


  • travel-buddy

    App Screenshot

    Travel Buddy is a Flutter app that helps users explore and mark locations on a Google Map based on Foursquare categories. The app integrates Firebase Authentication for user management and Cloud Firestore for storing user-specific favorites.

    Features

    • User Authentication: Users can sign up and sign in using Firebase Authentication.
    • Google Map Integration: Users can view a Google Map and mark locations based on Foursquare categories.
    • Foursquare Places API: The app fetches places data from the Foursquare API to display options on the map.
    • Favorites Management: Users can save marked locations as favorites, which are stored in Firebase Cloud Firestore and are specific to each authenticated user.
    • State Management: Utilizes various state management solutions:
      • BLoC for authentication and Foursquare connections.
      • GetX for accessing Foursquare place details.
      • A singleton pattern for managing Firebase connections and retrieving the user’s favorites.
    App Screenshot

    Technologies Used

    • Flutter: SDK version >=3.1.0 <4.0.0
    • Firebase: Firebase Authentication and Cloud Firestore.
    • Google Maps: For displaying locations.
    • Foursquare API: For accessing place data.
    • State Management: BLoC, GetX, Singleton, Stateful widget.

    Getting Started

    Prerequisites

    • Flutter SDK installed (version >=3.1.0 <4.0.0)
    • Dart SDK
    • Firebase account
    • Foursquare API key

    Installation

    1. Clone the repository:

      git clone https://github.com/belenyb/travel_buddy.git
      cd travel_buddy
    2. Install the dependencies:

      flutter pub get
    3. Configure Firebase:

    • Create a Firebase project in the Firebase Console.
    • Add your Flutter app to the project.
    • Download the google-services.json (for Android) and/or GoogleService-Info.plist (for iOS) files and place them in the appropriate directories:
      • Android: android/app/
      • iOS: ios/Runner/
    4. Set up Foursquare API:
    • Sign up for a Foursquare developer account and create a new app to obtain your API key.
    • Create an .env file with your Foursquare API key and place it on the root of your project.
    5. Run the app:
      flutter run
    App Screenshot

    Usage

    Sign Up / Sign In

    Launch the app and create a new account or sign in to your existing account.

    Explore Locations

    Use the Google Map interface to explore nearby locations categorized by Foursquare.

    Mark Favorites

    Tap on a location to mark it as a favorite. Your favorites are saved and can be accessed later.

    App Screenshot

    Visit original content creator repository