Blog

  • reddit-auto-reply

    reddit-auto-reply

    Reddit auto reply and auto downvote.

    A script which recursively walks over a specific post and
    automatically replies to (and downvotes) comments by a particular
    author.

    Have you ever attempted to have a regular decent conversation on
    reddit like any normal human being would, only to be confronted by
    a troll, a moron, or a jerk? Maybe this person is just genuinely
    evil. Maybe this person just won’t let you have the last word?
    It’s probably because their mother didn’t teach them good manners.

    This script could enable you to repetitively shut this person down
    without having to spare them even a single thought.

    This is a mostly untested script. I only wrote it to end a certain
    conversation. It did a fine job at that, and it kept me from having
    to waste any more time replying to that thread. I fed the script
    a reply that made it quite clear that the person that I was replying
    to was deficient in their mental capacities and that I had written
    this bot to automatically reply to anything they wrote in my thread.
    It worked perfectly. After a few rounds of interacting with my
    reply bot, that person finally realized that they would never have
    the last word. They moved on and dug a little deeper into their
    troll hole. Good riddance.

    usage

    First install the packages using pip or whatever. You can use a
    virtualenv if you want, of course.

    Run the program with the following command:
    python3 reddit_auto_reply.py

    Currently you have to configure the script by hand, filling in the
    necessary details by editing certain variables. You should be able
    to tell which variables need to be edited by reading the code. If
    you want to know more about which variables to edit or how to do
    so, create an issue and let me know. I'm willing to put a small
    amount of work into this to make it better and more usable.
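
    For illustration only, the variables you'd fill in would look
    something like the sketch below. These names are hypothetical, not
    the script's actual identifiers; read the source for the real ones.

    # Hypothetical configuration sketch -- variable names are illustrative,
    # not the script's actual identifiers.
    USERNAME = "your_reddit_username"    # account the bot replies from
    PASSWORD = "your_reddit_password"
    POST_URL = "https://www.reddit.com/r/some_sub/comments/abc123/"  # thread to walk
    TARGET_AUTHOR = "some_troll"         # author whose comments get replies and downvotes
    REPLY_TEXT = "This is an automated reply. You will never get the last word."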

    Visit original content creator repository

  • Football-xG-Predictor

    ⚽ Football Expected Goals (xG) Predictor

    🧠 About the Project

    An Expected Goals (xG) model that predicts the probability of scoring based on StatsBomb data. The project uses machine learning techniques to analyze the factors that most influence shot effectiveness in football. The applied models (Logistic Regression, Random Forest, XGBoost), combined with the Beta calibration technique, create a highly accurate predictive tool. The analysis results confirm the crucial role of shot geometry and defender influence on goal-scoring probability.

    🎯 Motivation

    Expected Goals (xG) is one of the most important metrics used in modern football analytics. It allows evaluating shot quality regardless of whether a shot actually resulted in a goal. In this project, I built my own xG model to better understand the factors affecting shot effectiveness and to create a tool that can be used for match analysis and player evaluation.

    📋 Data

    The data used comes from StatsBomb’s open dataset from the 2015/2016 season for five top European leagues:

    • Premier League (England)
    • La Liga (Spain)
    • Bundesliga (Germany)
    • Serie A (Italy)
    • Ligue 1 (France)

    The data contains detailed information about each shot, including position on the pitch, shot type, circumstances of the shot, and positioning of other players at the moment of the shot.

    Note: The repository does not include data files by default. You need to run the data_collector.ipynb notebook first to download the data from StatsBomb’s open dataset.

    https://github.com/statsbomb/open-data
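
    As a rough sketch of the download step (the competition and season IDs below are assumptions; list them with sb.competitions() first), pulling shot events with statsbombpy looks roughly like this:

    from statsbombpy import sb

    comps = sb.competitions()  # list the open competitions and seasons
    # IDs below are placeholders for Premier League 2015/2016 -- verify them in `comps`
    matches = sb.matches(competition_id=2, season_id=27)
    events = sb.events(match_id=matches["match_id"].iloc[0])
    shots = events[events["type"] == "Shot"]  # keep only shot events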

    🔍 Methodology

    Data Preparation

    • Extraction of relevant shot-related variables
    • Transformation of raw location data into useful geometric features
    • Categorization of shot types and body parts used for shots

    Feature Engineering

    • Geometric: shot angle, distance from goal (see the sketch after this list)
    • Contextual: number of defenders on shot line, goalkeeper presence
    • Technical: dominant vs non-dominant foot shots, first-time shots
    • Situational: shots under pressure, shots after dribbling
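
    A minimal sketch of the two geometric features, assuming StatsBomb's 120x80 pitch coordinates with the goal centered at (120, 40) and the posts at y = 36 and y = 44:

    import numpy as np

    GOAL_X, GOAL_Y, LEFT_POST_Y, RIGHT_POST_Y = 120.0, 40.0, 36.0, 44.0

    def shot_distance(x, y):
        """Euclidean distance from the shot location to the goal center."""
        return np.hypot(GOAL_X - x, GOAL_Y - y)

    def shot_angle(x, y):
        """Angle (radians) subtended by the goal mouth as seen from the shot."""
        a = np.array([GOAL_X - x, LEFT_POST_Y - y])   # vector to the left post
        b = np.array([GOAL_X - x, RIGHT_POST_Y - y])  # vector to the right post
        cos = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))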

    Modeling

    Testing and comparison of three algorithms:

    1. Logistic Regression
    2. Random Forest
    3. XGBoost

    Model Calibration

    Application of the Beta calibration technique to calibrate probabilities, which significantly improved the quality of the model's predictions.
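
    As a sketch of what this step might look like with the reference betacal package (the variable names here are assumptions; the notebook's exact code may differ):

    from betacal import BetaCalibration  # pip install betacal

    bc = BetaCalibration(parameters="abm")        # full three-parameter beta map
    bc.fit(raw_probs_val.reshape(-1, 1), y_val)   # raw model probabilities + true outcomes
    calibrated = bc.predict(raw_probs_test.reshape(-1, 1))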

    📈 Key Results

    Model Comparison

    Model                 ROC AUC   Brier Score   Log Loss   xG/Goals Ratio
    Logistic Regression   0.796     0.073         0.257      0.98
    Random Forest         0.796     0.074         0.259      0.99
    XGBoost               0.798     0.073         0.257      0.98

    Key Findings

    1. Shot geometry is crucial – shot angle and distance from goal are the strongest predictors
    2. Defenders on shot line – each additional defender significantly decreases goal-scoring probability
    3. First-time shots are more effective than those preceded by ball control
    4. Model calibration is crucial – all models before calibration significantly overestimated probabilities
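
    For reference, the metrics in the comparison table above can be computed along these lines (a sketch assuming arrays y_true and p of true outcomes and calibrated shot probabilities):

    from sklearn.metrics import roc_auc_score, brier_score_loss, log_loss

    roc_auc = roc_auc_score(y_true, p)
    brier = brier_score_loss(y_true, p)
    ll = log_loss(y_true, p)
    xg_goals_ratio = p.sum() / y_true.sum()  # total xG vs. actual goals; ~1.0 means well calibrated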

    💻 Technologies

    • Language: Python 3.7+
    • Data Analysis: Pandas, NumPy
    • ML Models: Scikit-learn, XGBoost
    • Visualization: Matplotlib, Seaborn, Mplsoccer
    • Data Source: StatsBombPy

    📁 Project Structure

    Football-xG-Predictor/
    ├── notebooks/                 
    │   ├── data_collection.py      # Data collection script
    │   └── xg_model.ipynb          # Main notebook with xG model
    ├── src/                        
    │   ├── __init__.py             # Package initialization file
    │   ├── preprocessing.py        # Data preprocessing functions
    │   ├── feature_engineering.py  # Feature engineering
    │   ├── modeling.py             # Model implementation
    │   ├── evaluation.py           # Metrics and model evaluation
    │   └── visualization.py        # Visualizations
    ├── data/                       # Data folder (created after running data_collector.ipynb)
    ├── assets/                     # Graphics and visualizations
    ├── requirements.txt            # Dependencies
    └── README.md                   # Project description / this file
    

    🚀 How to Download and Run the Project

    1. Clone the repository:
    git clone https://github.com/bsobkowicz1096/Football-xG-Predictor.git
    2. Navigate to the project directory:
    cd Football-xG-Predictor
    3. Create a virtual environment (optional but recommended):

    python -m venv venv
    source venv/bin/activate  # On Linux/macOS
    venv\Scripts\activate     # On Windows
    4. Install dependencies:
    pip install -r requirements.txt
    5. Run the notebook:
    jupyter notebook notebooks/football_xg_predictor.ipynb

    Note: The project uses publicly available StatsBomb data, used in accordance with their license terms.

    Visit original content creator repository

  • Reachability


    WARNING there have been reports of apps being rejected when Reachability is used in a framework. The only solution to this so far is to rename the class.

    Reachability

    This is a drop-in replacement for Apple’s Reachability class. It is ARC-compatible, and it uses the new GCD methods to notify of network interface changes.

    In addition to the standard NSNotification, it supports the use of blocks for when the network becomes reachable and unreachable.

    Finally, you can specify whether a WWAN connection is considered “reachable”.

    DO NOT OPEN BUGS UNTIL YOU HAVE TESTED ON DEVICE

    BEFORE YOU OPEN A BUG ABOUT iOS6/iOS5 build errors, use Tag 3.2 or 3.1 as they support assign types

    Requirements

    Once you have added the .h/m files to your project, simply:

    • Go to the Project->TARGETS->Build Phases->Link Binary With Libraries.
    • Press the plus in the lower left of the list.
    • Add SystemConfiguration.framework.

    Boom, you’re done.

    Examples

    Block Example

    This sample uses blocks to notify when the interface state has changed. The blocks will be called on a BACKGROUND THREAD, so you need to dispatch UI updates onto the main thread.

    In Objective-C

    // Allocate a reachability object
    Reachability* reach = [Reachability reachabilityWithHostname:@"www.google.com"];
    
    // Set the blocks
    reach.reachableBlock = ^(Reachability*reach)
    {
        // keep in mind this is called on a background thread
        // and if you are updating the UI it needs to happen
        // on the main thread, like this:
    
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog(@"REACHABLE!");
        });
    };
    
    reach.unreachableBlock = ^(Reachability*reach)
    {
        NSLog(@"UNREACHABLE!");
    };
    
    // Start the notifier, which will cause the reachability object to retain itself!
    [reach startNotifier];

    In Swift 3

    import Reachability
    
    var reach: Reachability?
    
    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
            // Allocate a reachability object
            self.reach = Reachability.forInternetConnection()
            
            // Set the blocks
            self.reach!.reachableBlock = {
                (reach: Reachability?) -> Void in
                
                // keep in mind this is called on a background thread
                // and if you are updating the UI it needs to happen
                // on the main thread, like this:
                DispatchQueue.main.async {
                    print("REACHABLE!")
                }
            }
            
            self.reach!.unreachableBlock = {
                (reach: Reachability?) -> Void in
                print("UNREACHABLE!")
            }
            
            self.reach!.startNotifier()
        
            return true
    }

    NSNotification Example

    This sample will use NSNotifications to notify when the interface has changed. They will be delivered on the MAIN THREAD, so you can do UI updates from within the function.

    In addition, it asks the Reachability object to consider the WWAN (3G/EDGE/CDMA) as a non-reachable connection (you might use this if you are writing a video streaming app, for example, to save the user’s data plan).

    In Objective-C

    // Allocate a reachability object
    Reachability* reach = [Reachability reachabilityWithHostname:@"www.google.com"];
    
    // Tell the reachability that we DON'T want to be reachable on 3G/EDGE/CDMA
    reach.reachableOnWWAN = NO;
    
    // Here we set up a NSNotification observer. The Reachability that caused the notification
    // is passed in the object parameter
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(reachabilityChanged:)
                                                 name:kReachabilityChangedNotification
                                               object:nil];
    
    [reach startNotifier];

    In Swift 3

    import Reachability
    
    var reach: Reachability?
    
    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
        // Allocate a reachability object
        self.reach = Reachability.forInternetConnection()
        
        // Tell the reachability that we DON'T want to be reachable on 3G/EDGE/CDMA
        self.reach!.reachableOnWWAN = false
        
        // Here we set up a NSNotification observer. The Reachability that caused the notification
        // is passed in the object parameter
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(reachabilityChanged),
            name: NSNotification.Name.reachabilityChanged,
            object: nil
        )
        
        self.reach!.startNotifier()
        
        return true
    }
            
    func reachabilityChanged(notification: NSNotification) {
        if self.reach!.isReachableViaWiFi() || self.reach!.isReachableViaWWAN() {
            print("Service available!!!")
        } else {
            print("No service available!!!")
        }
    }

    Tell the world

    Head over to Projects using Reachability and add your project for “Maximum Wins!”.

    Visit original content creator repository

  • xxf_android


    Android technology platform

    The xxf framework wraps commonly used components and usage patterns in a functional, fluent (stream-style) programming style

    1. Removes RxLife; uses Android's built-in lifecycle to manage the RxJava lifecycle
    2. RxJava lifecycle binding also works inside a ViewModel, just like in an Activity
    3. Permission requests as chained RxJava calls, with no complex callbacks; internally uses ActivityResultLauncher and needs no registration
    4. startActivityForResult as chained RxJava calls, with no complex callbacks; internally uses ActivityResultLauncher and needs no registration
    5. Simple HTTP configuration for easy network requests
    6. Dual HTTP caching (in case the server does not handle ETags), with multiple strategies to keep data exchange timely
    7. Time and currency formatting handled entirely by the framework
    8. 15 kinds of utility classes: Number, Time, File, Toast, Zip, Arrays, etc.
    9. Fully moved to Kotlin, with over 400 extension functions, all usable out of the box
    10. Wraps common custom views:
      TitleBar, Loading, ScaleFrameLayout, MaxHeightView, SoftKeyboardSizeWatchLayout, and more (20 kinds in total)
    11. Database layer built on ObjectBox, with change listening and long id generation

    Usage

    Installation

    About 80% of the code has been converted to Kotlin; note the usage changes in the new version

    // Configure this in build.gradle
    allprojects {
        repositories {
            maven { url "https://jitpack.io" }
            jcenter()
            maven {
                url 'https://maven.aliyun.com/repository/public'
            }
            maven {
                credentials {
                    username '654f4d888f25556ebb4ed790'
                    password 'OsVOuR6WZFK='
                }
                url 'https://packages.aliyun.com/maven/repository/2433389-release-RMv0jP/'
            }
            maven {
                credentials {
                    username '654f4d888f25556ebb4ed790'
                    password 'OsVOuR6WZFK='
                }
                url 'https://packages.aliyun.com/maven/repository/2433389-snapshot-Kqt8ID/'
            }
        }
        configurations.all {
        // Check for Snapshot updates in real time
            resolutionStrategy.cacheChangingModulesFor 0, 'seconds'
        }
    }

       // New-version usage; requires the repository credentials above
       implementation 'com.NBXXF.xxf_android:libs:5.2.2.1-SNAPSHOT'
    
    Application and Activity management

    The following directly accessible inline functions are provided; for the other 400+ extension functions of the remaining components, see lib_ktx

    val applicationContext: Application
    
    val application: Application
    
    val activityList: List<Activity> 
    
    val topActivity: Activity
    
    val topActivityOrNull: Activity?
    
    val topFragmentActivityOrNull: FragmentActivity? 
    
    val topActivityOrApplication: Context
    
    
    HTTP requests

    1. HTTP interface declaration (very similar to Retrofit): fully annotation-based, flexibly pluggable, with no client concept

    /**
    * Provides the base URL
    */
    @BaseUrl("http://api.map.baidu.com/")
    
    /**
    * Provides the cache directory setting
    */
    @RxHttpCacheProvider(DefaultRxHttpCacheDirectoryProvider.class)
    /**
    * Declares the interceptors
    */
    @Interceptor({MyLoggerInterceptor.class, MyLoggerInterceptor2.class})
    
    /**
    * Declares the RxJava interceptor
    */
    @RxJavaInterceptor(DefaultCallAdapter.class)
    public interface LoginApiService {
    
       /**
        * Declares the endpoint, same as Retrofit
        *
        * @return
        */
       @GET("telematics/v3/weather?location=%E5%98%89%E5%85%B4&output=json&ak=5slgyqGDENN7Sy7pw29IUvrZ")
       Observable<JsonObject> getCity();
    
       /**
        * Extends Retrofit with @Cache to set the cache type
        *
        * @param cacheType
        * @return
        */
       @GET("telematics/v3/weather?location=%E5%98%89%E5%85%B4&output=json&ak=5slgyqGDENN7Sy7pw29IUvrZ")
       Observable<JsonObject> getCity(@Cache CacheType cacheType);
    
       /**
        * Cache for 5 s
        * Add @Headers("cache:5000") on the method
        *
        * @param cacheType
        * @return
        */
       @GET("telematics/v3/weather?location=%E5%98%89%E5%85%B4&output=json&ak=5slgyqGDENN7Sy7pw29IUvrZ")
       @Headers("cache:5000")
       Observable<JsonObject> getCity2(@Cache CacheType cacheType);
    
       /**
        * Cache
        * Add @Header("cache") long time on a parameter
        *
        * @param cacheType
        * @return
        */
       @GET("telematics/v3/weather?location=%E5%98%89%E5%85%B4&output=json&ak=5slgyqGDENN7Sy7pw29IUvrZ")
       Observable<JsonObject> getCity3(@Header("cache") long time, @Cache CacheType cacheType);
    
    
       @GET("telematics/v3/weather?location=%E5%98%89%E5%85%B4&output=json&ak=5slgyqGDENN7Sy7pw29IUvrZ")
       @RxHttpCache(CacheType.onlyCache)
       Observable<JsonObject> getCityOnlyCache();
    
    }
    

    Cache modes

    public enum CacheType {
       /**
        * Fetch from the local cache first, then from the server; may call onNext twice,
        * and at least once if there is no local cache
        */
       firstCache,
       /**
        * Fetch from the server first; fall back to the local cache when there is no network
        */
       firstRemote,
       /**
        * Fetch from the server only
        */
       onlyRemote,
       /**
        * Fetch from the local cache only; with no cache it behaves like Observable.empty()
        */
       onlyCache,
    
       /**
        * Return the local data if present, otherwise the network data
        */
       ifCache,
    
       /**
        * Read the previous cache; if there was none, return the network data and then sync the cache;
        * if there was a previous cache, the network data is still synced but onNext is not called again
        */
       lastCache;
    }
    
    2. API request, bound to a loading dialog

           BackupApiService::class.java.apiService()
                    .backupUpConfigQuestionQuery()
                    .map(new ResponseDTOSimpleFunction<List<SecurityQuestionDTO>>())
                    .bindProgressHud(this) // bind the progress loading dialog
                    .subscribe(new Consumer<List<SecurityQuestionDTO>>() {
                        @Override
                        public void accept(List<SecurityQuestionDTO> securityQuestionDTOS) throws Exception {
                         
                        }
                    });
    

    Kotlin style

           BackupApiService::class.apiService()
           
           getApiService<BackupApiService>()
            
    

    Extended upload file types
    Seven ways of identifying a file are supported. Note that most servers accept a form part without a filename; this depends on your server.
    That is, the difference in the protocol between Content-Disposition: form-data; name="file" and Content-Disposition: form-data; name="file"; filename="1705161084857114.jpeg"
    Many extensions are also provided: uri.toPart("filename"), file.toPart("filename")…

    1. File
    2. ByteArray
    3. inputStream
    4. FileDescriptor
    5. ParcelFileDescriptor
    6. AssetFileDescriptor
    7. Uri
      A demo follows:

    @POST("api/rentalAreaPad/uploadFile")
    @Multipart
    fun uploadRentalAreaPadFile2(
    @Query("companyId") companyId: String,
    @Part("file") fileUri: Uri,
    ): Observable<BaseResultDTO<String>>
    
    RxJava lifecycle management

    RxJava lifecycle management works in Activity, Fragment, and DialogFragment with identical syntax, as follows:

        io.reactivex.Observable.interval(1, TimeUnit.SECONDS)
                      .bindLifecycle(GrContactsFragment.this) // bind to the lifecycle
                      .subscribe(new Consumer<Long>() {
                          @Override
                          public void accept(Long aLong) throws Exception {
                              LogUtils.AndroidD("--------->ex:" + aLong);
                          }
                      });
    
    
    Permissions

    Permission requests use chained Rx calls (internally ActivityResultLauncher; callers need no registration).
    The usual way to keep the chain unbroken is to attach a hidden fragment; this implementation instead uses a placeholder
    ActivityResultLauncher, which saves memory, solves the concurrency problem, and still works without registration

      // Request permission
      // Works on Activity, Fragment, and LifecycleOwner host objects; from other classes,
      // use the global inline function topFragmentActivity?.requestPermissionsObservable
      requestPermission(Manifest.permission.CAMERA)
                                .subscribe(new Consumer<Boolean>() {
                                    @Override
                                    public void accept(Boolean aBoolean) throws Exception {
                                        ToastUtils.showToast(v.getContext(), "Manifest.permission.CAMERA:" + aBoolean);
                                    }
                                });
                                
      // Check whether permission has been granted
      ToastUtils.showToast(v.getContext(), "Manifest.permission.CAMERA:" + checkSelfPermissions(Manifest.permission.CAMERA));
    
    startActivityForResult

    Chained Rx calls (internally ActivityResultLauncher; callers need no registration).
    The usual way to keep the chain unbroken is to attach a hidden fragment; this implementation instead uses a placeholder
    ActivityResultLauncher, which saves memory, solves the concurrency problem, and still works without registration

       // Works on Activity, Fragment, and LifecycleOwner host objects; from other classes,
       // use the global inline function topFragmentActivity?.startActivityForResultObservable
       startActivityForResult(new Intent(MainActivity.this, TestActivity.class))
                                  .subscribe(new Consumer<ActivityResult>() {
                                      @Override
                                      public void accept(ActivityResult activityResult) throws Exception {
                                          ToastUtils.showToast(v.getContext(), "activityResult:reqcode:" + activityResult.getRequestCode() + ";resCode" + activityResult.getResultCode() + ";data:" + activityResult.getData().getStringExtra("data"));
    
                                      }
                                  });   
    
    Event communication framework

    Example using a String as the message model:
    
    // subscribe to events
    String::class.java.subscribeEvent()
        .subscribe {
    
        }
    // post an event
    "test".postEvent();
    
    // other model types
    TestEvent::class.java.subscribeEvent()
        .subscribe {
            System.out.println("=====>" + "received " + it);
        }
    TestEvent().postEvent();
    
    Uri authorization

    Registering a FileProvider for Uri authorization in every app is tedious; here the FileProvider is registered automatically

    File.toAuthorizedUri
    
    JSON safety

    A safer JsonTypeAdapter for coping with servers written in weakly typed languages, e.g. an int returned as an empty quoted string. The framework
    internally handles safety for the common types int, bool, long, double, float, Number, BigDecimal, etc.

    GsonBuilder()
        .registerTypeAdapterFactory(SafeTypeAdapterFactory())
        .build()
    

    I also recommend my code-gen tool here:
    it speeds Gson up 10x and outperforms any serialization tool on the market; see gson_plugin

    Performance improvements

    1. Added LongHashMap (plus LongHashSet and similar components) to avoid boxing/unboxing; lookups are faster than SparseArray and about 50% faster than HashMap
    2. Added MurmurHash, about 200% faster than the JDK's built-in hash
    3. Added CityHash, which hashes large data more efficiently
    Delegated properties

    A ViewBinding can be loaded with by viewBinding()

       private val binding by viewBinding(ActivitySettingBinding::bind);
    
    
    Activity Intent and Fragment argument binding can use by argumentBinding("xxx")

        private val withNfcId: String? by argumentBinding("withNfcId"); // carries the NFC card number
    
    SharedPreferences via by delegation

    Key-value read/write delegation; which store is used internally is decided by IPreferencesOwner, making it easy to extend to MMKV, SQLite, files, and so on

    object PreferencesDemo : SharedPreferencesOwner {
    
        data class User(val name: String? = null)
    
        var name: String by preferencesBinding("key", "xxx")
    
        // changes can be observed
        var name2: String by preferencesBinding("key2", "xxx").observable { property, newValue ->
            println("=============>PrefsDemo3:$newValue")
        }
        var user: User by preferencesBinding("key3", User()).useGson()
    
        fun test() {
            println("=============>PrefsDemo:$name")
            name = randomUUIDString32;
            println("=============>PrefsDemo2:$name")
            name2 = randomUUIDString32;
            println("=============>PrefsDemo4:$name2")
    
            println("=============>PrefsDemoUserBefore:$user")
            user = User("Zhang San ${System.currentTimeMillis()}")
            println("=============>PrefsDemoUser:$user")
        }
    }
    
    inline fun <P : IPreferencesOwner, reified V> PrefsDelegate<P, out V>.useGson(): KeyValueDelegate<P, V> {
        return object : KeyValueDelegate<P, V>(this.key, this.default) {
            private val stringDelegate by lazyUnsafe {
                PrefsDelegate<P, String>(this.key, "", String::class);
            }
    
            override fun getValue(thisRef: P, property: KProperty<*>): V {
                val value = stringDelegate.getValue(thisRef, property)
                return if (value.isEmpty()) {
                    default
                } else {
                    Json.fromJson<V>(value) ?: default
                }
            }
    
            override fun setValue(thisRef: P, property: KProperty<*>, value: V) {
                stringDelegate.setValue(thisRef, property, Json.toJson(value));
            }
        }
    }
    
    Custom photo picker (album)

        // an extra dependency is required
        implementation 'com.NBXXF.xxf_android:lib_album:xxxx'
    

       AlbumLauncher.from(SampleActivity.this)
                            .choose(MimeType.ofImage(), false)
                            .countable(true)
                            .capture(true)
                            .maxSelectable(9)
                            .addFilter(new GifSizeFilter(320, 320, 5 * Filter.K * Filter.K))
                            .gridExpectedSize(
                                    getResources().getDimensionPixelSize(R.dimen.grid_expected_size))
                            .restrictOrientation(ActivityInfo.SCREEN_ORIENTATION_PORTRAIT)
                            .thumbnailScale(0.85f)
                            .imageEngine(new GlideEngine())
                            .setOnSelectedListener((uriList, pathList) -> {
                                Log.e("onSelected", "onSelected: pathList=" + pathList);
                            })
                            .showSingleMediaType(true)
                            .originalEnable(true)
                            .maxOriginalSize(10)
                            .autoHideToolbarOnSingleTap(true)
                            .setOnCheckedListener(isChecked -> {
                                Log.e("isChecked", "onCheck: isChecked=" + isChecked);
                            })
                            .forResult()
                            .subscribe(new Consumer<AlbumResult>() {
                                @Override
                                public void accept(AlbumResult albumResult) throws Throwable {
                                    mAdapter.setData(albumResult.getUris(), albumResult.getPaths());
                                }
                            });
    

    Custom WeChat-style camera

        // an extra dependency is required
        implementation 'com.NBXXF.xxf_android:lib_camera_wechat:xxxx'
    

     CameraLauncher.instance
                        //.openPreCamera() // whether to open with the front camera
                        .allowPhoto(true) // whether to allow photos; default true
                        .allowRecord(true) // whether to allow video recording; default true
                        .setMaxRecordTime(3) // maximum recording time in seconds
                        .forResult(this)
                        .subscribe {
                            if (it.isImage) {
                                text.text = "Image Path:\n${it.path}"
                            } else {
                                text.text = "Video Path:\n${it.path}"
                            }
                        }
    

    QR code generation

    QRCodeProviders.of(content)
                    .setOutputSize(new Size(width, height))
                    .setContentMargin(Integer.valueOf(margin))
                    .setContentColor(color_black)
                    .setBackgroundColor(color_white)
                    .setLogo(logoBitmap)
                    .setContentFillImg(blackBitmap)
                    .build();
    
    RecyclerView dividers

    1. DividerDecorationFactory, a factory pattern
    2. LinearItemDecoration, supporting horizontal, vertical, and grid layouts
    Rounded-corner components

    (app:radius="8dp"; app:radius="360dp" produces a circle; for details see the class comments on each of the classes below!)

    1. XXFRoundButton
    2. XXFRoundCheckedTextView
    3. XXFRoundEditText
    4. XXFRoundImageView
    5. XXFRoundTextView
    6. XXFRoundLayout
    7. XXFRoundLinearLayout
    8. XXFRoundRelativeLayout
    Gradient-background components

    (app:start_color, app:end_color; for details see the class comments on each of the classes below!)

    1. XXFGradientCompatButton
    2. XXFGradientCompatCheckedTextView
    3. XXFGradientCompatEditText
    4. XXFGradientCompatImageView
    5. XXFGradientCompatTextView
    6. XXFGradientFrameLayout
    7. XXFGradientLinearLayout
    8. XXFGradientRelativeLayout
    Aspect-ratio components

    (app:widthRatio, app:heightRatio; for details see the class comments on each of the classes below!)

    1. XXFRationCompatButton
    2. XXFRationCompatCheckedTextView
    3. XXFRationCompatEditText
    4. XXFRationCompatImageView
    5. XXFRationCompatTextView
    6. XXFRationFrameLayout
    7. XXFRationLinearLayout
    8. XXFRationtRelativeLayout

    Visit original content creator repository

  • ncryptf-java

    ncryptf Java


    ncryptf logo

    A library for facilitating hash-based KDF signature authentication, and end-to-end encrypted communication with compatible APIs.

    Installing

    This library can be installed via Maven or Gradle via a JitPack checkout:

    Maven

    1. Add jitpack.io as a repository dependency in your pom.xml.
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
    2. Add this library as a dependency. Be sure to replace LATEST_TAG_FROM_GITHUB appropriately.
    <dependency>
        <groupId>com.ncryptf</groupId>
        <artifactId>ncryptf-java</artifactId>
        <version>LATEST_TAG_FROM_GITHUB</version>
    </dependency>

    Gradle

    1. Add jitpack.io to your root build.gradle file:
    allprojects {
        repositories {
            ...
            maven { url 'https://jitpack.io' }
        }
    }
    
    2. Add this library as a dependency. Be sure to replace LATEST_TAG_FROM_GITHUB appropriately.
    dependencies {
        implementation 'com.github.charlesportwoodii:ncryptf-java:LATEST_TAG_FROM_GITHUB'
    }
    

    Testing

    ./gradlew clean install test
    

    Documentation

    Documentation is available on JitPack for the master branch.

    Javadoc documentation can be generated by running the following command:

    ./gradlew javadoc
    

    The HTML documentation will be placed in ./target/site/apidocs/index.html

    HMAC+HKDF Authentication

    HMAC+HKDF authentication is an authentication method that ensures the request is not tampered with in transit. This provides resilience not only against network-layer manipulation, but also against man-in-the-middle attacks.

    At a high level, an HMAC signature is created from the raw request body, the HTTP method, the URI (with query parameters, if present), and the current date. In addition to ensuring the request cannot be manipulated in transit, this also timeboxes the request, effectively preventing replay attacks.

    Supporting APIs will return a payload containing at minimum the following information:

    {
        "access_token": "7XF56VIP7ZQQOLGHM6MRIK56S2QS363ULNB5UKNFMJRQVYHQH7IA",
        "refresh_token": "MA2JX5FXWS57DHW4OIHHQDCJVGS3ZKKFCL7XM4GNOB567I6ER4LQ",
        "ikm": "bDEyECRvKKE8w81fX4hz/52cvHsFPMGeJ+a9fGaVvWM=",
        "signing": "7v/CdiGoEI7bcj7R2EyDPH5nrCd2+7rHYNACB+Kf2FMx405und2KenGjNpCBPv0jOiptfHJHiY3lldAQTGCdqw==",
        "expires_at": 1472678411
    }

    After extracting the elements, we can create a signed request as follows:

    import com.ncryptf.Token;
    import com.ncryptf.Authorization;
    import com.ncryptf.exceptions.*;
    
    Token token = new Token(
        accessToken,
        refreshToken,
        ikm,
        signing,
        expiresAt
    );
    
    try {
        Authorization auth = new Authorization(
            httpMethod,
            uri,
            token,
            date,
            payload
        );
    
        String header = auth.getHeader();
    } catch (KeyDerivationException e) {
        // Handle errors
    }

    A trivial full example is shown as follows:

    import com.ncryptf.Token;
    import com.ncryptf.Authorization;
    import com.ncryptf.exceptions.*;
    import org.apache.commons.codec.binary.Base64;
    import java.time.Instant;
    import java.time.ZoneOffset;
    import java.time.ZonedDateTime;
    
    Token token = new Token(
        "7XF56VIP7ZQQOLGHM6MRIK56S2QS363ULNB5UKNFMJRQVYHQH7IA",
        "7XF56VIP7ZQQOLGHM6MRIK56S2QS363ULNB5UKNFMJRQVYHQH7IA",
        Base64.decodeBase64("bDEyECRvKKE8w81fX4hz/52cvHsFPMGeJ+a9fGaVvWM="),
        Base64.decodeBase64("7v/CdiGoEI7bcj7R2EyDPH5nrCd2+7rHYNACB+Kf2FMx405und2KenGjNpCBPv0jOiptfHJHiY3lldAQTGCdqw=="),
        ZonedDateTime.ofInstant(Instant.ofEpochSecond(1472678411), ZoneOffset.UTC)
    );
    
    ZonedDateTime date = ZonedDateTime.now(ZoneOffset.UTC);
    
    try {
        Authorization auth = new Authorization(
            "POST",
            "/api/v1/test",
            token,
            date,
            "{\"foo\":\"bar\"}"
        );
    
        String header = auth.getHeader();
    } catch (KeyDerivationException e) {
        // Handle errors
    }

    Note that the date property should be pre-offset when calling Authorization to prevent time skewing.

    The payload parameter should be a JSON serializable string.

    Version 2 HMAC Header

    The Version 2 HMAC header, for APIs that support it, can be retrieved by calling:

    String header = auth.getHeader();

    Version 1 HMAC Header

    For APIs using version 1 of the HMAC header, call Authorization with the optional 6th parameter, version, set to 1.

    try {
        Authorization auth = new Authorization(
            httpMethod,
            uri,
            token,
            date,
            payload,
            1
        );
    
        String header = auth.getHeader();
    } catch (KeyDerivationException e) {
        // Handle errors
    }

    This string can be used in the Authorization header.

    Date Header

    The Version 1 HMAC header requires an additional X-Date header, which can be retrieved by calling auth.getDateString().

    Encrypted Requests & Responses

    This library enables clients to establish a trusted, encrypted session on top of a TLS layer, while simultaneously (and independently) providing the ability to authenticate and identify a client via HMAC+HKDF-style authentication.

    The rationale for this functionality includes but is not limited to:

    1. The need for an extra layer of security
    2. A lack of trust in the network or TLS itself (see https://blog.cloudflare.com/incident-report-on-memory-leak-caused-by-cloudflare-parser-bug/)
    3. The need to ensure confidentiality of the Initial Key Material (IKM) provided by the server for HMAC+HKDF authentication
    4. The need to ensure confidentiality of user-submitted credentials to the API for authentication

    The primary reason you may want to establish an encrypted session with the API itself is to ensure confidentiality of the IKM, preventing data leakage over untrusted networks and avoiding information exposure in a Cloudflare-like incident (or any man-in-the-middle attack). Encrypted sessions enable you to utilize a service like Cloudflare, should a memory leak occur again, with confidence that the IKM and other secure data would not be exposed.

    To encrypt, decrypt, sign, and verify messages, you’ll need to be able to generate the appropriate keys. Internally, this library uses lazysodium-java to perform all necessary cryptography functions, though any libsodium implementation for Java would suffice.

    Encryption Keys

    Encryption uses a sodium crypto box. A keypair can be generated as follows when using lazy-sodium.

    import com.ncryptf.Utils;
    import com.ncryptf.Keypair;
    Keypair kp = Utils.generateKeypair();

    Signing Keys

    Signing uses a sodium signature. A keypair for signing can be generated as follows using lazy-sodium:

    import com.ncryptf.Utils;
    import com.ncryptf.Keypair;
    Keypair kp = Utils.generateSigningKeypair();

    Encrypted Request Body

    Payloads can be encrypted as follows:

    import com.ncryptf.Request;
    import com.ncryptf.exceptions.*;
    import java.util.Base64;
    
    // Arbitrary string payload
    String payload = "{\"foo\":\"bar\"}";
    
    try {
        // 32 byte secret and public key. Extract from kp.get...().getAsBytes(), or another libsodium method
        Request request = new Request(secretKeyBytes, signingSecretKeyBytes /* token.signature */);
    
        // Cipher now contains the encrypted data
        // Signature should be the signature private key previously agreed upon with the sender
        // If you're using a `Token` object, this should be the `.signature` property
        byte[] cipher = request.encrypt(payload, remotePublicKey);
    
        // Send as encrypted request body
        String b64Body = Base64.getEncoder().encode(cipher);
    
        // Do your http request here
    } catch (EncryptionFailedException e) {
        // Handle encryption errors here
    }

    Note that you need to have a pre-bootstrapped public key to encrypt data. For the v1 API, this is typically returned by /api/v1/server/otk.

    Decrypting Responses

    Responses from the server can be decrypted as follows:

    import com.ncryptf.Response;
    import com.ncryptf.exceptions.*;
    import java.util.Base64;
    
    try {
        // Grab the raw response from the server
        byte[] responseFromServer = Base64.getDecoder().decode("<HTTP-Response-Body>");
        Response response = new Response(clientSecretKey);
    
        String decrypted = response.decrypt(responseFromServer, remotePublicKey);
    } catch (InvalidChecksumException e) {
        // Checksum is not valid. Request body was tampered with
    } catch (InvalidSignatureException e) {
        // Signature verification failed
    } catch (DecryptionFailedException e) {
        // Decryption failed. This may be an issue with the provided nonce, or keypair being used
    }

    V2 Encrypted Payload

    Version 2 works identically to the version 1 payload, except that all components needed to decrypt the message are bundled within the payload itself, rather than broken out into separate headers. This alleviates developer concerns with needing to manage multiple headers.

    The version 2 payload is described as follows. Each component is concatenated together.

    Segment                                           Length
    4-byte header DE259002 in binary format           4 bytes
    Nonce                                             24 bytes
    The public key associated with the private key    32 bytes
    Encrypted body                                    X bytes
    Signature public key                              32 bytes
    Signature of the raw request body                 64 bytes
    Checksum of the prior elements concatenated       64 bytes
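
    For illustration only, the layout above can be unpacked as in the following Python sketch (the library itself handles this for you; the sketch just mirrors the table):

    def parse_v2_payload(payload: bytes) -> dict:
        """Illustrative slicing of the version 2 payload layout above."""
        assert payload[:4] == bytes.fromhex("DE259002"), "not a v2 payload"
        return {
            "nonce":          payload[4:28],      # 24 bytes
            "public_key":     payload[28:60],     # 32 bytes
            "body":           payload[60:-160],   # X bytes of encrypted body
            "sig_public_key": payload[-160:-128], # 32 bytes
            "signature":      payload[-128:-64],  # 64 bytes
            "checksum":       payload[-64:],      # 64 bytes
        }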
    Visit original content creator repository
  • 42_libft

    42 libft


    42 libft is the first project of the common core. This project has the student recreate some standard C library functions plus some additional functions that will be useful throughout the cursus.

    If you're from 42 and you just started libft, I highly recommend using this repository more as support while you develop your own functions and tests. If you need help, you can send me a message on any of my socials.

    Standard C Library

    Function Description Status Francinette
    ft_isalpha Checks if the char received is a letter  ✔️   ✔️
    ft_isdigit Checks if the char received is a number  ✔️   ✔️
    ft_isalnum Checks if the char received is alphanumeric  ✔️   ✔️
    ft_isascii Checks if the char received is an ASCII char  ✔️   ✔️
    ft_isprint Checks if the char received is printable  ✔️   ✔️
    ft_strlen Returns the size of the string received  ✔️   ✔️
    ft_memset Fills a block of memory with a particular value  ✔️   ✔️
    ft_bzero Deletes the information of a set block of memory  ✔️   ✔️
    ft_memcpy Copies the values of x bytes from source to destination  ✔️   ✔️
    ft_memmove Copies the values of x bytes from source to destination, even when the areas overlap  ✔️   ✔️
    ft_strlcpy Copies from src to dest and returns the length of the string copied  ✔️   ✔️
    ft_strlcat Concatenates dest with src and returns the length of the concatenated string  ✔️   ✔️
    ft_toupper Converts the lowercase char received into uppercase  ✔️   ✔️
    ft_tolower Converts the uppercase char received into lowercase  ✔️   ✔️
    ft_strchr Returns the first occurrence of a char in the string  ✔️   ✔️
    ft_strrchr Returns the last occurrence of a char in the string  ✔️   ✔️
    ft_strncmp Compares the given strings up to n characters  ✔️   ✔️
    ft_memchr Searches the first x bytes of a block of memory for the first occurrence of the value received  ✔️   ✔️
    ft_memcmp Compares the first x bytes of the memory areas str1 and str2  ✔️   ✔️
    ft_strnstr Returns the first occurrence of the little string in the big string  ✔️   ✔️
    ft_atoi Converts the string received to its int value  ✔️   ✔️
    ft_calloc Allocates a memory block of the size received and initializes it  ✔️   ✔️
    ft_strdup Duplicates the string received into a newly allocated string  ✔️   ✔️

    Additional functions

    Function Description Status Francinette
    ft_substr Returns an allocated substring that starts at the index received  ✔️   ✔️
    ft_strjoin Returns a new allocated string which is the concatenation of the two strings received  ✔️   ✔️
    ft_strtrim Returns a copy of the string received with the given characters removed from its beginning and end  ✔️   ✔️
    ft_split Returns an array of strings obtained by splitting the string with the character sent  ✔️   ✔️
    ft_itoa Converts the int value received to its string representation  ✔️   ✔️
    ft_strmapi Applies the function received to each letter of the string received, creating a new allocated string with the changes  ✔️   ✔️
    ft_striteri Applies the function received to each letter of the string received, replacing the string received with the changes  ✔️   ✔️
    ft_putchar_fd Outputs the char received to the given file descriptor  ✔️   ✔️
    ft_putstr_fd Outputs the string received to the given file descriptor  ✔️   ✔️
    ft_putendl_fd Outputs the string received to the given file descriptor, followed by a newline  ✔️   ✔️
    ft_putnbr_fd Outputs the number received to the given file descriptor  ✔️   ✔️

    Bonus functions

    Function Description Status Francinette
    ft_lstnew Creates and returns a new allocated node for a linked list  ✔️   ✔️
    ft_lstadd_front Adds the node received to the beginning of a linked list  ✔️   ✔️
    ft_lstsize Returns the number of nodes in a linked list  ✔️   ✔️
    ft_lstlast Returns the last node of a linked list  ✔️   ✔️
    ft_lstadd_back Adds the node received to the end of a linked list  ✔️   ✔️
    ft_lstdelone Receives a node, deletes the contents of its variables, and frees the node  ✔️   ✔️
    ft_lstclear Deletes and frees the given node and every successor of that node  ✔️   ✔️
    ft_lstiter Applies the function received to every element of the node's variables  ✔️   ✔️
    ft_lstmap Applies the function received to every element of the node's variables and creates a new linked list from that  ✔️   ✔️
    Visit original content creator repository
  • TensorRT-v8-YOLOv5-v5.0

    TensorRT v8.2 accelerated deployment of YOLOv5-v5.0

    Project overview

    • Builds the YOLO network with the native TensorRT API and converts the PyTorch model into a serialized .plan file to accelerate inference;
    • Based on TensorRT 8.2.4; see the environment setup section below for the specific environment;
    • Mainly references the tensorrtx project, but the author made extensive changes to match personal coding habits;
    • Link to the version without CUDA-accelerated image preprocessing: no_cuda_preproc

    Project highlights

    • A comparison with YOLOv5-v5.0 in the tensorrtx project; this is not to say either is better, just that some choices better fit the author's personal habits

    1. tensorrtx: implicit batch; this project: explicit batch. This is the biggest difference; many of the differences in the code stem from it.
    2. tensorrtx: the Detect plugin inherits from IPluginV2IOExt; this project: it inherits from IPluginV2DynamicExt.
    3. tensorrtx: the Detect plugin is compiled as a shared library; this project: it is compiled directly into the final executable.
    4. tensorrtx: asynchronous inference (context.enqueue); this project: synchronous inference (context.executeV2). In the author's own tests there is no speed difference, and the synchronous style is simpler.
    5. tensorrtx: for INT8 quantization, images are converted to tensors with OpenCV's dnn module; this project: a custom conversion method.
    6. tensorrtx: preprocessing in C++ with OpenCV; this project: CUDA-accelerated preprocessing. Versions after v5.0 also have this; two different implementations.

    Beyond the above, there are many other coding differences, not all listed here.

    Inference speed

    • GPU: GeForce RTX 2080 Ti

    FP32   FP16   INT8
    6 ms   3 ms   3 ms

    Note: the inference time for this project includes preprocessing, the forward pass, and postprocessing; the tensorrtx project measured only the forward pass.

    Environment setup

    Host machine environment

    • Ubuntu 16.04
    • GPU: GeForce RTX 2080 Ti
    • Docker, nvidia-docker

    Pull the base image

    docker pull nvcr.io/nvidia/tensorrt:22.04-py3
    • The versions inside this image are as follows:

    CUDA cuDNN TensorRT python
    11.6.2 8.4.0.27 8.2.4.2 3.8.10

    Install other libraries

    1. Create a Docker container

      docker run -it --gpus device=0 --shm-size 32G -v /home:/workspace nvcr.io/nvidia/tensorrt:22.04-py3 bash

      Here -v /home:/workspace mounts the host's /home directory into the container to make file exchange easier; another directory can be chosen instead

      • Switch the container's package sources to Chinese mirrors

      cd /etc/apt
      rm sources.list
      vim sources.list
      • Copy the following content into the sources.list file

      deb http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
      deb http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
      deb http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
      deb http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
      deb http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
      deb-src http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
      deb-src http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
      deb-src http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
      deb-src http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
      deb-src http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
      • Update the package index
      apt update
    2. Install OpenCV 4.5.0

      • The OpenCV 4.5.0 source is linked below; download the zip, extract it, and place it in the host's /home directory, i.e. the container's /workspace directory
      https://github.com/opencv/opencv
      • All of the following steps run inside the container

      # Install dependencies
      apt install build-essential
      apt install libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
      apt install libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libdc1394-22-dev
      # Build and install OpenCV
      cd /workspace/opencv-4.5.0
      mkdir build
      cd build
      cmake -D CMAKE_INSTALL_PREFIX=/usr/local -D CMAKE_BUILD_TYPE=Release -D OPENCV_GENERATE_PKGCONFIG=ON -D OPENCV_ENABLE_NONFREE=True ..
      make -j6
      make install

    Run the project

    1. Generate the .wts file
    • Main steps: copy this project's pth2wts.py into the official yolov5-v5.0 directory, then run python pth2wts.py there to obtain the para.wts file
    • The detailed steps are as follows

    git clone -b v5.0 https://github.com/ultralytics/yolov5.git
    git clone https://github.com/emptysoal/yolov5-v5.0_tensorrt-v8.2.git
    # download https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5s.pt
    cp {tensorrt}/pth2wts.py {ultralytics}/yolov5
    cd {ultralytics}/yolov5
    python pth2wts.py
    # a file 'para.wts' will be generated.
    2. Build the serialized .plan file and run inference
    • Main steps: copy the para.wts generated in the previous step into this project's directory, then run make and ./trt_infer in this project
    • The detailed steps are as follows

    cp {ultralytics}/yolov5/para.wts {tensorrt}/
    cd {tensorrt}/
    mkdir images  # and put some images in it
    # update CLASS_NUM in yololayer.h if your model is trained on custom dataset
    # you can also update INPUT_H, INPUT_W in yololayer.h, and update NET (s/m/l/x) in trt_infer.cpp
    make
    ./trt_infer
    # result images will be generated in the current dir

    Visit original content creator repository

  • Intersection-navigation-for-Duckietown

    Intersection Navigation for Duckietown

    As part of the Duckietown class taught at ETH Zurich (Fall 2023), we worked on a small final project and presented it to other students as a group of three master's students: Benjamin Dupont, Yanni Kechriotis, and Samuel Montorfani. We implemented an intersection navigation pipeline for the Duckiebots (small autonomous differential-drive robots equipped with an Nvidia Jetson Nano) to enable them to drive through intersections in the Duckietown road-like environment.

    The pipeline consists of:

    1. Perception: Detect intersections and other Duckiebots in the environment.
    2. Decision Making: Decide which way to go and whether it is safe to proceed based on the detections. This includes applying a decision-making stack to determine priority and right of way.
    3. Control: Steer the Duckiebot through the intersection.

    The pipeline is implemented in Python and uses the ROS framework to communicate with the Duckiebot and other nodes in the system.

    Intersection Navigation

    Project Overview

    Scope

    • Detect intersections in Duckietown.
    • Detect other Duckiebots in the intersection.
    • Decide whether to stop, go, or turn based on other agents, using LED colors for communication.
    • Navigate the intersection by turning left, right, or going straight, depending on the intersection options.
    • Apply a decision-making stack to determine priority and right of way.

    Assumptions

    All sensors on the Duckiebots are assumed to be fully functional. The intersections are expected to be of standard size, with standard markings that are clearly visible, and without any obstructions such as buildings. Additionally, the Duckiebots are assumed to be of standard size and shape. Finally, the code for the lane following is given by the instructors as it is part of the Duckietown software stack.

    Challenges

    The project faces several challenges that could lead to failure. One major challenge is the presence of multiple Duckiebots at an intersection, which can create symmetry issues and complicate decision-making. Delayed decision-making can also pose a risk, as it may lead to collisions or traffic jams. The limited field of view of the Duckiebots can hinder their ability to detect other robots and obstacles in time. LED detection issues can further complicate communication between Duckiebots. Additionally, random component failures can disrupt the navigation process. To mitigate these risks, we implemented a robust priority system and strategies to improve field of view, such as detecting Duckiebots while approaching intersections and turning in place to get a better view. We also assume that there is always a Duckiebot on the left and make random decisions after a certain time to prevent deadlocks at intersections.

    Implementation details and results

    The implementation of our intersection navigation project involved creating custom classes and functions to handle various tasks such as intersection detection, decision making, and control. The Duckiebot starts by following the lane and uses its camera to detect intersections by identifying red markers. Upon detecting an intersection, it stops and randomly chooses an action (straight, left, or right) based on the intersection type. The Duckiebot then signals its intended action using LEDs and checks for other Duckiebots at the intersection using a custom-trained YOLOv5 object detection model. This model provided reliable detection of other Duckiebots, which was crucial for the priority decision-making process. The Duckiebot follows standard traffic rules to determine right-of-way and uses motor encoders to execute the chosen action through the intersection.

    Perception

    The perception module is responsible for detecting intersections and other Duckiebots in the environment. We used the Duckietown lane following code to detect intersections based on the presence of red markers. The intersection detection algorithm was implemented using OpenCV to identify the red markers and determine the intersection type (T-intersection or 4-way intersection) and the possible options for the Duckiebot to navigate to. We also trained a custom YOLOv5 object detection model to detect other Duckiebots at the intersection. The model was trained on a dataset of Duckiebot images and achieved high accuracy in detecting Duckiebots in various orientations and lighting conditions. The alternative was to use the LEDs on the Duckiebots to communicate with each other, but we chose the object detection model for more reliable results, as the LED strength could vary with lighting conditions, and on most robots only one LED was working. We then ran LED detection inside the bounding box of each detected Duckiebot to determine the LED color and the direction that Duckiebot intended to take; this information was used in the decision-making module. To determine the positions of the other Duckiebots, we used their bounding boxes in camera pixel coordinates to infer their position relative to our Duckiebot, which the decision-making module used to determine priority and right of way.

    Plus Intersection
    “+” Intersection detection

    YOLO Detection
    YOLO v5 Detection
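
    As a minimal sketch of the red-marker detection idea shown above (the thresholds here are illustrative placeholders, not the values our node actually used):

    import cv2

    def sees_red_marker(bgr_image) -> bool:
        """Return True if enough red stop-line pixels appear in the lower image half."""
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        # red wraps around the hue axis, so combine two ranges
        mask = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255)) | \
               cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))
        lower_half = mask[mask.shape[0] // 2:, :]   # markers appear close to the robot
        return cv2.countNonZero(lower_half) > 2000  # pixel-count threshold (placeholder)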

    Decision Making

    The decision-making module is responsible for determining the Duckiebot's next action based on the detected intersections and other Duckiebots. Once the different options were detected, the Duckiebot randomly chose an action (straight, left, or right) based on the intersection type. We implemented a priority system to handle multiple Duckiebots at an intersection and ensure safe navigation. The priority system assigns right of way based on the Duckiebot's position relative to the other Duckiebots. The Duckiebot signals its intended action using LEDs to communicate with other Duckiebots and avoid collisions; this is used in complex cases where right of way alone is not sufficient. In the simplest case, the Duckiebot simply waits at the stop until the Duckiebot to its right has passed. At a 4-way intersection, it signals its intention to go straight, left, or right using the LEDs; at a T-intersection, it signals its intention to go straight or turn. The decision-making module also includes a tie-breaking mechanism to resolve conflicts when multiple Duckiebots have the same priority: in these cases, the Duckiebot randomly chooses an action to prevent deadlocks and ensure smooth traffic flow. The module was implemented using a combination of if-else statements and priority rules to determine the Duckiebot's next action based on the detected intersections and other Duckiebots. The priority system was designed to handle various scenarios and ensure safe and efficient navigation through intersections; however, it was not fully completed and tested during the project, as time was limited.
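
    In simplified terms, the core right-of-way logic behaved roughly like the sketch below (the real node also handled LED signals and more intersection geometries; names and the timeout value are illustrative):

    import random
    import time

    def right_of_way(occupied: set, wait_start: float, timeout_s: float = 10.0) -> str:
        """Yield to the right; break symmetric deadlocks randomly after a timeout.

        occupied -- directions where a Duckiebot was detected, e.g. {"right", "front"}
        """
        if "right" in occupied:  # the Duckiebot to the right goes first
            if time.time() - wait_start > timeout_s:
                return random.choice(["go", "wait"])  # random tie-break against deadlock
            return "wait"
        return "go"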

    Control

    Due to the limited time available for the project, we couldn't implement a full estimation and control pipeline for the Duckiebots. Instead, we opted for a brute-force approach, calculating the inputs needed to achieve the desired action using open-loop control. This was sufficient in most cases, and the lane following module was able to take over right at the end of the intersection to compensate for potential small errors and get back on track. Additionally, to mitigate the effect of misalignment when approaching the intersection, we added a small alignment step before the intersection, where the Duckiebot would turn in place to get a better view of the intersection and align itself with the lanes. By matching the intersection detection against a template, we were able to ensure the Duckiebot was straight when scanning the intersection, which improved both the detection accuracy and the intersection navigation itself thanks to a more standardized starting pose.
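
    The open-loop turns boil down to simple differential-drive kinematics, roughly as sketched below (the wheel baseline and the example radius are placeholders, not our tuned constants):

    import math

    def arc_wheel_speeds(v: float, turn_radius: float, baseline: float = 0.1):
        """Left/right wheel speeds (m/s) for an arc of the given radius at speed v.

        baseline -- distance between the wheels in meters (placeholder value).
        A positive turn_radius curves left; pass math.inf to go straight.
        """
        omega = 0.0 if math.isinf(turn_radius) else v / turn_radius  # yaw rate
        return v - omega * baseline / 2.0, v + omega * baseline / 2.0

    # Example: a left quarter-circle of radius 0.3 m at 0.2 m/s takes
    # (pi / 2) * 0.3 / 0.2 ~= 2.36 s of these commanded wheel speeds.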

    Results

    In terms of results, our systematic evaluation showed an intersection detection accuracy of approximately 90%, a turn completion rate of around 85%, and a Duckiebot detection accuracy of about 95%. However, we encountered some challenges, with crashes occurring about 10% of the time and off-road occurrences happening roughly 40% of the time, often due to camera delays, motor issues, or other hardware problems. These problems also arose due to the code running on our own laptops rather than the Duckiebot itself, which could have affected the real-time performance. Despite these challenges, our project demonstrated a successful implementation of intersection navigation for Duckiebots, and we received very positive feedback from our peers during the final presentation.

    Demonstration Videos

    You can watch a demonstration of the intersection navigation system in action with the following GIFs:

    Single Duckiebot Navigation
    Single Duckiebot navigating through an intersection

    Two Duckiebots Navigation
    Two Duckiebots navigating through an intersection

    Three Duckiebots Navigation
    Three Duckiebots navigating through an intersection

    For the full videos, with realistic performance, you can look in the folder /videos in the repository.

    Note: As discussed in the challenges section, the videos show the Duckiebots being driven by code running on our laptops rather than onboard, which could have affected real-time performance. This affected the controls sent to the Duckiebots and the camera feed, leading to some crashes and off-road occurrences. Additionally, the videos also show the sometimes inaccurate lane following code, which was given to us by the instructors and out of scope for this project, as stated in our assumptions.

    Conclusion and Future Work

    In conclusion, our project successfully implemented an intersection navigation system for Duckiebots, achieving high accuracy in intersection detection and Duckiebot recognition. Despite hardware and software integration challenges, we demonstrated the feasibility of autonomous intersection navigation in Duckietown. The project met our initial goals, although the combined execution of actions revealed areas for improvement, particularly in handling delays and hardware reliability.

    For future work, several extensions could enhance the Duckiebots’ capabilities. Developing a more robust tie-breaking mechanism for four-way intersections and ensuring the system can handle non-compliant or emergency Duckiebots would improve reliability. Implementing traffic light-controlled intersections and enabling multiple Duckiebots to navigate intersections simultaneously with minimal constraints on traffic density would significantly advance the system’s complexity and utility. Better integration of the code into the component framework would streamline development and debugging processes.

    Achieving these improvements would require substantial effort, particularly in enhancing hardware reliability and refining the software framework. Despite the challenges, the potential advancements would unlock new skills for the Duckiebots, making them more versatile and capable in complex environments. The schedule for this project was quite tight, and we would have liked more time to work on these aspects.

    Overall, we are satisfied with our project’s outcomes and the learning experience it provided. The insights gained will inform future developments and contribute to the broader field of autonomous robotics.

    Design Document

    The design document for the project can be found in the /design_document folder. It contains a pdf document exported from the word document that we filled in throughout our work, outlining the design choices, implementation details, and challenges faced during the project.

    Visit original content creator repository