Blog

  • Steam-Videogames-Reviews

    Repository Content

This repository contains a Jupyter notebook titled “Steam Reviews and Algorithm Analysis”. The notebook is divided into two main sections:

1. Analysis of a dataset of 2021 review data from the Steam gaming platform, with the results discussed in Markdown cells;
    2. Algorithm complexity analysis focused on evaluating the computational complexity of an algorithm.

    Steam Reviews 2021 Dataset

    The “Steam Reviews 2021” dataset, available on Kaggle, offers detailed insights into user reviews for games on the Steam platform. This dataset includes information on user ratings, gameplay time, recommendation indicators, and timestamps, making it an excellent resource for analyzing user engagement, game popularity, and sentiment trends within the gaming community.

    Top 10 Games by Sales

    Overview

    This dataset focuses on reviews posted by Steam users in 2021. Each entry provides data on user preferences, game engagement, and ratings. It is particularly valuable for researchers, data scientists, and game developers interested in exploring user behavior patterns, identifying trends in gaming popularity, and assessing sentiment.

    ⚠️ Warning

• The dataset is 7.6 GB in size and requires at least 24 GB of RAM to run the Jupyter notebook in full. If your computer does not meet these specifications, consider using a virtual machine with adequate computational power to handle the dataset efficiently.
• To see the graph for point four of [RQ5], you must download the script, because the graph is too large to render inline.

    Key Attributes of the Dataset

    • Review details: Data on each review, including creation and update timestamps, user recommendations, and text snippets where available.
    • User engagement: Metrics on playtime, total review counts, and other engagement indicators.
    • Game information: Title of the game, genre categories, and other descriptive details relevant to each game.

    Libraries

    To run the analysis in this notebook, please install the following Python libraries:

    • kagglehub
    • pandas
    • numpy
    • matplotlib
    • seaborn
    • datetime
    • scipy
    • plotly
    • nltk
    • swifter

    Data fields

    The dataset consists of the following primary fields:

    • Timestamp created: The date and time when the review was initially created.
    • Timestamp updated: The most recent date and time when the review was updated.
    • Game title (app_name): Name of the game to which the review pertains.
    • Recommended: Binary indicator (True = Recommended, False = Not Recommended) representing whether the user endorses the game.
    • Playtime forever: Total playtime in minutes that the user has logged for the game at the time of the review.
    • Author details:
      • Last played: Date when the reviewer last played the game.
      • Number of reviews: Total reviews authored by the user, indicating their engagement level on Steam.
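As a rough illustration of the kinds of aggregations these fields support (a minimal sketch; the column names below are simplified assumptions, not the Kaggle CSV's exact schema):

```python
import pandas as pd

# Toy stand-in for the review data; real column names may differ.
reviews = pd.DataFrame({
    "app_name":         ["Game A", "Game A", "Game B", "Game B", "Game B"],
    "recommended":      [True, False, True, True, True],
    "playtime_forever": [120, 30, 400, 250, 90],  # minutes at review time
})

# Recommendation rate and median playtime per game
summary = reviews.groupby("app_name").agg(
    recommend_rate=("recommended", "mean"),
    median_playtime_min=("playtime_forever", "median"),
)
print(summary)
```

The same groupby pattern scales to the full dataset, e.g. to rank games by recommendation rate or compare playtime distributions.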

    Potential applications

    This dataset can support a range of analyses and projects, including:

    • Sentiment Analysis: Determine general user sentiment by analyzing reviews and recommendations.
    • Engagement and Retention Studies: Evaluate user playtime and frequency of reviews to infer engagement levels with various games.
    • Popularity and Recommendation Trends: Assess which games received the most recommendations and explore why some games have higher review volumes.
    • Comparative analysis: Contrast top-reviewed and top-played games to explore correlations between playtime and review volume.

    License

    This dataset is sourced from Kaggle and adheres to Kaggle’s usage terms. Please check Kaggle’s licensing policy if considering use for commercial purposes.

    Algorithm question

    Algorithm

    Information about the Algorithm

You are given two positive integers, n (where 1 ≤ n ≤ 10^9) and k (where 1 ≤ k ≤ 100). Your task is to express n as the sum of k positive integers, all having the same parity (i.e., all have the same remainder when divided by 2, meaning they are either all even or all odd). In other words, find a₁, a₂, …, aₖ, each aᵢ > 0, such that n = a₁ + a₂ + … + aₖ, and all aᵢ are simultaneously either even or odd. If it is impossible to represent n in this way, report that no such representation exists.

    Input

    In the first input line, you will receive a number t (where 1 ≤ t ≤ 100), representing the number of test cases. The following t lines will contain two values, n and k, corresponding to each test case.

    Output

    For each test case, if it is possible to represent n as the sum of k positive integers, all of the same parity (either all even or all odd), print YES and provide the corresponding values of aᵢ in the next line. If there are multiple valid solutions, you can print any of them. If such a representation is not possible for a given test case, print NO.

    Examples

    Input

7
10 3
    100 4
    8 7
    97 2
    8 8
    3 10
    5 3
    

    Output

    YES
    4 2 4
    YES
    55 5 5 35
    NO
    NO
    YES
    1 1 1 1 1 1 1 1
    NO
    YES
    3 1 1
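The standard constructive solution can be sketched in a few lines of Python (the function name is mine): for all-odd parts, use k−1 ones and put the remainder in the last slot, which must then be positive and odd; for all-even parts, use k−1 twos, so the remainder must be positive and even. Each check is O(1) per test case.

```python
def split_same_parity(n: int, k: int):
    """Return k positive integers of equal parity summing to n, or None."""
    # All odd: take k-1 ones; the remainder must be positive and odd.
    rest = n - (k - 1)
    if rest >= 1 and rest % 2 == 1:
        return [1] * (k - 1) + [rest]
    # All even: take k-1 twos; the remainder must be positive and even.
    rest = n - 2 * (k - 1)
    if rest >= 2 and rest % 2 == 0:
        return [2] * (k - 1) + [rest]
    return None  # no representation exists


for n, k in [(10, 3), (100, 4), (8, 7), (97, 2), (8, 8), (3, 10), (5, 3)]:
    parts = split_same_parity(n, k)
    if parts:
        print("YES")
        print(*parts)
    else:
        print("NO")
```

Note that any valid decomposition is accepted, so this construction may print different (but equally correct) values than the sample output above.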
    
    Visit original content creator repository https://github.com/Flavio-Mangione/Steam-Videogames-Reviews
  • khan-www23

    [WWW’23] KHAN: Knowledge-Aware Hierarchical Attention Networks for Accurate Political Stance Prediction

This repository provides an implementation of KHAN as described in the paper: KHAN: Knowledge-Aware Hierarchical Attention Networks for Accurate Political Stance Prediction by Yunyong Ko, Seongeun Ryu, Soeun Han, Youngseung Jeon, Jaehoon Kim, Sohyun Park, Kyungsik Han, Hanghang Tong, and Sang-Wook Kim, in Proceedings of the ACM Web Conference (WWW) 2023.

    The overview of KHAN

    The overview of KHAN

    • Datasets
• To reflect the different political knowledge of each entity, we build two political knowledge graphs, KG-lib and KG-con. Also, for extensive evaluation, we construct a large-scale political news dataset, AllSides-L, much larger (48X) than the existing largest political news article dataset.
    • Algorithm
      • We propose a novel approach to accurate political stance prediction (KHAN), employing (1) hierarchical attention networks (HAN) and (2) knowledge encoding (KE) to effectively capture both explicit and implicit factors of a news article.
    • Evaluation
      • Via extensive experiments, we demonstrate that (1) (accuracy) KHAN consistently achieves higher accuracies than all competing methods (up to 5.92% higher than the state-of-the-art method), (2) (efficiency) KHAN converges within comparable training time/epochs, and (3) (effectiveness) each of the main components of KHAN is effective in political stance prediction.

    Datasets

    1. News articles datasets (SemEval, AllSides-S, AllSides-L)
    Dataset # of articles Class distribution
    SemEval 645 407 / 238
    AllSides-S 14.7k 6.6k / 4.6k / 3.5k
    AllSides-L 719.2k 112.4k / 202.9k / 99.6k / 62.6k / 241.5k
2. Knowledge Graphs (YAGO, KG-conservative, KG-liberal)
KG dataset # of source posts # of entities # of relations
    YAGO 123,182 1,179,040
    KG-lib 219,915 5,581 29,967
    KG-con 276,156 6,316 33,207
3. Pre-trained KG embeddings (common, conservative, liberal)

    Dependencies

Our code runs on an Intel i7-9700K CPU with 64GB memory and an NVIDIA RTX 2080 Ti GPU with 12GB, with the following packages installed:

    python 3.8.10
    torch 1.11.0
    torchtext 0.12.0
    pandas
    numpy
    argparse
    sklearn
    

    How to run

    python3 main.py \
      --gpu_index=0 \
      --batch_size=16 \
      --num_epochs=50 \
      --learning_rate=0.001 \
      --max_sentence=20 \
      --embed_size=256 \
      --dropout=0.3 \
      --num_layer=1 \
      --num_head=4 \
      --d_hid=128 \
      --dataset=SEMEVAL \
      --alpha=0.6 \
      --beta=0.2
    

    Citation

Please cite our paper if you use this code in your work. You can use the following BibTeX entry:

    @inproceedings{ko2023khan,
      title={KHAN: Knowledge-Aware Hierarchical Attention Networks for Accurate Political Stance Prediction},
  author={Ko, Yunyong and Ryu, Seongeun and Han, Soeun and Jeon, Youngseung and Kim, Jaehoon and Park, Sohyun and Han, Kyungsik and Tong, Hanghang and Kim, Sang-Wook},
      booktitle={Proceedings of the ACM Web Conference (WWW) 2023},
      pages={1572--1583},
      year={2023},
      isbn = {9781450394161},
      publisher = {Association for Computing Machinery (ACM)},
      doi = {10.1145/3543507.3583300},
      location = {Austin, TX, USA},
    }
    
    Visit original content creator repository https://github.com/yy-ko/khan-www23
  • Newbe.Mahua.Framework.V1

Newbe.Mahua.Framework was officially archived on August 2, 2020; the source code will no longer be updated.

First, click the Star button in the upper-right corner to unlock the hidden features.

    GitHub last commit All Contributors

Before You Start

Developers are advised to match their own needs against the table below and pick the framework that fits them best, to avoid wasting time.

1. One codebase that runs on multiple platforms
2. Support for development languages other than C#
3. I want it simple enough that not much learning is needed; usually everything can be mastered in about half an hour
4. I want community feedback to be fast, so any problem I hit can be resolved
SDK name (1) (2) (3) (4)
Other SDKs on the forum
Jie2GG.Native.Csharp.Frame
Newbe.Mahua V2
Newbe.Mahua V1

Links to the other SDKs on the forum

Open with a single picture; the features are left to your imagination.

    Newbe.Mahua.Version

A true warrior understands the idea just from looking at the picture.

If you beat a mahua (a fried dough twist), who hurts? Mahua-teng (a pun on Ma Huateng, Tencent's founder).

There are quite a few QQ protocol implementations and quite a few QQ bot platforms, and most of these platforms expose different interfaces, which makes integrating with them very difficult.

Developing with this SDK gives you the excellent experience of writing once and running on multiple different platforms.

It supports container-managed lifecycles and dependency injection, which makes unit testing easy and keeps development efficient.

You only need to develop once against the SDK's interfaces, and you can then publish your plugin to every supported QQ bot platform.

No need to worry about any one platform getting shut down.

Get Started Right Away

Click to view the help documentation and start writing your first QQ bot.

Happy Practice

I want to add my project here

The i-Chunqiu community bot has been serving the i-Chunqiu community since February 4, 2018, providing daily article pushes, article search, Magic Coin queries, author info queries, writers' team bonus balance queries, a bonus leaderboard, i-Chunqiu course search, and a series of other features. By incomplete statistics, it has been used by more than 35,000 people, with over 200,000 uses, reaching 700+ groups at its peak.

As its name suggests, Repeater Breaker is a detection bot built specifically for the "repeating" phenomenon in QQ groups, an auxiliary tool that helps group owners rein in repeaters.

A group bot that queries EVE market prices.

The Chengxumiao (Program Cat) group bot

Implements basic QQ group check-in. The database stores each user's check-in days, points, and last check-in date, so it can work together with any other group-management module. Users can check in once a day to earn a random number of points, which can be spent on other features. Each day, every checked-in user also receives a near-random "luck value" for fun, and users are tiered by check-in time to show how active they are. Many more features can be added through further development.

Versions

Version Downloads Dev build Description
Newbe.Mahua Newbe.Mahua.Version Newbe.Mahua.Download Newbe.Mahua.Version.Pre Core interfaces
Newbe.Mahua.PluginLoader Newbe.Mahua.PluginLoader.Version Newbe.Mahua.PluginLoader.Download Newbe.Mahua.PluginLoader.Version.Pre Core runtime
Newbe.Mahua.Tools.Psake Newbe.Mahua.Tools.Psake.Version Newbe.Mahua.Tools.Psake.Download Newbe.Mahua.Tools.Psake.Version.Pre Tool package
Newbe.Mahua.Administration Newbe.Mahua.Administration.Version Newbe.Mahua.Administration.Download Newbe.Mahua.Administration.Version.Pre WPF settings center
Newbe.Mahua.CQP Newbe.Mahua.CQP.Version Newbe.Mahua.CQP.Download Newbe.Mahua.CQP.Version.Pre CQP (CoolQ) implementation
Newbe.Mahua.QQLight Newbe.Mahua.QQLight.Version Newbe.Mahua.QQLight.Download Newbe.Mahua.QQLight.Version.Pre QQLight implementation
Newbe.Mahua.MPQ Newbe.Mahua.MPQ.Version Newbe.Mahua.MPQ.Download Newbe.Mahua.MPQ.Version.Pre MPQ (MyPcQQ) implementation
Newbe.Mahua.CQP.ApiExtensions Newbe.Mahua.CQP.ApiExtensions.Version Newbe.Mahua.CQP.Download Newbe.Mahua.CQP.Version.Pre API extensions for CQP
Newbe.Mahua.Amanda Newbe.Mahua.Amanda.Version Newbe.Mahua.Amanda.Download Newbe.Mahua.Amanda.Version.Pre Amanda implementation (no longer maintained)
Newbe.Mahua.CleverQQ Newbe.Mahua.CleverQQ.Version Newbe.Mahua.CleverQQ.Download Newbe.Mahua.CleverQQ.Version.Pre CleverQQ implementation (no longer maintained)

Related Links

A Few Words at the End

This SDK was developed to promote the exchange and learning of .NET technology.

Any product derived from this SDK is unrelated to this SDK!

Any QQ automation assistant platform supported by this SDK is unrelated to this SDK!

Use within any scope prohibited by national or regional laws and regulations is forbidden!

Last, but most important: be sure to give it a Star!

Special thanks to JetBrains for providing a License to support this project

    jetbrains

    Contributors

    Thanks goes to these wonderful people (emoji key):


    Newbe36524

    📖 💻 🔧 📝 💡

    Traceless

    🐛

    kotoneme

    💻

    AllenXie

    💻

    bgli100

    🐛

    Q-Q

    🐛

    LollipopGeneral

    💻

    LabelZhou

    🤔

    r4v3zn

    🤔

    Ciniki

    🤔

    Jimes

    🤔

    lv69

    🐛

    This project follows the all-contributors specification. Contributions of any kind welcome!

    Stargazers over time

    Stargazers over time

    Visit original content creator repository https://github.com/newbe36524/Newbe.Mahua.Framework.V1
  • quick_trade

    quick_trade

    stand-with-Ukraine Downloads Downloads

    image

    Dependencies:
     ├──ta (Bukosabino   https://github.com/bukosabino/ta (by Darío López Padial))
     ├──plotly (https://github.com/plotly/plotly.py)
     ├──pandas (https://github.com/pandas-dev/pandas)
     ├──numpy (https://github.com/numpy/numpy)
     ├──tqdm (https://github.com/tqdm/tqdm)
     ├──scikit-learn (https://github.com/scikit-learn/scikit-learn)
     └──ccxt (https://github.com/ccxt/ccxt)
    

    Installation:

    Quick install:

    $ pip3 install quick-trade
    

    For development:

    $ git clone https://github.com/quick-trade/quick_trade.git
    $ pip3 install -r quick_trade/requirements.txt
    $ cd quick_trade
    $ python3 setup.py install
    $ cd ..
    

    Customize your strategy!

    from quick_trade.plots import TraderGraph, make_trader_figure
    import ccxt
    from quick_trade import strategy, TradingClient, Trader
    from quick_trade.utils import TradeSide
    
    
class MyTrader(Trader):
        @strategy
        def strategy_sell_and_hold(self):
            ret = []
            for i in self.df['Close'].values:
                ret.append(TradeSide.SELL)
            self.returns = ret
            self.set_credit_leverages(2)  # if you want to use a leverage
        self.set_open_stop_and_take(stop)  # `stop`: a user-defined stop-loss price
            # or... set a stop loss with only one line of code
            return ret
    
    
    client = TradingClient(ccxt.binance())
    df = client.get_data_historical("BTC/USDT")
    trader = MyTrader("BTC/USDT", df=df)
    trader.connect_graph(TraderGraph(make_trader_figure()))
    trader.set_client(client)
    trader.strategy_sell_and_hold()
    trader.backtest()

    Find the best strategy!

    import quick_trade as qtr
    import ccxt
    from quick_trade.tuner import *
    from quick_trade import TradingClient
    
    
    class Test(qtr.ExampleStrategies):
        @strategy
        def strategy_supertrend1(self, plot: bool = False, *st_args, **st_kwargs):
            self.strategy_supertrend(plot=plot, *st_args, **st_kwargs)
            self.convert_signal()  # only long trades
            return self.returns
    
        @strategy
        def macd(self, histogram=False, **kwargs):
            if not histogram:
                self.strategy_macd(**kwargs)
            else:
                self.strategy_macd_histogram_diff(**kwargs)
            self.convert_signal()
            return self.returns
    
        @strategy
        def psar(self, **kwargs):
            self.strategy_parabolic_SAR(plot=False, **kwargs)
            self.convert_signal()
            return self.returns
    
    
    params = {
        'strategy_supertrend1':
            [
                {
                    'multiplier': Linspace(0.5, 22, 5)
                }
            ],
        'macd':
            [
                {
                    'slow': Linspace(10, 100, 3),
                    'fast': Linspace(3, 60, 3),
                    'histogram': Choise([False, True])
                }
            ],
        'psar':
            [
                {
                    'step': 0.01,
                    'max_step': 0.1
                },
                {
                    'step': 0.02,
                    'max_step': 0.2
                }
            ]
    
    }
    
    tuner = QuickTradeTuner(
        TradingClient(ccxt.binance()),
        ['BTC/USDT', 'OMG/USDT', 'XRP/USDT'],
        ['15m', '5m'],
        [1000, 700, 800, 500],
        params
    )
    
    tuner.tune(Test)
    print(tuner.sort_tunes())
    tuner.save_tunes('quick-trade-tunes.json')  # save tunes as JSON

    You can also set rules for arranging arguments for each strategy by using _RULES_ and kwargs to access the values of the arguments:

    params = {
        'strategy_3_sma':
            [
                dict(
                    plot=False,
                    slow=Choise([2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597]),
                    fast=Choise([2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597]),
                    mid=Choise([2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597]),
                    _RULES_='kwargs["slow"] > kwargs["mid"] > kwargs["fast"]'
                )
            ],
    }

    User’s code example (backtest)

    from quick_trade import brokers
    from quick_trade import trading_sys as qtr
    from quick_trade.plots import *
    import ccxt
    from numpy import inf
    
    
    client = brokers.TradingClient(ccxt.binance())
    df = client.get_data_historical('BTC/USDT', '15m', 1000)
    trader = qtr.ExampleStrategies('BTC/USDT', df=df, interval='15m')
    trader.set_client(client)
    trader.connect_graph(TraderGraph(make_trader_figure(height=731, width=1440, row_heights=[10, 5, 2])))
    trader.strategy_2_sma(55, 21)
    trader.backtest(deposit=1000, commission=0.075, bet=inf)  # backtest on one pair

    Output plotly chart:

    image

    Output print

    losses: 12
    trades: 20
    profits: 8
    mean year percentage profit: 215.1878652911773%
    winrate: 40.0%
    mean deviation: 2.917382949881604%
    Sharpe ratio: 0.02203412259055281
    Sortino ratio: 0.02774402450236864
    calmar ratio: 21.321078596349782
    max drawdown: 10.092728860725552%
    

    Run strategy

Use the strategy on real money. YES, IT’S FULLY AUTOMATED!

    import datetime
    from quick_trade.trading_sys import ExampleStrategies
    from quick_trade.brokers import TradingClient
from quick_trade.plots import TraderGraph, make_trader_figure
from quick_trade import strategy
    import ccxt
    
    ticker = 'MATIC/USDT'
    
    start_time = datetime.datetime(2021,  # year
                                   6,  # month
                                   24,  # day
    
                                   5,  # hour
                                   16,  # minute
                                   57)  # second (Leave a few seconds to download data from the exchange)
    
    
    class MyTrade(ExampleStrategies):
        @strategy
        def strategy(self):
            self.strategy_supertrend(multiplier=2, length=1, plot=False)
            self.convert_signal()
            self.set_credit_leverages(1)
            self.sl_tp_adder(10)
            return self.returns
    
    
    keys = {'apiKey': 'your api key',
            'secret': 'your secret key'}
    client = TradingClient(ccxt.binance(config=keys))  # or any other exchange
    
    trader = MyTrade(ticker=ticker,
                     interval='1m',
                     df=client.get_data_historical(ticker, limit=10))
    fig = make_trader_figure()
    graph = TraderGraph(figure=fig)
    trader.connect_graph(graph)
    trader.set_client(client)
    
    trader.realtime_trading(
        strategy=trader.strategy,
        start_time=start_time,
        ticker=ticker,
        limit=100,
        wait_sl_tp_checking=5
    )

    image

    Additional Resources

    Old documentation (V3 doc): https://vladkochetov007.github.io/quick_trade.github.io

    License

    Creative Commons License
    quick_trade by Vladyslav Kochetov is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
    Permissions beyond the scope of this license may be available at vladyslavdrrragonkoch@gmail.com.

    Visit original content creator repository https://github.com/quick-trade/quick_trade
  • BilledIn

    BilledIn

    BilledIn is a simple billing system (Desktop application) for small businesses. It is designed to be easy to use and easy to understand. It is completely written in Python and uses the Tkinter library for the GUI and SQLite for the database.

    Features

    • Main Screen

  • The main screen of the application is where the user chooses whether to log in as an admin or as an employee. Main Screen
    • Employee Login

  • Employees log in with their employee ID and password. One standout feature of the application is that an employee can only log in if the admin has created an account for them; the password is also stored encrypted, ensuring the security of the employee’s account. Employee Login
    • Admin mode

      • The admin can login using the default username and password, which is “admin01” and “admin01” respectively. Admin Login
      • List of things the admin can do:
        • Inventory Management
        • Employee Management
        • Invoice Management
        • Settings
    • Inventory Management

  • The admin can search, add, update, delete, and generate barcode stickers for the products. Inventory Management
      • Update Product Update Product
    • Employee Management

  • The admin can search, add, update, and delete employee accounts. Employee Management
      • Add Employee Add Employee
    • Invoice Management

      • The admin can search invoices and generate bills and sales reports. Invoice Management
      • Sales Report Sales Report
    • Billing Screen

      • The employee can generate bills for the customers. Billing Screen
    • Barcode Sticker

      • The admin can generate barcode stickers for the products. Barcode Sticker
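The encrypted-password login flow described above can be sketched roughly as follows. This is a minimal illustration only, not BilledIn's actual code: the table layout, function names, and the choice of PBKDF2-HMAC-SHA256 are my assumptions, since the README does not specify the hashing scheme.

```python
import hashlib
import os
import sqlite3

def hash_password(password, salt=None):
    """Return (salt, digest) using PBKDF2-HMAC-SHA256 (illustrative choice)."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

# In-memory DB for the sketch; the real app would use an SQLite file on disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (emp_id TEXT PRIMARY KEY, salt BLOB, pw_hash BLOB)")

# The admin creates the account; without this row the employee cannot log in.
salt, digest = hash_password("s3cret")
conn.execute("INSERT INTO employees VALUES (?, ?, ?)", ("emp01", salt, digest))

def verify(emp_id, password):
    row = conn.execute(
        "SELECT salt, pw_hash FROM employees WHERE emp_id = ?", (emp_id,)
    ).fetchone()
    if row is None:
        return False  # no account was created by the admin
    salt, stored = row
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == stored

print(verify("emp01", "s3cret"))  # True
print(verify("emp01", "wrong"))   # False
```

Storing a per-user salt alongside the hash means identical passwords produce different rows, which is the usual reason to avoid storing plain or unsalted hashes.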

    Installation

    • Clone the repository

      git clone https://github.com/mahadev0811/BilledIn.git
    • Install the required packages

      pip install -r requirements.txt
    • Run the application

      python app.py

    License

    This project is licensed under the MIT License – see the LICENSE file for details.

    Visit original content creator repository https://github.com/mahadev0811/BilledIn
  • segmentation_models.pytorch

    Visit original content creator repository
    https://github.com/mberkay0/segmentation_models.pytorch

  • LLSpy

    LLSpy: Lattice light-sheet post-processing utility

    license_shield python_shield travis_shield Documentation Status doi_shield

    image

    Copyright © 2019 Talley Lambert, Harvard Medical School.

    LLSpy is a python library to facilitate lattice light sheet data processing. It extends the cudaDeconv binary created in the Betzig lab at Janelia Research Campus, adding features that auto-detect experimental parameters from the data folder structure and metadata (minimizing user input), auto-choose OTFs, perform image corrections and manipulations, and facilitate file handling. Full(er) documentation available at http://llspy.readthedocs.io/

    There are three ways to use LLSpy:

    1. Graphical User Interface

    The GUI provides access to the majority of functionality in LLSpy. It includes a drag-and-drop queue, visual progress indicator, and the ability to preview data processed with the current settings using the (awesome) 4D-viewer, Spimagine, and experimental support for napari.

    LLSpy graphical interface

    2. Command Line Interface

    The command line interface can be used to process LLS data in a server environment (linux compatible).

    $ lls --help
    
    Usage: lls [OPTIONS] COMMAND [ARGS]...
    
      LLSpy
    
      This is the command line interface for the LLSpy library, to facilitate
      processing of lattice light sheet data using cudaDeconv and other tools.
    
    Options:
      --version          Show the version and exit.
      -c, --config PATH  Config file to use instead of the system config.
      --debug
      -h, --help         Show this message and exit.
    
    Commands:
      camera    Camera correction calibration
      clean     Delete LLSpy logs and preferences
      compress  Compression & decompression of LLSdir
      config    Manipulate the system configuration for LLSpy
      decon     Deskew and deconvolve data in LLSDIR.
      deskew    Deskewing only (no decon) of LLS data
      gui       Launch LLSpy Graphical User Interface
      info      Get info on an LLSDIR.
      install   Install cudaDeconv libraries and binaries
      reg       Channel registration
    
    # process a dataset
    $ lls decon --iters 8 --correctFlash /path/to/dataset
    
    # change system or user-specific configuration
    $ lls config --set otfDir path/to/PSF_and_OTFs
    
    # or launch the gui
    $ lls gui

    3. Interactive data processing in a python console

    >>> import llspy
    
    # the LLSdir object contains most of the useful attributes and
    # methods for interacting with a data folder containing LLS tiffs
    >>> E = llspy.LLSdir('path/to/experiment_directory')
    # it parses the settings file into a dict:
    >>> E.settings
    {'acq_mode': 'Z stack',
     'basename': 'cell1_Settings.txt',
     'camera': {'cam2name': '"Disabled"',
                'cycle': '0.01130',
                'cycleHz': '88.47 Hz',
                'exp': '0.01002',
        ...
    }
    
    # many important attributes are in the parameters dict
    >>> E.parameters
    {'angle': 31.5,
     'dx': 0.1019,
     'dz': 0.5,
     'nc': 2,
     'nt': 10,
     'nz': 65,
     'samplescan': True,
      ...
    }
    
    # and provides methods for processing the data
    >>> E.autoprocess()
    
    # the autoprocess method accepts many options as keyword arguments
    # a full list with descriptions can be seen here:
    >>> llspy.printOptions()
    
                  Name  Default                    Description
                  ----  -------                    -----------
          correctFlash  False                      do Flash residual correction
    flashCorrectTarget  cpu                        {"cpu", "cuda", "parallel"} for FlashCor
                nIters  10                         deconvolution iters
             mergeMIPs  True                       do MIP merge into single file (decon)
                otfDir  None                       directory to look in for PSFs/OTFs
                tRange  None                       time range to process (None means all)
                cRange  None                       channel range to process (None means all)
                   ...  ...                        ...
    
    # as well as file handling routines
    >>> E.compress(compression='lbzip2')  # compress the raw data into .tar.(bz2|gz)
    >>> E.decompress()  # decompress files for re-processing
    >>> E.freeze()  # delete all processed data and compress raw data for long-term storage.

    Note: The LLSpy API is currently unstable (subject to change). Look at the llspy.llsdir.LLSdir class as a starting point for most of the useful methods. Minimal documentation available in the docs. Feel free to fork this project on github and suggest changes or additions.

    Requirements

    • Compatible with Windows (tested on 7/10), Mac or Linux (tested on Ubuntu 16.04)
    • Python 3.6 (as of version 0.4.0, support for 2.7 and 3.5 ended)
    • Most functionality assumes a data folder structure as generated by the Lattice Scope LabVIEW acquisition software written by Dan Milkie in the Betzig lab. If you are using different acquisition software (such as 3i software), you will likely need to change the data structure and metadata parsing routines in order to make use of this software.
    • Currently, the core deskew/deconvolution processing is based on cudaDeconv, written by Lin Shao and maintained by Dan Milkie. cudaDeconv is licensed and distributed by HHMI. It was open-sourced in Feb 2019, and is available here: https://github.com/dmilkie/cudaDecon
    • CudaDeconv requires a CUDA-capable GPU
    • The Spimagine viewer requires a working OpenCL environment

    Installation

    1. Install conda/mamba

    2. Launch a terminal window (Linux), or Miniforge Prompt (Windows)

    3. Install LLSpy into a new conda environment

      conda create -n llsenv python=3.11 cudadecon
      conda activate llsenv
      pip install llspy

      The create -n llsenv line creates a virtual environment. This is optional, but recommended as it is easier to uninstall cleanly and prevents conflicts with any other python environments. If installing into a virtual environment, you must activate the environment before proceeding, and each time before using llspy.

    Each time you use the program, you will need to activate the virtual environment. The main command line interface is lls, and the gui can be launched with lls gui. You can create a bash script or batch file to autoload the environment and launch the program if desired.

    # Launch Anaconda Prompt and type...
    conda activate llsenv
    
    # show the command line interface help menu
    lls -h
    # process a dataset
    lls decon /path/to/dataset
    # or launch the gui
    lls gui
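
    For example, a small bash launcher could look like this (a sketch only; it assumes conda is on your PATH and that the environment is named llsenv as above):

```shell
# write a launcher script that activates the environment and starts the GUI
cat > launch_llspy.sh <<'EOF'
#!/usr/bin/env bash
# make `conda activate` available in non-interactive shells
eval "$(conda shell.bash hook)"
conda activate llsenv
lls gui
EOF
chmod +x launch_llspy.sh
```

    Running ./launch_llspy.sh then opens the GUI without manually activating the environment each time.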

    See complete usage notes in the documentation.

    Features of LLSpy

    • graphical user interface with persistent/saveable processing settings
    • command line interface for remote/server usage (coming)
    • preview processed image to verify settings prior to processing full experiment
    • Pre-processing corrections:
      • correct “residual electron” issue on Flash4.0 when using overlap synchronous mode. Includes CUDA and parallel CPU processing as well as GUI for generation of calibration file.
      • apply selective median filter to particularly noisy pixels
      • trim image edges prior to deskewing (helps with CMOS edge row artifacts)
      • auto-detect background
    • Processing:
      • select subset of acquired images (C or T) for processing
      • automatic parameter detection based on auto-parsing of Settings.txt
      • automatic OTF generation/selection from folder of raw PSF files, based on date of acquisition, mask used (if entered into SPIMProject.ini), and wavelength.
      • graphical progress bar and time estimation
    • Post-processing:
      • proper voxel-size metadata embedding (newer version of Cimg)
      • join MIP files into single hyperstack viewable in ImageJ/Fiji
      • automatic width/shift selection based on image content (“auto crop to features”)
      • automatic fiducial-based image registration (provided tetraspeck bead stack)
      • compress raw data after processing
    • Watched-folder autoprocessing (experimental):
      • Server mode: designate a folder to watch for incoming finished LLS folders (with Settings.txt file). When new folders are detected, they are added to the processing queue and the queue is started if not already in progress.
      • Acquisition mode: designed to be used on the acquisition computer. Designate folder to watch for new LLS folders, and process new files as they arrive. Similar to built in GPU processing tab in Lattice Scope software, but with the addition of all the corrections and parameter selection in the GUI.
    • easily return LLS folder to original (pre-processed) state
    • compress and decompress folders and subfolders with lbzip2 (not working on windows)
    • concatenate two experiments – renaming files with updated relative timestamps and stack numbers
    • rename files acquired in script-editor mode with Iter_ in the name to match standard naming with positions (work in progress)
    • cross-platform: includes precompiled binaries and shared libraries that should work on all systems.
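
    The server-mode behaviour described above can be illustrated with a simple polling loop (an illustrative sketch only, not LLSpy's actual implementation; its queue handling and folder detection differ):

```python
import time
from pathlib import Path

def find_finished_dirs(watch_dir, seen):
    """Return subfolders containing a Settings.txt that haven't been queued yet."""
    finished = []
    for sub in Path(watch_dir).iterdir():
        if sub.is_dir() and sub not in seen and (sub / "Settings.txt").exists():
            finished.append(sub)
            seen.add(sub)
    return finished

def watch(watch_dir, process, interval=5.0, max_polls=None):
    """Poll watch_dir, handing each newly finished LLS folder to `process`."""
    seen = set()
    polls = 0
    while max_polls is None or polls < max_polls:
        for folder in find_finished_dirs(watch_dir, seen):
            process(folder)  # e.g. hand off to the processing queue
        time.sleep(interval)
        polls += 1
```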

    Bug Reports, Feature requests, etc

    Pull requests are welcome!

    To report a bug or request a feature, please submit an issue on github

    Please include the following in any bug reports:

    • Operating system version
    • GPU model
    • CUDA version (type nvcc --version at command line prompt)
    • Python version (type python --version at command line prompt, with llsenv conda environment active if applicable)

    The most system-dependent component (and the most likely to fail) is the OpenCL dependency for Spimagine. LLSpy will fall back gracefully to the built-in Qt-based viewer, but the Spimagine option will be unavailable and grayed out on the config tab in the GUI. Submit an issue on github for help.

    Visit original content creator repository https://github.com/tlambert03/LLSpy