Blog

  • aws-multi-account-inspector-to-es-s3-blogpost-2020

    Blog Post – Centralize and visualize multi-account Amazon Inspector findings with Amazon Elasticsearch and Amazon S3

This repository contains the source code and automation templates for the AWS Security Blog post on deploying a solution to centrally analyze and monitor the vulnerability posture of EC2 instances across multiple Regions and multiple accounts in your AWS environment. The solution sends security findings generated by Amazon Inspector directly to Amazon ES for visualization in Kibana and to Amazon S3 for additional storage in a centralized architecture.

In this repository you will find all the AWS CloudFormation templates that build this solution in your AWS environment. You will also need to download the zip file containing the Lambda function code, which must be stored in an S3 bucket for deployment.

    Overview of the CloudFormation Templates

    Central-SecurityAcnt-BaseTemplate.yaml – This template creates the following resources in the central security account:

1. An SNS topic and topic policy in each Region that maps to a Region of the application accounts where Inspector scans will be conducted.
2. An SQS queue with a queue policy in the primary Region, to which the regional SNS topics send the Inspector findings as messages.
3. A dead-letter queue in the primary Region, where messages are stored if they fail to be delivered to the main SQS queue.
4. An IAM role and policy used by the Lambda function of every Region in every account to associate an SNS topic with the created Inspector template.
5. An SNS subscription for the SQS queue.
6. A Lambda function (the main function) in the primary Region, triggered by the SQS queue, that sends Inspector findings from all Regions of all application accounts to the centralized Elasticsearch domain and S3 bucket. The function code – Inspector-to-S3ES-crossAcnt.py – exceeds 4096 characters, so it is compressed with all its dependent Python modules into a zip file – Inspector-to-S3ES-crossAcnt.zip. For the dependent Python modules, refer to the folder – lambda-dependencies.
7. An IAM role and policy used as the Lambda execution role.
8. A Lambda trigger that associates the SQS queue with the Lambda function (both in the same primary Region).
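The SQS-to-Lambda flow described in items 2 and 6 can be sketched in Python. This is a minimal illustration, not the repository's actual function: the SNS envelope layout and the "finding" field name are assumptions based on Inspector Classic's event notifications.

```python
import json


def finding_arns_from_sqs_record(record):
    """Pull the Inspector finding ARN out of one SQS record.

    The SQS message body is the SNS envelope; its "Message" field carries
    the Inspector event JSON, whose "finding" key (field name assumed here)
    holds the finding ARN.
    """
    envelope = json.loads(record["body"])
    message = json.loads(envelope["Message"])
    arn = message.get("finding")
    return [arn] if arn else []


def handler(event, context):
    # Collect the finding ARNs from the batch; the real function would then
    # assume the cross-account role, call DescribeFindings in each source
    # account/Region, and index the results into Amazon ES and S3.
    arns = []
    for record in event.get("Records", []):
        arns.extend(finding_arns_from_sqs_record(record))
    return arns
```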

    ApplicationAcnts-RolesTemplate.yml – This template creates the following global resources in the primary region of all application accounts:

1. An IAM role and policy to start an Inspector assessment run in that account at a scheduled interval.
2. An IAM role and policy used as a cross-account role, assumed by the central security account's Lambda execution role to fetch details from the Inspector scans in the application accounts.
3. An IAM role and policy used as the execution role for a regional Lambda function, created in all application accounts, that attaches the regional Inspector assessment template of the application accounts to the same-Region SNS topic in the central security account.

InspectorRun-SetupTemplate.yml – This template creates the following resources in every Region of every application account where an Inspector assessment scan is performed:

1. A Lambda trigger that associates the regional Lambda function with the CloudWatch event.
2. An Inspector assessment target group per Region that comprises all the EC2 instances of that Region.
3. An Inspector assessment template that performs the Inspector scan on the assessment target group of instances.
4. A regional Lambda function that is used to attach the Amazon Inspector assessment template (created in application accounts) to the cross-account Amazon SNS topic (created in the security account), all within the same Region. This function is needed because Amazon Inspector templates can only be attached to SNS topics in the same account via the AWS Management Console or AWS Command Line Interface (AWS CLI).
5. A CloudWatch event in every Region that triggers the regional Lambda function when the Amazon Inspector assessment template with a specific user-defined tag is created for the first time in that Region.
6. A time-based CloudWatch event to start the Inspector assessment template at a scheduled interval.
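The attach step described in item 4 boils down to a single Inspector Classic API call. A minimal sketch follows; the ARNs are placeholders, and the client is injectable so the call can be exercised without AWS credentials (this is an illustration, not the repository's function code):

```python
def attach_topic_to_template(template_arn, topic_arn, inspector=None):
    """Subscribe an SNS topic to an Inspector Classic assessment template
    so findings are published to the topic as they are reported.

    template_arn / topic_arn are placeholder ARNs standing in for the
    resources the templates above create.
    """
    if inspector is None:
        import boto3  # only needed when no client is injected
        inspector = boto3.client("inspector")
    inspector.subscribe_to_event(
        resourceArn=template_arn,
        event="FINDING_REPORTED",
        topicArn=topic_arn,
    )
```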

    Security

    See CONTRIBUTING for more information.

    License

    This library is licensed under the MIT-0 License. See the LICENSE file.

    Visit original content creator repository
    https://github.com/aws-samples/aws-multi-account-inspector-to-es-s3-blogpost-2020

  • ipfwtabled

    DESCRIPTION
    
  IPFWTABLED is a daemon for FreeBSD that provides a remote interface to
  IPFW's (the FreeBSD firewall's) tables. This can be useful in environments
  that use ipfw tables intensively (to prevent fork-bombing with the
  'ipfw' command) or that want to control the firewall remotely.
    
  IPFWTABLED supports automatic expiry of entries in IPFW tables based on
  configured expiration intervals. The interval may be specified once for all
  tables or separately for each of them.
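  The per-table TTL bookkeeping can be sketched as follows. This is an
  illustrative Python model of the technique, not the daemon's actual C code;
  the class and method names are made up for the example.

```python
import time


class ExpiryCache:
    """Sketch of table-wide TTL bookkeeping: remember when each
    (table, address) entry was added and report which entries have
    outlived their table's TTL."""

    def __init__(self, default_ttl, per_table_ttl=None):
        self.default_ttl = default_ttl           # -e :<sec> (all tables)
        self.per_table_ttl = per_table_ttl or {} # -e <idx>:<sec> overrides
        self.added = {}  # (table_idx, addr) -> insertion time

    def add(self, table_idx, addr, now=None):
        self.added[(table_idx, addr)] = time.time() if now is None else now

    def expired(self, now=None):
        """Return the (table_idx, addr) pairs due to be purged."""
        now = time.time() if now is None else now
        out = []
        for (idx, addr), t in self.added.items():
            ttl = self.per_table_ttl.get(idx, self.default_ttl)
            if now - t >= ttl:
                out.append((idx, addr))
        return out
```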
    
    USAGE
      
  ipfwtabled [-b <host>[:<port>] [-b <host>[:<port>] ...]]
  [-d] [-t|-u] [-e [<tableidx>]:<timeinsec> [-e <tableidx>:<timeinsec> ...]]
       -b <host>:<port> - bind address
       -d               - daemonize
       -t               - use TCP
       -u               - use UDP
       -e [<idx>]:<sec> - specify expiry period for entries of table
                          idx is index of ipfw table
                          sec is amount of seconds before entry to be purged
                       if idx is not specified, the value is set for all tables
       -h               - print this message
    
      See Perl example script 'client.pl' for reference on client implementation.
    
    INSTALLATION
    
  The source comes with a simple Makefile, so a plain
    
        $ make
    
      should be enough.
    
    COMPATIBILITY
    
      Tested on FreeBSD 9 but should work on earlier versions as well.
    
    SOURCE
      
      Source is available on github: https://github.com/InvisiLabs/ipfwtabled
    
    TODO
    
      * Add statistics dumping on operations performed and queue state
      * Add persistence for expiration cache to survive service restarts
      * Add hash field for authentication/integrity check purposes
    
    BUGS
    
      Please, report bugs and suggestions to <v.khondar at invisilabs.com> or via
      github issue tracker at https://github.com/InvisiLabs/ipfwtabled/issues
    
    LIMITATIONS
    
      Only ADD/DELETE/FLUSH operations for IPFW tables are supported.
      TTL for table entries can be specified only table-wide on ipfwtabled startup.
  A single address in CIDR notation is processed per request.
    
      IPFWTABLED must be run as root as integration with IPFW is performed via
      setsockopt() interface which requires root privileges.
    
    COPYRIGHT & LICENSE
    
      Copyright (c) 2012,
      Vadym S. Khondar <v.khondar at invisilabs.com>, InvisiLabs.
      All rights reserved.
      
      Redistribution and use in source and binary forms, with or without
      modification, are permitted provided that the following conditions are met:
          * Redistributions of source code must retain the above copyright
            notice, this list of conditions and the following disclaimer.
          * Redistributions in binary form must reproduce the above copyright
            notice, this list of conditions and the following disclaimer in the
            documentation and/or other materials provided with the distribution.
          * Neither the name of the InvisiLabs nor the
            names of its contributors may be used to endorse or promote products
            derived from this software without specific prior written permission.
      
      THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
      AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
      IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
      ARE DISCLAIMED. IN NO EVENT SHALL COPYRIGHT HOLDERS BE LIABLE FOR ANY
      DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
      (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
      LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
      ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
      (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
      THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
    

    Visit original content creator repository
    https://github.com/InvisiLabs/ipfwtabled

  • little-lemon-project

    Little Lemon Restaurant

    • Please star ⭐ the repo when you visit it….

    Views

Final capstone project for the Meta Front-End Developer program on Coursera: a detailed, responsive website with table-booking functionality, built using React.

Course link – Meta Front-End Developer Professional Certificate

    Screenshot

    Home Page

    image

    About us

    image

    Booking Page

    image image

    Tech Stack:

    • HTML, CSS
    • JSX
    • React

    Third Party Libraries & APIs:

    • react-router-dom
    • react-responsive-carousel
    • Meta front-end table-booking API

    Getting Started with Create React App

    This project was bootstrapped with Create React App.

    Available Scripts

    In the project directory, you can run:

    npm start

    Runs the app in the development mode.
    Open http://localhost:3000 to view it in your browser.

    The page will reload when you make changes.
    You may also see any lint errors in the console.

    npm test

    Launches the test runner in the interactive watch mode.
    See the section about running tests for more information.

    npm run build

    Builds the app for production to the build folder.
    It correctly bundles React in production mode and optimizes the build for the best performance.

    The build is minified and the filenames include the hashes.
    Your app is ready to be deployed!

    See the section about deployment for more information.

    npm run eject

    Note: this is a one-way operation. Once you eject, you can’t go back!

    If you aren’t satisfied with the build tool and configuration choices, you can eject at any time. This command will remove the single build dependency from your project.

    Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except eject will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.

    You don’t have to ever use eject. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.

    Learn More

    You can learn more in the Create React App documentation.

    To learn React, check out the React documentation.

    Code Splitting

    This section has moved here: https://facebook.github.io/create-react-app/docs/code-splitting

    Analyzing the Bundle Size

    This section has moved here: https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size

    Making a Progressive Web App

    This section has moved here: https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app

    Advanced Configuration

    This section has moved here: https://facebook.github.io/create-react-app/docs/advanced-configuration

    Deployment

    This section has moved here: https://facebook.github.io/create-react-app/docs/deployment

    npm run build fails to minify

    This section has moved here: https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify

    Author

    Visit original content creator repository https://github.com/vinisander1024/little-lemon-project
  • atom-sbt-client

    Atom sbt client

    This is an Atom plugin integrating sbt server with the Atom IDE interface.

    It connects to a running sbt instance and communicates with it using Language Server Protocol. Atom sends event notifications to sbt (like didSave), in response sbt recompiles the project and sends back information about warnings and errors which are displayed in Atom:

    Installation

    You can install it using Atom interface or by running this command:

    apm install atom-sbt-client
    

    On the first launch it will automatically install its dependencies if needed:

    Usage

    1. Go to a Scala project and launch sbt (project/build.properties should set sbt version to 1.1.0 or higher)
    2. Open this project in Atom, open any Scala file and save it.

    It should trigger compilation and if there are any errors, you should see them in the gutter and in the diagnostics panel.

    Another feature is jump-to-definition, which works for some types in the project.

    Note that despite the debug logging in the sbt shell, you can still use it directly. It’s just a normal sbt shell which additionally communicates with Atom.

    Related links

    Visit original content creator repository https://github.com/laughedelic/atom-sbt-client
  • metaversus

Modern Metaverse App using React JS

    Ask Me Anything! GitHub license Maintenance GitHub branches Github commits Website Status GitHub issues GitHub pull requests

    📌 How to use this App?

1. Clone this repository to your local computer.
2. Open a terminal in the root directory.
3. Run npm install or yarn install.
4. Once the packages are installed, start the app using npm start or yarn start.
5. The app is now fully configured and ready to use 👍.

    📷 Screenshots:

    Modern UI/UX

    Modern Animations

    Metaverse Design

    ⚙️ Built with

    Built with Love

    🔧 Stats

    Stats for this App

    🙌 Contribute

    You might encounter some bugs while using this app. You are more than welcome to contribute. Just submit changes via pull request and I will review them before merging. Make sure you follow community guidelines.

    Buy Me a Coffee 🍺

    🚀 Follow Me

    GitHub followers Twitter YouTube

    ⭐ Give A Star

You can also give this repository a star so that more people can find and use it.

    🔥 Getting Started

    This is a Next.js project bootstrapped with create-next-app.

    First, run the development server:

    npm run dev
    # or
    yarn dev

    Open http://localhost:3000 with your browser to see the result.

    You can start editing the page by modifying pages/index.js. The page auto-updates as you edit the file.

    API routes can be accessed on http://localhost:3000/api/hello. This endpoint can be edited in pages/api/hello.js.

    The pages/api directory is mapped to /api/*. Files in this directory are treated as API routes instead of React pages.

    📚 Learn More

    To learn more about Next.js, take a look at the following resources:

    You can check out the Next.js GitHub repository – your feedback and contributions are welcome!

    🚀 Deploy on Vercel

    The easiest way to deploy your Next.js app is to use the Vercel Platform from the creators of Next.js.

    Check out our Next.js deployment documentation for more details.

    Visit original content creator repository https://github.com/sanidhyy/metaversus
  • supervised-scikit

    Running the code:
    =================
    1) Install Anaconda following instructions here: https://docs.anaconda.com/anaconda/install/
    
    2) Start Jupyter Notebook server with the following command: `jupyter notebook`
    
    3) You can run all cells to get the complete report, although it may take some time. Or if
       you just want the final fit, the training results and the test results, run all the cells
       below the markdown cell labeled "Fit".
    
    
    Versions:
    =========
    Scikit Learn:     0.22
    Numpy:            1.17.4
    Pandas:           0.25.3
    Matplotlib:       3.1.1
    Python:           3.7.5
    Jupyter Notebook: 6.0.2
    
    
    Data:
    =====
    There is no need to manually download the data since the code takes care of that for you, but if
    you're interested it can be found here:
      MNIST: https://www.openml.org/d/554
      Credit: https://www.openml.org/d/31
    
    
    Performance:
    ============
    Running all of the cells may take a very long time (days). In addition, many cells take advantage
    of multithreading and use the `n_jobs=-1` parameter to use all available cores. The SVM notebook uses
    a larger cache_size to increase speed.
    
    Don't forget to adjust these values to match your system when running these cells.
    

    Visit original content creator repository
    https://github.com/nikolasavic/supervised-scikit

  • mini-bookkeeping

Bookkeeping Mini Program

Tech Stack

    taro: https://taro.aotu.io/

    taro-ui: https://taro-ui.jd.com/


Getting Started

# Install the Taro CLI globally first
# Install the CLI with npm
npm install -g @tarojs/cli
# OR install the CLI with yarn
yarn global add @tarojs/cli
# OR, if cnpm is installed, install the CLI with cnpm
cnpm install -g @tarojs/cli
    

The Taro version must be compatible, otherwise compilation will fail:

    taro update self 2.0.5
    

Usage Notes

1. During development, enter the client directory and run the build, preview, and package commands there
2. Use WeChat DevTools to debug the project, with the entire project folder as the run directory. Note: not the dist folder generated inside client
# Use the taobao npm registry mirror
npm set registry https://registry.npm.taobao.org/
# Install dependencies
npm install
# Compile & package
npm run dev:weapp
npm run build:weapp
    

Project Structure

├── client                                  Mini-program client directory
│   ├── config                              Configuration directory
│   │   ├── dev.js                          Development config
│   │   ├── index.js                        Default config
│   │   └── prod.js                         Production build config
│   ├── dist                                Build output directory
│   ├── package.json
|   ├── package-lock.json
│   ├── src                                 Source directory
│   │   ├── app.scss                        Global project styles
│   │   ├── app.js                          Project entry file
│   │   ├── components                      Components directory
│   │   │   └── login                       login component directory
│   │   │       └── index.weapp.js          login component logic
│   │   └── pages                           Pages directory
├── cloud                                   Server-side directory
│   └── functions                           Cloud functions directory
│    ├── login                              login cloud function
│    │    ├── index.js                      login function logic
│    │    └── package.json
│    └── statistics                         Subscription-message & statistics cloud function
│         ├── index.js                      statistics function logic
│         ├── common.js                     Shared helper methods
│         ├── config.json                   Cloud function API authorization & trigger config
│         └── package.json
├── project.config.json                     Mini-program project config
│
└── mini.config.json                        Personal mini-program config file
    

Before Compiling

Create a new mini.config.json in the root directory:

{
  "app_name": "your app name",
  "cloud_dev": "dev env",
  "cloud_prod": "prod env",
  "tmpl_ids": [""] // subscription message IDs; remove them from the start page if not needed
    }
    

Email

    yidierh@gmail.com

Live Mini Program

    记账I

    Visit original content creator repository https://github.com/yidierh/mini-bookkeeping
  • locust-istio

    Locust-istio

Python scripts that enable Locust to send traffic to an Istio ingress gateway handling traffic for multiple hostnames.

    Rationale

Some challenges I faced while using Locust to test traffic on an Istio service mesh:

1. In a development test setup these hostnames may not be resolvable by DNS, so traffic needs the IP address resolved manually, as with curl's “--connect-to” flag.

2. Traffic is often sent to a ClusterIP service or a NodePort service (if the user does not want to waste an LB from their LB pool).

3. Deploying Locust in Kubernetes is not straightforward.

These Python files enable Locust to handle these challenges. Helm is also used to address the challenge of deployment in Kubernetes.

    LoadBalancer example with curl:

    curl https://bookinfo.example.com/productpage --connect-to bookinfo.example.com:443:**LB-IP**:443
    

    NodePort example with curl:

    curl https://bookinfo.example.com/productpage --connect-to bookinfo.example.com:443:**Node-IP**:**Nodeport-for-port-443**
    

    ClusterIP example with curl:

    curl https://bookinfo.example.com/productpage --connect-to bookinfo.example.com:443:**ClusterIP**:443
    
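In Python, curl's --connect-to trick amounts to dialing the IP while presenting the hostname at the HTTP layer. A hedged sketch (the helper name is illustrative, not from this repo's scripts; note that for HTTPS the TLS SNI must also carry the hostname, which plain headers do not cover):

```python
def connect_to(hostname, ip, port=443, path="/", scheme="https"):
    """Build the URL and headers equivalent to curl's --connect-to:
    open the connection to ip:port but present `hostname` for the
    gateway's virtual-host routing. (Illustrative helper, not from
    the repository's scripts.)
    """
    url = f"{scheme}://{ip}:{port}{path}"
    headers = {"Host": hostname}
    return url, headers

# In a Locust task this might be used roughly as:
#   url, headers = connect_to("bookinfo.example.com", "10.0.0.1",
#                             443, "/productpage")
#   self.client.get(url, headers=headers, verify=False)
# verify=False (or a custom SSL context) is needed because the
# certificate is issued for the hostname, not the IP.
```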

    Installation and Test steps

    Creating test setup

To create a test setup I used the documentation at “https://istio.io/latest/docs/setup/getting-started/”. For ease of reference:

    curl -L https://istio.io/downloadIstio | sh -
    cd istio-1.20.2
    export PATH=$PWD/bin:$PATH
    istioctl install --set profile=demo -y
    
    kubectl create ns bookinfo0
    kubectl create ns bookinfo1
    kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo0
    kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo1
    

Currently the main script assumes the “istio-ingressgateway” pod runs in the istio-system namespace and the associated gateways are installed in the istio-system namespace.

In the main script, edit the sections “#getting service details” and “#getting hostnames” to use your custom namespace, service, and gateway labels.
To test the script, use the example given in “bookinfo-gateway-vs.yaml” & “aegle-wildcard-secret.yaml”:

    kubectl apply -f aegle-wildcard-secret.yaml
    kubectl apply -f bookinfo-gateway-vs.yaml
    

    Install locust

    kubectl create ns locust
    
    kubectl create configmap my-loadtest-locustfile --from-file ./main.py -n locust
    kubectl create configmap my-loadtest-lib --from-file ./lib -n locust
    
    kubectl apply -f role.yaml
    
    helm repo add deliveryhero https://charts.deliveryhero.io/
    
    helm install locust deliveryhero/locust \
      --set loadtest.name=my-loadtest0 \
      --set loadtest.locust_locustfile_configmap=my-loadtest-locustfile \
      --set loadtest.locust_lib_configmap=my-loadtest-lib  -f values.yaml -n locust
    

    Start locust traffic

1. Check that the locust master and worker pods are coming up.
2. If there is a crash, check the pods' log output and fix the Python scripts if needed. Or, if it is an infra (Kubernetes / Istio) problem, fix that.
3. If the Python scripts were changed to fix step 2, uninstall the Helm release and the configmaps used for installation, then redo the installation.
4. Once the pods are up, you can port-forward the locust service and use a browser to start or monitor the test:
  kubectl port-forward service/locust 8089:8089 -n locust
5. Alternatively, use the Locust API to start and monitor the test.

Start the test (host=www.ddd.com does not matter; the value is taken from the Gateway CR):

    kubectl port-forward service/locust 8089:8089 -n locust &
    sleep 5
    curl -X POST   http://localhost:8089/swarm   -H 'content-type: application/x-www-form-urlencoded; charset=UTF-8'   -d 'user_count=5&spawn_rate=1&host=www.ddd.com'
    sleep 2
    kill $(jobs -p | awk '{print $1}')
    sleep 10
    

    monitor the test

    unset a
    unset b
    kubectl port-forward service/locust 8089:8089 -n locust &
    sleep 5
    
    a=$(curl -s -X GET http://localhost:8089/stats/requests | jq '.stats[1].current_rps')
    b=$(curl -s -X GET http://localhost:8089/stats/requests | jq '.stats[1].num_failures')
    echo "######################################################################################## rate: $a"
    echo "######################################################################################## failure: $b"
    kill $(jobs -p | awk '{print $1}')
    sleep 2
    
    unset a
    unset b
    
To stop the test, delete the locust pods, restart the locust deployments, or delete the locust replicasets:
  kubectl delete rs --all -n locust

    Visit original content creator repository
    https://github.com/devogopan/locust-istio

  • Python-Analise_Exploratoria

Data Cleaning and Exploratory Analysis | Python

Hello! How are you doing? All great over here!

This repository serves as a portfolio section, and all the work here showcases some of my skills as a data analyst in Python. In this project, I used the “Brazilian E-Commerce Public Dataset by Olist” dataset, available on Kaggle through this link. The goal of this work is to prepare and explore the data to answer the following questions about the performance of delivered orders:

• Is there a difference in delivery time by state?
• Which cities have the highest average delivery time?
• Do product dimensions influence delivery time?
• Do the time of day and day of the week when the order is placed impact delivery time?
• Does the distance between the seller's city and the customer's city impact delivery performance?

    Notebook

From data cleaning, treatment, and transformation to the exploratory analysis paths taken to answer business questions, the notebook Limpeza_AnaliseExploratoria_OLIST covers several stages of data analysis work organized into 5 sections (besides, of course, the introduction):

• Library imports and data loading
• Data cleaning, treatment, and preparation
• Descriptive analysis
• Exploratory data analysis
• Conclusion

Technologies Used

Worth noting: I really enjoy feedback and insights on how to improve my projects. I will be very happy to receive your observations and contributions! Feel free to contact me with any questions or suggestions.

Otherwise, have fun and happy reading! 😊

    Visit original content creator repository
    https://github.com/pedrocostanunes/Python-Analise_Exploratoria

  • medi-camp-pro

    MediCamp

    MediCamp Screenshot

    Overview

    MediCamp is a Medical Camp Management System (MCMS) built with the MERN stack. It is designed to help organizers and participants seamlessly manage medical camps. The platform provides tools for registration, payment, feedback collection, and detailed camp analytics, ensuring a smooth and efficient experience for all users.

    Live Site

Visit MediCamp Live Site

    Backend Link

    Visit MediCamp Backend Link

    Organizer Credentials

    Features

    1. User Authentication: Secure login and registration with support for email/password and social logins.
    2. Home Page: A vibrant banner section showcasing impactful camp success stories, popular camps, and feedback from participants.
    3. Popular Camps Section: Displays the top six camps based on participant counts, with detailed information and a “See All Camps” button.
    4. Available Camps Page: Allows users to view all camps, search by keywords, and sort based on criteria such as participant count, fees, and alphabetical order.
    5. Organizer Dashboard:
      • Add A Camp: Organizers can add camps with details like name, date, fees, location, and description.
      • Manage Camps: View, edit, or delete camps using intuitive controls.
      • Manage Registered Camps: View participants’ details, confirm payments, and cancel registrations.
    6. Participant Dashboard:
      • Analytics: Interactive charts (using Recharts) showcasing the participant’s lifetime camp data.
      • Registered Camps: Displays registered camp details, feedback options, and payment history.
    7. Camp Details Page: Offers comprehensive information about each camp and facilitates participant registration through a modal.
    8. Feedback & Ratings: Participants can provide feedback after successful payment, and these are displayed on the home page.
    9. Payment Integration: Secure payment processing with Stripe, including transaction ID documentation.
    10. Responsive Design: Fully optimized for mobile, tablet, and desktop devices.

    Technologies Used

    • Frontend: React, TailwindCSS, DaisyUI, TanStack Query, Axios, React Hook Form, Recharts
    • Backend: Node.js, Express.js, MongoDB
    • Authentication: Firebase, JWT
    • Other Libraries: Stripe for payments, SweetAlert2 for notifications

    Key Features Breakdown

    Authentication

    • Fully secure login and registration with Firebase.
    • JWT-based authentication for protecting private routes.

    Organizer Functionalities

    • Add, update, or delete camps effortlessly.
    • Manage participants with detailed information and controls.

    Participant Functionalities

    • Easy camp registration and payment process.
    • Feedback and rating submission post-camp experience.
    • Detailed analytics and payment history.

    Additional Features

    • Pagination and search for all tables.
    • 404 page for unmatched routes.
    • Customizable dashboard layouts for both organizers and participants.

    Project Setup

    Prerequisites

    • Node.js (v18+)
    • MongoDB
    Visit original content creator repository https://github.com/Purnendu-sarkar/medi-camp-pro