Blog

  • s4l-manual

    Welcome

    Welcome to the Dashboard Manual. This user manual is intended to provide you with detailed information and background on how to use the dashboard in Sim4Life.web.

    The full-featured Sim4Life.web platform can be accessed in one of two ways:

    • sim4life.io – Intended for businesses. Functions on a pay-per-use model.
    • sim4life.science – Intended for academic research groups. Functions on a pay-per-use model, with a significant academic discount.

    A full comparison of the different versions of Sim4Life can be found on our website at sim4life.swiss/specifications.

    You can log in at https://sim4life.io or https://sim4life.science with a valid email address and password combination. If you don’t have a user account yet, please request your login via Request Account on the sim4life.io webpage or the sim4life.science webpage.

    Under the TUTORIALS dashboard tab, you will find a set of pre-built read-only tutorial projects with results and scripts that illustrate how Sim4Life.web can be used to solve various simulation problems.

    For more specific technical information, please refer to the Sim4Life.web manual.

    Visit original content creator repository
    https://github.com/ZurichMedTech/s4l-manual

  • power-uploader

    File Transfer SDK Documentation

    Importing

    ES6 imports are recommended.

    import {Uploader, FileStatus} from 'power-uploader';
    • Uploader: the constructor class of the file transfer SDK
    • FileStatus: the status of a file instance
      • INITED: initial state (changes right after the beforeFileQueued event)
      • QUEUED: added to the queue, waiting for upload
      • PROGRESS: uploading
      • ERROR: upload failed, can be retried
      • CANCELLED: upload cancelled, removed from the queue
      • INTERRUPT: upload interrupted, can be resumed
      • INVALID: invalid file, the upload cannot be retried
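    As a quick illustration of how these statuses might be consumed in application code, a small sketch follows. The isRetryable helper is an assumption for illustration, not part of the SDK.

```javascript
// Hypothetical helper (not part of the SDK): decide whether another upload
// attempt makes sense, based on the FileStatus semantics listed above:
// ERROR can be retried, INTERRUPT can be resumed, INVALID cannot.
function isRetryable(statusText, FileStatus) {
    return statusText === FileStatus.ERROR || statusText === FileStatus.INTERRUPT;
}

// Usage sketch: file.statusText holds a FileStatus value (see the file
// object description below), so an app could gate its "retry" button with:
// retryButton.disabled = !isRetryable(file.statusText, FileStatus);
```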

    Initialization

    e.g.

    let uploader = new Uploader({
        pick: '.ff-wrap .up-btn',
        dnd: 'body, .dnd-area',
        paste: ['body', '.paste-area'],
        listenerContainer: document,
        body: document.body,
        chunked: true,
        chunkSize: 20971520,
        multiple: true,
        withCredentials: false
    });

    Initialization Parameters

    Parameter Type Default Description
    timeout Number 0 timeout; 0 disables the timeout event
    accept Array [] accepted file types, e.g. [{extensions: 'jpg', mimeTypes: 'image/*'}]
    auto Boolean true whether to upload as soon as files are obtained; setting it to false is not yet supported
    sameTimeUploadCount Number 3 number of chunks uploaded concurrently
    chunked Boolean false whether to enable chunked upload
    chunkSize Number 20971520 chunk size in bytes; the default is 20 MB
    chunkRetry Number 2 number of retries for a failed chunk (large values are not recommended)
    formData Object {} extra formData fields sent along with the binary file
    headers Object {} custom request headers
    fileVal String 'file' formData key of the binary file
    method String 'post' request method
    fileNumLimit Number undefined not yet enabled
    fileSizeLimit Number undefined not yet enabled
    fileSingleSizeLimit Number undefined not yet enabled
    dnd String void 0 selector of the drag-and-drop area
    pick String void 0 selector of the click-to-pick area
    pickDir String void 0 selector of the click-to-pick area (folder selection)
    paste String void 0 selector of the paste area
    server String server address
    listenerContainer DOM document DOM element used for event delegation
    body DOM document.body DOM element into which the dynamically created input is inserted
    multiple Boolean false whether multiple files can be selected
    withCredentials Boolean true whether cross-origin requests carry cookies
    setName Function (id) => new Date().getTime() + id naming function for files without a file name
    log Function console.log logging function
    logLevel Number 1 not yet enabled

    When filling in the selectors for pick, dnd, and paste, you can pass a single selector string, a comma-separated selector string, or an array of selector strings.
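    With chunked uploads enabled, the number of chunk requests issued per file follows directly from chunkSize. The helper below is a sketch (not part of the SDK) using the default chunkSize of 20971520 bytes (20 MB).

```javascript
// Default chunk size from the parameter table above: 20 * 1024 * 1024 bytes.
const DEFAULT_CHUNK_SIZE = 20971520;

// Number of chunk requests the uploader would need for a file of the given
// size: a partial trailing chunk still needs its own request, and even an
// empty file requires one request.
function chunkCountFor(fileSizeInBytes, chunkSize = DEFAULT_CHUNK_SIZE) {
    return Math.max(1, Math.ceil(fileSizeInBytes / chunkSize));
}

console.log(chunkCountFor(50 * 1024 * 1024)); // a 50 MB file → 3 chunks
```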

    The file Object

    The wrapped file object is returned in every event callback's parameter object under the file key.

    • eventEmitter: event emitter
    • ext: file extension
    • id: unique file identifier, e.g. WU_FILE_0
    • isFile: whether it is a file (it might be a directory)
    • lastModifiedDate: last modified time
    • loaded: number of bytes uploaded
    • name: file name
    • path: file path
    • uploadGroupId: file group ID
    • size: file size in bytes
    • source: native File object
    • statusText: file status, i.e. a FileStatus value
    • type: MIME type of the file, e.g. video/mp4

    Events

    Every event callback receives a single object as its argument. The fields that may appear in that object are described below.

    Name Type Description Present in
    file Object the wrapped file object described above all events
    currentShard Number index of the current chunk, counted from 1, not 0 most events
    shardCount Number total number of chunks most events
    shard Blob binary Blob of the chunk, rarely needed most events
    total Number total size of the file in bytes most events
    loaded Number number of bytes read so far uploadProgress
    isUploadEnd Boolean whether the transfer has finished uploadProgress
    responseText String server response to the chunk request uploadAccept
    error Error upload error information uploadError

    • beforeFilesSourceQueued: fired before a group of files is queued for upload; includes directory source information
      @return Object { filesSource, actionType, uploadGroupId }
      demo

       uploader.on('beforeFilesSourceQueued', (obj) => {
           let {filesSource, actionType, uploadGroupId} = obj;
           if (actionType === 'pickDir') {
               // a folder was selected
           }
           // do not allow uploading more than 10 files
           if (filesSource.length > 10) {
               return false;
           }
       });
    • filesSourceQueued: fired after a group of files has been queued for upload; includes file source information
      @return Object { filesSource, actionType, uploadGroupId }
      demo

       uploader.on('filesSourceQueued', (obj) => {
           let {filesSource, actionType, uploadGroupId} = obj;
           if (actionType === 'pickDir') {
               // a folder was selected
           }
       });
    • beforeFileQueued: fired before a file is added to the upload queue; you can filter files here, and returning false prevents the file from being queued.

      @return Object { file }

      demo

      uploader.on('beforeFileQueued', (obj) => {
          console.log('beforeFileQueued');
          let { file, setContentType } = obj;
          setContentType('image/png'); // change the file's Content-Type

          if (/^[^<>\|\*\?\/]*$/.test(file.name)) {
              // Buffer requires Node or a bundler polyfill in the browser
              let b1 = Buffer.from(file.name);
              let bytes = Array.prototype.slice.call(b1, 0);
              if (bytes.length > 128) {
                  alert('The file name is too long');
                  return false;
              }
          } else {
              alert('The file name contains invalid characters');
              return false;
          }

          return true;
      });
    • fileQueued: fired when the file was not blocked by beforeFileQueued and has been added to the queue, waiting for upload.

      @return Object { file }

      demo

      uploader.on('fileQueued', (obj) => {
          console.log('fileQueued');
          let { file } = obj;
      
          this.setState({
              fileList: [...this.state.fileList, file]
          });
      });
    • uploadStart: fired when the file has started uploading (its first chunk has been sent).

      @return Object { file }

      demo

      uploader.on('uploadStart', (obj)=> {
          console.log('uploadStart');
          let { file } = obj;
      
          // the file's statusText property changes once its upload starts
          let newFileList = this.state.fileList.map(fileItem =>
              file.id === fileItem.id ? file : fileItem );
          this.setState({ fileList: newFileList });
      });
    • uploadBeforeSend: fired before each chunk is sent; here you can modify some of the options passed to new Uploader, such as server and headers.

      @return Object { file, currentShard[Number], shardCount[Number], shard[Blob] }

      demo

      uploader.on('uploadBeforeSend', (obj)=> {
          console.log('uploadBeforeSend');
          let { file, currentShard, shard, shardCount, config } = obj;
      
          config.headers = {
              'name': file.name,
              'path': '/person/img'
          };
          config.server = 'http://xxx.com/file/upload';
      });
    • uploadProgress: upload progress callback.

      @return Object { file, loaded[Number], total[Number], shardLoaded[Number], shardTotal[Number] }

      Here loaded and total refer to the whole file, while shardLoaded and shardTotal refer to the current chunk; file.loaded already holds the value of loaded.

      demo

      uploader.on('uploadProgress', (obj)=> {
          console.log('uploadProgress');
          let { file, loaded, total, shardLoaded, shardTotal } = obj;
      
          console.log(loaded / total * 100 + '%', file.loaded);
          this.setState({
              fileList: this.state.fileList.map(item => item.id === file.id ? file : item)
          });
      });
    • fileMd5Finished: fired when the MD5 of the file has finished being computed.

      @return Object { file, md5 }

      demo

      uploader.on('fileMd5Finished', async (obj) => {
          let {file, md5} = obj;
          let res = await api.md5Check(md5);
          if (res.ok === true) {
              file.url = res.url;
              file.loaded = file.size;
              file.statusText = FileStatus.COMPLETE;
            render(file); // render the file
              return Uploader.CONSTANTS.MD5_HAS;
          }
      });
    • uploadAccept: fired when a chunk has been uploaded successfully.

      @return Object { file, shard[Blob], shardCount[Number], currentShard[Number], isUploadEnd[Boolean], responseText[String] }

      demo

      uploader.on('uploadAccept', async (obj) => {
          console.log('uploadAccept');
          let { file, shard, shardCount, currentShard, isUploadEnd, responseText } = obj;
      });
    • uploadSuccess: fired when the whole file has been uploaded successfully.

      @return Object { file, currentShard[Number], shardCount[Number], shard[Blob], responseText[String], responseTextArr[Array] }

      demo

      uploader.on('uploadSuccess', (obj) => {
          console.log('uploadSuccess');
          let { file, currentShard, shardCount, shard, responseText, responseTextArr } = obj;
          
          if (shardCount === 1) {
               // use responseText
          } else {
              // use responseTextArr
          }
      
          let newFileList = this.state.fileList.map(item => file.id === item.id ? file : item);
          this.setState({fileList: newFileList});
      });
    • uploadEndSend: fired when the file upload ends; triggered on both success and failure.

      The parameters are the same as for uploadSuccess.

    • uploadError: fired when the file upload fails.

      @return Object { file, error[Error] }
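      The uploadError event has no demo above; a minimal sketch follows, mirroring the other demos. The formatUploadError helper is an illustrative assumption, not part of the SDK.

```javascript
// Hypothetical helper (not part of the SDK): build a user-facing message
// from the { file, error } object that uploadError callbacks receive.
function formatUploadError(obj) {
    let { file, error } = obj;
    return 'Upload of "' + file.name + '" failed: ' + error.message;
}

// Wiring sketch, assuming an uploader instance as in the demos above:
// uploader.on('uploadError', (obj) => {
//     console.log('uploadError');
//     alert(formatUploadError(obj));
// });
```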

    Folder-related events:

    • beforeChildFileQueued: fired before a child file of a folder is queued
      @return Object { fileSource, parentEntry, uploadGroupId, actionType }

    • childFileQueued: fired after a child file of a folder has been queued
      @return Object { fileSource, parentEntry, uploadGroupId, actionType }

    • beforeChildDirQueued: fired before a child folder of a folder is queued
      @return Object { currentEntry, parentEntry, uploadGroupId, actionType }

    • childDirQueued: fired after a child folder of a folder has been queued
      @return Object { currentEntry, parentEntry, uploadGroupId, actionType }

    • selectDir: fired when a folder is selected; the parameters carry the entry information, and returning false rejects the folder
      @return Object { entry, uploadGroupId, actionType }
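    None of the folder events above have demos; as a sketch, a selectDir handler can veto unwanted folders by returning false. The hidden-folder rule below is an example policy of my own, not SDK behavior.

```javascript
// Example policy (an assumption, not SDK behavior): reject folders whose
// name starts with a dot, e.g. ".git" or ".cache".
function isHiddenDirName(name) {
    return name.charAt(0) === '.';
}

// Sketch of a selectDir handler, mirroring the other demos:
// uploader.on('selectDir', (obj) => {
//     let { entry, uploadGroupId, actionType } = obj;
//     if (isHiddenDirName(entry.name)) {
//         return false; // reject the folder
//     }
// });
```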

    Visit original content creator repository
    https://github.com/geeknull/power-uploader

  • secure-smart-contract-design-principles

    Secure Smart Contract Design Principles

    Originally created by 0xRajeev, founder of Secureum, former Trail of Bits engineer, and PhD holder.

    This repo details Saltzer and Schroeder’s 10 secure design principles as applied to Solidity smart contracts. It is the first piece of the puzzle that is my implementation of DevSecOps as applied to smart contracts.

    The following design principles should be adhered to by blockchain developers intending to write secure code from the ground up and to attain maximum value from external smart contract audits. This list should be supplemented by the Solcurity standard by Rari-Capital and the Solidity DevSecOps standard.

    1. Principle of Least Privilege: “Every program and every user of the system should operate using the least set of privileges necessary to complete the job” — Ensure that various system actors have the least amount of privilege granted as required by their roles to execute their specified tasks. Granting excess privilege is prone to misuse/abuse when trusted actors misbehave or their access is hijacked by malicious entities. (See Saltzer and Schroeder’s Secure Design Principles)

    2. Principle of Separation of Privilege: “Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key” — Ensure that critical privileges are separated across multiple actors so that there are no single points of failure/abuse. A good example of this is to require a multisig address (not EOA) for privileged actors (e.g. owner, admin, governor, deployer) who control key contract functionality such as pause/unpause/shutdown, emergency fund drain, upgradeability, allow/deny list and critical parameters. The multisig address should be composed of entities that are different and mutually distrusting/verifying. (See Saltzer and Schroeder’s Secure Design Principles)

    3. Principle of Least Common Mechanism: “Minimize the amount of mechanism common to more than one user and depended on by all users” — Ensure that only the least number of security-critical modules/paths as required are shared amongst the different actors/code so that impact from any vulnerability/compromise in shared components is limited and contained to the smallest possible subset. (See Saltzer and Schroeder’s Secure Design Principles)

    4. Principle of Fail-safe Defaults: “Base access decisions on permission rather than exclusion” — Ensure that variables or permissions are initialized to fail-safe default values which can be made more inclusive later instead of opening up the system to everyone including untrusted actors. (See Saltzer and Schroeder’s Secure Design Principles)

    5. Principle of Complete Mediation: “Every access to every object must be checked for authority.” — Ensure that any required access control is enforced along all access paths to the object or function being protected.

    6. Principle of Economy of Mechanism: “Keep the design as simple and small as possible” — Ensure that contracts and functions are not overly complex or large so as to reduce readability or maintainability. Complexity typically leads to insecurity.

    7. Principle of Open Design: “The design should not be secret” — Smart contracts are expected to be open-sourced and accessible to everyone. Security by obscurity of code or underlying algorithms is not an option. Security should be derived from the strength of the design and implementation under the assumption that (byzantine) attackers will study their details and try to exploit them in arbitrary ways.

    8. Principle of Psychological Acceptability: “It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly” — Ensure that security aspects of smart contract interfaces and system designs/flows are user-friendly and intuitive so that users can interact with minimal risk.

    9. Principle of Work Factor: “Compare the cost of circumventing the mechanism with the resources of a potential attacker” — Given the magnitude of value managed by smart contracts, it is safe to assume that byzantine attackers will risk the greatest amounts of intellectual/financial/social capital possible to subvert such systems. Therefore, the mitigation mechanisms must factor in the highest levels of risk.

    10. Principle of Compromise Recording: “Mechanisms that reliably record that a compromise of information has occurred can be used in place of more elaborate mechanisms that completely prevent loss” — Ensure that smart contracts and their accompanying operational infrastructure can be monitored/analyzed at all times (development/deployment/runtime) for minimizing loss from any compromise due to vulnerabilities/exploits. For e.g., critical operations in contracts should necessarily emit events to facilitate monitoring at runtime.

    Visit original content creator repository
    https://github.com/0xsomnus/secure-smart-contract-design-principles

  • HypED

    HypED

    Overview

    This package includes an algorithm to approximately answer point-to-point s-distance queries in hypergraphs. The algorithm can answer three types of queries: vertex-to-hyperedge, vertex-to-vertex, and hyperedge-to-hyperedge. This is achieved by constructing a distance oracle, which can be stored on disk for future use. The distance oracle stores distances from landmark hyperedges to reachable hyperedges, so that the distance between two hyperedges can be approximated via the triangle inequality. The algorithm takes as input an integer L used to compute the desired oracle size O = L x |E|, where |E| is the number of hyperedges in the hypergraph. Please note that L is not the actual number of landmarks selected by the distance oracle.

    The package includes a Jupyter Notebook (Results.ipynb) with the results of the experimental evaluation of HypED, and the source code (related) of the two competitors considered in the evaluation.

    Content

    datasets/ .....
    related/.......
    scripts/ ......
    src/ ..........
    Results.ipynb..
    LICENSE .......
    

    Requirements

    To run our code:

    Java JRE v1.8.0
    

    To run the competitors’ code:

    C++11
    OpenMP
    

    To check the results of our experimental evaluation:

    Jupyter Notebook
    

    Input Format

    The input file must be a space separated list of integers, where each integer represents a vertex and each line represents a hyperedge in the hypergraph. The algorithm assumes that the file does not contain duplicate hyperedges. The script run.sh assumes that the file extension is .hg. The folder datasets includes all the datasets used in our experimental evaluation.

    Usage

    You can use HypED either by running the script run.sh included in this package, or by running the following command:

    java -cp HypED.jar:lib/* eu.centai.hypeq.test.EvaluateQueryTime dataFolder=<input_data> outputFolder=<output_data> dataFile=<file_name> numLandmarks=<value_L_to_use> samplePerc=<ratio_of_hyperedges_to_sample> landmarkSelection=<strategy_to_select_landmarks> numQueries=<number_of_random_queries_to_test> store=<whether_the_oracle_should_be_stored_on_disk>  landmarkAssignment=<landmark_assignment_strategy> lb=<min_cc_size> maxS=<max_min_overlap> alpha=<alpha> beta=<beta> seed=<seed> isApproximate=<whether_exact_distances_should_be_computed_as_well> kind=<type_of_query>
    

    The command creates a distance oracle for the input hypergraph (if it has not been created yet with the same parameter combination), and evaluates the performance of HypED on a set of numQueries random queries. For each query, it finds the approximate distance profile including the s-distances up to maxS.

    To evaluate the performance of the algorithm on a specific set of queries, such queries must be stored in a space-separated file, given in input with the option queryFile=<file_name>. The code assumes that the query file is located in the same folder where the graph file is located.

    To evaluate the performance of the algorithm when answering a specific set of s-distance queries for given values of s, such queries must be stored in a space-separated file, and then, the following command must be executed:

    java -cp HypED.jar:lib/* eu.centai.hypeq.test.EvaluateSQueries dataFolder=<input_data> outputFolder=<output_data> dataFile=<file_name>  queryFile=<query_file_name> numLandmarks=<value_L_to_use> samplePerc=<ratio_of_hyperedges_to_sample> landmarkSelection=<strategy_to_select_landmarks>  store=<whether_the_oracle_should_be_stored_on_disk>  landmarkAssignment=<landmark_assignment_strategy> lb=<min_cc_size> maxS=<max_min_overlap> alpha=<alpha> beta=<beta> seed=<seed> isApproximate=<whether_exact_distances_should_be_computed_as_well> kind=<type_of_query>
    

    Even though the query file includes the s values for which we want to compute the s-distances, we still need to provide maxS as input, since its value is needed to construct the oracle.

    Using the Script

    The value of each parameter used by HypED must be set in the configuration file config.cfg:

    General Settings

    • input_data: path to the folder containing the graph file.
    • output_data: path to the folder to store the results.
    • landmarkSelection: how to select the landmarks within the s-connected components (random, degree, farthest, bestcover, between).
    • landmarkAssignment: how to assign landmarks to s-connected components (ranking, prob). If prob is selected, each experiment is performed 5 times using different seeds.
    • alpha: importance factor of the s-connected component sizes.
    • beta: importance factor of the min overlap size s.
    • seed: seed for reproducibility.
    • kind: type of distance query to answer (vertex for vertex-to-vertex, edge for hyperedge-to-hyperedge, both for vertex-to-hyperedge).
    • isApproximate: whether we want to compute only the approximate distances, or also the exact distances.

    Dataset-related Settings

    • Dataset names: names of the files (without file extension).
    • Default values: comma-separated list of default values for each dataset, i.e., the value of L, the percentage of hyperedges to sample, the number of queries, the lower bound lb for an s-connected component's size to be considered for landmark assignment, the max min overlap s, and whether the oracle should be saved on disk.
    • Num Landmarks: comma-separated list of L values to test.
    • Experimental flags: tests to perform, among: (1) compare strategies to find the s-connected components, (2) compare HypED with two baselines, (3) compare the performance using different importance factors, (4) compare the landmark selection strategies, (5) create distance profiles for random queries, (6) create distance profiles for given queries, (7) answer s-distance queries for given queries, and (8) find the s-line graphs of the hypergraph up to maxS.

    Then, the arrays that store the names, the number of L values, and the experimental flags of each dataset to test must be declared at the beginning of the script run.sh.

    Output Format

    The algorithm produces two output files: one contains the approximate distances, and the other contains some statistics.

    1. Output File: comma-separated list including src_id, dst_id, s, real s-distance (only if isApproximate was set to False), lower-bound to the s-distance, upper-bound to the s-distance, and approximate s-distance (computed as the median between lower and upper bound).
    2. Statistics File: tab-separated list including dataset name, timestamp, oracle creation time, query time, max min overlap s, lower bound lb, value L, number of landmarks selected, number of distance pairs stored in the oracle, number of distance profiles created, landmark selection strategy, landmark assignment strategy, alpha, and beta.

    Related Code

    The folder related includes the source code of the two competitors considered in our experimental evaluation.

    CTL [1] (folder CoreTreeLabelling) improves the state-of-the-art 2-hop pruned landmark labeling approach by first decomposing the input graph into a large core and a forest of smaller trees, and then constructing two different indices on the resulting core-tree structure. Distance queries can be answered exactly as the minimum of the distances provided by the two indices.

    HL [2] (folder highway_labelling-master) is a landmark-based algorithm that first selects a set of l vertices, and then populates two indices: the highway index and the distance index. The distance index is populated by starting BFSs from the l vertices, and is guaranteed to be minimal for that set of vertices. At query time, the algorithm first finds an upper bound on the distance by exploiting the highway index, and then finds the distance in a sparsified version of the original graph.

    Both approaches are designed for connected graphs, and hence do not guarantee to provide exact answers when the graph is disconnected. We used them to construct indices for the s-line graphs of the hypergraphs.

    Usage

    Both approaches assume that the node ids take values in [0, |V|], where |V| is the total number of vertices in the graph. If you need to remap the vertices (and hence the query files), you can use the Python script graph_query_remapping.py. The script includes some comments on its usage.

    The meta-structures used by the algorithms can be created using the bash script preprocessing.sh. This script includes some variables that must be properly set:

    1. file_path: path to the graph files
    2. datasets: space-separated list of graph names
    3. proj: space-separated list of s values, where each value gives the name of the s-line graph
    4. other parameters: CTL requires a list of tree-width values (tws), while HL requires a list of numbers of vertices (lands)

    The queries can be answered using the bash script query.sh. This script includes some comments on its usage.

    For further information, please refer to the ReadMe files included in the folders, or to the original repositories [3, 4].

    License

    This package is free for use (GNU General Public License).

    References

    [1] Wentao Li, Miao Qiao, Lu Qin, Ying Zhang, Lijun Chang, and Xuemin Lin. 2020. Scaling up distance labeling on graphs with core-periphery properties. In SIGMOD. 1367–1381.

    [2] Muhammad Farhan, Qing Wang, Yu Lin, and Brendan Mckay. 2019. A highly scalable labelling approach for exact distance queries in complex networks. In EDBT.

    [3] Core-Tree Labelling Code

    [4] Highway Labelling Repository

    Visit original content creator repository
    https://github.com/lady-bluecopper/HypED

  • Electronic-Interchange-Github-Resources

    Electronic-Interchange-Github-Resources

    List of EDI Github Resources. Pull Requests are Welcome!

    https://michaelachrisco.github.io/Electronic-Interchange-Github-Resources/

    Libraries

    Java

    • apifocal/x12-parser – Java library for parsing and creating ASC X12 EDI transactions
    • ballerina-platform/edi-tools – This library provides the functionality required to process EDI files and implement EDI integrations.
    • BerryWorksSoftware/edi-json – Serializing EDI as JSON
    • moqui/mantle-edi – Mantle EDI Integrations
    • mrcsparker/nifi-edireader-bundle – Apache NIFI processor that converts EDI ASC X12 and EDIFACT documents into XML
    • imsweb/x12-parser – A Java parser for ANSI ASC X12 documents.
    • smooks/smooks – An extensible Java framework for building XML and non-XML (CSV, EDI, Java, etc…) streaming applications
    • walmartlabs/gozer – The EDI X12 Standard provides a uniform way for companies to exchange information across different sectors.
    • xlate/staedi – General X12/EDIFACT stream reader and writer with support for validation of standards with optional schema customizations (i.e. implementation guides)

    C#/DotNet

    • olmelabs/EdiEngine – Simple .NET EDI Reader, Writer and Validator. Read, Write and Validate X12 EDI files with simple EDI Parser written on C#.
    • indice-co/EDI.Net – EDI Serializer/Deserializer. Supports EDIFact, X12 and TRADACOMS formats
    • Silvenga/EdiWeave – Open Source Hard-Fork of EdiFabric
    • MassTransit/Machete – Cut through the Crap, with Machete, a text parser, object mapper, and query engine.

    Python

    Swift

    PHP

    Javascript

    Ruby

    Rust

    • sezna/edi – Rust crate for parsing X12 EDI and acting on it. Supports serialization to a variety of formats including JSON.

    Golang

    • jf-tech/omniparser – omniparser is a native Golang ETL parser that ingests input data of various formats (CSV, txt, fixed length/width, XML, EDI/X12/EDIFACT, JSON, and custom formats) in streaming fashion and transforms the data into desired JSON output based on a schema written in JSON. See EDI and EDI readers for more usage details.
    • moov-io/x12 – ASC X12 standards reader/writer

    CLI utilities

    Systems or Paid Services

    Examples

    Public EDI References

    • X12 Reference – Free online viewer for all releases of X12 specifications.
    • EDI Guide Catalog – An open directory of the most-requested Stedi Guides, interactive EDI specifications that let you instantly validate EDI documents.
    • EDIFACT Reference – Free online viewer for all releases of EDIFACT specifications.
    • Stedi/awesome-edi – List by Stedi of related resources.

    Syntax Highlighters

    Free Online EDI editors

    • EDI Inspector – A tool for inspecting EDI files and getting a free JSON conversion.

    Standalone editors

    • RKDN/x12Tool – A tool for reading and modifying x12/EDI files.

    Visit original content creator repository
    https://github.com/michaelachrisco/Electronic-Interchange-Github-Resources

  • dm4-kiezatlas-website

    DeepaMehta 4 Kiezatlas – Website

    This DeepaMehta 4 plugin is the software module for shipping and developing the rewritten website to be hosted at www.kiezatlas.de.

    It builds on DeepaMehta 4 (Apache Lucene, Neo4J, Jetty among others) and our work of last year, especially on the

    but also on other open source software, such as

    and on the Tile Map Service of www.mapbox.com.

    Usage & Development

    If you want to adapt this software, make sure to have your development environment set up as described in the DeepaMehta Plugin Development Guide.

    To install this plugin and set up hot-deployment for it after downloading and unzipping DeepaMehta, you can configure your DeepaMehta bundle directory in the pom.xml of this plugin. To do so, add a dm4.deploy.dir path as a property to the pom.xml in this directory. For example:

        <properties>
            <dm4.deploy.dir>/home/oscar/deepamehta-4.8.3/bundle-deploy</dm4.deploy.dir>
        </properties>
    

    To build dm4-kiezatlas-website successfully, you’ll need to build or install its dependencies into your local Maven repository. This is because we did not have time to publish these bundles on Maven Central.

    To do so, check out the following plugins source code from github and run mvn clean install in all of them: dm4-kiezatlas, dm4-kiezatlas-etl, dm4-geospatial, dm4-thymeleaf, dm4-images, dm4-kiezatlas-angebote, dm4-webpages and dm4-sign-up.

    Now you can build and hot-deploy the sources of the dm4-kiezatlas-website plugin using the following two commands:

    grunt
    mvn clean package
    

    Grunt here is used to concat and minify some of the javascript sources as well as our custom selection of semantic-ui css components. Maven compiles the java sources and builds the plugin as an OSGi bundle.

    License

    This source code is licensed under the GNU GPL 3.0. It comes with no warranty.

    Version History

    0.6 — 04 May, 2017

    • Installed migration11, imported Bezirksregion-LOR CSV topics as LOR Utilities and Installed migration13
    • Introduced csv bezirksregion / lor mapping (csv-import)

    0.5 — Winter, 2016

    • More robust geo object entry form
    • Interface to serve simple, custom made Citymaps
    • City and district wide fulltext search on geo objects
    • Confirmation workflow for new geo objects created by the public

    Author:
    Malte Reißig, Copyright 2015-2016

    Visit original content creator repository
    https://github.com/mukil/dm4-kiezatlas-website

  • Cat-Chatroom-ECE3551-Final-Project

    Multifarious Systems 1 – Final Project

    Modular Multi-Purpose Chat Room

    Benjamin Luchterhand

    David E. Nieves-Acaron

    Fall 2020

    Introduction:


    The goal of the project was to create a chat room that updates in real-time and is capable of saving messages to a database to be displayed whenever the chat was opened in the future. The project initially targeted deployment in a custom infrastructure with automated bots to use the chat. Some of these goals were out of reach, and will be outlined in the “Future Work” section.

    Literature Review of Similar Projects


    The first project referenced in our own was the “Real-time chat” from serverjs.io [1]. This project was the inspiration behind our original design; it embedded a chat into a div and used WebSockets as the primary form of communication between clients connected to the server. Natively, it uses Node.js, the JavaScript server runtime that we ended up sticking with. Additionally, this project gave us the idea of using cookies to track users and give the user a more “stateful” experience in the chatroom; we expanded on the cookie system with a more robust login system. The guide was a complete tutorial on how to build a simple messaging system and became a sort of template.

    The next system referenced was from a SkySilk.com blog [2], which contained much the same information on socket-based communication. While we ended up creating our own system for checking for new messages, we did use one line of code from that project for chat control; it simply scrolls the chat window down to the most recent message whenever a new one is sent. Like the previous reference, this guide mirrored the use of Node.js and also recommended Express.js, which we did not implement.

    As for the Website-PHP-InfluxDB interactivity, it cannot be overstated how important the technique of AJAX was for this application. AJAX stands for “Asynchronous JavaScript and XML”, and it is a technique that allows one to send POST requests to a PHP web server without reloading the page. For obvious reasons, this is desirable and can even be considered mission critical for a live chat application. The informative post by Capra, R. [3] provides detailed information on how AJAX requests work using jQuery; using that as an example to work from, the code for posting and receiving data to and from the PHP server was developed.
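
    To make the AJAX idea concrete, here is a minimal sketch of how such a request could be assembled: a form-encoded POST body that can be dispatched without reloading the page. The endpoint name ('/index.php') and field names here are illustrative assumptions, not taken from the project's actual code.

```javascript
// Build the wire format jQuery's $.post uses by default
// (application/x-www-form-urlencoded).
function encodeForm(params) {
    return Object.entries(params)
        .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
        .join('&');
}

// Package a chat message as a request description. In a browser this
// would be dispatched with fetch(req.url, req) while the page keeps
// running until the reply arrives.
function buildChatRequest(author, message) {
    return {
        url: '/index.php', // hypothetical PHP endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: encodeForm({ author, message }),
    };
}

console.log(buildChatRequest('ben', 'hello world').body);
// author=ben&message=hello%20world
```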

    The interaction between PHP and InfluxDB was conducted through the use of the InfluxDB-PHP library. As such, the GitHub repository containing the code for this library [4] was consulted frequently for installation instructions as well as for examples. The full details of the installation are listed in the section below.

    Architecture:


    Ubuntu 18.04
    
    InfluxDB 1.1.1
    
    PHP 7.2.24-0
    
    Chromium/Google Chrome/Firefox Web Browsers
    
    HTML/CSS:
    

    Image 2: Chat Room HTML, ~/node_server/views/index.html

    The HTML for the chat is simple: a table serves as a container for an image and for the chat controls and viewing window. The chat input itself is wrapped in form tags, which make a call to send the message to the database (InfluxDB).

    Images 3, 4, 5, 6: CSS, ~/node_server/public/styles.css

    Javascript:

    The project utilizes internal Javascript to power the chats. See the images below.

    The internal Javascript of the same file (index.html) contains a $(document).ready function which initializes the message uploader. On submission of the form, if the string is empty, the user is notified with an alert. The author of the message is pulled from the cookie in which it is stored, and a timestamp is also taken for use in updating the chat in the future. The message is sent with a $.post() command, which thoroughly checks for errors on completion.

    Image 7: On form submission; send to InfluxDB
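
    The submit handler's logic can be sketched as a pair of pure functions: reject empty input, read the author from the login cookie, and attach a timestamp. The helper names (readCookie, buildMessagePayload) are illustrative assumptions, not the project's actual identifiers; only the cookie name current_user comes from the write-up.

```javascript
// document.cookie looks like "a=1; current_user=ben"; pull one value out.
function readCookie(cookieString, name) {
    const match = cookieString
        .split('; ')
        .find((pair) => pair.startsWith(name + '='));
    return match ? match.slice(name.length + 1) : null;
}

// Validate and package a message for posting. The real page surfaces
// the empty-message case as an alert() to the user.
function buildMessagePayload(cookieString, text, nowMs) {
    if (!text || text.trim() === '') {
        return { error: 'empty message' };
    }
    return {
        author: readCookie(cookieString, 'current_user'),
        message: text,
        time: Math.floor(nowMs / 1000), // Unix seconds, as stored in InfluxDB
    };
}
```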

    Next is the login system; when the document is starting up, it checks the user’s cookies for a “current_user”. This is important to the chat box because this username gets shown with all chat messages. If this is not set, a user is prompted to create a username. If the username does not already exist in the database, the user is prompted to register with a password. If it does exist, the user is asked to log in. If the password does not match up, the user is kicked out to the beginning of the whole process.

    Image 8: Part 1/2, Javascript Login System

    Image 9: Part 2/2, Javascript Login System
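
    The branching described above can be made explicit as a pure decision function. The state and result names below are assumptions for illustration; the flow itself (cookie check, register vs. log in, restart on a wrong password) is the one described in the text.

```javascript
// Decide the next step of the login flow from the current state.
// state: { cookieUser, userExists, passwordMatches }
function loginStep(state) {
    if (state.cookieUser) return 'already-logged-in'; // cookie present
    if (state.userExists === false) return 'register'; // new name: pick a password
    if (state.passwordMatches) return 'login-ok';
    return 'start-over'; // wrong password: back to the beginning
}
```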

    Next is the scraper for updating the chat. As mentioned in the literature review, we decided to avoid the sockets and implement our own updater. This would give us more knowledge on the system and put us in a better position to update it however we want to in the future. The next image shows this scraper’s functionality; a “latestTime” variable is set and referenced within an interval loop set to half a second. It sends this latest time checked to the server and queries for any messages that might have been submitted after that time. This ensures that all messages are collected in a timely manner. Any missed messages are returned with a bar character “|” as a separator, which is then deconstructed into the individual messages. Lastly, the “latestTime” variable is updated and the loop repeats.

    Image 10: Chat Updater/Influx Scraper
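
    The core of the updater can be sketched as a single pure step: given the server's reply for everything newer than the last checked time, split out the individual messages on the bar separator and advance the cursor. The setInterval wrapper and the network call are shown only as comments; applyUpdate is an illustrative name.

```javascript
// One tick of the chat updater. `reply` is the server's "|"-joined list
// of missed messages; `nowTime` becomes the new latestTime cursor.
function applyUpdate(reply, nowTime) {
    const messages = reply === '' ? [] : reply.split('|');
    return { messages, latestTime: nowTime };
}

// In the page this runs inside setInterval(..., 500): each tick POSTs
// latestTime to the PHP server, feeds the reply through applyUpdate,
// and hands each message to the message constructor for rendering.
```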

    Last of all for the Javascript is the message constructor that is called in the chat updater. This section of code extracts information from the JSON retrieved from the database. It pulls the message sent time, the author, and of course, the message itself. We’ve also implemented chat notification sounds in the form of cat meows, which is the theme of the chat.

    Images 11 & 12: Message Constructor
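
    A sketch of the message constructor's extraction step: pull the sent time, the author, and the text out of one JSON row and format a chat line. The row shape mirrors the 'message' schema given later in this document (the tags carry from and message); renderMessage is an illustrative name.

```javascript
// Format one database row as a chat line. `row.time` is Unix seconds.
function renderMessage(row) {
    const when = new Date(row.time * 1000).toISOString();
    return `[${when}] ${row.tags.from}: ${row.tags.message}`;
}

console.log(renderMessage({ time: 0, tags: { from: 'ben', message: 'hi' } }));
// [1970-01-01T00:00:00.000Z] ben: hi
```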

    PHP:

    The PHP document, found at ~/php_server/index.php, pulls important environment variables from the system when a request is made, connects to the appropriate database using these environment variables, runs database-interactive code, and finally returns the requested data. The first image shows the function for sending a message to the database:

    Image 13: PHP to Influx Code – Pushing a Message to Influx

    The next image shows how the PHP code handles a chat update request. All chats are queried past the given “latestTime” variable that originated in the Javascript provided previously. If messages are encountered, they are combined with the bar character “|” and sent back to the Javascript for message processing:

    Image 14: PHP – Receiving new messages from Influx

    Next is the login system; it then connects to the login table, ensures that all of the necessary information is present, and loads the information into InfluxDB.

    Image 15: Registering a new user

    Last of all, we have a small block of code which logs in an existing user. It checks that a username and password were sent and, if so, compares the given password to the one in the database:

    Image 16: Log into an existing user

    Influx:

    Schema, Table, Interactions – (Design choices/examples of each)

    The design for the message schema is as follows:

    {
        measurement: 'message',
        fields:
        {
            'length': int,
            'value': str
        },
        tags:
        {
            'from': str,
            'to': str,
            'message': str
        },
        time: long
    }
    

    The reasoning behind this schema design is due to the way that InfluxDB holds data. All data to be written to InfluxDB requires a measurement name, fields, tags, and a timestamp. The measurement name in this case corresponds to “message”. Although the fields are usually used for numerical values, we decided to include the length of the string (for any future applications) as well as the actual string itself (denoted as value). The tags usually include metadata about the measurement, which is why we included the “from” address, the “to” address, and the message itself in the tags. Finally, the timestamp is a Unix timestamp corresponding to the number of seconds since January 1, 1970 UTC. This timestamp is marked as when the message was processed in the PHP server.
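
    A concrete instance of the 'message' schema above can be built as a plain object before it is handed to the database client. makePoint is an illustrative helper, not the project's actual API; the measurement name, field names, and tag names come from the schema itself.

```javascript
// Build one point of the 'message' measurement: fields hold the string
// and its length, tags hold the metadata, time is Unix seconds.
function makePoint(from, to, message, unixSeconds) {
    return {
        measurement: 'message',
        fields: { length: message.length, value: message },
        tags: { from, to, message },
        time: unixSeconds,
    };
}

const p = makePoint('ben', 'david', 'hi', 1600000000);
// p.fields.length === 2, p.tags.from === 'ben'
```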

    {
        measurement: 'username',
        fields:
        {
            'value': str
        },
        tags:
        {
            'name': str,
            'password': str
        },
        time: long
    }
    

    This measurement schema corresponds to the usernames and associated passwords stored in InfluxDB. As can be seen here, the only field corresponds to the value of the username itself. The tags include the username and the password of the user. Finally, the timestamp marks when the PHP server processed this user and thus can also serve as a marker for when the username was created.

    Functionality/Example: (full diagram attached in .zip file)

    Example: Logging in for the first time

    Entering a username

    Entering one’s password

    (password visible for debugging purposes)

    Password attempt successful

    Example of getting the password wrong:

    Example of registering a new user

    Typing messages

    Message can now be seen.

    Examples of different perspectives depending on who the user is (one is signed in from Chromium, and the other from Firefox):

    “Sign Out” button is available for erasing the “user” cookie and logging out.

    Additional “Click to Meow” button that plays a meow sound every time a new message (not by a user) is submitted.

    Setup:

    How do we set up the system? What commands?

    The directory tree looks like this:

    1. Install Composer

    php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"

    2. Install php-curl

    sudo apt-get install -y php-curl

    3. Install InfluxDB-PHP using Composer

    composer require influxdb/influxdb-php

    4. Run the node server inside of the node-server directory

    sudo node node-server/index.js

    5. Export the following environment variables with the values for your setup. The defaults are shown.

    export INFLUX_HOST=localhost
    export INFLUX_PORT=8086
    export INFLUX_USER=root
    export INFLUX_PSSWD=root

    6. Run the PHP server inside of the php-server directory

    cd php-server
    sudo php -S localhost:8080
    
    

    Future Work


    Our future intentions are to create a simple video streaming service with this live chat beside it. The video would replace the gif of the cat, and the login system would be made more robust, with encryption and the proper security protocols in place. We would also like to replace our updating system with a socket-based system.

    David and I are research assistants and have spent the better part of the year building a cloud-agnostic server infrastructure. It was our intention to host this system on that architecture to make the system not only scalable, but templateable. This would mean that new chats could pop up in response to system demand. This was not implemented simply because of time constraints; however, it would be as simple as containerizing our existing code and hosting it in the infrastructure in the near future.

    Team Member Contributions:


    Benjamin Luchterhand (Team Leader, ~45%):

    1. Original project concept with working Javascript chat
    2. Login System/Cookies
    3. Node.js
    4. HTML/CSS (initial)

    David Nieves-Acaron (~55%):

    1. PHP Server
    2. InfluxDB
    3. Message Read/Write
    4. HTML/CSS (adapted)

    Bibliography


    [1] Chat real-time – server.js. (n.d.). Retrieved November 30, 2020, from https://serverjs.io/tutorials/chat/

    [2] How to Create A Real-Time Chat App with Node.js. (2020, January 17). Retrieved from https://www.skysilk.com/blog/2018/create-real-time-chat-app-nodejs/

    [3] Capra, R. (Spring 2013). Lecture 10 – Ajax and JQuery. UNC School of Information and Library Science. Retrieved Fall 2020, from https://ils.unc.edu/courses/2013_spring/inls760_001/lect10/lect10.pdf

    [4] InfluxDB. (n.d.). InfluxDB-PHP. GitHub. https://github.com/influxdata/influxdb-php

    Visit original content creator repository https://github.com/DavidEnriqueNieves/Cat-Chatroom-ECE3551-Final-Project
  • funcom_reproduction

    funcom_reproduction


    Paper Info

    Title: A Neural Model for Generating Natural Language Summaries of Program Subroutines
    Publication: 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)
    Authors: Alexander LeClair, Siyuan Jiang, Collin McMillan
    Repository: funcom

    Install

    1. Set up an environment such as Google Colab with a high-RAM GPU. Google Colab is an online environment for machine learning and deep learning, which supports Python and Jupyter Notebook. The free version has only basic functionality. For reproduction, I use the Pro version with a high-RAM GPU (monthly cost: $10.88).

    2. Download and unzip Models and Data at Release, as well as the source code.

    3. Upload the whole funcom_reproduction folder to the Google Drive root (everyone has 15 GB of free storage, which should be enough). Create a directory and make sure that the data folder is located at ./funcom_reproduction/scratch/funcom/data.

    Usage

    1. Before the model training, please create an outdir directory under ./funcom_reproduction/scratch/funcom/data, and then create 3 directories, histories, models and predictions, under outdir. After creation, you can execute steps 0, 0.5, 1 and 2 in the .ipynb file for training. The epoch count suggested by the author is 5 (each epoch takes more than 2 hours), because quality decreases beyond 5 epochs. In my case, however, the ast-attendgru model aborted unexpectedly at the 4th epoch, so I eventually chose epoch=3 for comparison. The epoch value can be modified at line 79 of train.py. Alternatively, you can use my models in Models and skip this step.

    2. For comment generation and BLEU score calculation on the standard dataset, the attendgru and ast-attendgru models have been released in Models. You can use them directly to generate comments for calculating BLEU scores. If you do, start from step iii below:

      1. Select outdir_attendgru or outdir_ast-attendgru in data, and rename the folder as outdir.

      2. Put the corresponding model file from Models under the directory ./funcom_reproduction/scratch/funcom/data/outdir/models. For example, if you choose outdir_attendgru, you need to use attendgru_E03_1633627453.h5 or attendgru_E05_1633627453.h5. Please do not forget to create the models directory.

      3. Open the corresponding .ipynb file under the root directory, and execute steps 0, 0.5, 1 and 3. After that, the .txt comment will be generated under ./funcom_reproduction/scratch/funcom/data/outdir/predictions. Please double-check the .h5 file name before running the code.

      4. Calculate the BLEU score by executing step 4 in the .ipynb file. I leave my results here for checking:

        Model                Ba      B1      B2      B3      B4
        ast-attendgru, E03   19.37   38.74   21.88   14.75   11.27
        attendgru, E03       19.24   38.65   21.77   14.66   11.12
        attendgru, E05       19.14   37.88   21.4    14.66   11.3
    3. For comment generation and BLEU score calculation on the challenge dataset, please modify line 114 of predict.py, changing the default value from False to True. Then redo steps iii and iv above.

    Maintainers

    @KennardWang

    Contributing

    Feel free to open an issue or submit PRs.

    License

    © Kennard Wang (2021.10.30)

    Visit original content creator repository https://github.com/KennardWang/funcom_reproduction
  • google_translate_this

    Google Translate This

    This WebExtension translates the current page with Google Translate. It does so on demand so it does not change the page unless the user selected this.


    Why is this not on AMO?

    This extension executes remote code from Google in your current page, and this is against AMO rules. If you use Chrome, it does the same thing. I hope to get it into AMO soon, but that depends on a lot of things.

    Privacy considerations

    This extension by default does not transmit any info to any site. ONCE YOU CLICK TRANSLATE, CONSIDER THE PAGE SENT TO GOOGLE! Unfortunately, this is how Google Translate works. This is the best I could do with the APIs that are available. I tried to isolate the page somehow, but it is really difficult. Not only that, but the extension grabs code from Google Translate and injects it into your current page. This only happens after you click translate; if you don’t interact with the extension, nothing gets sent.

    This extension was designed for people leaving Chrome for Firefox. Some really need this feature and don’t mind the downsides. If you want a more privacy-friendly extension, check out AMO; it has quite a few of them. Unfortunately, they are not as user friendly as this one.

    Visit original content creator repository https://github.com/andreicristianpetcu/google_translate_this