A Nateve compiler developed with Python.

Overview

Adam

Adam is a Nateve Programming Language compiler developed using Python.

Nateve

Nateve is a new open-source, general-purpose programming language inspired by languages such as Python, C++, JavaScript, and Wolfram Mathematica.

Nateve is a compiled language. Its first compiler, Adam, is fully built using Python 3.8.

Command line options (Nateve)

  1. build: Transpile Nateve source code to Python 3.8
  2. run: Run Nateve source code
  3. compile: Compile Nateve source code to an executable file (.exe)
  4. run-init-loop: Run Nateve source code with an initial source and a loop source
  5. set-time-unit: Set Adam time unit to seconds or milliseconds (default: millisecond)
  6. -v: Activate verbose mode
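
As a rough sketch of how these options might be invoked (the adam command name and the argument order shown here are assumptions, not taken from this document):

$ adam build main.nateve      # transpile main.nateve to Python 3.8 (hypothetical invocation)
$ adam run main.nateve        # run the source code
$ adam compile main.nateve    # compile to an executable file (.exe)
$ adam run main.nateve -v     # run with verbose mode activated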

Nateve Tutorial

In this tutorial, we will learn how to use Nateve step by step.

Step 1: Create a new Nateve project

$ cd my-project
$ COPY CON main.nateve

Hello World program

print("Hello, World!")

Is prime? program

def is_prime(n) {
    if n == 1 {
        return False
    }
    for i in range(2, n) {
        if n % i == 0 {
            return False
        }
    }
    return True
}

~ assuming input() and int() map to the Python 3.8 built-ins ~
n = int(input("Enter a number: "))

if is_prime(n) {
    print("It is a prime number.")
}
else {
    print("It is not a prime number.")
}

Comments

If you want to comment your code, you can use:

~ This is a single line comment ~

~
    And this is a multiline comment
~

Under construction...

Let Statements

This language does not use variables; instead, you can only declare static values.

To declare a value, you must use let and assign it a value. For example:

let a = 1        -- Integer
let b = 1.0      -- Float
let c = "string" -- String
let d = true     -- Boolean
let e = [1,2,3]  -- List
let f = (1,2)    -- Tuple
...             

SigmaF supports the data types Integer, Float, Boolean, and String.

Lists

Lists can contain all of the data types mentioned above, as well as lists and functions.

They also allow you to access items using the following notation:

let value_list = [1,2,3,4,5,6,7,8,9]
value_list[0]       -- Output: 1
value_list[0, 4]    -- Output: [1,2,3,4]
value_list[0, 8, 2] -- Output: [1, 3, 5, 7]

The structure of a list call is example_list[<Start>, <End>, <Jump>].
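
To make the example_list[<Start>, <End>, <Jump>] form a bit more concrete, here is a small additional sketch; it assumes, as the examples above suggest, that <End> is exclusive.

let value_list = [1,2,3,4,5,6,7,8,9]
value_list[2, 7]    -- Output: [3,4,5,6,7]
value_list[1, 8, 3] -- Output: [2,5,8]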

Tuples

Tuples are data structures with a length greater than 1. Unlike lists, they support the following operations:

(1,2) + (3,4)      -- Output: (4,6)
(4,6,8) - (3,4,5)  -- Output: (1,2,3)
(0,1) == (0,1)     -- Output: true
(0,1) != (1,3)     -- Output: true

To access the values of a tuple, you must use the same notation as for lists. However, this data structure does not allow ranges like lists do (you can only get a single position of a tuple).

E.g.

let t = (1,2,3,4,5,6)
t[1] -- Output: 2
t[5] -- Output: 6

And so on.

Operators

Warning: SigmaF has static typing, so it does not allow operations between different data types.

These are the operators:

Operator               Symbol
Plus                   +
Minus                  -
Multiplication         *
Division               /
Modulus                %
Exponentiation         **
Equal                  ==
Not equal              !=
Less than              <
Greater than           >
Less than or equal     <=
Greater than or equal  >=
And                    &&
Or                     ||

A negation operator for Booleans is not included. You can use the not() function instead.
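
As a brief, hedged illustration of a few of these operators together with not() (assuming printLn, used later in this document, prints its argument):

let a = 5
let b = 2

printLn(a ** b)          -- Output: 25
printLn(a >= b && true)  -- Output: true
printLn(not(a == b))     -- Output: true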

Functions

To declare a function, you have to use the following syntax:

let example_function = fn <Name Argument>::<Argument Type> -> <Output Type> {
    => <Return Value>
}  

(To return a value, you have to use the => symbol.)

For example:

let is_prime_number = fn x::int, i::int -> bool {
    if x <= 1 then {=> false;}
    if x == i then {=> true;}
    if (x % i) == 0 then {=> false;}
    => is_prime_number(x, i+1);
}

printLn(is_prime_number(11, 2)) -- Output: true

Conditionals

For conditionals, the syntax structure is:

if <Condition> then {
    <Consequence>
}
else {
    <Other Consequence>
}

For example:

if x <= 1 || x % i == 0 then {
    false;
}
if x == i then {
    true;
}
else {
    false;
}

Some Examples

-- Quick Sort
let qsort = fn l::list -> list {
    if (l == []) then {=> [];}
    else {
        let p = l[0];
        let xs = tail(l);

        let c_lesser = fn q::int -> bool {=> (q < p)}
        let c_greater = fn q::int -> bool {=> (q >= p)}

        => qsort(filter(c_lesser, xs)) + [p] + qsort(filter(c_greater, xs));
    }
}

-- Filter
let filter = fn c::function, l::list -> list {
    if (l == []) then {=> [];}

    => if (c(l[0])) then {[l[0]]} else {[]} + filter(c, tail(l));
}

-- Map
let map = fn f::function, l::list -> list {
    if (l == []) then {=> [];}

    => [f(l[0])] + map(f, tail(l));
}
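
As a quick, hedged usage sketch of the definitions above (the exact printed form of lists is an assumption):

let double = fn x::int -> int {=> x * 2;}
let is_big = fn x::int -> bool {=> x > 1;}

printLn(qsort([3,1,2]))          -- Output: [1,2,3]
printLn(map(double, [1,2,3]))    -- Output: [2,4,6]
printLn(filter(is_big, [1,2,3])) -- Output: [2,3]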

You can find more example implementations in the repository.


Feedback

I would really appreciate your feedback. You can submit a new issue, or reach out to me on Twitter.

Contribute

This is an open-source project; everyone can contribute and become a member of the SigmaF community.

Why be a member of the SigmaF community?

1. Simple and understandable code

The source code of the interpreter is written in Python 3.8, a language that is easy to learn, and good practices are a priority for this project.

2. Great potential

This project has great potential to become the next functional-paradigm programming language, to develop AI, and to develop new metaheuristics.

3. Scalable development

One of the main approaches of this project is the use of TDD from the beginning and throughout the development of new features, which allows scalability.

4. Simple and powerful

One of the main purposes of this programming language is to create an easy-to-learn functional language that, at the same time, is capable of processing large amounts of data safely and supports concurrency and parallelism.

5. Respect for diversity

Everybody is welcome, regardless of gender, experience, or nationality. Anyone with enthusiasm can be part of this project, from the most expert to someone who is just beginning to learn about programming, marketing, design, or any other field.

How to start contributing?

There are multiple ways to contribute, from sharing this project, improving the SigmaF brand, and helping to solve bugs, to developing new features and making improvements to the source code.

  • Share this project: You can star the repository or talk about this project. You can also use the hashtag #SigmaF on Twitter, LinkedIn, or any other social network.

  • Improve the SigmaF brand: If you are a marketer, designer, or writer and you want to help, you are welcome. You can contact me on Twitter at @fabianmativeal if you are interested.

  • Help to solve bugs: if you find a bug, report it by opening an issue. That way we can all improve this language.

  • Develop new features: If you want to develop new features or make improvements to the project, you can fork the repository, work on the dev branch (where the latest development happens), and later open a pull request to the dev branch in order to update SigmaF (see the sketch below).
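
A minimal sketch of that workflow, assuming a fork under your own GitHub account (the URL below is a placeholder, not the real repository path):

$ git clone https://github.com/<your-username>/SigmaF.git   # clone your fork (placeholder URL)
$ cd SigmaF
$ git checkout dev               # the latest development happens on dev
$ git checkout -b my-feature     # create a feature branch and work there
$ git commit -am "Describe your change"
$ git push origin my-feature
# then open a pull request targeting the dev branch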


Comments
  • [Enhancement] Nateve Vectors don't allow non-numeric datatypes

    Vectors only allow numbers (int/float) inside them, because Vectors are redefined on top of Python built-in lists during the middle code generation step. A possible solution is to merge Vectors and Matrices into a Linear datatype with the syntax opener tag "$", and then make the Python lists independent.

    opened by eanorambuena

  • [Bug] Double execution of the modules in the assembling process

    We need to resolve the double execution of the modules in the assembling process.

    The last Non Double Execution patch has been deprecated because it generated bugs such as code segmentation in the driver_file.

    bug, help wanted
    opened by eanorambuena
The source code of "Language Models are Few-shot Multilingual Learners" (MRL @ EMNLP 2021)

Language Models are Few-shot Multilingual Learners Paper This is the source code of the paper [Arxiv] [ACL Anthology]: This code has been written usin

Genta Indra Winata 45 Nov 21, 2022
Code for text augmentation method leveraging large-scale language models

HyperMix Code for our paper GPT3Mix and conducting classification experiments using GPT-3 prompt-based data augmentation. Getting Started Installing P

NAVER AI 47 Dec 20, 2022
Open-Source Toolkit for End-to-End Speech Recognition leveraging PyTorch-Lightning and Hydra.

🤗 Contributing to OpenSpeech 🤗 OpenSpeech provides reference implementations of various ASR modeling papers and three languages recipe to perform ta

Openspeech TEAM 513 Jan 03, 2023
Research code for the paper "Fine-tuning wav2vec2 for speaker recognition"

Fine-tuning wav2vec2 for speaker recognition This is the code used to run the experiments in https://arxiv.org/abs/2109.15053. Detailed logs of each t

Nik 103 Dec 26, 2022
Host your own GPT-3 Discord bot

GPT3 Discord Bot Host your own GPT-3 Discord bot i'd host and make the bot invitable myself, however GPT3 terms of service prohibit public use of GPT3

[something hillarious here] 8 Jan 07, 2023
Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition

SEW (Squeezed and Efficient Wav2vec) The repo contains the code of the paper "Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speec

ASAPP Research 67 Dec 01, 2022
Paddlespeech Streaming ASR GUI

Paddlespeech-Streaming-ASR-GUI Introduction A paddlespeech Streaming ASR GUI. Us

Niek Zhen 3 Jan 05, 2022
Neural network models for joint POS tagging and dependency parsing (CoNLL 2017-2018)

Neural Network Models for Joint POS Tagging and Dependency Parsing Implementations of joint models for POS tagging and dependency parsing, as describe

Dat Quoc Nguyen 152 Sep 02, 2022
Super Tickets in Pre-Trained Language Models: From Model Compression to Improving Generalization (ACL 2021)

Structured Super Lottery Tickets in BERT This repo contains our codes for the paper "Super Tickets in Pre-Trained Language Models: From Model Compress

Chen Liang 16 Dec 11, 2022
A Fast Sequence Transducer Implementation with PyTorch Bindings

transducer A Fast Sequence Transducer Implementation with PyTorch Bindings. The corresponding publication is Sequence Transduction with Recurrent Neur

Awni Hannun 184 Dec 18, 2022
Mapping a variable-length sentence to a fixed-length vector using BERT model

Are you looking for X-as-service? Try the Cloud-Native Neural Search Framework for Any Kind of Data bert-as-service Using BERT model as a sentence enc

Han Xiao 11.1k Jan 01, 2023
Command Line Text-To-Speech using Google TTS

cli-tts Thanks to gTTS by @pndurette! This is an interactive command line text-to-speech tool using Google TTS. Just type text and the voice will be p

ReekyStive 3 Nov 11, 2022
⛵️The official PyTorch implementation for "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020).

BERT-of-Theseus Code for paper "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing". BERT-of-Theseus is a new compressed BERT by progre

Kevin Canwen Xu 284 Nov 25, 2022
SentAugment is a data augmentation technique for semi-supervised learning in NLP.

SentAugment SentAugment is a data augmentation technique for semi-supervised learning in NLP. It uses state-of-the-art sentence embeddings to structur

Meta Research 363 Dec 30, 2022
this repository has datasets containing information of Uber pickups in NYC from April 2014 to September 2014 and January to June 2015. data Analysis , virtualization and some insights are gathered here

uber-pickups-analysis Data Source: https://www.kaggle.com/fivethirtyeight/uber-pickups-in-new-york-city Information about data set The dataset contain

1 Nov 02, 2021
MicBot - MicBot uses Google Translate to speak everyone's chat messages

MicBot MicBot uses Google Translate to speak everyone's chat messages. It can al

2 Mar 09, 2022
A Python script that compares files in directories

compare-files A Python script that compares files in different directories, this is similar to the command filecmp.cmp(f1, f2). I made this script in

Colvin 1 Oct 15, 2021
A2T: Towards Improving Adversarial Training of NLP Models (EMNLP 2021 Findings)

A2T: Towards Improving Adversarial Training of NLP Models This is the source code for the EMNLP 2021 (Findings) paper "Towards Improving Adversarial T

QData 17 Oct 15, 2022
PyTorch Language Model for 1-Billion Word (LM1B / GBW) Dataset

PyTorch Large-Scale Language Model A Large-Scale PyTorch Language Model trained on the 1-Billion Word (LM1B) / (GBW) dataset Latest Results 39.98 Perp

Ryan Spring 114 Nov 04, 2022
Big Bird: Transformers for Longer Sequences

BigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the c

Google Research 457 Dec 23, 2022