A resource for learning about ML, DL, PyTorch and TensorFlow. Feedback always appreciated :)

Overview



Machine Learning Collection

In this repository you will find tutorials and projects related to Machine Learning. I try to make the code as clear as possible, and the goal is for it to be used as a learning resource and as a way to look up how to solve specific problems. For most of the code I have also made video explanations on YouTube if you want a walkthrough. If you have any questions or suggestions for future videos, I prefer that you ask on YouTube. This repository is contribution friendly, so if you feel you want to add something then I'd happily merge a PR 😃

Table Of Contents

Machine Learning

PyTorch Tutorials

If you have any specific video suggestion please make a comment on YouTube :)

Basics

More Advanced

Object Detection

Object Detection Playlist

Generative Adversarial Networks

GAN Playlist

Architectures

TensorFlow Tutorials

If you have any specific video suggestion please make a comment on YouTube :)

Beginner Tutorials

CNN Architectures

Comments
  • ProGAN Pretrained weights link is broken!


    When I click to download the pretrained weights I get redirected to https://github.com/aladdinpersson/Machine-Learning-Collection/tree/master/ML/Pytorch/GANs/ProGAN

    opened by extremety1989 11
  • ProGAN RuntimeError


    I downloaded the celeba_hq image dataset, modified config.py (DATASET = 'celeba_hq'), and commented out the # import sys # sys.exit() lines in main() in train.py; then when I run python train.py I get this error:

    return F.conv_transpose2d(
    RuntimeError: Expected 4-dimensional input for 4-dimensional weight [512, 512, 4, 4], but got 2-dimensional input of size [256, 512] instead
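
    A hedged guess at the cause, for anyone hitting the same thing: the error suggests the latent noise is being fed to the generator as a flat [batch, z_dim] tensor, while the first ConvTranspose2d (weight [512, 512, 4, 4]) expects a 4-D input of shape [batch, 512, 1, 1]. A minimal sketch of the reshape (the generator call itself is an assumption):

    import torch

    z = torch.randn(256, 512)        # 2-D noise: triggers the error above
    z = z.reshape(256, 512, 1, 1)    # 4-D noise: matches the 4-D weight
    # fake = gen(z, alpha, steps)    # hypothetical generator call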

    opened by extremety1989 9
  • Transformer Question, and Request


    Learning PyTorch and love your videos. Your code is so clean and your explanations so crisp.

    Question/Bug?: In SelfAttention you split values, keys, and queries by the number of heads, then pass each split into a Linear layer with the same input and output dimension. Why not keep the full dimension (i.e., not split) and let the Linear layer do the reduction? That would allow the Linear layer to learn what to extract from the input.

    btw, https://github.com/tunz/transformer-pytorch/blob/master/model/transformer.py, class MultiHeadAttention(nn.Module) does this (if I interpret their code correctly).

    The paper https://arxiv.org/pdf/1706.03762.pdf indicates "learned linear projections to dk, dk and dv dimensions".

    If I'm all wrong, I would love to be corrected, as I'm learning. If I'm right, I would also love to know that I'm starting to understand this stuff.
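
    For concreteness, a sketch of the two orderings being compared; shapes are hypothetical, and both end with per-head tensors of shape (N, seq_len, heads, head_dim):

    import torch
    import torch.nn as nn

    N, seq_len, embed_size, heads = 2, 10, 512, 8
    head_dim = embed_size // heads
    x = torch.randn(N, seq_len, embed_size)

    # (a) split into heads first, then a Linear(head_dim, head_dim) per head
    split_proj = nn.Linear(head_dim, head_dim)
    a = split_proj(x.reshape(N, seq_len, heads, head_dim))

    # (b) full Linear(embed_size, embed_size) first, then split into heads,
    #     matching the "learned linear projections" wording quoted from the paper
    full_proj = nn.Linear(embed_size, embed_size)
    b = full_proj(x).reshape(N, seq_len, heads, head_dim)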

    Request: I'm starting to understand the power of torch.einsum, but I am sure I am missing a bunch. Can you do a video on this?
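
    In the meantime, a tiny self-contained example of the einsum pattern that shows up in attention code (shapes hypothetical):

    import torch

    N, heads, head_dim, q_len, k_len = 2, 8, 64, 10, 10
    queries = torch.randn(N, q_len, heads, head_dim)
    keys = torch.randn(N, k_len, heads, head_dim)

    # For each batch n and head h, dot query q against key k over dim d:
    energy = torch.einsum("nqhd,nkhd->nhqk", queries, keys)
    print(energy.shape)  # torch.Size([2, 8, 10, 10])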

    Regards, John

    opened by johngrabner 4
  • Inside your Seq2Seq (Transformers), observe the parameter forward_expansion.


    You've defined forward_expansion as 4,

    See the implementation of the FeedForward network at the end of the encoders & decoders.

    So, putting your variable into the code, it will look like:

    self.linear1 = Linear(d_model,  4, **factory_kwargs)
    self.dropout = Dropout(dropout)
    self.linear2 = Linear(4, d_model, **factory_kwargs)
    

    where d_model = 512 (emb_size)

    See PyTorch's official implementation for more clarity.

    I would suggest changing that variable to something bigger like 512, 1024 or 2048. I am surprised that even with 4 as dim_feedforward (or what you call forward_expansion), you're getting great results.
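
    If it helps, the conventional pattern (PyTorch's dim_feedforward defaults to 2048) treats forward_expansion as a multiplier on the embedding size rather than as the hidden width itself; a hedged sketch of that reading:

    import torch.nn as nn

    d_model = 512            # embedding size
    forward_expansion = 4    # a multiplier: hidden width = 4 * 512 = 2048

    feed_forward = nn.Sequential(
        nn.Linear(d_model, forward_expansion * d_model),
        nn.ReLU(),
        nn.Linear(forward_expansion * d_model, d_model),
    )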

    opened by KrishPro 3
  • Getting error while executing semantic segmentation w. UNET in PyTorch


    Hi, I watched your recent tutorial on semantic segmentation with PyTorch. Being new to PyTorch, I was looking for a tutorial with good explanations, especially for segmentation, and your tutorial came as a great help. I tried to implement your approach for a UNet segmentation network on Google Colab but am getting an error. I tried to fix it but had no luck. Can you please help me fix it? The error I am getting is:


    TypeError                                 Traceback (most recent call last)
    in <module>()
         85
         86 if __name__ == "__main__":
    ---> 87     main()

    7 frames
    in main()
         67
         68     for epoch in range(Num_epochs):
    ---> 69         train_fn(train_loader, model, optimizer, loss_fn, scaler)
         70
         71

    in train_fn(loader, model, optimizer, loss_fn, scaler)
          2     loop = tqdm(loader)
          3
    ----> 4     for batch_idx, (data, targets) in enumerate(loop):
          5         data = data.to(device=device)
          6         targets = targets.float().unsqueeze(1).to(device=device)

    /usr/local/lib/python3.6/dist-packages/tqdm/std.py in __iter__(self)
       1102                 fp_write=getattr(self.fp, 'write', sys.stderr.write))
       1103
    -> 1104         for obj in iterable:
       1105             yield obj
       1106             # Update and possibly print the progressbar.

    /usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __next__(self)
        433         if self._sampler_iter is None:
        434             self._reset()
    --> 435         data = self._next_data()
        436         self._num_yielded += 1
        437         if self._dataset_kind == _DatasetKind.Iterable and \

    /usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _next_data(self)
        473     def _next_data(self):
        474         index = self._next_index()  # may raise StopIteration
    --> 475         data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
        476         if self._pin_memory:
        477             data = _utils.pin_memory.pin_memory(data)

    /usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
         42     def fetch(self, possibly_batched_index):
         43         if self.auto_collation:
    ---> 44             data = [self.dataset[idx] for idx in possibly_batched_index]
         45         else:
         46             data = self.dataset[possibly_batched_index]

    /usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)
         42     def fetch(self, possibly_batched_index):
         43         if self.auto_collation:
    ---> 44             data = [self.dataset[idx] for idx in possibly_batched_index]
         45         else:
         46             data = self.dataset[possibly_batched_index]

    in __getitem__(self, index)
         17
         18         if self.transform is not None:
    ---> 19             augmentations = self.transform(image=image, mask=mask)
         20             image = augmentations["image"]
         21             mask = augmentations["mask"]

    TypeError: 'int' object is not callable
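
    A hedged note on the likely cause: the traceback shows self.transform being called, but it holds an int, which usually means a non-callable was passed (often positionally) into the dataset's transform argument. A sketch of a working albumentations setup in the tutorial's style (the dataset name and sizes are assumptions):

    import albumentations as A
    from albumentations.pytorch import ToTensorV2

    train_transform = A.Compose([
        A.Resize(height=160, width=240),
        A.HorizontalFlip(p=0.5),
        A.Normalize(max_pixel_value=255.0),
        ToTensorV2(),
    ])

    # dataset = CarvanaDataset(image_dir, mask_dir, transform=train_transform)
    # Passing transform by keyword keeps an int from landing in self.transform.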

    opened by sautami26 3
  • Model overfitting for 20 classes ( PASCAL VOC 2007 + 2012 dataset )


    Hi Aladdin, thank you so much for your video and explanations. I am currently doing a project on object detection, and your video helped me a lot. Thank you once again.

    I have a problem with overfitting in the model: I am getting a test mAP of 10% and a train mAP of 90%. I trained on PASCAL VOC 2007 + VOC 2012 data. I have tried every way I could think of to reduce the overfitting (dropout layer, weight decay, adding 5k more images, data augmentation, pretrained extraction weights, step LR, etc.), staying as close as possible to the original paper. It's been a month now and I am still not able to figure out why. Could you please help me? (I have used your code for everything.) It would be a great help if you could suggest something with respect to your code.

    P.S.: I used the same code modified for 2 classes and for 5 classes and got good results; 2 classes: test mAP 50%, 5 classes: test mAP 60%.

    opened by 100daggers 3
  • Expected object of scalar type Long but got scalar type Float for sequence element 1 in sequence argument at position #1 'tensors'


    Hi,

    I rewrote the code while watching your tutorial. When I run the training procedure, I get the following error:

    Traceback (most recent call last):
      File "/home/niko/programs/pycharm-community-2019.2.1/helpers/pydev/pydevd.py", line 1415, in _exec
        pydev_imports.execfile(file, globals, locals)  # execute the script
      File "/home/niko/programs/pycharm-community-2019.2.1/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
        exec(compile(contents+"\n", file, 'exec'), glob, loc)
      File "/home/niko/workspace/pytorch-and-lightning-tutorials/yolo/train_original.py", line 147, in <module>
        main()
      File "/home/niko/workspace/pytorch-and-lightning-tutorials/yolo/train_original.py", line 126, in main
        train_loader, model, iou_threshold=0.5, threshold=0.4
      File "/home/niko/workspace/pytorch-and-lightning-tutorials/yolo/utils.py", line 255, in get_bboxes
        true_bboxes = cellboxes_to_boxes(labels)
      File "/home/niko/workspace/pytorch-and-lightning-tutorials/yolo/utils.py", line 322, in cellboxes_to_boxes
        converted_pred = convert_cellboxes(out).reshape(out.shape[0], S * S, -1)
      File "/home/niko/workspace/pytorch-and-lightning-tutorials/yolo/utils.py", line 315, in convert_cellboxes
        (predicted_class, best_confidence, converted_bboxes), dim=-1
    RuntimeError: Expected object of scalar type Long but got scalar type Float for sequence element 1 in sequence argument at position #1 'tensors'
    

    Then I tried to copy the exact same code from your train.py and dataset.py files, but the error still persisted. I guess __getitem__ in dataset.py should return long instead of float types for the bounding boxes. Do you know what might be causing the error above?
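
    A hedged reading of the traceback: inside convert_cellboxes, the class indices from argmax come out as int64 while the confidences are float32, and torch.cat wants a single dtype; casting the class tensor to float is one plausible fix. A self-contained sketch with made-up values (shapes modeled on the traceback, so treat them as assumptions):

    import torch

    predictions = torch.randn(16, 7, 7, 30)
    predicted_class = predictions[..., :20].argmax(-1).unsqueeze(-1)  # int64
    best_confidence = torch.rand(16, 7, 7, 1)                         # float32
    converted_bboxes = torch.rand(16, 7, 7, 4)

    # torch.cat needs one dtype; casting the int64 classes to float avoids
    # "Expected object of scalar type Long but got scalar type Float":
    out = torch.cat(
        (predicted_class.float(), best_confidence, converted_bboxes), dim=-1
    )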

    opened by nikogamulin 3
  • CNN Architecture implementations in Tensorflow


    Your YT content is awesome!! Especially those CNN-architectures-from-scratch videos. I myself am trying to learn DL, and your videos helped me understand the concepts better when I was reading the academic papers.

    Not very long ago, I started implementing some of the popular CNN architectures with Tensorflow 2.0 in my repo, and I think it would be good to PR those here so the rest can check out both PyTorch and Tensorflow implementations.

    I am not super good with Tensorflow, so if there's something that can be improved, feel free to leave comments.

    I have implemented

    • AlexNet
    • GoogLeNet / Inception V1
    • LeNet5
    • ResNet
    • VGGNet
    opened by the-robot 3
  • error when running code


    When running your code I get this error:

    RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select

    Any tips? Thank you!
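
    For anyone who lands here: this message typically means an index tensor (e.g., the input to an nn.Embedding) is still on the CPU while the model's weights are on the GPU. A minimal self-contained sketch of the mismatch and the usual fix:

    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    embed = nn.Embedding(10, 4).to(device)  # weights on the GPU (if available)
    idx = torch.tensor([1, 2, 3])           # index tensor still on the CPU
    # embed(idx) raises the device-mismatch error on a CUDA machine
    out = embed(idx.to(device))             # moving the indices fixes it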

    opened by javismiles 3
  • Why do you need to slice captions?


    https://github.com/AladdinPerzon/Machine-Learning-Collection/blob/9f3b2a82c0b8b6ba8c16293d8118d8d8c888f8e6/ML/Pytorch/more_advanced/image_captioning/train.py#L82

    Hello, thank you for your version of the image captioning solution! However, one thing is not clear to me. Why would you do that slice? If I understood correctly, captions in that case is a padded batch of captions, so it looks like:

    1 1 1 1 1 2
    1 1 1 2 0 0
    1 1 1 1 2 0

    and if you make the slice [:, :-1] that would be:

    1 1 1 1 1
    1 1 1 2 0
    1 1 1 1 2

    (1 is any token, 2 is <EOS> and 0 is padding)

    So if you want to get rid of <EOS> tokens, that would not work.
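
    For context, a hedged sketch of the usual rationale for the slice, which is teacher forcing rather than stripping padding: the decoder is fed the caption without its final position and trained to predict the caption shifted by one (batch-first layout, matching the grids above):

    import torch

    # 1 = any token, 2 = <EOS>, 0 = <PAD>, as in the example above
    captions = torch.tensor([[1, 1, 1, 1, 1, 2],
                             [1, 1, 1, 2, 0, 0],
                             [1, 1, 1, 1, 2, 0]])

    inputs = captions[:, :-1]   # what the decoder sees at each step
    targets = captions[:, 1:]   # what it learns to predict, shifted by one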

    opened by concrete13377 3
  • Image captioning: all training example output is <UNK>


    When training for image captioning, in the first epoch, the print_examples function returns the following:

    Example 1 CORRECT: Dog on a beach by the ocean
    Example 1 OUTPUT: chasing stores mossy participates player brush museum phone handle drops native punk buried alongside cellphones very bags hairy paintball mouths mats markings volleyball backpacker dressed backpacks legos light bitten various pillow singing attempt superman weather try gnawing ceiling shaped tree someone phone scarf crouching courtyard cows indoors seeds hits hits
    Example 2 CORRECT: Child holding red frisbee outdoors
    Example 2 OUTPUT: chasing stores mossy bushes tags hardwood tulips chin lining gnawing taken tinkerbell both kind cable tile colorfully shepherd dangling skinny cake scene tattooed swimmer beverage come points come 23 wheels puppy scenic ring snake one piggy snowboard camera slightly fireworks nature try gnawing ceiling shaped tree someone phone scarf crouching
    Example 3 CORRECT: Bus driving by parked cars
    Example 3 OUTPUT: trucks each that cheerleader hawk jeeps formal ring skeleton forested various plastic goofy snowmobile dances very wearing seaweed cards kick works baseman past daughter football waterfalls bathroom motorcycle bar bikers phone following kid ring past converse nose nose college wide skyscraper rough holding bending seeds broken kissing follows pouring pouring
    Example 4 CORRECT: A small boat in the ocean
    Example 4 OUTPUT: chasing stores mossy bushes tags hardwood tulips chin lining gnawing taken tinkerbell both kind cable tile colorfully shepherd dangling skinny cake scene tattooed swimmer beverage come points come 23 wheels puppy scenic ring snake one piggy snowboard camera slightly fireworks nature try gnawing ceiling shaped tree someone phone scarf crouching
    Example 5 CORRECT: A cowboy riding a horse in the desert
    Example 5 OUTPUT: avoid windsurfing alongside roof between enjoys dimly artists artists others biting upon holding silhouette ascending apples curve tennis o leaves gives dinner chasing picnic pack ceremony kayak kayak office festive hikes covered visible signs dancing construction construction when hiking pillow foot leotard about all pit between stool ear sports cigarette
    

    However, after the first epoch and onwards, the print_examples function returns:

    Example 1 CORRECT: Dog on a beach by the ocean                                  
    Example 1 OUTPUT: <SOS> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK>
    Example 2 CORRECT: Child holding red frisbee outdoors
    Example 2 OUTPUT: <SOS> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK>
    Example 3 CORRECT: Bus driving by parked cars
    Example 3 OUTPUT: <SOS> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK>
    Example 4 CORRECT: A small boat in the ocean
    Example 4 OUTPUT: <SOS> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK>
    Example 5 CORRECT: A cowboy riding a horse in the desert
    Example 5 OUTPUT: <SOS> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK>
    

    I'm not sure what's going on.
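
    One thing worth checking, offered as a guess rather than a confirmed diagnosis: if the vocabulary's freq_threshold filters out almost every word, nearly all targets become <UNK> and the network can minimize the loss by predicting <UNK> everywhere. A quick sanity check, assuming a Vocabulary class like the tutorial's (names are assumptions):

    # Hypothetical check; vocab.stoi maps strings to indices
    print(len(dataset.vocab.stoi))
    # If this is barely larger than the 4 special tokens (<PAD>, <SOS>,
    # <EOS>, <UNK>), lower freq_threshold when building the vocabulary.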

    opened by tbass134 3
  • Issues with the YOLO V1 Loss Function


    I recently decided to try to make a YOLO V1 implementation as my first serious project, based on your guide, but doing all the pre-training and training of the full model myself. I have succeeded in making a sort-of-working model, though there are probably still some mistakes, as it is not optimal. For reference, my repository is here.

    Doing this led to me noticing some issues with your implementation of the loss function:

    • Your target confidence (the tensor torch.flatten(exists_box * target[..., 20:21])) is going to be 1 for every cell where a box exists, and 0 for every cell where it does not. In fact target[..., 20:21] is the same thing as exists_box. This is not true to the paper, which instead asks that in the case of a responsible predictor, the target confidence should be equal to the IOU of the currently predicted box with the ground-truth box. The correct target tensor is exists_box * iou_maxes.unsqueeze(3) (not tested working, but this is the right idea). There is actually currently an open pull request (#44) which would fix this.
    • Your no-object loss does not factor in non-responsible predictors which share a cell with a responsible predictor, which it should, as the "1_ij^noobj" from the paper will be 1 for these.
    • You set your MSE function with reduction='sum', but then do not normalize for batch size. This means that the loss scales linearly with the batch size, which results in much larger losses (forcing low learning rates), and is also an entanglement of hyperparameters, which is bad. The correct implementation is to calculate the sum-squared error for each sample in the batch independently, then average them. To fix this, replace return loss with return loss / float(predictions.size()[0]) (you will have to use a larger learning rate, but this is a good thing!); see the sketch at the end of this comment.
    • Those flatten layers are totally unnecessary; or rather, they do nothing. Torch's MSE is smart enough to be given any two tensors of the same shape.
    • In dataset.py, you have your width and height target values for each box calculated relative to the cell dimensions: width_cell, height_cell = (width * self.S, height * self.S,). This is incorrect; they are supposed to be relative to the dimensions of the entire image (even though x and y are relative to the dimensions of a cell!). The reason for this, as stated in the paper, is so that each element of [x, y, w, h] will be between 0.0 and 1.0. To fix this, just remove the multiplication by self.S. This will also need to be fixed on the other end when you convert predicted labels back to boxes for visualization. This is really more about the data loading than the loss function, but because it unbalances the loss function it has the same sort of effect: failing to fix this causes mode collapse on object classification when you try to generalize the model.

    Obviously your project is just about overfitting the model, and none of these issues are apparent when attempting to overfit. They do, however, cause serious issues when you try to train the whole thing. If you want to fix them, feel free to refer to my re-implementation of the loss function, which should be compatible with yours but is rewritten to mimic the paper's formula as closely as possible. Do bear in mind, though, that mine evidently isn't perfect either (I can't get my model stable under a 1e-2 learning rate, indicating a probable scaling mistake somewhere).
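
    To make the batch-normalization point from the third bullet concrete, a hedged, self-contained sketch of averaging a sum-reduced loss over the batch (shapes are made up):

    import torch
    import torch.nn as nn

    mse = nn.MSELoss(reduction="sum")
    pred, target = torch.randn(16, 7, 7, 30), torch.randn(16, 7, 7, 30)

    loss = mse(pred, target)
    # Averaging the per-sample sum-squared errors over the batch keeps the
    # loss magnitude (and usable learning rate) independent of batch size:
    loss_per_sample = loss / float(pred.size(0))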

    opened by a-g-moore 0
  • StyleGAN - what's exactly wrong with it?


    I ran the code and it seems to work. Could you please clarify what's wrong with your implementation? Is it too slow, are the generated faces poor, or is it something else?

    opened by moneroexamples 0
  • warning: Embedding dir exists, did you set global_step for add_embedding()?


    I was doing the PyTorch TensorBoard tutorial. While running the pytorch_tensorboard_.py notebook file, I get this:

    warning: Embedding dir exists, did you set global_step for add_embedding()?

    Somebody else has also faced this (possibly in a different setting). I don't understand the cause and effect of this; any help?
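
    For what it's worth, the warning usually appears when add_embedding is called more than once without a global_step, so every call targets the same directory. A hedged sketch with made-up data:

    import torch
    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter("runs/embedding_demo")
    features = torch.randn(100, 784)            # fake flattened images
    labels = [str(i % 10) for i in range(100)]  # fake class labels

    # A distinct global_step per call gives each embedding its own
    # subdirectory, which avoids the "Embedding dir exists" warning.
    writer.add_embedding(features, metadata=labels, global_step=0)
    writer.close()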

    Thanks in advance!

    opened by massisenergy 0
  • Error while training YOLOv3 on COCO dataset


    Training on the PASCAL VOC dataset works fine, but while training on the COCO dataset I get the following error:

    File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\albumentations\core\bbox_utils.py", line 417, in check_bbox raise ValueError(f"Expected {name} for bbox {bbox} to be in the range [0.0, 1.0], got {value}.") ValueError: Expected x_min for bbox (-0.0020920502092049986, 0.09853100000000004, 0.327091949790795, 0.681844, 0.0) to be in the range [0.0, 1.0], got -0.0020920502092049986.

    opened by michalt38 0
  • Query: DCGAN implementation saving results/sampling


    Thank you for the tutorial. I have followed it and coded along in parallel. I have been able to run the model; however, I would like to know how to save and sample from the model so that I can visualize the various versions of it with different architectures. I would appreciate your guidance. Regards, Prabhav

    P.S. It would be great if you could share how to use multiple GPUs in a single node for training.
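
    A hedged sketch of one common way to sample and save images from a trained DCGAN generator; torchvision.utils.save_image is real, while gen, the device, and the noise shape are assumptions:

    import torch
    from torchvision.utils import save_image

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    gen.eval()  # gen: the trained generator, assumed to be in scope
    with torch.no_grad():
        noise = torch.randn(64, 100, 1, 1, device=device)  # z_dim = 100 assumed
        fake = gen(noise)
        save_image(fake, "samples.png", normalize=True)    # writes an 8x8 grid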

    opened by KomputerMaster64 0
Owner
Aladdin Persson
I'm a math geek who likes programming. Particularly interested in machine learning, algorithms and software development.