A resource for learning about ML, DL, PyTorch and TensorFlow. Feedback always appreciated :)

Overview



Machine Learning Collection

In this repository you will find tutorials and projects related to Machine Learning. I try to make the code as clear as possible, and the goal is for it to be used as a learning resource and a way to look up solutions to specific problems. For most of them I have also made video explanations on YouTube if you want a walkthrough of the code. If you have any questions or suggestions for future videos, I prefer that you ask on YouTube. This repository is contribution friendly, so if you feel you want to add something then I'd happily merge a PR 😃

Table Of Contents

Machine Learning

PyTorch Tutorials

If you have any specific video suggestions, please make a comment on YouTube :)

Basics

More Advanced

Object Detection

Object Detection Playlist

Generative Adversarial Networks

GAN Playlist

Architectures

TensorFlow Tutorials

If you have any specific video suggestions, please make a comment on YouTube :)

Beginner Tutorials

CNN Architectures

Comments
  • ProGAN Pretrained weights link is broken!

    When I click to download the pretrained weights I get redirected to https://github.com/aladdinpersson/Machine-Learning-Collection/tree/master/ML/Pytorch/GANs/ProGAN

    opened by extremety1989 11
  • ProGan RuntimeError

    I downloaded the celeba_hq image dataset, modified config.py (DATASET = 'celeba_hq'), and modified train.py (at main(): # import sys, # sys.exit()). Then when I run python train.py I get this error:

    return F.conv_transpose2d(
    RuntimeError: Expected 4-dimensional input for 4-dimensional weight [512, 512, 4, 4], but got 2-dimensional input of size [256, 512] instead

    opened by extremety1989 9
  • Transformer Question, and Request

    Learning PyTorch and love your videos. Your code is so clean and your explanations so crisp.

    Question/Bug?: In SelfAttention you split values, keys, and query by the number of heads, then pass each into a Linear with the same input and output dimension. Why not keep the full dimension (i.e., not split) and let the Linear do the reduction? That would allow the linear layer to learn what to take from the input.

    btw, https://github.com/tunz/transformer-pytorch/blob/master/model/transformer.py, class MultiHeadAttention(nn.Module) does this (if I interpret their code correctly).

    The paper https://arxiv.org/pdf/1706.03762.pdf indicates "learned linear projections to dk, dk and dv dimensions".

    If I'm all wrong, I would love to be corrected, as I'm learning. If I'm right, I would also love to know that I'm starting to understand this stuff.

    Request: I'm starting to understand the power of torch.einsum, but I am sure I am missing a bunch. Can you do a video on this?
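    For illustration, a minimal sketch of the two projection variants being compared (plus a torch.einsum example); shapes and names here are illustrative, not the repo's exact code:

    import torch
    import torch.nn as nn

    embed_size, heads = 256, 8
    head_dim = embed_size // heads
    x = torch.randn(2, 10, embed_size)  # (batch, seq_len, embed_size)

    # Variant A (the tutorial's approach): split into heads first, then apply
    # a per-head Linear of size head_dim -> head_dim.
    per_head = nn.Linear(head_dim, head_dim, bias=False)
    q_a = per_head(x.reshape(2, 10, heads, head_dim))

    # Variant B (as in the linked MultiHeadAttention): one full projection
    # embed_size -> embed_size, then reshape into heads, so the projection can
    # mix information across the whole embedding before the split.
    full = nn.Linear(embed_size, embed_size, bias=False)
    q_b = full(x).reshape(2, 10, heads, head_dim)

    # torch.einsum computes attention scores for all heads in one call:
    energy = torch.einsum("nqhd,nkhd->nhqk", q_b, q_b)
    print(q_a.shape, q_b.shape, energy.shape)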

    Regards, John

    opened by johngrabner 4
  • Inside your Seq2Seq (Transformers), Observe the parameter forward_expansion.

    You've defined forward_expansion as 4,

    See the implementation of the FeedForward network at the end of encoders & decoders

    So, putting your variable in the code, it will look like:

    self.linear1 = Linear(d_model,  4, **factory_kwargs)
    self.dropout = Dropout(dropout)
    self.linear2 = Linear(4, d_model, **factory_kwargs)
    

    where d_model = 512 (emb_size)

    Observe PyTorch's official implementation for more clarity.

    I suggest you change that variable to something bigger, like 512, 1024 or 2048. I am surprised that even after using 4 as dim_feedforward (or what you call forward_expansion), you're getting great results.
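    For reference, a minimal sketch of the same block with a conventional hidden size (dim_feedforward = 2048, as in the paper; factory_kwargs omitted for brevity):

    import torch.nn as nn

    d_model, dim_feedforward, dropout = 512, 2048, 0.1

    # Position-wise feed-forward network: expand to 2048, then project back.
    linear1 = nn.Linear(d_model, dim_feedforward)
    dropout_layer = nn.Dropout(dropout)
    linear2 = nn.Linear(dim_feedforward, d_model)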

    opened by KrishPro 3
  • Getting error while executing semantic segmentation w. UNET in PyTorch

    Hi, I watched your recent tutorial on semantic segmentation with PyTorch. Being new to PyTorch I was looking for a tutorial with good explanations, especially for the segmentation module, and your tutorial came as a great help. I tried to implement your approach on a UNet network for segmentation on Google Colab, but I am getting an error. I tried to fix it but no luck. Can you please help me fix the error? The error I am getting is:


    TypeError                                 Traceback (most recent call last)
    <ipython-input> in <module>()
         85 
         86 if __name__ == "__main__":
    ---> 87     main()

    7 frames
    <ipython-input> in main()
         67 
         68     for epoch in range(Num_epochs):
    ---> 69         train_fn(train_loader, model, optimizer, loss_fn, scaler)
         70 
         71 

    <ipython-input> in train_fn(loader, model, optimizer, loss_fn, scaler)
          2     loop = tqdm(loader)
          3 
    ----> 4     for batch_idx, (data, targets) in enumerate(loop):
          5         data = data.to(device=device)
          6         targets = targets.float().unsqueeze(1).to(device=device)

    /usr/local/lib/python3.6/dist-packages/tqdm/std.py in __iter__(self)
       1102                 fp_write=getattr(self.fp, 'write', sys.stderr.write))
       1103 
    -> 1104         for obj in iterable:
       1105             yield obj
       1106             # Update and possibly print the progressbar.

    /usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __next__(self)
        433         if self._sampler_iter is None:
        434             self._reset()
    --> 435         data = self._next_data()
        436         self._num_yielded += 1
        437         if self._dataset_kind == _DatasetKind.Iterable and \

    /usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _next_data(self)
        473     def _next_data(self):
        474         index = self._next_index()  # may raise StopIteration
    --> 475         data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
        476         if self._pin_memory:
        477             data = _utils.pin_memory.pin_memory(data)

    /usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
         42     def fetch(self, possibly_batched_index):
         43         if self.auto_collation:
    ---> 44             data = [self.dataset[idx] for idx in possibly_batched_index]
         45         else:
         46             data = self.dataset[possibly_batched_index]

    /usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)
         42     def fetch(self, possibly_batched_index):
         43         if self.auto_collation:
    ---> 44             data = [self.dataset[idx] for idx in possibly_batched_index]
         45         else:
         46             data = self.dataset[possibly_batched_index]

    <ipython-input> in __getitem__(self, index)
         17 
         18         if self.transform is not None:
    ---> 19             augmentations = self.transform(image=image, mask=mask)
         20             image = augmentations["image"]
         21             mask = augmentations["mask"]

    TypeError: 'int' object is not callable
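    For reference, the TypeError at the self.transform(image=..., mask=...) call means self.transform is not callable; in this tutorial it is expected to be an albumentations pipeline. A minimal sketch of the intended usage (sizes and values are illustrative):

    import numpy as np
    import albumentations as A
    from albumentations.pytorch import ToTensorV2

    image = np.zeros((160, 240, 3), dtype=np.uint8)
    mask = np.zeros((160, 240), dtype=np.uint8)

    # self.transform should be a composed pipeline like this, not an int:
    transform = A.Compose([
        A.Resize(height=160, width=240),
        A.HorizontalFlip(p=0.5),
        ToTensorV2(),
    ])
    augmentations = transform(image=image, mask=mask)
    image, mask = augmentations["image"], augmentations["mask"]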

    opened by sautami26 3
  • Model overfitting for 20 classes ( PASCAL VOC 2007 + 2012 dataset )

    Hi Aladdin, thank you so much for your video and explanations.
    I am currently doing a project on object detection, and your video helped me a lot. Thank you once again.

    I have a problem with overfitting in the model: I am getting a test mAP of 10% and a train mAP of 90%. I trained on PASCAL VOC 2007 + VOC 2012 data. I have tried every way I could think of to reduce the overfitting (dropout layer, weight decay, adding 5k more images, data augmentation, using pretrained extraction weights, step LR, etc.), keeping everything as close as possible to the original paper. It's been a month now and I am still not able to figure out why. Could you please help me? (I have used your code for everything.) It would be a great help if you could suggest something with respect to your code.

    P.S.: I used the same code modified for 2 classes and 5 classes and got good results. 2 classes: test mAP 50%; 5 classes: test mAP 60%.

    opened by 100daggers 3
  • Expected object of scalar type Long but got scalar type Float for sequence element 1 in sequence argument at position #1 'tensors'

    Hi,

    I rewrote the code along with watching your tutorial. When I run the training procedure, I get the following error:

    Traceback (most recent call last):
      File "/home/niko/programs/pycharm-community-2019.2.1/helpers/pydev/pydevd.py", line 1415, in _exec
        pydev_imports.execfile(file, globals, locals)  # execute the script
      File "/home/niko/programs/pycharm-community-2019.2.1/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
        exec(compile(contents+"\n", file, 'exec'), glob, loc)
      File "/home/niko/workspace/pytorch-and-lightning-tutorials/yolo/train_original.py", line 147, in <module>
        main()
      File "/home/niko/workspace/pytorch-and-lightning-tutorials/yolo/train_original.py", line 126, in main
        train_loader, model, iou_threshold=0.5, threshold=0.4
      File "/home/niko/workspace/pytorch-and-lightning-tutorials/yolo/utils.py", line 255, in get_bboxes
        true_bboxes = cellboxes_to_boxes(labels)
      File "/home/niko/workspace/pytorch-and-lightning-tutorials/yolo/utils.py", line 322, in cellboxes_to_boxes
        converted_pred = convert_cellboxes(out).reshape(out.shape[0], S * S, -1)
      File "/home/niko/workspace/pytorch-and-lightning-tutorials/yolo/utils.py", line 315, in convert_cellboxes
        (predicted_class, best_confidence, converted_bboxes), dim=-1
    RuntimeError: Expected object of scalar type Long but got scalar type Float for sequence element 1 in sequence argument at position #1 'tensors'
    

    Then I tried to copy the exact same code from your train.py and dataset.py files, but the error still persisted. I guess __getitem__ in dataset.py should return long instead of float types for bounding boxes. Do you know what might be the cause of the error above?
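    For reference, torch.cat refuses to mix dtypes, and argmax produces int64 while the confidences and boxes are float32; a minimal sketch of the dtype fix (dummy tensors standing in for the tutorial's convert_cellboxes values):

    import torch

    predicted_class = torch.tensor([[3], [7]])        # int64, as from argmax
    best_confidence = torch.tensor([[0.9], [0.8]])    # float32
    converted_bboxes = torch.rand(2, 4)               # float32

    # Cast the class indices to float so all elements share a dtype:
    converted = torch.cat(
        (predicted_class.float(), best_confidence, converted_bboxes), dim=-1
    )
    print(converted.dtype)  # torch.float32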

    opened by nikogamulin 3
  • CNN Architecture implementations in Tensorflow

    Your YT contents are awesome!! Especially the CNN-architectures-from-scratch videos. I myself am trying to learn DL, and your videos helped me understand the concepts better when I was reading the academic papers.

    Not very long ago, I started implementing some of the popular CNN architectures with TensorFlow 2.0 in my repo, and I think it would be good to PR those here so the rest can check out both PyTorch and TensorFlow implementations.

    I am not super good with TensorFlow, so if there's something that can be improved, feel free to comment.

    I have implemented

    • AlexNet
    • GoogLeNet / Inception V1
    • LeNet5
    • ResNet
    • VGGNet
    opened by the-robot 3
  • error when running code

    When running your code I get this error:

    RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select

    Any tips? Thank you.
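    For reference, this error typically means an index tensor (for example the input to an nn.Embedding lookup, which uses index_select internally) stayed on the CPU while the model's weights moved to the GPU; a minimal sketch of the usual fix (names are illustrative):

    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    embedding = nn.Embedding(100, 32).to(device)

    tokens = torch.randint(0, 100, (4, 10))  # created on the CPU
    out = embedding(tokens.to(device))       # move indices to the same device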

    opened by javismiles 3
  • Why do you need to slice captions?

    https://github.com/AladdinPerzon/Machine-Learning-Collection/blob/9f3b2a82c0b8b6ba8c16293d8118d8d8c888f8e6/ML/Pytorch/more_advanced/image_captioning/train.py#L82

    Hello, thank you for your version of the image captioning solution! However, one thing is not clear to me: why would you do that slice? If I understood correctly, captions in that case is a padded batch of captions, so it looks like:

    1 1 1 1 1 2
    1 1 1 2 0 0
    1 1 1 1 2 0

    and if you make the slice [:, :-1] that would be:

    1 1 1 1 1
    1 1 1 2 0
    1 1 1 1 2

    (1 is any token, 2 is <EOS> and 0 is padding)

    So if you want to get rid of <EOS> tokens, that would not work.
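    For illustration, a minimal PyTorch sketch of this point (token values as in the example above):

    import torch

    # A padded batch of 3 captions (1 = word, 2 = <EOS>, 0 = <PAD>)
    captions = torch.tensor([
        [1, 1, 1, 1, 1, 2],
        [1, 1, 1, 2, 0, 0],
        [1, 1, 1, 1, 2, 0],
    ])

    # Slicing off the last column only drops <EOS> for the longest caption;
    # the shorter ones lose a <PAD> (or keep their <EOS>) instead.
    print(captions[:, :-1])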

    opened by concrete13377 3
  • Image captioning: all training example output is <UNK>

    When training for image captioning, in the first epoch, the print_examples function returns the following:

    Example 1 CORRECT: Dog on a beach by the ocean
    Example 1 OUTPUT: chasing stores mossy participates player brush museum phone handle drops native punk buried alongside cellphones very bags hairy paintball mouths mats markings volleyball backpacker dressed backpacks legos light bitten various pillow singing attempt superman weather try gnawing ceiling shaped tree someone phone scarf crouching courtyard cows indoors seeds hits hits
    Example 2 CORRECT: Child holding red frisbee outdoors
    Example 2 OUTPUT: chasing stores mossy bushes tags hardwood tulips chin lining gnawing taken tinkerbell both kind cable tile colorfully shepherd dangling skinny cake scene tattooed swimmer beverage come points come 23 wheels puppy scenic ring snake one piggy snowboard camera slightly fireworks nature try gnawing ceiling shaped tree someone phone scarf crouching
    Example 3 CORRECT: Bus driving by parked cars
    Example 3 OUTPUT: trucks each that cheerleader hawk jeeps formal ring skeleton forested various plastic goofy snowmobile dances very wearing seaweed cards kick works baseman past daughter football waterfalls bathroom motorcycle bar bikers phone following kid ring past converse nose nose college wide skyscraper rough holding bending seeds broken kissing follows pouring pouring
    Example 4 CORRECT: A small boat in the ocean
    Example 4 OUTPUT: chasing stores mossy bushes tags hardwood tulips chin lining gnawing taken tinkerbell both kind cable tile colorfully shepherd dangling skinny cake scene tattooed swimmer beverage come points come 23 wheels puppy scenic ring snake one piggy snowboard camera slightly fireworks nature try gnawing ceiling shaped tree someone phone scarf crouching
    Example 5 CORRECT: A cowboy riding a horse in the desert
    Example 5 OUTPUT: avoid windsurfing alongside roof between enjoys dimly artists artists others biting upon holding silhouette ascending apples curve tennis o leaves gives dinner chasing picnic pack ceremony kayak kayak office festive hikes covered visible signs dancing construction construction when hiking pillow foot leotard about all pit between stool ear sports cigarette
    

    however, after the first epoch and later, the print_examples function returns:

    Example 1 CORRECT: Dog on a beach by the ocean                                  
    Example 1 OUTPUT: <SOS> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK>
    Example 2 CORRECT: Child holding red frisbee outdoors
    Example 2 OUTPUT: <SOS> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK>
    Example 3 CORRECT: Bus driving by parked cars
    Example 3 OUTPUT: <SOS> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK>
    Example 4 CORRECT: A small boat in the ocean
    Example 4 OUTPUT: <SOS> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK>
    Example 5 CORRECT: A cowboy riding a horse in the desert
    Example 5 OUTPUT: <SOS> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK>
    

    I'm not sure what's going on.

    opened by tbass134 3
  • Issues with the YOLO V1 Loss Function

    I recently decided to try to make a YOLO V1 implementation as my first serious project, based on your guide, but doing all the pre-training and training of the full model myself. I have succeeded in making a sort-of-working model, though there are probably still some mistakes, as it is not optimal. For reference, my repository is here.

    Doing this led me to notice some issues with your implementation of the loss function:

    • Your target confidence (the tensor torch.flatten(exists_box * target[..., 20:21])) is going to be 1 for every cell where a box exists, and 0 for every cell where one does not. In fact target[..., 20:21] is the same thing as exists_box. This is not true to the paper, which instead asks that, in the case of a responsible predictor, the target confidence be equal to the IOU of the currently predicted box with the ground truth box. The correct target tensor is exists_box * iou_maxes.unsqueeze(3) (not tested working, but this is the right idea; see the sketch after this list). There is actually currently an open pull request (#44 ) which would fix this.
    • Your no-object loss does not factor in non-responsible predictors which share a cell with a responsible predictor, which it should, as the "1_ij^noobj" from the paper will be 1 for these.
    • You set your MSE function with reduction='sum', but then do not normalize for batch size. This means that the loss scales linearly with the batch size, which results in much larger losses (forcing low learning rates), and is also an entanglement of hyperparameters, which is bad. The correct implementation is to calculate sum-squared error for each sample in the batch independently, then average them. To fix this, replace return loss with return loss / float(predictions.size()[0]) (you will have to use a larger learning rate, but this is a good thing!).
    • Those flatten layers are totally unnecessary, or rather, they do nothing. Torch's MSE is smart enough to be given any two tensors of the same shape.
    • In dataset.py, you have your width and height target values for each box calculated relative to the cell dimensions: width_cell, height_cell = (width * self.S, height * self.S,) This is incorrect; they are supposed to be relative to the dimensions of the entire image (even though x and y are relative to the dimensions of a cell!). The reason for this, as stated in the paper, is so that each element of [x,y,w,h] will be between 0.0 and 1.0. To fix this, just remove the multiplication by self.S. This will also need to be fixed on the other end when you convert predicted labels back to boxes for visualization. This is really more about the dataloading than the loss function, but because it unbalances the loss function it has the same sort of effect: failing to fix this causes mode collapse on object classification when you try to generalize the model.
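    A minimal sketch of the first and third fixes (hypothetical names mirroring the tutorial's loss code; not tested against the repo):

    import torch

    batch, S = 16, 7
    exists_box = torch.randint(0, 2, (batch, S, S, 1)).float()  # 1_i^obj
    iou_maxes = torch.rand(batch, S, S)  # IOU of the responsible predictor
    pred_conf = torch.rand(batch, S, S, 1)

    # Fix 1: the confidence target is the IOU with the ground truth box,
    # not a constant 1, for cells that contain an object.
    target_conf = exists_box * iou_maxes.unsqueeze(3)

    # Fix 3: sum-squared error per sample, averaged over the batch, so the
    # loss (and the usable learning rate) no longer scales with batch size.
    loss = ((exists_box * pred_conf - target_conf) ** 2).sum()
    loss = loss / float(pred_conf.size(0))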

    Obviously your project is just about overfitting the model, and none of these issues are apparent when attempting to overfit. They do, however, cause serious issues when you try to train the whole thing. If you want to fix it, feel free to refer to my re-implementation of the loss function, which should be compatible with yours but is rewritten to mimic the paper's formula as closely as possible. Do bear in mind, though, that mine evidently isn't perfect either (I can't get my model stable under a 1e-2 learning rate, which indicates a probable scaling mistake somewhere).

    opened by a-g-moore 0
  • StyleGAN - what's exactly wrong with it?

    I ran the code and it seems to work. Could you please clarify what exactly is wrong with your implementation? Is it too slow, are the generated faces poor, or is it something else?

    opened by moneroexamples 0
  • warning: Embedding dir exists, did you set global_step for add_embedding()?

    I was doing the PyTorch TensorBoard tutorial. While running the pytorch_tensorboard_.py file, I get this:

    warning: Embedding dir exists, did you set global_step for add_embedding()?

    Somebody else also faced this (possibly in a different context). I don't understand the cause and effect of this; any help?
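    For reference, SummaryWriter.add_embedding accepts a global_step argument; giving each call a distinct step writes each embedding to its own subdirectory, which avoids the warning. A minimal sketch (tensor shapes are illustrative):

    import torch
    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter("runs/mnist")
    features = torch.randn(100, 784)            # flattened images
    labels = [str(i % 10) for i in range(100)]  # class names per point
    images = torch.randn(100, 1, 28, 28)        # thumbnails for the projector

    # A distinct global_step per call avoids reusing the same embedding dir:
    writer.add_embedding(features, metadata=labels, label_img=images,
                         global_step=0)
    writer.close()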

    Thanks in advance!

    opened by massisenergy 0
  • Error while training YOLOv3 on COCO dataset

    Training on the PASCAL VOC dataset works fine, but while training on the COCO dataset I get the following error:

    File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\albumentations\core\bbox_utils.py", line 417, in check_bbox raise ValueError(f"Expected {name} for bbox {bbox} to be in the range [0.0, 1.0], got {value}.") ValueError: Expected x_min for bbox (-0.0020920502092049986, 0.09853100000000004, 0.327091949790795, 0.681844, 0.0) to be in the range [0.0, 1.0], got -0.0020920502092049986.

    opened by michalt38 0
  • Query: DCGAN implementation saving results/sampling

    Thank you for the tutorial. I have followed it and coded along. I have been able to run the model; however, I would like to know how to save/sample from the model so that I can visualize the various versions of the model with different architectures. I would appreciate your guidance. Regards, Prabhav
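    For reference, a minimal sketch for sampling and saving an image grid with torchvision (the one-layer gen below is just a stand-in for the tutorial's trained DCGAN generator):

    import torch
    import torchvision

    Z_DIM = 100
    gen = torch.nn.ConvTranspose2d(Z_DIM, 3, kernel_size=64)  # stand-in generator
    fixed_noise = torch.randn(32, Z_DIM, 1, 1)

    with torch.no_grad():
        fake = gen(fixed_noise)
        # Arrange the batch into a grid and write it to disk; normalize=True
        # maps the tanh output range [-1, 1] into [0, 1] for saving.
        torchvision.utils.save_image(fake, "samples.png", normalize=True)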

    P.S. It would be great if you could share how to use multiple GPUs on a single node for training.

    opened by KomputerMaster64 0
Owner
Aladdin Persson
I'm a math geek who likes programming. Particularly interested in machine learning, algorithms and software development.