salabim - discrete event simulation in Python

Overview

Logo

Object oriented discrete event simulation and animation in Python.

Includes process control features, resources, queues, monitors and statistical distributions.

Powerful and high-quality animation facilities, which can be kept virtually separate from the model code.

Salabim follows a well-proven and very intuitive process description method (like Tomas and Must). The package provides animation, queues, 'states', monitors for data collection and presentation, tracing and statistical distributions.

See www.salabim.org for details.

See www.salabim.org/manual for the documentation.


Comments
  • `Queue.length` etc. should be `State` (or `wait()`-able)

    Hi,

    It would be quite handy if I could use Queue.length in a wait statement, without wrapping it in a State object. E.g. something like

    class Item(sim.Component):
        pass
    
    
    class A(sim.Component):
        def setup(self):
            self.q = sim.Queue()
    
        def process(self):
            Item().enter(self.q)
            yield self.hold(10.)
    
    
    class C(sim.Component):
        def setup(self):
            self.a = A()
    
        def process(self):
            yield self.wait(self.a.q.length == 20)  # proposed: wait directly on the queue length
            print("waited for 20 items in queue")
    
    opened by hildensia 6
  • Clarify contract for interaction with interrupted component

    Consider the following example

    import salabim as sim
    
    
    class Customer(sim.Component):
        def process(self):
            # yield self.hold(0.1) # not needed to reproduce the problem, but can be reproduced if there is a yield in the process
            print("huhu")
    
    
    env = sim.Environment(trace=True)
    
    c = Customer()
    
    c.hold(5)
    
    env.run(1)
    
    c.interrupt()
    
    env.run(1)
    
    ## this does not fail, but reschedules the component despite its interrupt status without resetting the latter
    c.hold(1)
    
    env.run(1)
    
    c.resume()
    

    This fails at the last line when trying to resume the component with ValueError: customer.0 not interrupted.

    Traceback (most recent call last):
      File "C:/brandl_data/projects/salamim_repo/mytests/hold_interrupt.py", line 27, in <module>
        c.resume()
      File "C:\brandl_data\projects\\salamim_repo\salabim.py", line 12198, in resume
        raise ValueError(self.name() + " not interrupted")
    ValueError: customer.0 not interrupted
    

    When checking the internal state of the component at the last statement, we can see that it has preserved the interrupt status (see attached screenshot).

    It should handle such a situation more gracefully:

    a) either it should fail already when interacting with an interrupted component (e.g. via hold, as in the example), or
    b) an interaction with an interrupted component should clear its interrupt status.

    Maybe there is an even better solution, but in any case it would be wonderful if the contract of interrupt could be spelled out in more detail.
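
    Until the contract is pinned down, a defensive pattern on the caller side is to test the component's status before interacting with it again; a minimal sketch based on the example above (assumption: the isinterrupted() status test is available on Component):

    import salabim as sim


    class Customer(sim.Component):
        def process(self):
            yield self.hold(0.1)
            print("huhu")


    env = sim.Environment(trace=True)
    c = Customer()

    c.hold(5)
    env.run(1)

    c.interrupt()
    env.run(1)

    # resume first (clearing the interrupt status), then reschedule the component
    if c.isinterrupted():   # assumption: status test exists
        c.resume()
    c.hold(1)
    env.run(1)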

    opened by holgerbrandl 6
  • Tagged release

    Could you add a git or GitHub tag for your latest release (version 2.2.23)? It can be done on this page: https://github.com/salabim/salabim/releases

    The tagged release is needed for openjournals/joss-reviews#767

    opened by gonsie 4
  • Multiple put requests for different anonymous resources coming from the same component

    I observed today in the trace that if a component triggers two put requests to different anonymous resources, one of them will always fail. Analyzing the code, I could identify a check which prevents the correct behavior.

    def honor_all(self):
        for r in self._requests:
            if r._honor_only_first and r._requesters[0] != self:
                return []
            self_prio = self.priority(r._requesters)
            if r._honor_only_highest_priority and self_prio != r._requesters._head.successor.priority:
                return []
            if self._requests[r] > 0:
                if self._requests[r] > (r._capacity - r._claimed_quantity + 1e-8):
                    return []
            else:
                if -self._requests[r] > r._claimed_quantity + 1e-8:
                    return []
        return list(self._requests.keys())
    

    Maybe it is a numerical issue, but commenting out the check `if -self._requests[r] > r._claimed_quantity + 1e-8` fixes the issue.

    After that the trace will contain the following lines (R31 is the component, p31 and p32 are anonymous resources).

    R31 put (request) 2.0 to p31 priority=inf
    276 R31 claim -2.0 from p31
    276 R31 request honor p31 scheduled for 0.300 @ 276+ mode=output
    279 R31 put (request) 8.0 to p32 priority=inf
    279 R31 claim -8.0 from p32
    279 R31 request honor p32 scheduled for 0.300 @ 279+ mode=output

    opened by PhilippWillms 3
  • Q: Log of actual start and end time within simulation run on component level

    Next question coming up: multiple components may be scheduled at a common point in simulation time (e.g. t=0, which would result in at=0), but due to custom business logic in the simulation, the actual schedule comes later, e.g. t=1. Currently I only see that information in the trace, which is rather uncomfortable when creating graphics based on the actual data of a simulation run, e.g. a Gantt chart.

    Is there any possibility to log actual start/end time (schedule - hold - release) on component level? Rebuilding it from the strings in the trace is quite an effort actually ...

    opened by PhilippWillms 2
  • Sample Code Discrepancy

    @salabim, the explanation of simple bank example 1 in the documentation mentions a couple of calls that do not show up in the sample code, including self.leave() and clerk.reactivate(). This is in regard to https://github.com/openjournals/joss-reviews/issues/767

    opened by pspringer 2
  • Details in the Readme

    It would be nice for usage and/or installation instructions to be in the readme. It would also be great to have a link directly to the Read the Docs documentation.

    These are nice-to-haves for openjournals/joss-reviews#767

    opened by gonsie 2
  • Repository organization

    Hi,

    The files in this repository need to be organized. A typical repository has the following directories:

    • src/ directory for source code
    • doc/ directory for documentation
    • test/ directory for tests

    You may also want to set this Python project up as a module; there are many guides online.

    This organization is needed for openjournals/joss-reviews#767

    opened by gonsie 2
  • Label line display in AnimateMonitor not scaled properly

    When using labels with AnimateMonitor, I found that the lines drawn in the graph for the labels are not scaled properly. This affects lines 3171-3182 in salabim.py.

                        self.aos.append(
                            AnimateLine(
                                spec=(0, 0, width, 0),
                                x=x,
                                y=y,
                                offsetx=offsetx,
                                offsety=label_y,
                                angle=angle,
                                linewidth=label_linewidth,
                                linecolor=label_linecolor,
                                over3d=over3d,
                        )
                    )
    

    Comparing it to the code for drawing the text label, I see that screen_coordinates is not explicitly set to True here. Could it be related to that?

    opened by PhilippWillms 1
  • AnimateText shows the same text when called in a loop

    When I use AnimateText like this in a loop to show each item in a list, the animation shows the same value for all of them, despite them being at different locations:

    a = [1,2,3,4]
    for i in range(len(a)):
        sim.AnimateText(text=lambda:str(a[i]),x=0, y=100+20*i)
    

    The screen only shows four "4"s, as opposed to 1, 2, 3, 4.

    However it works fine when I call them individually:

        sim.AnimateText(text=lambda:str(a[0]),x=0, y=100+20*0)
        sim.AnimateText(text=lambda:str(a[1]),x=0, y=100+20*1)
        sim.AnimateText(text=lambda:str(a[2]),x=0, y=100+20*2)
        sim.AnimateText(text=lambda:str(a[3]),x=0, y=100+20*3)
    

    which could get quite cumbersome and ugly if the list is large.

    Can anyone help with this? Thanks!
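
    This looks like Python's usual late binding of loop variables in lambdas rather than anything salabim-specific: all four lambdas refer to the same i, which ends up as its final value. A minimal sketch of a common workaround, capturing the index in a small factory function:

    import salabim as sim

    env = sim.Environment()

    a = [1, 2, 3, 4]

    def make_text(i):
        # the factory call captures the current value of i, so each lambda gets its own index
        return lambda: str(a[i])

    for i in range(len(a)):
        sim.AnimateText(text=make_text(i), x=0, y=100 + 20 * i)

    env.animate(True)
    env.run(10)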

    opened by ericlian1 1
  • Delaying issuance of a resource, even when sufficient resources are available

    Hello - I love the project!

    I'm playing around with it and slowly building up a simulation, trying to add features as I go. I'm stuck trying to add a behavior and I haven't been able to find the answer in the docs - I'm wondering whether what I'm trying to do is possible.

    I want to add in a delay when issuing a resource to my customers. This delay will be an attribute I can parameterize. My desired behavior is:

    • [X] Customers request the resource and enter the resource's queue
    • [X] If the resource's available quantity is less than the amount the customer at the head of the line requests, customers wait until sufficient resources arrive
    • [ ] If/when sufficient resources are available, I still want to delay the issuance, representing a lengthy issuance process (example: if you are waiting for a heart transplant, the heart doesn't instantly appear in your chest as soon as it arrives; the surgery takes time)

    Right now, I'm accomplishing the delay as follows:

    import salabim as sim
    from operator import itemgetter
    
    
    class Customer(sim.Component):
        def __init__(self, config={}):
            sim.Component.__init__(self)
            self.config = config
    
        def process(self):
            # Destructure the config dict
            _, store, n_consumed = itemgetter(
                'gen_dist', 'store', 'n_consumed')(self.config)
    
            # Hold for the clerk's issue time, then enter the queue for resources at the store
            yield self.hold(store.config.get('clerk').issue_time)
            yield self.request((store.config.get('resource'), n_consumed))
            yield self.passivate()
    

    Generated customers hold for the clerk's issue_time, then request the resource. This partially works; however, my "length of stay in requesters" statistics are invalid, since the issuance delay doesn't count against the time they are waiting for the resource.

    Can someone point me to what I'm missing in the docs or otherwise propose a solution?

    Thank you!
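
    One way to keep the statistics meaningful is to swap the order (request first, hold for the issuance after the claim is honored) and tally the full arrival-to-issuance time into a dedicated Monitor instead of relying on the requesters' built-in length of stay. A minimal, self-contained sketch (the names resource, issue_time and time_to_issue are made up for illustration):

    import salabim as sim


    class Customer(sim.Component):
        def process(self):
            arrival = env.now()
            yield self.request((resource, 2))          # queue for the goods first
            yield self.hold(issue_time)                # lengthy issuance, after the claim is honored
            time_to_issue.tally(env.now() - arrival)   # full time from arrival until issuance is complete
            self.release()


    env = sim.Environment()
    resource = sim.Resource("resource", capacity=2)
    issue_time = 3.0
    time_to_issue = sim.Monitor(name="time to issue")

    for _ in range(3):
        Customer()

    env.run(50)
    time_to_issue.print_statistics()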

    opened by twgardner2 1
  • Extend class Queue with method available_capacity

    In some of my models I rely on the number of free spaces in a queue. Having this number available directly saves time and makes models more readable. It's also generic enough that I expect it to be useful for other Salabim simmers.

    opened by tcdejong 0
  • problem loading .obj files

    sim.Animate3dObj("12281_Container_v2_L2", x=lambda t: t, y=-40, y_translate=90, x_scale=0.1, y_scale=0.1, z_scale=0.1)

    produces the following error (preceded by the warning "Could not set COM MTA mode. Unexpected behavior may occur."):

    File E:\Anaconda3\lib\site-packages\pywavefront\visualization.py:48, in <module>
         43     return v
         45 pyglet.image._nearest_pow2 = same
         47 VERTEX_FORMATS = {
    ---> 48     'V3F': GL_V3F,
         49     'C3F_V3F': GL_C3F_V3F,
         50     'N3F_V3F': GL_N3F_V3F,
         51     'T2F_V3F': GL_T2F_V3F,
         52     # 'C3F_N3F_V3F': GL_C3F_N3F_V3F,  # Unsupported
         53     'T2F_C3F_V3F': GL_T2F_C3F_V3F,
         54     'T2F_N3F_V3F': GL_T2F_N3F_V3F,
         55     # 'T2F_C3F_N3F_V3F': GL_T2F_C3F_N3F_V3F,  # Unsupported
         56 }
         59 def draw(instance, lighting_enabled=True, textures_enabled=True):
         60     """Generic draw function"""

    NameError: name 'GL_V3F' is not defined

    How do I properly add an OBJ file?

    opened by ggblake 0
  • AnimatePolygon does not close the shape that is defined by spec

    AnimatePolygon should close an unclosed polygon for drawing purposes.

    sim.AnimatePolygon(
        spec=[
            *(-75.0, -75.0),  # sw
            *(75.0, -75.0),   # se
            *(75.0, 75.0),    # ne
            *(-75.0, 75.0)    # nw
        ],
        linewidth=5,
        ...
    )
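
    A possible workaround until the shape is closed automatically is to repeat the first point at the end of spec; a minimal sketch (x/y position chosen arbitrarily for illustration):

    import salabim as sim

    env = sim.Environment()

    sim.AnimatePolygon(
        spec=[
            *(-75.0, -75.0),  # sw
            *(75.0, -75.0),   # se
            *(75.0, 75.0),    # ne
            *(-75.0, 75.0),   # nw
            *(-75.0, -75.0),  # repeat the first point so the outline is drawn closed
        ],
        x=400,
        y=300,
        linewidth=5,
    )

    env.animate(True)
    env.run(10)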
    
    opened by citrusvanilla 1
Releases (v2.3.0)
  • v2.3.0 (Jun 18, 2018)

    version 2.3.0 2018-06-28

    New functionality

    As of this version, animation is more powerful and easier to use. Although the old-style Animate class is still available, it is recommended to use the new-style classes.

    The documentation is not yet completely up-to-date. Please read these release notes carefully to get more information.

    All the docstrings (and therefore the reference section of the manual) are however up-to-date. It is planned to publish a number of tutorial videos or guides, both for basic and advanced animation.

    To visualize rectangles, lines, points, polygons, texts, circles and images, salabim offers the new classes:

    • AnimateCircle
    • AnimateImage
    • AnimateLine
    • AnimatePoints
    • AnimatePolygon
    • AnimateRectangle
    • AnimateText

    The main difference with the Animate class is that no automatic linear interpolation over time is supported. However, each of the characteristics may still be changed over time easily. All visualizations (apart from AnimateText) have an attached text field that will be displayed relative to the shape. Thus, for instance, it is possible to say vis = sim.AnimateRectangle(spec=(100, 100, 300, 50), text='some text') and a rectangle with the text 'some text' in the middle will be displayed. In contrast to Animate, updating any of the specifying fields does not require the update method, but can be done directly: in the above example you can just say vis.text = 'yet another text' or vis.x = 100.

    One of the key features of this new visualization is that all the specifying fields can now be functions or methods. This makes it possible to automatically update fields, e.g. vis = sim.AnimateText(lambda: 'mean of histogram = ' + str(hist1.mean()), x=100, y=100), which will show and keep updating the current mean of the histogram, or vis = sim.AnimateRectangle(spec=(0, 0, 60, 20), x=100, y=lambda t: t + 10), which results in a rectangle moving from bottom to top. The animation_objects method of Component now accepts any of the new visualization class instances as well as Animate instances.
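
    A compact sketch that puts the snippets above together (coordinates are arbitrary; the hist1 example is left out):

    import salabim as sim

    env = sim.Environment()

    # static rectangle with an attached, centered text
    vis = sim.AnimateRectangle(spec=(100, 100, 300, 50), text='some text')
    vis.text = 'yet another text'   # fields can be updated directly, no update() needed
    vis.x = 100

    # dynamic attribute: y is a function of the simulation time t,
    # so this rectangle moves from bottom to top
    moving = sim.AnimateRectangle(spec=(0, 0, 60, 20), x=100, y=lambda t: t + 10)

    env.animate(True)
    env.run(200)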

    Animation of queues is now specified with the class AnimateQueue, although Queue.animate() is still supported. One queue can now be animated in several ways, whereas previously one queue could be animated only once. See Demo queue animation.py for an example. It is possible to restrict the number of components shown (max_length). It is also possible to change all the parameters of the queue animation and the shown components dynamically. See for instance Elevator animated.py, where the queue position moves up and down, or Machine shop animated.py, where the shape of the components changes dynamically. Internally, the animation of queues uses a new, more efficient algorithm.

    Most examples have been updated to use this new visualization functionality.

    Texts can now span multiple lines (lines separated by linefeeds). Alternatively, a list or tuple of strings may be used, in which case each element of the list/tuple is treated as a separate line. This is particularly useful to present (dynamic) monitor values. With AnimateText, it is possible to restrict the number of lines shown (parameter max_lines).
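
    For example (a sketch; the coordinates are arbitrary and the exact behavior of max_lines is assumed from the description above):

    import salabim as sim

    env = sim.Environment()

    # a tuple of strings: each element is treated as a separate line
    sim.AnimateText(text=('line 1', 'line 2', 'line 3', 'line 4'),
                    x=100, y=600, max_lines=3)   # restrict the display to 3 lines

    env.animate(True)
    env.run(10)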

    Class Animate has a new animation parameter, as_points, that applies to lines, rectangles and polygons. If as_points is False (the default), all lines will be drawn. If as_points is True, only the end points will be drawn, as squares with a width equal to the linewidth. Technical remark: the advantage of using as_points instead of a series of individual squares is that there is only one bitmap to be placed on the canvas, which may lead to better performance in many cases. This is also used internally by AnimateMonitor() (see below). Points are also available in the new AnimatePoints class.

    Class AnimateMonitor() can be used to visualize the value of a timestamped monitor over time. It is particularly useful for visualizing the length of a queue, the various monitors of a resource or the value of a state. It is possible to connect the points with lines (very useful for 'duration' monitors, like queue length) or just show the individual points. This class can also visualize the relationship between the index and the value of a non-timestamped monitor; again, the points can be just shown or connected with a line. It is still possible to use Monitor.animate() and MonitorTimestamp.animate() as an alternative, although this is not recommended.

    The MMc animated.py model demonstrates the use of the (timestamped) monitor animation.
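
    A minimal sketch of animating a queue length with AnimateMonitor (the scaling parameters are an assumption; check the reference section of the manual for the exact signature):

    import salabim as sim


    class Customer(sim.Component):
        def process(self):
            self.enter(waitingline)
            yield self.hold(sim.Uniform(5, 15).sample())
            self.leave(waitingline)


    env = sim.Environment()
    waitingline = sim.Queue("waitingline")

    # plot the (timestamped) queue length monitor over time
    sim.AnimateMonitor(waitingline.length, x=10, y=10, width=480, height=100,
                       horizontal_scale=5, vertical_scale=10)

    for _ in range(10):
        Customer()

    env.animate(True)
    env.run(100)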

    Monitor and MonitorTimestamp can now be used to create a merged (timestamped) monitor. This is done by providing a list of (timestamped) monitors (they all have to be of the same type), like mc = MonitorTimestamp(name='m1 and m2', merge=(m1, m2)). For monitors, all of the tallied x-values are simply copied from the monitors to be merged. For timestamped monitors, the x-values are summed over all the periods where all the monitors were on; periods where one or more monitors were off are excluded. Note that the merge only takes place at creation of the (timestamped) monitor and not dynamically later.

    Sample usage: suppose we have three types of products, each with a queue for processing, so a.processing, b.processing and c.processing. To print the histogram of the combined (summed) length of these queues:

    MonitorTimestamp(name='combined processing length', merge=(a.processing.length, b.processing.length, c.processing.length)).print_histogram()

    and to print the histogram of the length_of_stay for all entries:

    Monitor(name='combined processing length of stay', merge=(a.processing.length_of_stay, b.processing.length_of_stay, c.processing.length_of_stay)).print_histogram()

    CumPdf is a new distribution type that is similar to Pdf, but where cumulative probability values are used. This is particularly useful for dichotomies, like failure probabilities:

    failrate = 0.1
    if CumPdf(True, failrate, False, 1).sample():
        print('failed!')

    The methods print_histogram(), print_histograms(), print_statistics() and print_info() now have an additional parameter as_str that allows the output to be returned as a string rather than printed (the default is False, so the information is just printed). This is particularly useful for animating that information (see demo queue animation.py) or for writing it directly to a file.
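
    A small sketch of as_str in combination with AnimateText (the monitor is just filled with a few example values for illustration):

    import salabim as sim

    env = sim.Environment()

    m = sim.Monitor(name='demo')
    for v in (1, 2, 2, 3, 5, 8):
        m.tally(v)

    # as_str=True returns the histogram as a (multi-line) string instead of printing it
    sim.AnimateText(text=m.print_histogram(as_str=True), x=10, y=100)

    env.animate(True)
    env.run(10)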

    sim.Random() is a new class that makes a randomstream. It is essentially the same as sim.random.Random().

    Queue.name(value), Resource.name(value) and State.name(value) now also update the derived names.

    API changes

    The API of Component has changed slightly. The parameter process now defaults to None, which means that salabim tries to run the process generator method, if any. If you don't want to start the process generator method, even if it exists, set process='' (this used to be None).

    The API of Environment has changed slightly. The parameter random_seed now defaults to None, which means that 1234567 will be used as the random seed value. If random_seed is '*', a system-generated, non-reproducible random seed will be used.

    The API of Environment.random_seed has changed slightly. If the argument seed is '*', a system-generated, non-reproducible random seed will be used.

    State.animate() is phased out. Use the standard visualization classes, like AnimateRectangle, AnimateCircle and AnimateText, instead.

    Future changes

    Python 2.7 will no longer be supported in a future version. Please upgrade to Python 3.x as soon as possible.

    Internal changes

    Most default parameters are now None instead of omitted; the special value omitted is completely phased out. This makes it easier to specify default arguments, e.g. myname = None; sim.Component(name=myname). This internal change required a couple of changes to the API (see above). Apart from that, the user shouldn't notice this rather dramatic internal change (>500 replacements in the code!).

    Animating lines and polygons without any points is now supported.
