A useful tool for inserting pandas DataFrames into Excel sheets.

Overview

PyCellFrame

Insert pandas DataFrames into Excel sheets with a flexible set of conditions

Install

pip install pycellframe

Usage

Examples

Let's suppose that we have an Excel file named "numbers.xlsx" with a sheet named "Dictionary" into which we would like to insert a pandas.DataFrame.

Import pandas and create an example DataFrame (which will be inserted into the Excel sheet):

import pandas as pd


ex = {
    'Num': [1, 2, 3, 4],
    'AfterFirstBlankCol': 'AfterFirstBlank',
    'Descr': ['One', 'Two', 'Three', 'Four'],
    'AfterSecondBlankCol': 'AfterSecondBlank.',
    'Squared': [1, 4, 9, 16],
    'Binary': ['1', '10', '11', '100']
}

df = pd.DataFrame(ex)
  • Import openpyxl.load_workbook and open numbers.xlsx, our Excel workbook;
  • Get Dictionary, our desired sheet:
from openpyxl import load_workbook


workbook = load_workbook('numbers.xlsx')
worksheet = workbook['Dictionary']

Functions

1. incell_style(cell_src, cell_dst)
  • Let's say we have a cell in the Excel Dictionary sheet whose style we would like to copy, and it is O3;
  • Let O4 be our destination cell:

NOTE: If we wanted to copy that style to more than one cell, we would simply loop over the destination cells - see the sketch after the example below.

from pycellframe import incell_style


incell_style(cell_src=worksheet['O3'], cell_dst=worksheet['O4'])
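
A minimal sketch of that loop (assuming a hypothetical destination range O4:O10, with worksheet and incell_style already defined above) could look like this:

# Hypothetical: copy O3's style down the column to the cells O4:O10
for row in range(4, 11):
    incell_style(cell_src=worksheet['O3'], cell_dst=worksheet[f'O{row}'])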
2. sheet_to_sheet(filename_sheetname_src, worksheet_dst, calculated)
  • Let's say we have two Excel files, and we need a specific sheet from one file to be copied in its entirety into a specific sheet of the other;
  • filename_sheetname_src is the parameter identifying the file and sheet the data is copied from (tuple(['FILENAME_SRC', 'SHEETNAME_SRC']));
  • worksheet_dst is the parameter for the destination Worksheet the data is copied to (openpyxl.worksheet.worksheet.Worksheet);
  • Let's assume that we have file_src.xlsx as the source file and use its CopyThisSheet sheet as the source;
  • As the destination we can use the CopyToThisSheet sheet of the workbook we already declared above (which we will later save as output.xlsx).

NOTE: We are assuming that we need all the formulas (where available) from the source sheet, not the calculated values, so we set the calculated parameter to False.

from pycellframe import sheet_to_sheet


worksheet_to = workbook['CopyToThisSheet']

sheet_to_sheet(filename_sheetname_src=('file_src.xlsx', 'CopyThisSheet'),
               worksheet_dst=worksheet_to,
               calculated=False)
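
Conversely, if we only needed the evaluated results, we would flip that flag (a sketch under the assumption that calculated=True makes sheet_to_sheet copy cell values instead of formulas):

# Assumption: calculated=True copies evaluated values rather than formulas
sheet_to_sheet(filename_sheetname_src=('file_src.xlsx', 'CopyThisSheet'),
               worksheet_dst=worksheet_to,
               calculated=True)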
3. incell_frame(worksheet, dataframe, col_range, row_range, num_str_cols, skip_cols, headers)
  • From our package pycellframe, import the function incell_frame;
  • Insert df into our sheet twice: once without and once with conditions:
from pycellframe import incell_frame


# 1 - Simple insertion
incell_frame(worksheet=worksheet, dataframe=df)

# 2 - Insertion with some conditions
incell_frame(worksheet=worksheet,
             dataframe=df,
             col_range=(3, 0),
             row_range=(6, 8),
             num_str_cols=['I'],
             skip_cols=['D', 'F'],
             headers=True)

In the first insertion, we did not pass any of the optional arguments, which means df will be inserted into the Dictionary sheet in the area A1:F4 (without the headers).

However, with the second insertion we define some conditions:

  • col_range=(3, 0) - The insertion starts at the Excel column with index 3 (column C) and does not stop until the last DataFrame column, since we gave 0 as the second element of the tuple

  • row_range=(6, 8) - The DataFrame data will be inserted only between these Excel rows; since headers is set to True, only the leading rows of df that fit in this range make it into the sheet

  • num_str_cols=['I'] - Another condition here is to keep the Binary column's values as strings instead of letting them be converted to int; we pass the Excel column letter this column lands in (I, matching the call above), so the function leaves those values as text

  • skip_cols=['D', 'F'] - Excel columns D and F will be skipped, and since our worksheet was blank in the beginning, these columns stay blank (which is why the DataFrame columns carry the related names)

  • headers=True - This time the DataFrame's column names are inserted as well, so the overall insertion area is C6:J8
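
To double-check where everything landed, we can read the affected range straight back out of the worksheet (plain openpyxl, nothing pycellframe-specific):

# Dump the populated area C6:J8 after the conditional insertion
for row in worksheet['C6:J8']:
    print([cell.value for cell in row])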

For a detailed description of each parameter, please see:
  1. incell_frame.__doc__
  2. sheet_to_sheet.__doc__
  3. incell_style.__doc__
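
For example, to view one of these docstrings from a Python session (standard Python introspection):

# Show the full parameter documentation for incell_frame
help(incell_frame)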
  • Finally, let's save our changes to a new Excel file:
workbook.save('output.xlsx')

Full Code

import pandas as pd
from openpyxl import load_workbook
from pycellframe import incell_style, incell_frame, sheet_to_sheet


ex = {
    'Num': [1, 2, 3, 4],
    'AfterFirstBlankCol': 'AfterFirstBlank',
    'Descr': ['One', 'Two', 'Three', 'Four'],
    'AfterSecondBlankCol': 'AfterSecondBlank.',
    'Squared': [1, 4, 9, 16],
    'Binary': ['1', '10', '11', '100']
}

df = pd.DataFrame(ex)

workbook = load_workbook('numbers.xlsx')
worksheet = workbook['Dictionary']


# Copy the cell style
incell_style(cell_src=worksheet['O3'], cell_dst=worksheet['O4'])


# Copy the entire sheet
worksheet_to = workbook['CopyToThisSheet']

sheet_to_sheet(filename_sheetname_src=('file_src.xlsx', 'CopyThisSheet'),
               worksheet_dst=worksheet_to,
               calculated=False)


# Insert DataFrame into the sheet

## 1 - Simple insertion
incell_frame(worksheet=worksheet, dataframe=df)

## 2 - Insertion with some conditions
incell_frame(worksheet=worksheet,
             dataframe=df,
             col_range=(3, 0),
             row_range=(6, 8),
             num_str_cols=['I'],
             skip_cols=['D', 'F'],
             headers=True)

workbook.save('output.xlsx')
Owner

Luka Sosiashvili