pandas
Assists with loading, cleaning, transforming, and analyzing tabular data using pandas. Use when importing CSV/Excel/SQL data, handling missing values, performing groupby aggregations, merging datasets, working with time series, or building analysis-ready datasets. Trigger words: pandas, dataframe, csv, groupby, merge, time series, data cleaning.
Usage
Getting Started
- Install the skill using the command above
- Open your AI coding agent (Claude Code, Codex, Gemini CLI, or Cursor)
- Reference the skill in your prompt
- The AI will use the skill's capabilities automatically
Example Prompts
- "Analyze the sales data in revenue.csv and identify trends"
- "Create a visualization comparing Q1 vs Q2 performance metrics"
Documentation
Overview
Pandas is a Python library for loading, cleaning, transforming, and analyzing tabular data. It provides DataFrames for structured data manipulation, supports CSV, Excel, SQL, JSON, and Parquet formats, and offers powerful groupby aggregation, merge/join operations, time series resampling, and method chaining for building analysis pipelines.
Instructions
- When loading data, use `pd.read_parquet()` for large datasets (faster, smaller, type-preserving), `pd.read_csv()` with explicit `dtype` for CSVs, and `pd.read_sql()` for database queries.
- When cleaning data, handle missing values with `fillna()` or `dropna()`, deduplicate with `drop_duplicates()`, use string methods (`.str.strip()`, `.str.lower()`) for text cleaning, and convert types explicitly with `astype()` and `pd.to_datetime()`.
- When transforming data, use `assign()` for computed columns, `pipe()` for method chaining, `melt()` and `pivot_table()` for reshaping, and `pd.cut()`/`pd.qcut()` for binning.
- When aggregating, use `groupby().agg()` with named aggregation for readable column names, `transform()` to broadcast results back to the original shape, and `resample()` for time-based grouping.
- When merging, use `pd.merge()` with explicit `how` and `validate` parameters to catch data quality issues at merge time, and `pd.concat()` for stacking DataFrames.
- When optimizing performance, use the `category` dtype for low-cardinality strings, prefer vectorized operations over `.apply()`, and use Parquet for storage; for datasets over 10 GB, consider Polars or DuckDB.
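The cleaning and aggregation steps above can be sketched in one small pipeline. This is a minimal illustration with invented column names and in-memory sample data standing in for a real CSV or database load:

```python
import pandas as pd

# Hypothetical raw rows standing in for a file load; columns are illustrative.
raw = pd.DataFrame({
    "order_id": [1, 1, 2, 3, 4],
    "region": [" east", "EAST ", "west", "west", None],
    "amount": [100.0, 100.0, 250.0, None, 80.0],
    "order_date": ["2024-01-05", "2024-01-05", "2024-01-20",
                   "2024-02-02", "2024-02-15"],
})

clean = (
    raw
    .drop_duplicates(subset="order_id")  # deduplicate by key
    .assign(
        region=lambda d: d["region"].str.strip().str.lower(),  # text cleaning
        order_date=lambda d: pd.to_datetime(d["order_date"]),  # explicit conversion
    )
    .dropna(subset=["region"])       # drop rows missing a key field
    .fillna({"amount": 0.0})         # fill missing amounts
    .astype({"region": "category"})  # low-cardinality string -> category
)

# Named aggregation gives self-documenting column names.
summary = clean.groupby("region", observed=True).agg(
    total_amount=("amount", "sum"),
    order_count=("order_id", "count"),
)
print(summary)
```

Each step is an ordinary DataFrame method, so the whole chain can be wrapped in a function and passed to `.pipe()` for reuse and testing.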
Examples
Example 1: Clean and analyze a sales dataset
User request: "Load a messy CSV of sales data, clean it, and generate monthly revenue summaries"
Actions:
- Load with `pd.read_csv()`, specifying `dtype` and `parse_dates` for key columns
- Clean missing values, deduplicate by order ID, and standardize text fields
- Add computed columns for revenue and profit margin using `assign()`
- Group by month with `resample("M").agg()` for revenue, order count, and average order value
Output: A clean DataFrame with monthly revenue summaries ready for visualization or reporting.
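A hedged sketch of the monthly-summary step, using invented sample rows in place of the CSV load (in practice the data would come from `pd.read_csv()` with explicit `dtype` and `parse_dates`). The `"MS"` (month-start) alias is used here because it works across pandas versions; recent pandas prefers `"ME"` over the older `"M"` month-end alias:

```python
import pandas as pd

# Illustrative sales rows; column names are assumptions for this sketch.
sales = pd.DataFrame({
    "order_id": [101, 102, 103, 104],
    "order_date": pd.to_datetime(["2024-01-03", "2024-01-18",
                                  "2024-02-07", "2024-02-21"]),
    "units": [2, 1, 3, 4],
    "unit_price": [10.0, 40.0, 20.0, 5.0],
})

monthly = (
    sales
    .assign(revenue=lambda d: d["units"] * d["unit_price"])  # computed column
    .set_index("order_date")
    .resample("MS")  # month-start bins; "ME"/"M" label by month-end instead
    .agg(
        revenue=("revenue", "sum"),          # total monthly revenue
        orders=("order_id", "count"),        # order count
        avg_order_value=("revenue", "mean"), # average order value
    )
)
print(monthly)
```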
Example 2: Merge and enrich customer data from multiple sources
User request: "Join customer data from CRM, transactions, and support tickets into a single view"
Actions:
- Load each dataset and standardize key columns (email, customer ID)
- Merge CRM and transactions with `pd.merge(on="customer_id", how="left", validate="one_to_many")`
- Aggregate support tickets per customer and merge the counts
- Export the enriched dataset to Parquet for downstream analysis
Output: A unified customer DataFrame with CRM info, transaction history, and support metrics.
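The merge-and-enrich flow above can be sketched as follows; the three DataFrames are invented stand-ins for the CRM, transaction, and ticket sources:

```python
import pandas as pd

# Hypothetical sources; real data would come from read_csv()/read_sql().
crm = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "email": ["a@example.com", "b@example.com", "c@example.com"],
})
transactions = pd.DataFrame({
    "customer_id": [1, 1, 2],
    "amount": [50.0, 25.0, 90.0],
})
tickets = pd.DataFrame({"customer_id": [1, 3, 3]})

# validate="one_to_many" raises MergeError if customer_id is duplicated
# on the CRM side, catching data quality problems at merge time.
enriched = pd.merge(crm, transactions, on="customer_id",
                    how="left", validate="one_to_many")

# Aggregate tickets per customer, then merge the counts back in.
ticket_counts = (
    tickets.groupby("customer_id").size().rename("ticket_count").reset_index()
)
enriched = enriched.merge(ticket_counts, on="customer_id", how="left")
enriched["ticket_count"] = enriched["ticket_count"].fillna(0).astype(int)
print(enriched)
```

From here, `enriched.to_parquet("customers.parquet")` would persist the result for downstream analysis.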
Guidelines
- Use `pd.read_parquet()` for intermediate and output files since it is faster, smaller, and preserves types.
- Chain transformations with `.pipe()` for readable and testable code.
- Use named aggregation in `.agg()` for self-documenting column names.
- Set `dtype` explicitly on `read_csv()` for large files, since type inference is slow and memory-hungry on big inputs.
- Use the `category` dtype for columns with fewer than 1000 unique values for significant memory savings.
- Validate merges with `validate="one_to_many"` to catch data quality issues at merge time.
- Use `query()` for complex filters instead of chained boolean indexing for better readability.
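Two of these guidelines in miniature: the same filter written with chained boolean indexing and with `query()`, plus a quick before/after memory check for the `category` dtype. The sample data is invented for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    "region": ["east", "west", "east", "west"] * 250,
    "amount": range(1000),
})

# Chained boolean indexing works, but gets noisy as conditions grow...
subset_a = df[(df["region"] == "east") & (df["amount"] > 500)]

# ...while query() expresses the same filter as one readable string.
subset_b = df.query("region == 'east' and amount > 500")
assert subset_a.equals(subset_b)

# Converting a low-cardinality string column to category saves memory:
before = df["region"].memory_usage(deep=True)
after = df["region"].astype("category").memory_usage(deep=True)
print(before, after)  # the category version is substantially smaller
```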
Information
- Version
- 1.0.0
- Author
- terminal-skills
- Category
- Data & AI
- License
- Apache-2.0