Commit 41f4fde

Created using Colaboratory
1 parent 7272311 commit 41f4fde

File tree

1 file changed: +241 -0 lines changed

Python_MySQL_P8.ipynb

Lines changed: 241 additions & 0 deletions
@@ -0,0 +1,241 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyOSgG+kTDBlHL2RvlEEXTYb",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/Animeshcoder/MySQL-Python/blob/main/Python_MySQL_P8.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"### **Introduction:**\n",
" This project extracts data from an existing table in a MySQL database, transforms it using a custom function that checks for specific ID values and fills new rows with values from the Value column, and inserts the transformed data into a new table in a different database. This allows you to restructure the data in a way that better suits your needs."
],
"metadata": {
"id": "slFyz5mQspII"
}
},
{
"cell_type": "markdown",
"source": [
"This code connects to a MySQL database using the create_engine function from the sqlalchemy library, providing the necessary credentials. It then constructs an SQL query to extract data from an existing table in the database.\n",
"\n",
"The query is executed using the read_sql_query function from the pandas library, which returns the result as a DataFrame. The code then creates a connection to a different database where the transformed data will be inserted.\n",
"\n",
"A function named process_group is defined to process each group of rows with the same values in two columns. This function checks if any of the rows have an ID value that is in a specific group of values. If it does, it starts a new entry with the other remaining columns set to NULL. Otherwise, it continues filling the same row with values from the Value column.\n",
"\n",
"The function then fills a new row with values from the Value column corresponding to each ID. This new row is returned by the function.\n",
"\n",
"The process_group function is applied to each group of rows with the same values in two columns using the groupby and apply methods of the DataFrame. The result is a new DataFrame containing the transformed data.\n",
"\n",
"Finally, this new DataFrame is saved to the new database using the to_sql method of the DataFrame, passing the necessary arguments such as the table name, the connection object, and options for handling existing data.\n",
"\n",
"Here’s a step-by-step tutorial explaining each part of the code:\n",
"\n",
"**Import necessary libraries:** Import the pandas, sqlalchemy, and urllib.parse libraries."
],
"metadata": {
"id": "CzElmUuns49p"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "w1vGpuhRsmZu"
},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sqlalchemy import create_engine\n",
"import urllib.parse"
]
},
75+
{
76+
"cell_type": "markdown",
77+
"source": [
78+
"**Create a connection to the MySQL database:** Use the create_engine function from the sqlalchemy library to create a connection to the MySQL database. Provide the necessary credentials such as host, user, password, and database name. Use the quote function from the urllib.parse library to properly encode special characters in the password."
79+
],
80+
"metadata": {
81+
"id": "xb6hhLE5tLbd"
82+
}
83+
},
{
"cell_type": "code",
"source": [
"password = \"yourpassword@123\"\n",
"password = urllib.parse.quote(password)\n",
"engine = create_engine(f\"mysql+pymysql://youruser:{password}@yourhost/yourdatabasename\")"
],
"metadata": {
"id": "34t_l-8mtP_K"
},
"execution_count": null,
"outputs": []
},
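{
"cell_type": "markdown",
"source": [
"For a quick illustration of the encoding step (using only the placeholder password from the cell above, not real credentials), the next cell prints what quote returns: the @ becomes %40, so it is not misread as the separator between the credentials and the host in the connection URL."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Illustrative only: show how urllib.parse.quote encodes the placeholder password.\n",
"# '@' is percent-encoded to '%40', keeping the connection URL parseable.\n",
"print(urllib.parse.quote(\"yourpassword@123\"))"
],
"metadata": {},
"execution_count": null,
"outputs": []
},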
{
"cell_type": "markdown",
"source": [
"**Write an SQL query to extract data from an existing table:** Write a query that selects the data you need from the existing table in the database based on certain conditions."
],
"metadata": {
"id": "BB4wBul3tTUn"
}
},
{
"cell_type": "code",
"source": [
"query = \"\"\"\n",
"    SELECT ID, Value FROM yourdatabasename.tablename\n",
"    WHERE Value is not null and ID_No > '363000' and ID IN ('1', '2', '3', '4', '5', '6', '7', '8')\n",
"\"\"\""
],
"metadata": {
"id": "qn5lD_5WtYqT"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"**Execute the query and store the result in a DataFrame:** Use the read_sql_query function from the pandas library to execute the query and store the result in a DataFrame."
],
"metadata": {
"id": "XRe24tlftcE0"
}
},
{
"cell_type": "code",
"source": [
"df = pd.read_sql_query(query, engine)"
],
"metadata": {
"id": "ymjDNk5pthTw"
},
"execution_count": null,
"outputs": []
},
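{
"cell_type": "markdown",
"source": [
"A quick, optional look at the extracted data (assuming the query returned rows): each record is a long-format (ID, Value) pair, which is what process_group later pivots into wide rows."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Optional: inspect the long-format extract before transforming it.\n",
"print(df.shape)\n",
"df.head()"
],
"metadata": {},
"execution_count": null,
"outputs": []
},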
{
"cell_type": "markdown",
"source": [
"**Create a connection to a different database:** Create a connection to a different MySQL database where you want to insert the transformed data."
],
"metadata": {
"id": "aR4y7-QCtkH9"
}
},
{
"cell_type": "code",
"source": [
"new_engine = create_engine(f\"mysql+pymysql://youruser:{password}@yourhost/yourdatabasename\")"
],
"metadata": {
"id": "YAw2GnI3towf"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"**Define a function to process each group of rows:** Define a function named process_group that takes as input a group of rows with the same values in two columns. This function checks if any of these rows have an ID value that is in a specific group of values. If it does, it starts a new entry with other remaining columns set to NULL. Otherwise, it continues filling the same row with values from the Value column.\n",
"The function then fills a new row with values from the Value column corresponding to each ID. This new row is returned by the function."
],
"metadata": {
"id": "U2ua5qtltrt4"
}
},
{
"cell_type": "code",
"source": [
"def process_group(group):\n",
"    # check if any of these rows have an ID value that is in a specific group of values\n",
"    if group[\"ID\"].isin(['1', '2']).any():\n",
"        # start a new entry with other remaining columns set to NULL\n",
"        new_row = pd.Series({\"Name\": None, \"Phone No\": None, \"Date of Birth\": None, \"Age\": None})\n",
"    else:\n",
"        # continue filling same row with values from Value column\n",
"        new_row = pd.Series(dtype=object)\n",
"\n",
"    # fill new_row with values from Value column corresponding to each ID\n",
"    for form_meta_id, value in group[[\"ID\", \"Value\"]].values:\n",
"        if form_meta_id in ['1', '2']:\n",
"            new_row[\"Name\"] = value\n",
"        elif form_meta_id in ['3', '4']:\n",
"            new_row[\"Phone No\"] = value\n",
"        elif form_meta_id in ['5', '6']:\n",
"            new_row[\"Date of Birth\"] = value\n",
"        elif form_meta_id in ['7', '8']:\n",
"            new_row[\"Age\"] = value\n",
"    return new_row"
],
"metadata": {
"id": "PXVuxg8-txq_"
},
"execution_count": null,
"outputs": []
},
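{
"cell_type": "markdown",
"source": [
"To make the behaviour of process_group concrete, the next cell runs it on a tiny, made-up DataFrame. The IDs and values are invented purely for illustration and are not taken from the real table; the point is to show how IDs 1–8 get mapped into the Name, Phone No, Date of Birth, and Age fields of a single wide row."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Illustrative only: apply process_group to a small sample group.\n",
"# The sample IDs/values below are made up for demonstration purposes.\n",
"sample = pd.DataFrame({\n",
"    \"ID\": ['1', '3', '5', '7'],\n",
"    \"Value\": ['Alice', '9999999999', '1990-01-01', '33']\n",
"})\n",
"print(process_group(sample))  # one wide row: Name, Phone No, Date of Birth, Age"
],
"metadata": {},
"execution_count": null,
"outputs": []
},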
{
"cell_type": "markdown",
"source": [
"**Apply the process_group function to each group of rows:** Use the groupby and apply methods of the DataFrame to apply the process_group function to each group of rows with the same values in two columns. The result is a new DataFrame containing the transformed data."
],
"metadata": {
"id": "vdECdufOt2JR"
}
},
{
"cell_type": "code",
"source": [
"column_data = df.groupby([\"ID\", \"Value\"]).apply(process_group).reset_index()"
],
"metadata": {
"id": "afhXm-xst9N5"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"**Save the transformed data to the new database:** Use the to_sql method of the DataFrame to save the transformed data to a new table in the new database. Provide the necessary arguments such as table name, connection object, and options for handling existing data."
],
"metadata": {
"id": "1fe7aAQPuAD8"
}
},
{
"cell_type": "code",
"source": [
"column_data.to_sql(\"newtable\", new_engine, index=False, if_exists=\"append\")"
],
"metadata": {
"id": "M9-IQweVuFKQ"
},
"execution_count": null,
"outputs": []
},
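{
"cell_type": "markdown",
"source": [
"As an optional sanity check (assuming the append above succeeded and the connected user can read from the target database), the cell below reads a few rows back from newtable to confirm the transformed data landed where expected."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Optional check: read a few rows back from the new table to verify the insert.\n",
"check = pd.read_sql_query(\"SELECT * FROM newtable LIMIT 5\", new_engine)\n",
"print(check)"
],
"metadata": {},
"execution_count": null,
"outputs": []
}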
]
}
