
04_bigquery

December 18, 2023

0.1 Migrating from Spark to BigQuery via Dataproc – Part 4


• Part 1: The original Spark code, now running on Dataproc (lift-and-shift).
• Part 2: Replace HDFS with Google Cloud Storage. This enables job-specific clusters (cloud-native).
• Part 3: Automate everything, so that we can run in a job-specific cluster (cloud-optimized).
• Part 4: Load the CSV into BigQuery and use BigQuery for the analysis (modernize).
• Part 5: Using Cloud Functions, launch the analysis every time there is a new file in the bucket (serverless).

0.1.1 Catch-up cell


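This cell re-stages the input data from the earlier parts. A minimal sketch, assuming the classic KDD Cup 1999 download location (the exact source URL used in Parts 1–3 is an assumption here):

import urllib.request
import google.cloud.storage as gcs

BUCKET = 'qwiklabs-gcp-00-0bb736ec2d40'  # CHANGE, same bucket as below

# download the 10% sample of the KDD Cup 1999 dataset (URL assumed)
url = 'http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz'
urllib.request.urlretrieve(url, 'kddcup.data_10_percent.gz')

# stage it in the bucket where the bq load below expects it
bucket = gcs.Client().get_bucket(BUCKET)
bucket.blob('kddcup.data_10_percent.gz').upload_from_filename('kddcup.data_10_percent.gz')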
0.1.2 Load data into BigQuery

[30]: !bq mk sparktobq

BigQuery error in mk operation: Dataset 'qwiklabs-gcp-00-0bb736ec2d40:sparktobq' already exists.

(The dataset was created in an earlier run, so this error is harmless.)

[31]: BUCKET='qwiklabs-gcp-00-0bb736ec2d40' # CHANGE


!bq --location=US load --autodetect --source_format=CSV sparktobq.kdd_cup_raw gs://$BUCKET/kddcup.data_10_percent.gz

Waiting on bqjob_r18c34ec8b9c7a605_0000018c7a951615_1 … (6s) Current status: DONE
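The same load can be scripted from Python with the BigQuery client library instead of the bq CLI. A sketch mirroring the autodetect behavior above, assuming the default project is set:

from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    autodetect=True,  # infer the schema, as --autodetect does
)
uri = 'gs://{}/kddcup.data_10_percent.gz'.format(BUCKET)
load_job = client.load_table_from_uri(uri, 'sparktobq.kdd_cup_raw', job_config=job_config)
load_job.result()  # block until the load job finishes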

0.1.3 BigQuery queries


We can replace much of the initial exploratory code with SQL statements.
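The %%bigquery cell magic below runs a query and returns the result as a pandas DataFrame. Outside a notebook, the explicit equivalent would be roughly:

from google.cloud import bigquery

# explicit equivalent of the %%bigquery magic below
df = bigquery.Client().query(
    'SELECT * FROM sparktobq.kdd_cup_raw LIMIT 5').to_dataframe()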

[32]: %%bigquery
SELECT * FROM sparktobq.kdd_cup_raw LIMIT 5


[32]: int64_field_0 string_field_1 string_field_2 string_field_3 int64_field_4 \


0 25602 tcp IRC RSTR 3658
1 6155 tcp IRC RSTO 1112
2 0 tcp IRC SF 81
3 238 tcp IRC RSTO 132
4 0 tcp IRC RSTO 0

int64_field_5 int64_field_6 int64_field_7 int64_field_8 int64_field_9 \


0 8518 0 0 0 0
1 4968 0 0 0 0
2 18 0 0 0 0
3 1247 0 0 0 0
4 0 0 0 0 0

… int64_field_32 double_field_33 double_field_34 double_field_35 \


0 … 12 0.31 0.08 0.03
1 … 8 0.50 0.12 0.06
2 … 13 0.28 0.04 0.02
3 … 22 0.13 0.01 0.01
4 … 23 0.13 0.01 0.01

double_field_36 double_field_37 double_field_38 double_field_39 \


0 0.0 0.00 0.00 0.31
1 0.0 0.00 0.00 0.44
2 0.0 0.00 0.00 0.24
3 0.0 0.01 0.05 0.10
4 0.0 0.01 0.04 0.10

double_field_40 string_field_41
0 1.00 normal.
1 0.88 normal.
2 0.85 normal.
3 0.73 normal.
4 0.74 normal.

[5 rows x 42 columns]

Oops. There are no column headers. Let's fix this.

[33]: %%bigquery

CREATE OR REPLACE TABLE sparktobq.kdd_cup AS

SELECT
int64_field_0 AS duration,
string_field_1 AS protocol_type,
string_field_2 AS service,
string_field_3 AS flag,
int64_field_4 AS src_bytes,
int64_field_5 AS dst_bytes,
int64_field_6 AS wrong_fragment,
int64_field_7 AS urgent,
int64_field_8 AS hot,
int64_field_9 AS num_failed_logins,
int64_field_11 AS num_compromised,
int64_field_13 AS su_attempted,
int64_field_14 AS num_root,
int64_field_15 AS num_file_creations,
string_field_41 AS label
FROM
sparktobq.kdd_cup_raw

[33]: Empty DataFrame
Columns: []
Index: []
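An alternative design is to declare the column names up front at load time instead of autodetecting and renaming afterwards. A sketch with the Python client; only the first few of the 42 fields are shown, and the full list would be required in practice:

from google.cloud import bigquery

# explicit schema at load time (first fields only; names taken from
# the CREATE TABLE statement above, the remaining 37 are elided)
schema = [
    bigquery.SchemaField('duration', 'INT64'),
    bigquery.SchemaField('protocol_type', 'STRING'),
    bigquery.SchemaField('service', 'STRING'),
    bigquery.SchemaField('flag', 'STRING'),
    bigquery.SchemaField('src_bytes', 'INT64'),
    # … remaining fields elided …
]
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV, schema=schema)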

[34]: %%bigquery
SELECT * FROM sparktobq.kdd_cup LIMIT 5


[34]: duration protocol_type service flag src_bytes dst_bytes wrong_fragment \


0 0 tcp IRC S1 0 0 0
1 0 tcp IRC RSTO 0 0 0
2 0 tcp IRC RSTO 0 0 0
3 0 tcp IRC REJ 0 0 0
4 7993 tcp IRC RSTR 773 6955 0

urgent hot num_failed_logins num_compromised su_attempted num_root \


0 0 0 1 0 0 0
1 0 0 0 0 0 0
2 0 0 0 0 0 0
3 0 0 0 0 0 0
4 0 0 0 0 0 0

num_file_creations label
0 0 normal.
1 0 normal.
2 0 normal.
3 0 satan.
4 0 normal.

0.1.4 Spark analysis


Replace the Spark analysis with BigQuery SQL.

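For comparison, the Spark version of this aggregation in Parts 1–3 looked roughly like the following sketch (assuming a Spark DataFrame df over the same data, with the same column names):

# hypothetical PySpark equivalent of the BigQuery query below
connections_by_protocol = (
    df.groupBy('protocol_type')
      .count()
      .orderBy('count')
      .toPandas()
)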
[35]: %%bigquery connections_by_protocol
SELECT COUNT(*) AS count
FROM sparktobq.kdd_cup
GROUP BY protocol_type
ORDER BY count ASC


[36]: connections_by_protocol

[36]: count
0 20354
1 190065
2 283602

0.1.5 Spark SQL to BigQuery


This is a pretty clean translation: the SQL body carries over almost unchanged.

[37]: %%bigquery attack_stats


SELECT
protocol_type,
CASE label
WHEN 'normal.' THEN 'no attack'
ELSE 'attack'
END AS state,
COUNT(*) as total_freq,
ROUND(AVG(src_bytes), 2) as mean_src_bytes,
ROUND(AVG(dst_bytes), 2) as mean_dst_bytes,
ROUND(AVG(duration), 2) as mean_duration,
SUM(num_failed_logins) as total_failed_logins,
SUM(num_compromised) as total_compromised,
SUM(num_file_creations) as total_file_creations,
SUM(su_attempted) as total_root_attempts,
SUM(num_root) as total_root_accesses
FROM sparktobq.kdd_cup
GROUP BY protocol_type, state
ORDER BY 3 DESC


[38]: %matplotlib inline


ax = attack_stats.plot.bar(x='protocol_type', subplots=True, figsize=(10,25))

[Figure: one bar chart per attack_stats column, grouped by protocol_type]
0.1.6 Write out report
Copy the output to GCS so that we can safely delete the AI Platform Notebooks instance.

[39]: import google.cloud.storage as gcs

# save locally
ax[0].get_figure().savefig('report.png');
connections_by_protocol.to_csv("connections_by_protocol.csv")

# upload to GCS
bucket = gcs.Client().get_bucket(BUCKET)
for blob in bucket.list_blobs(prefix='sparktobq/'):
    blob.delete()
for fname in ['report.png', 'connections_by_protocol.csv']:
    bucket.blob('sparktobq/{}'.format(fname)).upload_from_filename(fname)
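As a quick sanity check, the uploaded objects can be listed back:

# confirm that both report artifacts landed under the sparktobq/ prefix
for blob in bucket.list_blobs(prefix='sparktobq/'):
    print(blob.name)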

Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the “License”); you
may not use this file except in compliance with the License. You may obtain a copy of the License
at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to
in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for
the specific language governing permissions and limitations under the License.
