- Bitcoin
- ADP
- data analysis
- backtest
- Programmers
- Crawling
- TimeSeries
- TOEIC Speaking
- Part 5
- randomforest
- practical exam
- Data Analysis Professional
- Python
- GridSearchCV
- SQL
- Programmers
- stocks
- Quant
- Big Data Analysis Engineer
- backtest
- docker
- sarima
- volatility breakout strategy
- PolynomialFeatures
- coding test
- Bollinger Bands
- hackerrank
- Python stocks
- lstm
- Python
List of posts tagged 'GridSearchCV' (6)
A space for recording my data studies

Trying it with XGBOOST
# load data
X_train = pd.read_csv("C:/Users/###/Downloads/빅데이터분석기사 실기/[Dataset] 작업형 제2유형/X_train.csv", encoding='cp949')
X_test = pd.read_csv("C:/Users/###/Downloads/빅데이터분석기사 실기/[Dataset] 작업형 제2유형/X_test.csv", encoding='cp949')
y_train = pd.read_csv("C:/Users/###/Downloads/빅데이터분석기사 실기/[Dataset] 작업형 제2유형/y_train.csv", encoding='cp949')
print(X_train.shape, X_test.shape, y_train.shape)
X_t..
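The preview cuts off before the modeling step. As a rough sketch (not the post's actual code), this is how an XGBClassifier could be tuned with GridSearchCV on data like the loaded X_train / y_train; a synthetic dataset stands in for the exam files, and the parameter grid and scoring choice are illustrative.

```python
# Minimal sketch: tuning XGBClassifier with GridSearchCV.
# A synthetic dataset stands in for the exam's X_train / y_train files,
# and the parameter grid / scoring metric are illustrative choices.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train = pd.DataFrame(X, columns=[f"f{i}" for i in range(10)])
y_train = pd.Series(y, name="target")

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [3, 5],
    "learning_rate": [0.05, 0.1],
}
grid = GridSearchCV(
    XGBClassifier(random_state=0),
    param_grid,
    scoring="roc_auc",  # assumed metric; swap in whatever the task asks for
    cv=3,
    n_jobs=-1,
)
grid.fit(X_train, y_train)
print(grid.best_params_, round(grid.best_score_, 4))
```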

import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV
np.random.seed(0)
iris = datasets.load_iris()
features = iris.data
target = iris.target
1. PCA with FeatureUni..
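The excerpt stops at the heading "1. PCA with FeatureUnion". A plausible continuation from those imports is sketched below: the FeatureUnion layout and parameter grid are assumptions, combining standardized features with PCA components and searching over the number of components and the logistic regression C.

```python
# Sketch of how the pipeline and grid search might be assembled
# from the imports in the excerpt (parameter values are illustrative).
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV

np.random.seed(0)
iris = datasets.load_iris()
features, target = iris.data, iris.target

# Combine standardization and PCA into one preprocessing step
preprocess = FeatureUnion([("std", StandardScaler()), ("pca", PCA())])
pipe = Pipeline([("preprocess", preprocess),
                 ("classifier", LogisticRegression(max_iter=1000))])

# Grid over the number of PCA components and the regularization strength
search_space = [{
    "preprocess__pca__n_components": [1, 2, 3],
    "classifier__C": np.logspace(-4, 4, 5),
}]
clf = GridSearchCV(pipe, search_space, cv=5, n_jobs=-1)
best_model = clf.fit(features, target)
print(best_model.best_params_)
```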

1. Data preprocessing
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv("./mobile_cust_churn/mobile_cust_churn.csv")
df.drop(columns=['Unnamed: 0','id'], axis=1, inplace=True)
target = 'CHURN'
features = df.columns.tolist()[:-1]
numeric_features = df.select_dtypes(include=['int64']).columns.tolist()
category_features = []
for col in features:
    if co..
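The loop is cut off at `if co..`. The sketch below shows one plausible way it finishes, splitting columns into numeric and categorical lists; since the mobile_cust_churn CSV isn't reproduced here, a tiny hand-made frame with illustrative column names stands in for it.

```python
# Sketch of how the feature-type split in the excerpt might continue.
# The stand-in DataFrame and its column names are illustrative only.
import pandas as pd

df = pd.DataFrame({
    "COLLEGE": ["zero", "one", "zero"],
    "INCOME": [31953, 36147, 27273],
    "OVERAGE": [0, 0, 230],
    "CHURN": ["STAY", "STAY", "LEAVE"],
})

target = "CHURN"
features = df.columns.tolist()[:-1]
numeric_features = df.select_dtypes(include=["int64"]).columns.tolist()
category_features = []
for col in features:
    if col not in numeric_features:  # anything non-numeric is treated as categorical
        category_features.append(col)

print(numeric_features)   # ['INCOME', 'OVERAGE']
print(category_features)  # ['COLLEGE']
```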

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
train = pd.read_csv('./titanic/train.csv')
test = pd.read_csv('./titanic/test.csv')
1. Data preprocessing
# check null data
train.isnull().sum()
test.isnull().sum()
# category, numeric feature separation
target = 'Survived'
train[target].value_counts()
features = tr..
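The preview ends mid-line at `features = tr..`. Below is a sketch of how the null check and the categorical / numeric split might continue, with a hand-built stand-in for ./titanic/train.csv and a simple median / mode imputation added for illustration (the post's actual handling may differ).

```python
# Sketch of the preprocessing the excerpt starts: null check, then a
# categorical / numeric feature split. A tiny stand-in frame replaces
# ./titanic/train.csv, which isn't available here.
import pandas as pd
import numpy as np

train = pd.DataFrame({
    "Survived": [0, 1, 1, 0],
    "Pclass": [3, 1, 3, 2],
    "Sex": ["male", "female", "female", "male"],
    "Age": [22.0, 38.0, np.nan, 35.0],
    "Embarked": ["S", "C", "S", None],
})

# check null data
print(train.isnull().sum())

# category / numeric feature separation
target = "Survived"
features = train.columns.drop(target).tolist()
numeric_features = train[features].select_dtypes(include=[np.number]).columns.tolist()
category_features = [col for col in features if col not in numeric_features]

# simple imputation: median for numeric, mode for categorical (illustrative choice)
train[numeric_features] = train[numeric_features].fillna(train[numeric_features].median())
for col in category_features:
    train[col] = train[col].fillna(train[col].mode()[0])
print(train.isnull().sum())
```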