[Introduction to AI: DL/ANN]
Deep Neural Networks (DNNs)
- DNNs are typically feed-forward networks (FFNNs), in which data flows one way, from the input layer to the output layer, without looping backward; the links between layers point only in the forward direction and never revisit a node
- FFNNs work the way taste does during a meal: you experience the flavor of a dish while you are eating it, but just after finishing you forget what you have eaten
- If the chef serves you a meal with the same ingredients again, you cannot recognize it; you have to start from scratch, because you have no memory of the earlier one
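A minimal sketch of that statelessness (the sizes, weights, and the `forward` helper below are illustrative, not from the post): a tiny NumPy feed-forward pass that keeps nothing between calls, so the same input always produces the same output.

```python
import numpy as np

# Illustrative 2-layer feed-forward network; data moves strictly
# input -> hidden -> output, and no state is kept between calls.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # input(3) -> hidden(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # hidden(4) -> output(2)

def forward(x):
    h = np.tanh(W1 @ x + b1)  # hidden activation
    return W2 @ h + b2        # output; nothing is remembered

y1 = forward(np.array([1.0, 0.0, -1.0]))
y2 = forward(np.array([1.0, 0.0, -1.0]))  # identical input -> identical output: no memory
```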
Recurrent Neural Networks (RNNs)
- RNNs can use their internal state (memory) to process sequences of inputs
- RNNs also have problems such as the vanishing gradient / long-term dependency problem, where information is rapidly lost over time
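A hedged sketch of that internal state, assuming a vanilla (Elman) RNN cell with made-up sizes: the hidden vector `h` is the memory that each step mixes with the new input, and the repeated tanh/matrix products are also a hint at why gradients vanish over long sequences.

```python
import numpy as np

# Illustrative vanilla RNN cell; h carries information from
# earlier inputs into later steps.
rng = np.random.default_rng(0)
W_xh = rng.normal(size=(5, 3)) * 0.1  # input(3) -> hidden(5)
W_hh = rng.normal(size=(5, 5)) * 0.1  # hidden -> hidden (the recurrence)
b_h = np.zeros(5)

h = np.zeros(5)                    # initial state: empty memory
for x in rng.normal(size=(7, 3)):  # a sequence of 7 input vectors
    h = np.tanh(W_xh @ x + W_hh @ h + b_h)  # new state mixes input with old state
# h now summarizes the whole sequence, but each step squashes it through
# tanh and another matrix product, so old information fades quickly.
```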
Long Short-Term Memory (LSTM)
- LSTMs do not have this problem
- They are capable of learning long-term dependencies, which makes the network good at remembering things that happened in the past and at finding patterns across time, so that its next predictions make sense
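A minimal sketch of one LSTM cell step, with illustrative sizes and weights: the additive cell-state update is what lets information survive many steps, and the forget/input/output gates decide what to erase, write, and expose.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative LSTM cell; c is the long-term memory, h the working output.
rng = np.random.default_rng(0)
H, X = 5, 3
Wf, Wi, Wo, Wc = (rng.normal(size=(H, X + H)) * 0.1 for _ in range(4))
bf = np.ones(H)  # forget-gate bias near 1 helps retain memory early in training
bi, bo, bc = np.zeros(H), np.zeros(H), np.zeros(H)

h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(7, X)):
    z = np.concatenate([x, h])
    f = sigmoid(Wf @ z + bf)              # forget gate: keep or erase old memory
    i = sigmoid(Wi @ z + bi)              # input gate: how much new info to write
    o = sigmoid(Wo @ z + bo)              # output gate: how much memory to expose
    c = f * c + i * np.tanh(Wc @ z + bc)  # additive update -> long-term memory survives
    h = o * np.tanh(c)                    # hidden state read out from the cell
```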
[Improving Model Performance]
1. Increase the number of epochs
- Increases accuracy; effective when the dataset is small (all three techniques are shown together in the sketch after this list)
- Drawbacks: higher chance of overfitting, longer computation time
2. Dropout
- Faster computation, lower chance of overfitting
- Drawback: lower accuracy when the dataset is small
3. Mini-batch
- Faster computation, lower chance of overfitting
- Drawback: lower accuracy when the dataset is small
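A hedged sketch tying the three knobs together, assuming PyTorch; the data, layer sizes, epoch count, dropout rate, and batch size below are all illustrative choices, not values from the post.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Illustrative dataset: 1000 samples, 20 features, 2 classes.
X = torch.randn(1000, 20)
y = torch.randint(0, 2, (1000,))

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # (2) dropout: randomly zeroes units to curb overfitting
    nn.Linear(64, 2),
)
# (3) mini-batch: update on 32 samples at a time instead of the full dataset
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):  # (1) epochs: more full passes over the data
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
```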