

PROJECT - LEAF DISEASE DETECTION AND RECOGNITION

In the agriculture field, farmers constantly face the problem of plant disease. In earlier days there were various ways of fighting these diseases; with today's technology, detection lets us easily identify which disease is present in a particular plant.

Basically, we will first train our CNN model with a large number of images of potato, pepper and tomato leaves.

Why CNN: As we have seen in the CNN Tutorial, a CNN reads even a very large image in a simple manner. CNNs are most commonly used to analyze visual imagery and frequently work behind the scenes in image classification.

Various plant disease datasets are available on the internet and can be used to train your model; we can also create our own dataset and train the model on that.
Implementation

Here we import the various libraries we need: numpy, matplotlib, os and tensorflow. Keras is built into TensorFlow, and since we are working with TensorFlow version 2.1.0 we import everything through tensorflow.keras. We also import the layers, the model classes and the optimizer used to build and train our model.

# Initialization of the program
# Keras is a built-in part of TensorFlow; because we are on TensorFlow 2.1.0 we write tensorflow.keras.
import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Dense, Input, Dropout, Flatten, Conv2D
from tensorflow.keras.layers import BatchNormalization, Activation, MaxPooling2D
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from tensorflow.keras.utils import plot_model
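Since everything is imported through the tensorflow.keras namespace, it can be worth confirming the installed TensorFlow version first. A minimal check (the text above states that version 2.1.0 was used):

import tensorflow as tf
print(tf.__version__)  # the project text states version 2.1.0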

# To check how many images are available in the train set we can use the os module.
for types in os.listdir("F:/JetBrains/goeduhub training/PROJECTDATA 1/train_set/"):
    print(str(len(os.listdir("F:/JetBrains/goeduhub training/PROJECTDATA 1/train_set/" + types))) + " " + types + ' images')

We use os.listdir to fetch the list of images in each folder; we pass it the path where the images are stored. Here we only count the train-set images of our plant disease dataset.

# The complete set of dataset images can be loaded using the ImageDataGenerator function.
datagen_train = ImageDataGenerator(horizontal_flip=True)
datagen_test = ImageDataGenerator(horizontal_flip=True)

train_generator = datagen_train.flow_from_directory("F:/JetBrains/goeduhub training/PROJECTDATA 1/train_set", ...)
validation_generator = datagen_test.flow_from_directory("F:/JetBrains/goeduhub training/PROJECTDATA 1/test_data", ...)

This part of the code defines the train and test data for the model. We use class_mode='categorical' because more than two classes are present. The train dataset has 16222 images belonging to 15 classes and the test dataset has 1254 images belonging to 15 classes.
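The remaining flow_from_directory arguments are cut off in the source. A minimal sketch of how the calls are typically completed, assuming a 48x48 target size (to match the input_shape=(48, 48, 3) of the first Conv2D layer) and an assumed batch size of 32; class_mode='categorical' follows from the text:

train_generator = datagen_train.flow_from_directory(
    "F:/JetBrains/goeduhub training/PROJECTDATA 1/train_set",
    target_size=(48, 48),      # assumed, to match the model's input_shape
    batch_size=32,             # assumed value
    class_mode='categorical')  # more than two classes, as stated above

validation_generator = datagen_test.flow_from_directory(
    "F:/JetBrains/goeduhub training/PROJECTDATA 1/test_data",
    target_size=(48, 48),
    batch_size=32,
    class_mode='categorical')

The class-to-index mapping that flow_from_directory builds can be inspected with train_generator.class_indices, which should list the 15 classes mentioned above.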

Now we use the Sequential model. The Sequential model API is a way of creating deep learning models: a Sequential object is created and the model layers are added to it one by one. Here we add several convolutional layers with the ReLU activation function (Rectified Linear Unit); passing the images through multiple layers gives better feature extraction.

detection = Sequential()

# convolutional layer-1
detection.add(Conv2D(64, (3, 3), padding='same', activation='relu', input_shape=(48, 48, 3)))
detection.add(MaxPooling2D(pool_size=(2, 2)))
detection.add(Conv2D(128, (5, 5), padding='same', activation='relu'))
detection.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
detection.add(Conv2D(512, (3, 3), padding='same', activation='relu'))
detection.add(Flatten())
detection.add(Dense(15, activation='softmax'))

optimum = Adam()  # Adam with a learning rate; the exact value is not shown in the source
detection.compile(optimizer=optimum, loss='categorical_crossentropy', metrics=['accuracy'])

Inside Conv2D, the first parameter is the number of filters, the second is the size of the filter/kernel, and input_shape describes the RGB format of the images. The convolutional output is then passed to a MaxPooling layer, whose pool_size is the window size. Flatten converts the data into 1D form, and a Dense layer connects every neuron to all neurons of the previous layer, making it a fully connected layer. The Dropout function is used so that not all units in the neural network are kept during training. Optimizers are used to improve the speed and performance of training a specific model; here the Adam optimizer is used with a learning rate. A metric is a function used to judge the performance of your model. Now we print all the convolutional layers and their neurons; in short, the details of every layer.
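The layer-by-layer printout described above is normally produced with the model's summary() method, and the plot_model utility imported earlier can draw the architecture to an image file. A minimal sketch (the filename is only an example, and plot_model needs pydot and graphviz installed):

detection.summary()  # prints every layer with its output shape and parameter count
plot_model(detection, to_file='detection_model.png', show_shapes=True)  # optional architecture diagram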

steps_per_epoch = train_generator.n // train_generator.batch_size
validation_steps = validation_generator.n // validation_generator.batch_size

This is the most important part of our project: here we train the model for 10 epochs (the training call is sketched below). More epochs increase the accuracy and decrease the loss. We can see that the accuracy increases at every epoch; when the model is trained on a large number of training images, its accuracy on the test images also rises and it predicts more accurately. Finally, the trained model is saved:

detection.save('Plant_Disease_Detection.h5')
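The fit call itself does not survive in the source. A minimal sketch of the training step as it would typically look with the generators and step counts defined above, run before the save call; it assumes tf.keras Model.fit accepting generators directly, as it does from TensorFlow 2.1 onwards:

history = detection.fit(
    train_generator,
    steps_per_epoch=steps_per_epoch,
    epochs=10,                              # 10 epochs, as stated in the text
    validation_data=validation_generator,
    validation_steps=validation_steps)

The returned history object holds the per-epoch accuracy and loss, which is where the observation that accuracy rises at each epoch comes from; it can be plotted with matplotlib.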
