      COMP9414 24T2
      Artificial Intelligence
      Assignment 1 - Artificial neural networks
      Due: Week 5, Wednesday, 26 June 2024, 11:55 PM.
      1 Problem context
      Time Series Air Quality Prediction with Neural Networks: In this
      assignment, you will delve into the realm of time series prediction using neural
      network architectures. You will explore both classification and estimation
      tasks using a publicly available dataset.
      You will be provided with a dataset named “Air Quality” [1], available
      on the UCI Machine Learning Repository
      (https://archive.ics.uci.edu/dataset/360/air+quality). We have tailored
      this dataset for this assignment and made some modifications; therefore,
      please use only the attached dataset for this assignment.
      The given dataset contains 8,358 instances of hourly averaged responses
      from an array of five metal oxide chemical sensors embedded in an air
      quality chemical multisensor device. The device was located in the field
      in a significantly polluted area at road level within an Italian city.
      Data were recorded from March 2004 to February 2005 (one year),
      representing the longest freely available recordings of on-field
      deployed air quality chemical sensor device responses. Ground truth
      hourly averaged concentrations for carbon monoxide, non-methane
      hydrocarbons, benzene, total nitrogen oxides, and nitrogen dioxide,
      among other variables, were provided by a co-located reference-certified
      analyser. The variables included in the dataset are listed in Table 1.
      Missing values within the dataset are tagged with the value -200.
      Table 1: Variables within the dataset.

      Variable         Meaning
      CO(GT)           True hourly averaged concentration of carbon monoxide
      PT08.S1(CO)      Hourly averaged sensor response
      NMHC(GT)         True hourly averaged overall Non-Methanic Hydrocarbons concentration
      C6H6(GT)         True hourly averaged Benzene concentration
      PT08.S2(NMHC)    Hourly averaged sensor response
      NOx(GT)          True hourly averaged NOx concentration
      PT08.S3(NOx)     Hourly averaged sensor response
      NO2(GT)          True hourly averaged NO2 concentration
      PT08.S4(NO2)     Hourly averaged sensor response
      PT08.S5(O3)      Hourly averaged sensor response
      T                Temperature
      RH               Relative Humidity
      AH               Absolute Humidity
      2 Activities
      This assignment focuses on two main objectives:
      • Classification Task: You should develop a neural network that can
      predict whether the concentration of Carbon Monoxide (CO) exceeds a
      certain threshold – the mean of CO(GT) values – based on historical
      air quality data. This task involves binary classification, where your
      model learns to classify instances into two categories: above or below
      the threshold. To determine the threshold, you must first calculate the
      mean value of CO(GT), excluding unknown data (missing values). Then use
      this threshold to decide whether the value predicted by your network is
      above or below it (a minimal sketch of this threshold computation
      follows the summary below). You are free to choose and design your own
      network, and there are no limitations on its structure; however, your
      network should be capable of handling missing values.

      • Regression Task: You should develop a neural network that can predict
      the concentration of Nitrogen Oxides (NOx) based on other air quality
      features. This task involves estimating a continuous numerical value
      (NOx concentration) from the input features using regression
      techniques. You are free to choose and design your own network, with no
      limitation on its structure; however, your model should be able to deal
      with missing values.

      In summary, the classification task aims to divide instances into two
      categories (exceeding or not exceeding the CO(GT) threshold), while the
      regression task aims to predict a continuous numerical value (NOx
      concentration).
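      As a starting point, the following is a minimal sketch of how the CO(GT)
      threshold could be computed with pandas; the file name
      "air_quality.csv" is an assumption and should be replaced with the
      dataset you were given.

      # Minimal threshold sketch (assumed file name; adapt to the provided dataset).
      import numpy as np
      import pandas as pd

      df = pd.read_csv("air_quality.csv")      # assumed file name
      df = df.replace(-200, np.nan)            # missing values are tagged with -200

      co_threshold = df["CO(GT)"].mean()       # pandas ignores NaN when computing the mean
      labels = (df["CO(GT)"] > co_threshold).astype(int)   # 1 = above threshold, 0 = below
      print("CO(GT) threshold (mean excluding missing values):", round(co_threshold, 2))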
      2.1 Data preprocessing
      It is expected that you analyse the provided data and perform any
      required preprocessing. Some of the preprocessing tasks might include
      the ones shown below; however, not all of them are necessary, and you
      should evaluate each of them against the results obtained. A minimal
      preprocessing sketch is shown after this list.
      (a) Identify the variation range of the input and output variables.
      (b) Plot each variable to observe the overall behaviour of the process.
      (c) In case outliers or missing data are detected, correct the data
      accordingly.
      (d) Split the data for training and testing.
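      The sketch below illustrates steps (a)-(d) under some assumptions: it
      reuses the DataFrame `df` from the threshold sketch above, interpolation
      is only one possible way of handling missing data, and the 80/20
      chronological split is an illustrative choice.

      import matplotlib.pyplot as plt
      from sklearn.model_selection import train_test_split

      num = df.select_dtypes("number")               # keep the numeric sensor columns

      # (a) Variation range of each variable (after replacing -200 with NaN).
      print(num.describe().loc[["min", "max"]])

      # (b) Plot each variable to observe the overall behaviour of the process.
      num.plot(subplots=True, figsize=(10, 14))
      plt.tight_layout()
      plt.show()

      # (c) One simple option for missing data: linear interpolation in time order.
      num_clean = num.interpolate(limit_direction="both")

      # (d) Split the data for training and testing, keeping the time order.
      train_df, test_df = train_test_split(num_clean, test_size=0.2, shuffle=False)
      print(len(train_df), "training rows,", len(test_df), "test rows")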
      2.2 Design of the neural network
      You should select and design neural architectures addressing both the
      classification and the regression problems described above. In each
      case, consider the following steps (a minimal Keras sketch is shown
      after this list):
      (a) Design the network and decide the number of layers, units, and their
      respective activation functions.
      (b) Remember that it is recommended the total number of parameters of
      your network satisfies Nw < (number of samples)/10.
      (c) Create the neural network using Keras and TensorFlow.
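      The following is a minimal sketch of two possible architectures; the
      number of input features, layer sizes, and activation functions are
      illustrative assumptions only, not a prescribed design.

      from tensorflow import keras

      n_features = 12          # assumption: number of input features after preprocessing

      # Classification network: probability that CO(GT) is above the threshold.
      clf = keras.Sequential([
          keras.layers.Input(shape=(n_features,)),
          keras.layers.Dense(32, activation="relu"),
          keras.layers.Dense(16, activation="relu"),
          keras.layers.Dense(1, activation="sigmoid"),
      ])

      # Regression network: estimated NOx(GT) concentration (linear output).
      reg = keras.Sequential([
          keras.layers.Input(shape=(n_features,)),
          keras.layers.Dense(32, activation="relu"),
          keras.layers.Dense(16, activation="relu"),
          keras.layers.Dense(1),
      ])

      # Check the recommendation Nw < (number of samples) / 10.
      print("classification parameters:", clf.count_params())
      print("regression parameters   :", reg.count_params())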
      2.3 Training
      In this section, you have to train your proposed neural networks.
      Consider the following steps (a minimal training sketch is shown after
      this list):
      (a) Decide the training parameters such as loss function, optimizer,
      batch size, learning rate, and number of epochs.
      (b) Train the neural model and verify the loss values during the process.
      (c) Verify possible overfitting problems.
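      The sketch below shows one possible training setup for the
      classification model `clf` defined above; the optimizer, loss, batch
      size, number of epochs, and the use of early stopping are illustrative
      assumptions, and `X_train`/`y_train` are assumed to come from your own
      preprocessing.

      from tensorflow import keras

      clf.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])

      # Early stopping is one simple way of monitoring overfitting (step (c)).
      early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                                 restore_best_weights=True)

      history = clf.fit(X_train, y_train,
                        validation_split=0.2,
                        epochs=100,
                        batch_size=32,
                        callbacks=[early_stop])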
      2.4 Validating the neural model
      Assess your results by plotting the training results and the network
      response for the test inputs against the test targets. Compute error
      indexes to complement the visual analysis.
      (a) For the classification task, draw two different plots to illustrate your
      results over different epochs. In the first plot, show the training and
      validation loss over the epochs. In the second plot, show the training
      and validation accuracy over the epochs. For example, Figure 1 and
      Figure 2 show loss and classification accuracy plots for 100 epochs,
      respectively.
      Figure 1: Loss plot for the classification task
      Figure 2: Accuracy plot for the classification task
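      A minimal sketch of these two plots, using the `history` object returned
      by model.fit() in the training sketch above (matplotlib is assumed to be
      available):

      import matplotlib.pyplot as plt

      # First plot: training and validation loss over the epochs (cf. Figure 1).
      plt.figure()
      plt.plot(history.history["loss"], label="training loss")
      plt.plot(history.history["val_loss"], label="validation loss")
      plt.xlabel("epoch"); plt.ylabel("loss"); plt.legend(); plt.show()

      # Second plot: training and validation accuracy over the epochs (cf. Figure 2).
      plt.figure()
      plt.plot(history.history["accuracy"], label="training accuracy")
      plt.plot(history.history["val_accuracy"], label="validation accuracy")
      plt.xlabel("epoch"); plt.ylabel("accuracy"); plt.legend(); plt.show()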
      (b) For the classification task, compute a confusion matrix
      (https://en.wikipedia.org/wiki/Confusion_matrix) including True Positive
      (TP), True Negative (TN), False Positive (FP), and False Negative (FN),
      as shown in Table 2. Moreover, report accuracy and precision for your
      test data and mention the number of tested samples, as shown in Table 3
      (the numbers shown in both tables are randomly chosen and may not be
      consistent with each other). For instance, the Sklearn library offers a
      wide range of metric functions
      (https://scikit-learn.org/stable/api/sklearn.metrics.html), including
      confusion matrix
      (https://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html),
      accuracy, and precision. You can use the Sklearn built-in metric
      functions to calculate the mentioned metrics or develop your own
      functions; a minimal sketch using them is shown after Table 3.
      Table 2: Confusion matrix for the test data for the classification task.

      Confusion Matrix          Positive (Actual)   Negative (Actual)
      Positive (Predicted)      103                 6
      Negative (Predicted)      6                   75
      Table 3: Accuracy and precision for the test data for the classification task.

                                Accuracy   Precision   Number of Samples
      CO(GT) classification     63%        60%         190
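      A minimal sketch of these metrics with scikit-learn; `clf`, `X_test`,
      and `y_test` are assumed to come from your own pipeline, and 0.5 is the
      usual decision threshold on a sigmoid output.

      from sklearn.metrics import confusion_matrix, accuracy_score, precision_score

      y_pred = (clf.predict(X_test) > 0.5).astype(int).ravel()   # threshold the sigmoid output

      print(confusion_matrix(y_test, y_pred))     # rows: actual, columns: predicted ([[TN, FP], [FN, TP]])
      print("accuracy :", accuracy_score(y_test, y_pred))
      print("precision:", precision_score(y_test, y_pred))
      print("number of tested samples:", len(y_test))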
      (c) For the regression task, draw two different plots to illustrate your
      results. In the first plot, show how the selected loss function varies
      for both the training and validation sets through the epochs. In the
      second plot, show the final estimation results for the validation set.
      For instance, Figure 3 and Figure 4 show the loss function and the
      network outputs vs the actual NOx(GT) values for a validation set,
      respectively. In Figure 4 no data preprocessing has been performed;
      however, as mentioned above, it is expected that you include this in
      your assignment.

      Figure 3: Loss plot for the regression task.
      Figure 4: Estimated and actual NOx(GT) for the validation set.

      (d) For the regression task, report performance indexes including the
      Root Mean Squared Error (RMSE), the Mean Absolute Error (MAE) (see a
      discussion in [2]), and the number of samples for your estimation of
      NOx(GT) values in a table. The Root Mean Squared Error (RMSE) measures
      the differences between the observed values and the predicted ones and
      is defined as follows:
      RMSE = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2 },    (1)
      where n is the number of samples, Y_i is the actual label and \hat{Y}_i
      is the predicted value. In the same way, MAE can be defined as the
      average of the absolute errors as follows:
      MAE = \frac{1}{n} \sum_{i=1}^{n} |Y_i - \hat{Y}_i|.    (2)
      Table 4 shows an example of the performance indexes (all numbers are
      randomly chosen and may not be consistent with each other). As mentioned
      before, the Sklearn library offers a wide range of metric functions,
      including RMSE
      (https://scikit-learn.org/stable/modules/generated/sklearn.metrics.root_mean_squared_error.html)
      and MAE
      (https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html).
      You can use the Sklearn built-in metric functions to calculate the
      mentioned metrics or develop your own functions; a minimal sketch using
      them is shown after Table 4.
      Table 4: Result table for the test data for the regression task.

      RMSE     MAE      Number of Samples
      90.60    50.35    55
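      A minimal sketch of these metrics with scikit-learn; `reg`, `X_test`,
      and `y_test` are assumed to come from your own pipeline, and
      root_mean_squared_error requires a recent scikit-learn version (with
      older versions, mean_squared_error(..., squared=False) gives the same
      result).

      from sklearn.metrics import root_mean_squared_error, mean_absolute_error

      y_pred = reg.predict(X_test).ravel()

      print("RMSE:", root_mean_squared_error(y_test, y_pred))
      print("MAE :", mean_absolute_error(y_test, y_pred))
      print("number of samples:", len(y_test))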
      3 Testing and discussing your code
      As part of the assignment evaluation, your code will be tested by the
      tutors together with you in a discussion session carried out during the
      tutorial in week 6. The assignment has a total of 25 marks. The
      discussion is mandatory and, therefore, we will not mark any assignment
      that is not discussed with the tutors.
      You are expected to propose and build neural models for the
      classification and regression tasks. The minimal output we expect to see
      is the set of results described in Section 2.4. You will receive marks
      for each of these subsections as shown in Table 5, i.e. 7 marks in
      total. However, it is fine if you want to include any other outcome to
      highlight particular aspects when testing and discussing your code with
      your tutor.
      For marking your results, you should be prepared to simulate your neural
      model with a generalisation set we have set aside for that purpose. You
      must anticipate this by including in your submission a script ready to
      open a file (with the same characteristics as the given dataset but with
      fewer data points), simulate the network, and perform all the validation
      tests described in Section 2.4 (b) and (d) (accuracy, precision, RMSE,
      MAE). It is recommended to save all of your hyper-parameters and weights
      (your model in general) so you can call your network and perform the
      analysis later in your discussion session; a minimal save-and-reload
      sketch is shown below.
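      The sketch below shows one way of saving and reloading the trained
      models and running them on a generalisation file; the file names, the
      Keras saving format, and the `preprocess` helper are assumptions
      standing in for your own code.

      import pandas as pd
      from tensorflow import keras

      # Save the trained models (hyper-parameters and weights) after training.
      clf.save("co_classifier.keras")
      reg.save("nox_regressor.keras")

      # Later, in the discussion session: reload and evaluate on the new file.
      clf = keras.models.load_model("co_classifier.keras")
      reg = keras.models.load_model("nox_regressor.keras")

      gen_df = pd.read_csv("generalisation.csv")        # same characteristics, fewer data points
      X_gen, y_gen_cls, y_gen_reg = preprocess(gen_df)  # your own preprocessing function (placeholder)
      # ...then compute accuracy, precision, RMSE and MAE exactly as in Section 2.4 (b) and (d).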
      For the classification task you need to compute accuracy and precision,
      and for the regression task RMSE and MAE, using the generalisation set.
      You will receive 3 marks for each task, given successful results.
      Expected results should be as follows:

      • For the classification task, your network should achieve at least 85%
      accuracy and precision. Accuracy and precision lower than that will
      result in a score of 0 marks for that specific section.

      • For the regression task, you are expected to achieve an RMSE of at
      most 280 and an MAE of at most 220 for unseen data points. Errors higher
      than the mentioned values will be marked as 0 marks.
      Finally, you will receive 1 mark for code readability for each task, and
      your tutor will also give you a maximum of 5 marks for each task depending
      on the level of code understanding as follows: 5. Outstanding, 4. Great,
      3. Fair, 2. Low, 1. Deficient, 0. No answer.
      Table 5: Marks for each task.

      Task                                                           Marks
      Results obtained with given dataset
        Loss and accuracy plots for classification task              2 marks
        Confusion matrix and accuracy and precision tables for
        classification task                                          2 marks
        Loss and estimated NOx(GT) plots for regression task         2 marks
        Performance indexes table for regression task                1 mark
      Results obtained with generalisation dataset
        Accuracy and precision for classification task               3 marks
        RMSE and MAE for regression task                             3 marks
      Code understanding and discussion
        Code readability for classification task                     1 mark
        Code readability for regression task                         1 mark
        Code understanding and discussion for classification task    5 marks
        Code understanding and discussion for regression task        5 marks
      Total marks                                                    25 marks
      4 Submitting your assignment
      The assignment must be done individually. You must submit your
      assignment solution via Moodle. The submission will consist of a single
      .ipynb Jupyter file. This file should contain all the necessary code for
      reading files, data preprocessing, the network architectures, and the
      result evaluations. Additionally, your file should include short text
      descriptions to help markers better understand your code. Please be
      mindful that providing clean and easy-to-read code is a part of your
      assignment.
      Please indicate your full name and your zID at the top of the file as a
      comment. You can submit as many times as you like before the deadline –
      later submissions overwrite earlier ones. After submitting your file, a
      good practice is to take a screenshot of it for future reference.
      Late submission penalty: UNSW has a standard late submission penalty of
      5% per day off your mark, capped at five days from the assessment
      deadline; after that, students cannot submit the assignment.
      5 Deadline and questions
      Deadline: Week 5, Wednesday, 26 June 2024, 11:55 PM. Please use the
      forum on Moodle to ask questions related to the project. We will
      prioritise questions asked in the forum. However, you should not share
      your code there, to avoid making it public and enabling possible
      plagiarism. In that case, use the course email cs9414@cse.unsw.edu.au
      as an alternative.
      Although we try to answer questions as quickly as possible, we might
      take up to 1 or 2 business days to reply; therefore, last-minute
      questions might not be answered in time.
      6 Plagiarism policy
      Your program must be entirely your own work. Plagiarism detection software
      might be used to compare submissions pairwise (including submissions for
      any similar projects from previous years) and serious penalties will be applied,
      particularly in the case of repeat offences.
      Do not copy from others. Do not allow anyone to see your code.
      Please refer to the UNSW Policy on Academic Honesty and Plagiarism if you
      require further clarification on this matter.
      References
      [1] De Vito, S., Massera, E., Piga, M., Martinotto, L. and Di Francia,
      G., 2008. On field calibration of an electronic nose for benzene
      estimation in an urban pollution monitoring scenario. Sensors and
      Actuators B: Chemical, 129(2), pp. 750-757.
      [2] Hodson, T. O., 2022. Root mean square error (RMSE) or mean absolute
      error (MAE): When to use them or not. Geoscientific Model Development
      Discussions, 2022, pp. 1-10.
