<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE ArticleSet PUBLIC "-//NLM//DTD PubMed 2.7//EN" "https://dtd.nlm.nih.gov/ncbi/pubmed/in/PubMed.dtd">
<ArticleSet>
<Article>
<Journal>
				<PublisherName>Allameh Tabataba'i University Press</PublisherName>
				<JournalTitle>Journal of Mathematics and Modeling in Finance</JournalTitle>
				<Issn>2783-0578</Issn>
				<Volume>5</Volume>
				<Issue>1</Issue>
				<PubDate PubStatus="epublish">
					<Year>2025</Year>
					<Month>07</Month>
					<Day>01</Day>
				</PubDate>
			</Journal>
<ArticleTitle>Comparing the performance of different deep learning architectures for time series forecasting</ArticleTitle>
<VernacularTitle></VernacularTitle>
			<FirstPage>63</FirstPage>
			<LastPage>87</LastPage>
			<ELocationID EIdType="pii">18671</ELocationID>
			
<ELocationID EIdType="doi">10.22054/jmmf.2025.83410.1157</ELocationID>
			
			<Language>EN</Language>
<AuthorList>
<Author>
					<FirstName>Reza</FirstName>
					<LastName>Taleblou</LastName>
<Affiliation>Faculty of Economics, Allameh Tabataba'i University, Tehran, Iran</Affiliation>

</Author>
</AuthorList>
				<PublicationType>Journal Article</PublicationType>
			<History>
				<PubDate PubStatus="received">
					<Year>2024</Year>
					<Month>12</Month>
					<Day>14</Day>
				</PubDate>
			</History>
		<Abstract>In this paper, we evaluate the performance of two classes of deep learning architectures, Recurrent Neural Networks (RNNs) and Transformer-based models, on four commodity-based company indices from the Tehran Stock Exchange. The Transformer-based models used in this study are Autoformer, FEDformer, Informer, and PatchTST, while the RNN-based models are GRU and LSTM. The dataset comprises daily observations from April 20, 2020, to November 20, 2024. To improve the generalization of the models and prevent overfitting, we employ two techniques: splitting the data into training and test samples, and applying regularization methods such as dropout. Hyperparameters for all models were selected using a visual method. Our results indicate that the PatchTST model outperforms the other methods in terms of Root Mean Squared Error (RMSE) for both 1-day and 5-day (1-week) forecasting horizons. The FEDformer model also performs well, particularly for forecasting the MetalOre time series. In contrast, the Autoformer model performs relatively poorly at longer forecasting horizons, while the GRU and LSTM models yield mixed results. These findings underscore the significant impact of model selection and forecasting horizon on the accuracy of time series forecasts, and emphasize the importance of careful model choice and hyperparameter tuning for achieving optimal performance.</Abstract>
		<ObjectList>
			<Object Type="keyword">
			<Param Name="value">Time Series Forecasting</Param>
			</Object>
			<Object Type="keyword">
			<Param Name="value">Deep Learning</Param>
			</Object>
			<Object Type="keyword">
			<Param Name="value">Recurrent Neural Network</Param>
			</Object>
			<Object Type="keyword">
			<Param Name="value">Long Short-Term Memory</Param>
			</Object>
			<Object Type="keyword">
			<Param Name="value">Gated Recurrent Unit</Param>
			</Object>
			<Object Type="keyword">
			<Param Name="value">Transformer Architecture</Param>
			</Object>
		</ObjectList>
<ArchiveCopySource DocType="pdf">https://jmmf.atu.ac.ir/article_18671_e0c6b51659aba3765b1e0b22c6d5ba10.pdf</ArchiveCopySource>
</Article>
</ArticleSet>
