CSICC 2006
Behavioral Partitioning of the State Space by a Competitive Method for Local Experts¹

Arash Andalib, Sepanta Robotics and Artificial Intelligence Research Foundation (a.andalib@srrf.net)
Mohammad Reza Zakeri Nasab, Sepanta Robotics and Artificial Intelligence Research Foundation (r.zakeri@srrf.net)
Mohammad Mahdi Keramati, ECE Department, University of Tehran
Mohammad Hossein Rahban, Sharif University of Technology (rahban@ce.sharif.edu)

Abstract: In this paper, a new method is proposed for applying local learning to multivariable nonlinear problems. In this method, the state space is taught to several experts. The experts gradually identify local behaviors and eventually acquire the expertise needed to model one or more subspaces, or regions, of the system. To partition the space, samples are applied to the experts and, in each case, the expert that produces the better response is encouraged to learn not only that sample but also the space around it. Each expert competes with the other experts to keep and extend its specialized subspace, so that at the end of training the specialized territory of each expert is determined by the distinct behaviors present in the state space, and that expert takes responsibility for that part. Once the experts are trained, a selector, whose task is to map the state space to the specialized experts, is trained. The architecture has been used to fit a piecewise curve; its response in several regions matches the original function exactly.

Keywords: local learning, competitive learning, local experts, neural networks.

1. Introduction

To model multivariable nonlinear systems whose state space has a high degree of complexity, a single learning agent is usually not enough. Since the behavior of such systems is in many cases non-stationary, using a single modeler, as in general learning methods, increases the complexity of the system and slows down the learning process [1,2]. Moreover, if the system is used as a predictor, its generalization error grows [3]. To overcome these problems, several modelers can be used, each of which has become expert in a particular region. Methods of this kind, which tune the parameters of the learning system according to local features of the state space in order to ease modeling, are called local learning [1]. The idea is inspired by the divide-and-conquer principle [2,4]: a problem is faced by splitting it into a number of simpler subproblems whose answers are then collected into the final answer, thereby overcoming the complexity of the problem [5].

In conventional local learning approaches, the state space of the problem is partitioned according to the density of the data distribution, with no attention to its behavioral features. Some of these approaches are computationally expensive. For example, the k-nearest-neighbor and RBF algorithms, both members of the family of kernel-based algorithms [1], suffer from computational-complexity problems in the learning and the usage phase, respectively [6-9]. Another weakness of conventional local learning methods is that region boundaries are fixed before the ability or performance of the experts is known. Typically, an unsupervised partitioner of the state space, such as an SOM network, is used to divide the state space into disjoint regions, after which the partitioner hands each region to one of the experts [4]. This scheme has two major problems. First, at the boundary between two regions the performance of the two adjacent experts may differ considerably, so the behavior of the global model is disturbed when crossing these boundaries. Second, there is in principle no direct relation between the complexity of a system's behavior in a region and the density of the state-space data in that region. One expert may therefore never become able to model its assigned region while, at the same time, another expert uses only part of its learning capacity. The usual remedy is to design the experts so that all of them have a high learning capacity, but this approach carries a large computational overhead of its own and raises the computational complexity [10].

Two better solutions to the problem of distributing regions according to the experts' abilities are hierarchical local learning [2,4] and the use of constructive methods for building the experts [10]. Each of these two approaches covers part of the existing problems but also has drawbacks of its own.

¹ This research was supported by the Sepanta Robotics and Artificial Intelligence Research Foundation.
Hierarchical local learning performs the partitioning as nested regions, splitting any region that turns out to be more complex than the capacity of its expert into two [2]. In the second approach, complex constructive methods [10] are used to design, from the observed samples of the system's behavior, the expert needed for modeling each region [11]. Both methods, however, still suffer from the problem of fixed region boundaries.

In some other methods every expert estimates the entire state space, and the response of the system is a weighted sum of the experts' responses; the weighting is based on the accuracy of the experts at the points surrounding the input sample in the problem's state space [12,13]. Because every expert tries to model the whole state space, the accuracy of each one in modeling local regions drops. Another problem of this approach is the correlation between the experts: the effort of one expert to improve its performance affects the other experts negatively [13].

The rest of the paper introduces a method called behavioral partitioning. In this method every expert initially presents a random model. Given its initial model, each expert estimates the behavior of the system better than the rest at some points and, under the proposed method, takes over modeling the neighborhood of those points as well. Regions in the state space take shape through a gradual process, and then, through a competitive process, the region boundaries move so that in the end every region covers a subspace with a distinct behavior. Because the assignment of regions is competitive, the size and the geometric shape of each region change according to the ability of each expert.

2. Behavioral Partitioning

The architecture of the system consists of a number of experts and an agent whose task is to route each input correctly to the specialist expert; from here on we call this agent the selector. The overall architecture of the system is shown in Figure 1.

The system is developed in three stages; the first two form the learning process and the third the usage process. In the first step the experts compete with one another to model and capture different regions of the state space. At this stage the selector plays no role in the partitioning or in the experts' learning of the regions: every sample is applied directly to all experts, and one of the experts, according to a specific criterion, demonstrates the greatest ability. The criterion expresses the similarity between the behavior of the system and the behavior modeled by each expert around the input sample. The process proceeds so that, at its end, every expert has assumed responsibility for modeling one part of the state space. Since the experts learn while the partitioning is carried out, the region boundaries are not fixed in advance; they take shape through a gradual process. In the second stage, given the region captured by each expert, the selector learns the mapping from the state space to the experts by means of a supervised learning algorithm. That is, after this learning process the selector can refer every input sample to the expert in whose region of expertise the sample lies. After learning is complete, in the third stage the samples are given to the selector, which refers each sample to its specialist expert, and that expert produces the output answer.

Figure 1. Overall architecture of the system: the input X is applied to Expert 1 through Expert n and to the Selector, which selects among the outputs O1 through On.

Each of the units of this architecture may be a particular learning system, such as a multilayer perceptron (MLP) neural network, an RBF network, an SVM, or some other learner. In this paper, without harming the generality of the discussion, MLP neural networks are used both as the expert systems and as the selector.

2-1. System learning algorithm

The learning mechanism of the system is inspired by Kohonen's competitive networks. In a Kohonen network a number of vectors are first chosen at random in the state space; at the end of the training process each of these vectors is expected to represent one cluster of the data in the state space. After the random selection, in an iterative process, for every input sample the vector with the smallest Euclidean distance to it is recognized as the winning vector. This vector is reinforced so that it moves closer to the input sample; as a result, the degree to which the input sample belongs to the winning neuron's vector increases.

In the proposed system, likewise, each expert initially models a random function; this randomness is realized by random initialization of the weights of the MLP networks. Then, in an iterative process, for every input sample the expert that produces the best answer according to a specific error function is declared the winner. The winning expert is identified through the following relation:

    i(X) = argmin_j Err_j(X),   j = 1, 2, ..., n        (1)

Here i(X) is the index of the winning expert, Err_j(X) is the error of the j-th expert for the input sample X, and n is the number of experts.
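To make relation (1) concrete, here is a minimal sketch of the winner selection, assuming the Gaussian-weighted local error that the paper defines next (relations (2) and (3)) and a squared pointwise error; all names are ours, for illustration only, not code from the paper.

```python
import math

def winning_expert(x, samples, experts, r1=3.0, sigma2=3.0):
    """Pick i(X) = argmin_j Err_j(X) over the experts (relation (1)).

    samples : list of (k, y_k) pairs drawn from the target function
    experts : list of callables, k -> prediction of expert j
    Err_j is a Gaussian-weighted sum of pointwise errors over the
    samples lying within the behavior-measurement radius r1 of x."""
    def local_error(expert):
        err = 0.0
        for k, y_k in samples:
            if abs(k - x) < r1:  # K belongs to the neighborhood H_X
                weight = math.exp(-((k - x) ** 2) / (2.0 * sigma2))
                err += weight * (y_k - expert(k)) ** 2  # pointwise error E
        return err
    errors = [local_error(e) for e in experts]
    return min(range(len(experts)), key=errors.__getitem__)
```

With one-dimensional inputs, `abs(k - x)` stands in for the Euclidean distance between X and K; a vector version would substitute a norm.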
Err_j(X) is obtained from the following relations:

    H_X = { K ∈ S : ||X − K|| < r1 }        (2)

    Err_j(X) = Σ_{K ∈ H_X} exp(−||K − X||² / 2σ²) · E(Y_K, Expert_j(K))        (3)

In relation (2), the samples lying within a neighborhood of radius r1 (the behavior-measurement radius) around the input sample X are collected in the set H_X; S is the domain of the data set. In relation (3), Err_j(X) is computed as a weighted error over the neighborhood of radius r1 centered at X. Here E is the pointwise error function, and the factor exp(−||K − X||² / 2σ²) is a coefficient that applies a Gaussian weighting based on the distance of a sample from the center of the interval. Y_K is the desired output, and Expert_j(K) is the output of the j-th expert for the vector K.

Once the winning expert is determined, the input sample and all samples lying within radius r2 (the learning radius) of it are taught to the winning expert m times (the reinforcement rate). The parameter r1 is defined because behavior must be recognized over an interval rather than at a single point. The purpose of r2 is to give the winning expert extra help in modeling the behavior in the region around the sample; it also increases the expert's chance of winning subsequent competitions in the same region. Both parameters rest on the assumption that samples of the target function within a sufficiently small neighborhood belong to a single region; ignoring this point causes disturbances at the boundaries between regions. The parameter m, in turn, considerably strengthens the winning network's advantage in learning one particular behavior.

While this process runs, the regions that each expert claims for itself are at first scattered, subject to change, and therefore undesirably distributed. Through the competitive process, the boundaries of each expert's regions move, this distribution approaches the desired one, and the process continues until the region boundaries become stable. At that point the training of the experts ends, and each expert has learned the behavior of the system in the region it has claimed.

The next stage is training the selector to learn the mapping from the state space to the experts. In this stage every sample is applied to the trained experts of the competitive layer and the winning expert is identified; the index of the winning expert is used as the selector's desired output for that input sample. This is repeated for all samples, over several passes, until sufficient accuracy is reached in mapping the state space to the experts.

2-2. System usage algorithm

After training ends, whenever a sample enters the system, the selector first identifies the expert that has claimed the region containing the sample and refers the input to it; that expert then produces the final answer. The procedure can be written as:

    O_Final = Expert_Selector(X)(X)        (4)

Figure 2 gives a summary description of the training and testing algorithms of the system.

Figure 2. Description of the training and testing algorithms of the system.

3. Case study and results

The proposed method was applied to fitting a four-piece function, shown in Figures 3 and 5 as the target function. Since the function is defined piecewise over different intervals, several distinct local behaviors exist, so the chosen problem has the characteristics required for applying this method.

To solve the problem with the presented architecture, six three-layer perceptron networks (1:5:1) were used as the competitive-layer networks and one three-layer perceptron (1:9:1) as the selector. All networks are trained with error backpropagation. The error-measurement radius is set equal to the learning radius, both being three. To compute the error criterion that identifies the superior network for a sample X, the weighted error of each network is computed in a neighborhood of X; the weighting function is a Gaussian centered at X with σ² = 3. The reinforcement rate is set to 10.

Figure 3 compares the results of the implemented system with the target function. The MSE obtained by the proposed method is 0.031. As can be seen, in the first two regions the modeled function matches the desired function exactly. The functions obtained by the competitive-layer networks are shown in Figure 4.
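The competitive loop of Section 2-1 that produced these models can be sketched compactly; here `select_winner` stands for relation (1) and `fit_one` for a single supervised update (e.g., one backpropagation step of an MLP), and every name is ours, not the paper's.

```python
import random

def train_competitively(samples, experts, select_winner, fit_one,
                        r2=3.0, m=10, epochs=20, seed=0):
    """Sketch of the competitive learning loop (our reading of Section 2-1).

    For every training sample x, the expert chosen by select_winner(x)
    (relation (1)) is reinforced: it alone is trained, m times (the
    reinforcement rate), on every sample within the learning radius r2
    of x, via the single-step update fit_one(expert, k, y)."""
    rng = random.Random(seed)
    order = list(samples)
    for _ in range(epochs):  # repeat until region boundaries stabilize
        rng.shuffle(order)
        for x, _y in order:
            j = select_winner(x)  # competitive step, relation (1)
            for k, y in samples:
                if abs(k - x) < r2:  # learning neighborhood of x
                    for _ in range(m):
                        fit_one(experts[j], k, y)
    return experts
```

A fixed epoch count replaces the paper's stopping test (stable region boundaries) purely to keep the sketch short.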
Each network's response is plotted over the whole domain of the function. Of the six networks, four have modeled the four different regions; the other two competitive networks played no part in producing the final answer.

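The deployed system that produced these results runs the third stage of Section 2: the selector's training targets are winner indices, and each test sample is then answered by the one expert the selector points at (relation (4)). A minimal sketch, with placeholder callables of ours:

```python
def selector_training_set(samples, select_winner):
    """Second stage, sketched: the selector's desired output for each
    input x is simply the index of the winning expert, so its training
    set is a list of (x, winner_index) pairs."""
    return [(x, select_winner(x)) for x, _y in samples]

def final_output(x, selector, experts):
    """Usage phase, relation (4): O_Final = Expert_Selector(X)(X)."""
    return experts[selector(x)](x)
```

In the paper both the selector and the experts are MLPs; plain callables stand in for them here.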
Figure 3. Curves describing the problem: the target function and the output of local learning.

For comparison, the target function was also modeled with a single four-layer perceptron network (1:10:10:1). The output of this network is shown in Figure 5; it can be seen that this network is unable to model the first two regions simultaneously. The MSE obtained with this approach is 0.055, higher than the error of the presented method.

Figure 4. Functions obtained by the competitive-layer networks: the global model and networks one through four, each plotted over the whole domain.

Figure 5. Curves describing the problem: the target function and the output of general (global) learning.

4. Conclusion

According to the obtained results, partitioning carried out on the basis of behavior identifies regions that can easily be modeled by learning systems. The method increases the performance of the system and reduces the complexity of the local modelers. Moreover, the disturbance at the boundaries between regions, which existed in the earlier methods, is removed by the definition of the learning radius.

5. References

[1] L. Bottou and V. Vapnik, "Local learning algorithms," Neural Computation, vol. 4, pp. 888-901, 1992.
[2] M. I. Jordan and R. A. Jacobs, "Hierarchical mixtures of experts and the EM algorithm," Neural Computation, vol. 6, pp. 181-214, 1994.
[3] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, 1999.
[4] L. Cao, "Support vector machines experts for time series forecasting," Neurocomputing, vol. 51, pp. 321-339, 2003.
[5] T. Cormen, C. Leiserson, R. Rivest, and C. Stein, Introduction to Algorithms, McGraw-Hill, 2001.
[6] D. Wettschereck and T. G. Dietterich, "Locally adaptive nearest neighbor algorithms," Neural Information Processing Systems, vol. 6, pp. 184-191, 1994.
[7] J. McNames, Innovations in Local Modeling for Time Series Prediction, PhD thesis, Stanford University, 1999.
[8] R. Murray-Smith and T. A. Johansen, "Local learning in local model networks," Proc. 4th IEE Int. Conf. on Artificial Neural Networks, pp. 40-46, 1995.
[9] F. W. Op 't Landt, Stock Price Prediction Using Neural Networks, Master's thesis, Leiden University, 1997.
[10] C. Giles, D. Chen, G.-Z. Sun, H.-H. Chen, Y.-C. Lee, and M. Goudreau, "Constructive learning of recurrent neural networks: Limitations of recurrent cascade correlation and a simple solution," IEEE Transactions on Neural Networks, vol. 6, pp. 829-836, 1995.
[11] R. Murray-Smith and H. Gollee, "A constructive learning algorithm for local model networks," Proc. IEEE Workshop on Computer-Intensive Methods in Control and Signal Processing, Prague, Czech Republic, pp. 21-29, 1994.
[12] L. C. Kiong, M. Rajeswari, and M. V. C. Rao, "Extrapolation detection and novelty-based node insertion for sequential growing multi-experts network," Applied Soft Computing, vol. 3, pp. 159-175, 2003.
[13] R. A. Jacobs and S. J. Nowlan, "Adaptive mixtures of local experts," Neural Computation, vol. 3, no. 1, pp. 79-87, 1991.
