History
The Markov chain was proposed by the Russian mathematician Andrey Markov (Андрей Андреевич Марков). In research published between 1906 and 1907, in order to show that independence between random variables is not a necessary condition for the weak law of large numbers and the central limit theorem to hold, Markov constructed a stochastic process in which the conditional probabilities of successive random variables depend on one another, and proved that under certain conditions it converges to a set of vectors. This stochastic process was later named the Markov chain.
After the Markov chain was proposed, Paul Ehrenfest and Tatiana Afanasyeva used it in 1907 to build the Ehrenfest model of diffusion. In 1912, Jules Henri Poincaré studied Markov chains on finite groups and obtained the Poincaré inequality.
In 1931, Andrey Kolmogorov (Андрей Николаевич Колмогоров) extended the Markov chain to a continuous index set while studying the diffusion problem, obtaining the continuous-time Markov chain, and introduced a formula for computing its joint distribution. Independently of Kolmogorov, Sydney Chapman had obtained the same formula in 1926 while studying Brownian motion; it is now known as the Chapman–Kolmogorov equation.
In 1953, Nicholas Metropolis et al. performed random simulation of a fluid target distribution function by constructing a Markov chain. The method was further improved by Wilfred K. Hastings in 1970 and developed into the present-day Metropolis–Hastings algorithm.
In 1957, Richard Bellman first proposed the Markov decision process (MDP) through a discrete stochastic optimal control model.
Between 1959 and 1962, the Soviet mathematician Eugene Borisovich Dynkin refined Kolmogorov's theory and used the Dynkin formula to connect stationary Markov processes with martingale processes.
Building on Markov chains, more complex Markov models (such as hidden Markov models and Markov random fields) have since been proposed and applied to practical problems such as pattern recognition.
Definition
A Markov chain is a collection of discrete random variables with the Markov property. Specifically, let $X = \{X_n : n \in \mathbb{N}\}$ be a collection of random variables in a probability space whose index set is a one-dimensional countable set. If the values of the random variables all lie in a countable set $S$: $X_n = s_i,\ s_i \in S$, and the conditional probabilities of the random variables satisfy the following relationship:
$$P(X_{n+1} = s_{i_{n+1}} \mid X_0 = s_{i_0}, X_1 = s_{i_1}, \ldots, X_n = s_{i_n}) = P(X_{n+1} = s_{i_{n+1}} \mid X_n = s_{i_n}),$$
then $X$ is called a Markov chain, the countable set $S$ is called the state space, and the values taken by the Markov chain in the state space are called states. The Markov chain defined here is the discrete-time Markov chain (Discrete-Time MC, DTMC); its counterpart with a continuous index set is called the continuous-time Markov chain (Continuous-Time MC, CTMC), which is in essence a Markov process. The index set of a Markov chain is commonly referred to as "steps" or "time-steps".
The above formula defines the Markov property while defining the Markov chain. This property is also called "memorylessness": given step $t$, the random variable at step $t+1$ is conditionally independent of all earlier random variables, $P(X_{t+1} \mid X_t, X_{t-1}, \ldots, X_0) = P(X_{t+1} \mid X_t)$. On this basis, a Markov chain also has the strong Markov property: for any stopping time, the states of the Markov chain before and after the stopping time are independent of each other.
Illustrative example
A common example of a Markov chain is a simplified model of stock fluctuations: if a stock rises on a given day, then tomorrow it starts to fall with probability p and continues to rise with probability 1−p; if the stock falls on a given day, then tomorrow it starts to rise with probability q and continues to fall with probability 1−q. The rise and fall of the stock form a Markov chain, and the concepts in the definition correspond to the example as follows (a simulation sketch of this two-state chain follows the list):
Random variable: the state of the stock on day n; state space: "rising" and "falling"; index set: the number of days.
Conditional probability relationship: by definition, even if the complete history of the stock is known, whether it rises or falls on a given day depends only on its state on the previous day.
Memorylessness: the stock's performance on a given day is related only to the previous day and has nothing to do with the rest of its history (the conditional probability relationship and memorylessness are defined simultaneously).
Independence of the states before and after a stopping time: take the stock's record of rises and falls and cut a segment out of it; we cannot tell where the segment was cut, because the cut point is a stopping time t, and the records before and after t (at t−1 and t+1) do not depend on each other.
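The following Python sketch simulates the two-state chain described above; the concrete values of p and q and the function name are illustrative assumptions rather than values given in the text.

```python
import random

# A minimal sketch of the simplified stock model described above.
# The probabilities p and q are illustrative assumptions, not values from the text.
P_FALL_AFTER_RISE = 0.4   # p: probability of falling tomorrow after rising today
P_RISE_AFTER_FALL = 0.3   # q: probability of rising tomorrow after falling today

def simulate_stock(days, start="rising", seed=0):
    """Simulate the two-state 'rising'/'falling' Markov chain for a number of days."""
    rng = random.Random(seed)
    state = start
    path = [state]
    for _ in range(days - 1):
        if state == "rising":
            state = "falling" if rng.random() < P_FALL_AFTER_RISE else "rising"
        else:
            state = "rising" if rng.random() < P_RISE_AFTER_FALL else "falling"
        path.append(state)
    return path

print(simulate_stock(10))
```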
n-th order Markov chain
An n-th order Markov chain has n-th order memory and can be regarded as a generalization of the Markov chain. By analogy with the definition of the Markov chain, an n-th order Markov chain satisfies the following condition:
$$P(X_t = s_{i_t} \mid X_{t-1} = s_{i_{t-1}}, \ldots, X_1 = s_{i_1}) = P(X_t = s_{i_t} \mid X_{t-1} = s_{i_{t-1}}, \ldots, X_{t-n} = s_{i_{t-n}}), \qquad t > n.$$
According to the above formula, the traditional Markov chain can be regarded as a 1st-order Markov chain. By the Markov property, an n-th order Markov chain can be converted into an ordinary Markov chain by grouping several consecutive random variables into a single vector-valued component.
Theory and properties
Transition theory
The change over time of the states of the random variables in a Markov chain is called evolution or transition. Here we introduce two ways of describing the structure of a Markov chain, namely the transition matrix and the transition graph, and define the properties a Markov chain exhibits during the transition process.
Transition probability and transition matrix
Main article: Transition matrix
The conditional probabilities between the random variables of a Markov chain can be defined as the following (single-step) transition probability and n-step transition probability:
$$p_{ij} = P(X_{t+1} = s_j \mid X_t = s_i), \qquad p_{ij}^{(n)} = P(X_{t+n} = s_j \mid X_t = s_i),$$
where the superscript $(n)$ indicates an n-step transition. According to the Markov property, once the initial probability is given, the product of successive transition probabilities represents the finite-dimensional distribution of the Markov chain:
$$P(X_0 = s_{i_0}, X_1 = s_{i_1}, \ldots, X_t = s_{i_t}) = P(X_0 = s_{i_0}) \prod_{k=1}^{t} p_{i_{k-1} i_k}.$$
Here $s_{i_0}, s_{i_1}, \ldots, s_{i_t}$ is a sample path, i.e. the value taken by the Markov chain at each step. For the n-step transition probability, the Chapman–Kolmogorov equation shows that its value is the sum over all sample paths:
$$p_{ij}^{(n)} = \sum_{k \in S} p_{ik}^{(n-m)}\, p_{kj}^{(m)}, \qquad 0 < m < n.$$
The above equation shows that evolving a Markov chain directly by n steps is equivalent to first evolving it by $n-m$ steps and then by $m$ more steps, summing over all intermediate states $k$ in the state space of the Markov chain. The product of the n-step transition probability and the initial probability is called the absolute probability of the state.
If the state space of a Markov chain is finite, the transition probabilities of all states under single-step evolution can be arranged in a matrix, giving the transition matrix:
$$P = (p_{ij})_{i, j \in S}.$$
The transition matrix of a Markov chain is a right stochastic matrix: row $i$ of the matrix is the distribution over all possible next states given the current state $s_i$ (a discrete distribution), so the Markov chain completely determines the transition matrix, and the transition matrix in turn completely determines the Markov chain. By the properties of probability distributions, the transition matrix is a non-negative matrix and the sum of the elements in each row equals 1: $\sum_{j} p_{ij} = 1$. The n-step transition matrix can be defined in the same way: $P^{(n)} = (p_{ij}^{(n)})$. From the property of the n-step transition probability (the Chapman–Kolmogorov equation), the n-step transition matrix is the repeated matrix product of the preceding single-step transition matrices: $P^{(n)} = P^{(n-1)} P = \cdots = P^{n}$.
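As a numerical illustration, the following numpy sketch builds a transition matrix for the two-state stock model above (with assumed probabilities), checks the row-sum property, and computes an n-step transition matrix and the absolute probabilities by matrix powers.

```python
import numpy as np

# A small sketch illustrating the transition matrix of the two-state stock model
# above (states 0 = "rising", 1 = "falling"); the numbers are illustrative assumptions.
P = np.array([[0.6, 0.4],    # row i: distribution of the next state given state i
              [0.3, 0.7]])

assert np.allclose(P.sum(axis=1), 1.0)   # each row of a right stochastic matrix sums to 1

# Chapman-Kolmogorov: the n-step transition matrix is the n-th matrix power of P.
n = 5
P_n = np.linalg.matrix_power(P, n)

# Absolute (marginal) probabilities after n steps, for a given initial distribution.
initial = np.array([1.0, 0.0])           # start in the "rising" state
print(P_n)
print(initial @ P_n)
```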
Transition graph
1. Reachability and communication
The evolution of a Markov chain can be represented as a transition graph, in which each edge is assigned a transition probability. The concepts of "reachable" and "communicating" can be introduced through the transition graph:
If for states $s_i, s_j$ of the Markov chain there is some $n$ with $p_{ij}^{(n)} > 0$, that is, all transition probabilities along some sample path are non-zero, then $s_j$ is reachable from $s_i$, which is represented in the transition graph as a directed connection: $s_i \to s_j$. If $s_i$ and $s_j$ are reachable from each other, the two communicate, forming a closed loop in the transition graph, denoted $s_i \leftrightarrow s_j$. By definition, reachability and communication can be indirect, that is, they need not be completed in a single time step.
Communication is an equivalence relation, so equivalence classes can be constructed. In a Markov chain, an equivalence class containing as many states as possible is called a communicating class.
2. Closed sets and absorbing states
Given a subset $A$ of the state space such that, once the Markov chain enters $A$, it cannot leave it, i.e. $p_{ij} = 0$ for all $s_i \in A$ and $s_j \notin A$, the subset $A$ is closed and is called a closed set; no state outside a closed set is reachable from it. If a closed set contains only one state, that state is an absorbing state, shown in the transition graph as a self-loop with probability 1. A closed set can contain one or more communicating classes.
3. Example of a transition graph
Here an example of a transition graph is used to illustrate the above concepts:
By definition it can be seen that this transition graph contains three communicating classes, three closed sets, and one absorbing state, state 6. Note that in the above transition graph, the Markov chain eventually enters the absorbing state from any starting state; this type of Markov chain is called an absorbing Markov chain.
Properties
Here we define four properties of Markov chains: irreducibility, recurrence, periodicity and ergodicity. Unlike the Markov property, these are not necessarily properties of the Markov chain as a whole, but properties that it exhibits with respect to its states during the transition process. Each property excludes its opposite; that is, a Markov chain that is not reducible is necessarily irreducible, and so on.
Irreducibility
If the state space of a Markov chain consists of a single communicating class, that is, all members of the state space communicate with one another, then the Markov chain is irreducible; otherwise the Markov chain is reducible. Irreducibility of a Markov chain means that during its evolution the random variable can move between any two states.
Recurrence
If, having reached a state, the Markov chain can return to that state repeatedly during its evolution, the state is a recurrent state, or the Markov chain has (local) recurrence; otherwise the state is transient (the chain has transience at that state). Formally, for a state $s_i$ in the state space, the first return time of the Markov chain to that state is the infimum of all possible return times:
$$T_i = \inf\{ t \ge 1 : X_t = s_i \}, \qquad X_0 = s_i.$$
If such a return is impossible, the state has neither transience nor recurrence; if it is possible, the criteria for judging the transience and recurrence of the state are as follows:
$$\text{recurrent: } P(T_i < +\infty) = 1, \qquad \text{transient: } P(T_i < +\infty) < 1.$$
As the time step tends to infinity, the return probability of a recurrent state, i.e. the expected total number of visits, also tends to infinity:
$$\sum_{n=1}^{\infty} p_{ii}^{(n)} = +\infty.$$
In addition, if the state is recurrent, its mean recurrence time can be calculated:
$$\mu_i = E[T_i] = \sum_{t=1}^{\infty} t \, P(T_i = t).$$
If the mean recurrence time is finite, $\mu_i < +\infty$, the state is "positive recurrent"; otherwise it is "null recurrent". If a state is null recurrent, the expected time between two consecutive visits of the Markov chain to that state is positive infinity.
From the above definitions of transience and recurrence, the following corollaries can be made (a simulation sketch estimating the mean recurrence time follows the list):
Corollary: a Markov chain with a finite number of states has at least one recurrent state, and all of its recurrent states are positive recurrent.
Corollary: if a Markov chain with a finite number of states is irreducible, then all of its states are positive recurrent.
Corollary: if state A is recurrent and state B is reachable from A, then A and B communicate and B is recurrent.
Corollary: if state B is reachable from state A and B is an absorbing state, then B is a recurrent state and A is a transient state.
Corollary: the set formed by the positive recurrent states is a closed set, but a state in a closed set is not necessarily a recurrent state.
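The following Python sketch estimates the mean recurrence time $E[T_i]$ of a state by repeated simulation; the two-state transition probabilities are the same illustrative values used earlier, not values from the text.

```python
import random

# A minimal sketch, assuming the illustrative two-state transition matrix used earlier,
# that estimates the mean recurrence time E[T_i] of a state by simulation.
P = {0: [(0, 0.6), (1, 0.4)],   # state 0: stay with prob 0.6, move to 1 with prob 0.4
     1: [(0, 0.3), (1, 0.7)]}

def step(state, rng):
    """Draw the next state from the transition distribution of the current state."""
    u, acc = rng.random(), 0.0
    for nxt, p in P[state]:
        acc += p
        if u < acc:
            return nxt
    return P[state][-1][0]

def mean_recurrence_time(state, trials=100_000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        s, t = step(state, rng), 1
        while s != state:          # first return time T_i
            s, t = step(s, rng), t + 1
        total += t
    return total / trials

# For a positive recurrent state the estimate approaches 1 / pi_i (see the ergodic theorem).
print(mean_recurrence_time(0))
```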
Periodicity
A positive recurrent Markov chain may be periodic, that is, during its evolution the Markov chain can return to a given state only with a period greater than 1. Formally, given a positive recurrent state $s_i$, its return period is calculated as follows:
$$d = \gcd\{ n > 0 : p_{ii}^{(n)} > 0 \},$$
where $\gcd$ denotes the greatest common divisor of the elements of the set. For example, if in a transition graph the numbers of steps after which the Markov chain can return to a certain state are all multiples of 3 (3, 6, 9, ...), then its period is 3, which is also the minimum number of steps required to return to that state. If the above formula gives $d > 1$, the state is periodic; if $d = 1$, the state is aperiodic. From the definition of periodicity, the following corollaries can be made (a short sketch computing the period of a state follows the list):
Corollary: an absorbing state is an aperiodic state.
Corollary: if states A and B communicate, then A and B have the same period.
Corollary: if an irreducible Markov chain has a periodic state A, then all states of the Markov chain are periodic.
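A minimal sketch of the period computation, taking the gcd of the step counts at which a state can return to itself; the three-state cyclic matrix is an illustrative assumption.

```python
from math import gcd
import numpy as np

def period(P, i, max_steps=50):
    """A simple sketch: the period of state i is the gcd of all step counts n
    with (P^n)[i, i] > 0, here approximated by checking n up to max_steps."""
    d = 0
    Pn = np.eye(len(P))
    for n in range(1, max_steps + 1):
        Pn = Pn @ P
        if Pn[i, i] > 0:
            d = gcd(d, n)
    return d

# Illustrative 3-state chain that cycles 0 -> 1 -> 2 -> 0, so every state has period 3.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
print(period(P, 0))   # 3
```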
Ergodicity
If a state of a Markov chain is positive recurrent and aperiodic, the state is ergodic. If a Markov chain is irreducible and one of its states is ergodic, then all states of the Markov chain are ergodic, and the chain is called an ergodic chain. From the above definition, ergodicity has the following corollaries:
Corollary: if state A is an absorbing state and A is reachable from state B, then A is an ergodic state and B is not an ergodic state.
Corollary: if a Markov chain with more than one state contains an absorbing state, then the Markov chain is not an ergodic chain.
Corollary: if a Markov chain with more than one state forms a directed acyclic graph, or a single closed loop, then the Markov chain is not an ergodic chain.
An ergodic chain is an aperiodic Markov chain that exhibits steady-state behaviour on long time scales, so it is the type of Markov chain that has been widely studied and applied.
Steady-state analysis
Here we describe the behaviour of a Markov chain on long time scales, namely the stationary distribution and the limiting distribution, and define the stationary Markov chain.
Stationary distribution
Given a Markov chain, if there is a probability distribution $\pi$ over its state space that satisfies the following condition:
$$\pi(s_j) = \sum_{s_i \in S} \pi(s_i)\, p_{ij}, \qquad \text{i.e. } \pi = \pi P,$$
then $\pi$ is a stationary distribution of the Markov chain, where $P$ and $p_{ij}$ are the transition matrix and the transition probabilities. The system of linear equations on the right-hand side of the equivalence is called the balance equations. Further, if a stationary distribution of the Markov chain exists and its initial distribution is a stationary distribution, then the Markov chain is in a steady state. From a geometric point of view, since the components of $\pi$ are non-negative and sum to 1, the stationary distribution lies on a standard simplex.
For an irreducible Markov chain, the chain is positive recurrent if and only if it has a unique stationary distribution, that is, if and only if the balance equations have a unique solution on the probability simplex, and the stationary distribution is expressed as follows:
$$\pi(s_i) = \frac{1}{\mu_i} = \frac{1}{E[T_i]}.$$
The above conclusion is called the stationary distribution criterion. For an irreducible and recurrent Markov chain, solving the balance equations yields the solution that is unique up to scale, i.e. the invariant measure. If the Markov chain is positive recurrent, its stationary distribution is the eigenvector of the transition matrix for eigenvalue 1, obtained by solving the balance equations, i.e. the invariant measure after normalization. Therefore, a necessary and sufficient condition for a Markov chain to have a stationary distribution is that it has a positive recurrent state. In addition, examples show that if a Markov chain contains several communicating classes consisting of positive recurrent states (by the corollary above these are all closed sets, so the Markov chain is not irreducible), then each communicating class has its own stationary distribution, and the steady state reached during the evolution depends on the initial distribution.
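For a finite chain the balance equations can be solved numerically as a left-eigenvector problem. A minimal numpy sketch, reusing the illustrative two-state matrix:

```python
import numpy as np

# A minimal sketch: compute the stationary distribution pi with pi = pi P
# as the left eigenvector of P for eigenvalue 1 (illustrative matrix, as before).
P = np.array([[0.6, 0.4],
              [0.3, 0.7]])

eigvals, eigvecs = np.linalg.eig(P.T)          # left eigenvectors of P = right eigenvectors of P^T
k = np.argmin(np.abs(eigvals - 1.0))           # index of the eigenvalue closest to 1
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()                             # normalize the invariant measure to a distribution

print(pi)                      # approximately [3/7, 4/7]
print(pi @ P)                  # equals pi, confirming the balance equations
```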
Limiting distribution
If there is a probability distribution $\pi$ over the state space of a Markov chain that satisfies the following relationship:
$$\lim_{n \to \infty} P(X_n = s_i \mid X_0 = s_j) = \pi(s_i) \qquad \text{for every initial state } s_j,$$
then that distribution is the limiting distribution of the Markov chain. Note that the definition of the limiting distribution does not depend on the initial distribution, that is, for any initial distribution, as the time step tends to infinity the probability distribution of the random variable tends to the limiting distribution. By definition, a limiting distribution is necessarily a stationary distribution, but the converse does not hold. For example, a periodic Markov chain may have a stationary distribution, but a periodic Markov chain does not converge to any distribution, so its stationary distribution is not a limiting distribution.
1. Limit theorem
Consider two independent aperiodic stationary Markov chains, i.e. ergodic chains, with the same transition matrix; as the time step tends to infinity, the difference between their distributions tends to zero. Using the coupling technique from the theory of stochastic processes, this conclusion is expressed as follows: for ergodic chains $X$ and $Y$ on the same state space, given arbitrary initial distributions, we have
$$\lim_{n \to \infty} \sup_{s_i \in S} \left| P(X_n = s_i) - P(Y_n = s_i) \right| = 0,$$
where $\sup$ denotes the supremum. Considering the properties of the stationary distribution, this conclusion has a corollary: for an ergodic chain, as the time step tends to infinity, its distribution tends to the stationary distribution:
$$\lim_{n \to \infty} P(X_n = s_i) = \pi(s_i).$$
This conclusion is sometimes referred to as the limit theorem of Markov chains, indicating that if a Markov chain is ergodic, its limiting distribution is the stationary distribution. For an irreducible and aperiodic Markov chain, being an ergodic chain is equivalent to the existence of its limiting distribution, and also equivalent to the existence of its stationary distribution.
2. Ergodic theorem
If a Markov chain is an ergodic chain, then by the ergodic theorem, as the time step tends to infinity, the ratio of the number of visits to a state to the number of time steps approaches the reciprocal of the mean recurrence time, that is, the stationary or limiting probability of the state:
$$\lim_{n \to \infty} \frac{1}{n} \sum_{t=1}^{n} \mathbf{1}\{X_t = s_i\} = \frac{1}{\mu_i} = \pi(s_i).$$
The proof of the ergodic theorem relies on the strong law of large numbers (SLLN). It shows that, regardless of the initial distribution of an ergodic chain, after a sufficiently long evolution, many observations of a single sample path (the left-hand side of the equation above) and a single observation of many independent copies (the limit theorem) both yield an approximation of the limiting distribution. Since the ergodic chain satisfies both the limit theorem and the ergodic theorem, MCMC constructs an ergodic chain to ensure that the iteration converges to the sampling distribution.
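A quick simulation sketch of the ergodic theorem: counting visit frequencies along one long sample path of the illustrative two-state chain and comparing them with its stationary distribution.

```python
import random
import numpy as np

# A minimal sketch of the ergodic theorem: for an ergodic chain, the fraction of
# time spent in each state converges to the stationary distribution pi
# (same illustrative two-state matrix as above).
P = np.array([[0.6, 0.4],
              [0.3, 0.7]])

rng = random.Random(0)
state, counts, steps = 0, np.zeros(2), 200_000
for _ in range(steps):
    state = 0 if rng.random() < P[state, 0] else 1
    counts[state] += 1

print(counts / steps)   # approaches [3/7, 4/7] regardless of the initial state
```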
Stationary Markov chain
If a Markov chain has a unique stationary distribution and its limiting distribution converges to the stationary distribution, then by definition this is equivalent to the Markov chain being a stationary Markov chain. A stationary Markov chain is a strictly stationary stochastic process, and its evolution does not depend on the position in time:
$$P(X_{t+s} = s_j \mid X_{t-1+s} = s_i) = P(X_t = s_j \mid X_{t-1} = s_i) \qquad \text{for all } s > 0.$$
From the limit theorem it follows that an ergodic chain is a stationary Markov chain. In addition, from the above definition, the transition matrix of a stationary Markov chain is a constant matrix, and its n-step transition matrix is the n-th power of that constant matrix. A stationary Markov chain is also called a time-homogeneous Markov chain. Correspondingly, a Markov chain that does not satisfy the above condition is called a non-stationary Markov chain or a time-inhomogeneous Markov chain.
If a stationary Markov chain satisfies the detailed balance condition for any two states, it is reversible and is called a reversible Markov chain:
$$\pi(s_i)\, p_{ij} = \pi(s_j)\, p_{ji}.$$
Reversibility of a Markov chain is a stricter requirement than irreducibility: the chain can not only move between any two states, but the probability flows between each pair of states are equal, so a reversible Markov chain is a sufficient but not necessary condition for a stationary Markov chain. In Markov chain Monte Carlo (MCMC), constructing a reversible Markov chain that satisfies the detailed balance condition is one way of ensuring that the sampling distribution is a stationary distribution.
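The detailed balance condition is easy to verify numerically. A minimal sketch, using the illustrative two-state matrix and its stationary distribution:

```python
import numpy as np

def satisfies_detailed_balance(P, pi, tol=1e-12):
    """A simple sketch: check pi_i * p_ij == pi_j * p_ji for all pairs of states."""
    flux = pi[:, None] * P          # flux[i, j] = pi_i * p_ij
    return np.allclose(flux, flux.T, atol=tol)

# Illustrative two-state chain: any two-state chain paired with its stationary
# distribution satisfies detailed balance, so the check passes here.
P = np.array([[0.6, 0.4],
              [0.3, 0.7]])
pi = np.array([3/7, 4/7])
print(satisfies_detailed_balance(P, pi))   # True
```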
Special cases
Bernoulli process
Main article: Bernoulli process
The Bernoulli process is also called the binomial Markov chain. It is constructed as follows: given a sequence of independent "flags", each flag is binary and takes the positive value with probability $p$ and the negative value with probability $1-p$. Let the random process $Y = \{Y_n\}$ be the number of positive flags among the first $n$ flags; then $Y$ is a Bernoulli process in which the random variables follow a binomial distribution:
$$P(Y_n = k) = \binom{n}{k} p^k (1-p)^{n-k}.$$
From this construction it can be seen that the probability of a positive flag among newly added flags has nothing to do with the number of previous positive flags, so the process has the Markov property and the Bernoulli process is a Markov chain.
Gambler's ruin problem
See also: Gambler's ruin
Suppose a gambler holds a finite number of chips and bets in a casino, winning or losing one chip with each bet with probabilities $p$ and $1-p$ respectively. If the gambler keeps betting, the total number of chips he holds, $X_n$, is a Markov chain with the following transition matrix:
$$p_{ij} = \begin{cases} p, & j = i + 1,\ i \ge 1, \\ 1 - p, & j = i - 1,\ i \ge 1, \\ 1, & i = j = 0, \\ 0, & \text{otherwise.} \end{cases}$$
Losing all of the chips (the gambler's ruin) is an absorbing state. One-step analysis shows that when the probability of winning each bet is no greater than 1/2, the Markov chain inevitably enters the absorbing state; that is, no matter how many chips the gambler holds, he eventually loses them all as the betting continues.
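A minimal simulation sketch of the ruin chain; the winning probability, the starting stake and the step budget are illustrative assumptions.

```python
import random

# A minimal sketch of the gambler's ruin chain: the chip count goes up or down by one
# each bet; state 0 (ruin) is absorbing. p and the starting stake are illustrative values.
def time_to_ruin(start=5, p=0.5, max_steps=1_000_000, seed=0):
    """Simulate until the chain is absorbed at 0 (or the step budget runs out)."""
    rng = random.Random(seed)
    chips = start
    for t in range(max_steps):
        if chips == 0:          # absorbing state reached
            return t
        chips += 1 if rng.random() < p else -1
    return None                  # not absorbed within the budget

# With p <= 1/2 absorption at 0 happens with probability 1, although for p = 1/2
# the expected time to absorption is infinite.
print(time_to_ruin())
```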
Random walk
Main article: Random walk
Define a sequence of independent and identically distributed (iid) integer-valued random variables $Z_1, Z_2, \ldots$, and define the following random process:
$$X_n = \sum_{k=1}^{n} Z_k.$$
This random process is a random walk on the integers, and $Z_k$ is the step length. Since the step lengths are iid, the current step and the previous steps are independent of each other, and the random process is a Markov chain. Both the Bernoulli process and the gambler's ruin problem are special cases of the random walk.
The above random-walk example shows that Markov chains have a general construction method. Specifically, if a random process $X = \{X_n\}$ on the state space has the form
$$X_{n+1} = f(X_n, Z_{n+1}),$$
where $Z_1, Z_2, \ldots$ are iid random variables on some space, independent of $X_0$, then the random process is a Markov chain and its one-step transition probability is $p_{ij} = P\big(f(s_i, Z) = s_j\big)$. This conclusion shows that a Markov chain can be numerically simulated using random variables (random numbers) that are iid and uniformly distributed on the interval $[0, 1]$.
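The following numpy sketch applies this construction: the driving noise is iid Uniform(0, 1), and the update function f picks the next state by inverting the cumulative transition probabilities of the current row (the matrix is the illustrative one used earlier).

```python
import numpy as np

# A minimal sketch of the general construction X_{n+1} = f(X_n, Z_{n+1}):
# Z are iid Uniform(0, 1) random numbers and f picks the next state by inverting
# the cumulative transition probabilities of the current row (illustrative matrix).
P = np.array([[0.6, 0.4],
              [0.3, 0.7]])
cum = np.cumsum(P, axis=1)       # cumulative transition probabilities per row

def f(state, z):
    """Map the current state and a uniform random number to the next state."""
    return int(np.searchsorted(cum[state], z))

rng = np.random.default_rng(0)
x, path = 0, [0]
for z in rng.uniform(size=20):   # the iid driving noise Z_1, Z_2, ...
    x = f(x, z)
    path.append(x)
print(path)
```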
Generalizations
Markov process
Main article: Markov process
The Markov process, also called the continuous-time Markov chain, is a generalization of the Markov chain (the discrete-time Markov chain): its state space is still a countable set, but the one-dimensional index set is no longer restricted to being countable and can represent continuous time. The properties of a Markov process and a Markov chain are comparable, and its Markov property is usually expressed as follows:
$$P(X_{t+s} = s_j \mid X_u,\ u \le t) = P(X_{t+s} = s_j \mid X_t), \qquad s > 0.$$
Since the state space of a Markov process is a countable set, its sample path in continuous time is almost surely (a.s.) a right-continuous step function, so the Markov process can be expressed as a jump process and related to a Markov chain:
$$X_t = Y_n, \qquad S_n \le t < S_{n+1},$$
where $S_{n+1} - S_n$ is the sojourn time in a given state and $[S_n, S_{n+1})$ is the corresponding member (time segment) of the sequential index set. The Markov chain $\{Y_n\}$ and the sojourn times satisfying the above relationship form the embedded process of the jump process over finite time segments.
Markov model
Main article: Markov model
The Markov chain and the Markov process are not the only stochastic processes based on the Markov property. In fact, stochastic processes and models such as the hidden Markov model, the Markov decision process and the Markov random field all have Markov properties and are collectively referred to as Markov models. Here is a brief introduction to the other members of the Markov model family:
1. Hidden Markov Model (HMM)
An HMM is a Markov chain whose state space is not fully observable, i.e. it contains hidden states. The observable part of an HMM is called the emission state, which is related to the hidden state but is not sufficient to form a complete correspondence with it. Take speech recognition as an example: the sentence to be recognized is the unobservable hidden state, and the received speech or audio is the emission state related to that sentence. A common application of HMMs is then to infer the corresponding sentence from the speech input, i.e. to recover the hidden states from the emission states based on the Markov property.
2. Markov decision process (MDP):
An MDP is a Markov chain that introduces "actions" on top of the state space, i.e. the transition probability of the Markov chain is related not only to the current state but also to the current action. An MDP consists of a set of interacting objects, namely the agent and the environment, and defines five model elements: state, action, policy, reward and return. Among them, the policy is a mapping from states to actions, and the return is the discounted or accumulated reward over time. During the evolution of an MDP, the agent perceives the initial state of the environment, takes an action according to its policy, and the environment, influenced by the action, enters a new state and feeds a reward back to the agent. The agent receives the reward and adopts a new policy, interacting with the environment continuously. The MDP is one of the mathematical models of reinforcement learning, used to model the stochastic policies and returns achievable by an agent. One generalization of the MDP is the partially observable Markov decision process (POMDP), which adds the hidden and emission states of the HMM to the MDP.
3. Markov Random Field (MRF)
An MRF is a generalization of the Markov chain from a one-dimensional index set to a higher-dimensional space. The Markov property of an MRF states that the state of any random variable is determined only by the states of all its neighbouring random variables. Analogous to the finite-dimensional distributions of a Markov chain, the joint probability distribution of the random variables in an MRF is the product over all cliques containing those random variables. The most common example of an MRF is the Ising model.
Harris chain
The Harris chain is the generalization of the Markov chain from a countable state space to a continuous state space. Given a stationary Markov chain on a measurable space $(E, \mathcal{E})$, if for any subset $A$ of the measurable space with $\mu(A) > 0$ and the return time $T_A$ of that subset, the Markov chain satisfies
$$P(T_A < +\infty \mid X_0 = x) = 1 \qquad \text{for all } x \in E,$$
then the Markov chain is a Harris chain, where $\mu$ is a σ-finite measure on the measurable space.
Applications
MCMC
Constructing a Markov chain whose limiting distribution is the sampling distribution is the core step of Markov chain Monte Carlo (MCMC). MCMC keeps iterating the time steps of the Markov chain to obtain random numbers that approximately follow the sampling distribution, and uses these random numbers to approximate the mathematical expectation of a target function with respect to the sampling distribution:
$$E_{X \sim \pi}[f(X)] \approx \frac{1}{N} \sum_{n=1}^{N} f(X_n).$$
The limiting-distribution property of the Markov chain determines that MCMC is an unbiased estimation, that is, when the number of samples tends to infinity, the true value of the target expectation is obtained. This distinguishes MCMC from its alternative methods, for example variational Bayesian inference, which is usually less computationally expensive than MCMC but cannot guarantee an unbiased estimate.
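As a concrete sketch of these ideas, the following Python code implements a simple random-walk Metropolis–Hastings sampler (the algorithm mentioned in the history section) and uses the resulting chain to approximate an expectation; the target density, proposal width and sample counts are illustrative assumptions.

```python
import math
import random

# A minimal random-walk Metropolis-Hastings sketch: the constructed chain is
# reversible with the target density as its stationary (and limiting) distribution.
# The target (a standard normal, unnormalized) and all tuning values are illustrative.
def target(x):
    return math.exp(-0.5 * x * x)

def metropolis_hastings(n_samples=50_000, step=1.0, seed=0):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + rng.uniform(-step, step)        # symmetric proposal
        if rng.random() < min(1.0, target(proposal) / target(x)):
            x = proposal                                # accept
        samples.append(x)                               # rejection keeps the old state
    return samples

samples = metropolis_hastings()
# Approximate E[f(X)] for f(x) = x^2 under the target; should be close to 1.
print(sum(s * s for s in samples) / len(samples))
```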
Others
In physics and chemistry, Markov chains and Markov processes are used to model dynamical systems, forming Markov dynamics. In queueing theory, the Markov chain is the basic model of the queueing process. In signal processing, Markov chains are the mathematical model behind some sequential data-compression algorithms, such as Ziv–Lempel coding. In the financial field, Markov chain models are used to predict the market share of a company's products.