==== Below are small but important mathematical refinements that improve interpretability, numerical stability, and empirical expressivity. ====

===== Replace raw <math>W_{ij}</math> with normalized influence =====

:<math>\widetilde W_{ij} = \frac{W_{ij}}{\epsilon + \sum_{k} W_{ik}}</math>

(with small <math>\epsilon > 0</math> to avoid divide-by-zero). The continuous update then becomes:

:<math>\frac{db_i}{dt} = -\lambda_i b_i + \sum_j \widetilde W_{ij}\,(\tilde b_{j\to i} - b_i) + \eta_i.</math>

This bounds the total incoming pull even when <math>\sum_j W_{ij}</math> is arbitrarily large, and makes updates scale-invariant across nodes with different degrees.

===== Equivalently (and useful for spectral analysis) =====

:<math>\frac{db_i}{dt} = -\lambda_i b_i - \sum_j L_{ij}\, b_j + \sum_j \widetilde W_{ij}\,\bigl(\tilde b_{j\to i} - b_j\bigr) + \eta_i,</math>

where <math>L</math> is the weighted graph Laplacian derived from <math>\widetilde W</math>, and the last sum collects the transmission distortions <math>\tilde b_{j\to i} - b_j</math>. This isolates the smoothing/consensus effect from the distortion effect and allows eigenvalue stability analysis.

===== Replace the single <math>\sigma_{ji}^2</math> with two components =====

* error variance <math>\sigma^{(\mathrm{err})}_{ji}</math>: unintentional corruption/noise
* creative distortion <math>\sigma^{(\mathrm{cre})}_{ji}</math>: deliberate rephrasing/embellishment

Model:

:<math>\sigma_{ji}^2 = \sigma^{(\mathrm{err})}_{ji} + \sigma^{(\mathrm{cre})}_{ji}</math>

Set:

:<math>\sigma^{(\mathrm{err})}_{ji} = M_{\mathrm{err}}\,(1 - C_j)\,(1 - T_{ji}), \qquad \sigma^{(\mathrm{cre})}_{ji} = M_{\mathrm{cre}}\,O_j\,(1 - A_j).</math>

Rationale: high Openness yields more creative retelling (not necessarily lower fidelity), while low Agreeableness yields more malicious distortion. The separation helps interpretability and experimental control.

===== Cap feedback so <math>P_{ji}</math> cannot spike unrealistically =====

Use:

:<math>P_{ji}(t) = \sigma\bigl(\beta_0 + \beta_E E_j + \beta_A A_j - \beta_C C_j + \beta_N\, g(N_j, a_j(t)) + \beta_T T_{ji}\bigr)</math>

with <math>g(N_j, a_j) = \tanh(\gamma\, N_j a_j)</math>, which is bounded and smooth. This prevents runaway <math>P_{ji}</math> as arousal grows.
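A minimal numerical sketch of the two update forms above, assuming NumPy and an explicit Euler discretization of the continuous dynamics; the step size, noise scale, network size, and decay rates below are illustrative choices, not part of the model:

<syntaxhighlight lang="python">
import numpy as np

def normalized_influence(W, eps=1e-8):
    """Row-normalize W: W~_ij = W_ij / (eps + sum_k W_ik)."""
    return W / (eps + W.sum(axis=1, keepdims=True))

def laplacian(W_tilde):
    """Weighted graph Laplacian L = D - W~, with D = diag(row sums)."""
    return np.diag(W_tilde.sum(axis=1)) - W_tilde

def euler_step(b, b_perceived, W_tilde, lam, dt=0.01, noise_scale=0.0, rng=None):
    """One Euler step of db_i/dt = -lam_i b_i + sum_j W~_ij (b~_{j->i} - b_i) + eta_i.

    b_perceived[i, j] holds b~_{j->i}: node j's belief as received by node i.
    """
    rng = rng or np.random.default_rng()
    pull = (W_tilde * (b_perceived - b[:, None])).sum(axis=1)
    eta = noise_scale * rng.standard_normal(b.shape)
    return b + dt * (-lam * b + pull + eta)

# Spectral stability check on the linear part (the Laplacian form):
# the drift matrix A = -(diag(lam) + L) should have eigenvalues with
# strictly negative real parts for the noiseless dynamics to be stable.
rng = np.random.default_rng(0)
n = 5
W = rng.uniform(size=(n, n))
np.fill_diagonal(W, 0.0)                 # no self-influence
W_t = normalized_influence(W)
lam = np.full(n, 0.1)
A = -(np.diag(lam) + laplacian(W_t))
print("stable:", np.all(np.linalg.eigvals(A).real < 0))

# One update step with faithful transmission (b~_{j->i} = b_j).
b = rng.standard_normal(n)
b = euler_step(b, np.tile(b, (n, 1)), W_t, lam, dt=0.01)
</syntaxhighlight>

Because the rows of <math>\widetilde W</math> sum to (at most) one, the Gershgorin disks of <math>\operatorname{diag}(\lambda) + L</math> lie in the right half-plane whenever all <math>\lambda_i > 0</math>, so the check above should report stability for any such network.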
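A companion sketch of the variance decomposition and the capped feedback probability. Trait symbols are read as Big Five scores in <math>[0,1]</math>; the text names Openness, Agreeableness, and Conscientiousness explicitly, while reading <math>E_j</math> and <math>N_j</math> as Extraversion and Neuroticism is an assumption, and all numeric parameter values are illustrative:

<syntaxhighlight lang="python">
import numpy as np

def transmission_variance(C_j, T_ji, O_j, A_j, M_err=1.0, M_cre=1.0):
    """sigma_ji^2 = sigma_err + sigma_cre, per the two-component model.

    Traits and trust T_ji are assumed to lie in [0, 1]; M_err and M_cre
    are the magnitude constants from the text (values here illustrative).
    """
    sigma_err = M_err * (1.0 - C_j) * (1.0 - T_ji)   # unintentional noise
    sigma_cre = M_cre * O_j * (1.0 - A_j)            # deliberate distortion
    return sigma_err + sigma_cre

def feedback_probability(E_j, A_j, C_j, N_j, a_j, T_ji,
                         beta=(0.0, 1.0, 1.0, 1.0, 1.0, 1.0), gamma=1.0):
    """P_ji(t) = sigmoid(b0 + bE E + bA A - bC C + bN tanh(gamma N a) + bT T).

    The tanh keeps the arousal term bounded, so P_ji saturates smoothly
    instead of spiking as arousal a_j grows.
    """
    b0, bE, bA, bC, bN, bT = beta
    z = (b0 + bE * E_j + bA * A_j - bC * C_j
         + bN * np.tanh(gamma * N_j * a_j) + bT * T_ji)
    return 1.0 / (1.0 + np.exp(-z))

# Example: a careless, open, disagreeable sender with low trust produces
# high transmission variance; high arousal no longer drives P_ji to 1 abruptly.
print(transmission_variance(C_j=0.2, T_ji=0.3, O_j=0.9, A_j=0.2))
print(feedback_probability(E_j=0.7, A_j=0.5, C_j=0.6, N_j=0.8, a_j=5.0, T_ji=0.4))
</syntaxhighlight>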