Everything posted by Ero
-
@CARDOZZO Pointers and actual maps are different things. I can point you in the right direction and you can still fall off a cliff and kill yourself. I do agree with the need for a new paradigm, which is why I started a journal here explicitly with this goal: Thing is, far more accomplished and cogent arguments have been made for the need of a new scientific paradigm. I mention some of them in the first post above, but they include Nobel Laureates like Ilya Prigogine, Fields Medalists like Alexander Grothendieck, and the most accomplished neuroscientist of all time - Karl Friston. The first two have worked on 'a new paradigm' since before Wolfram was born. Saying this from the standpoint of someone who reads hundreds of pages of scientific literature a week, Wolfram's writings are some of the most tedious and excruciating to read, and not because they are conceptually hard, but rather the opposite - it's just fluff.
-
From a philosophical standpoint there are very interesting and important observations, especially relating to computational irreducibility, the ruliad and the hyper-ruliad. Those deserve attention and I am sure Leo will cover them in detail, which is why I will abstain from the philosophical aspect and instead give my two cents on the technical aspect of his work (after all, he claims he is doing 'Science', fundamentally rigorous in nature), given that my background is in Pure Mathematics. For reference, I have read his 'A New Kind of Science' and his 'Physics Project' Technical Report.

To get straight to the point, the technical aspect of his work is subpar - the entirety of his arguments is heuristic and qualitative. His 'proofs' and discoveries are mostly pictures/diagrams, and he references solely himself with a sense of aggrandizement that would make you think he is the only human being who has ever said anything about this topic (far from it). The actual 'physics' and math concepts he derives from his 'foundational theory' are unworkable toy examples that do not transfer at all. There are no isomorphisms, functors or representations (i.e., 'bridges') that would help you put all prior existing work into his context, in the way that would be expected from a 'foundational theory'.

Consider for example his 'operator' interpretation (the bread and butter of physics) - instead of interpreting the functional space and operator algebra as is necessary in defining it, he only gives you a toy 'commutator' example that does nothing to demonstrate that his approach is even workable at the level of complexity modern science expects (e.g., Hamiltonian, Langevin). For reference, the majority of Quantum Mechanics is based on non-commutative operator algebras. Furthermore, I don't see how you would even be able to define the spectral decomposition into eigenvalues and eigenfunctions in his context. Same goes for his 'gauge invariant' interpretation - instead of defining/representing the fundamental gauge field symmetry groups U(1)xSU(2)xSU(3), he again simply refers to a toy example without any prescriptive power. This means that his model fundamentally fails at predicting any kind of behavior we observe, for example, in particle colliders. Why on earth would physicists then use his theory? To put it plainly, if I were to present his work to any of my professors, I would get an 'F'. Plain and simple.

You may ask, isn't this something that can be solved with some extra time and rigor? The problem with his 'theory' is in fact deeper - he makes a fundamental ontological fallacy. Having observed what he describes as 'complex behavior' in 1D finite state automata, he claims it must then be true that the entirety of existence is based on finite, simple and discrete rules. There are two problems with this argument.

Firstly, there is no point at which he defines 'complexity' or provides us with his interpretation of it (an immediate red flag for any scientist). Simply observing something as seemingly complex does not mean it inherently is. For example, if we take Kolmogorov Complexity as our working definition, the entirety of his cellular automaton examples are in fact not complex, because it takes very little 'space' to define them (see the short sketch at the end of this post).

The second and deeper issue is that there is nothing special about finite state automata. They are simply an instance of a larger class of systems we call 'Turing complete'.
Even the representation he uses to draw the pictures is not special to cellular automata (and no, he did not invent it) - consider for example the following picture, showing three different Turing machines' tapes as they progress in time (horizontal axis). You can clearly see the parallel.

Now, here is the problem. There are in fact many Turing-complete systems we know of, which means that by definition everything he observes - rule 30, rule 110, etc. - has an exact equivalent:

- The four nucleotide bases of DNA
- Fluid systems
- Ferromagnets (Spin Glass model)
- Water pipes
- ... and many more

By his argument, is then the entire world the genetic code of some organism? A large magnet? A large glass of water? The sewage system of some alien's house? There is nothing that gives basis for making the claims he does.

And as much as he wants you to believe that he is the only one doing this kind of work, there are entire disciplines dedicated to studying integrable systems. Mathematicians and physicists know all too well about the problems of uncomputability, lack of well-definedness, etc., and instead of discarding the entire field of mathematics as he proposes we do, they build tools specifically to find one's way around them. My research into Heavy-Tailed Matrices is an example. You can also consider scattering resonances, chiral models, and much more. The fields of mathematics and statistics are in fact much further ahead than he wants you to think.

TLDR: Philosophically interesting, technically subpar
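To make the Kolmogorov-complexity point concrete, here is a minimal sketch (my own illustration, not Wolfram's code) of rule 30 in its entirety. The tiny description length is exactly the point - by a description-length measure, the pattern it generates is cheap to specify:

```python
# A minimal sketch (my own illustration): the complete definition of rule 30 as an
# elementary cellular automaton. Its short description is the point - by a
# Kolmogorov-style measure, the generated pattern is cheap to specify.
RULE = 30
WIDTH, STEPS = 63, 31

def step(cells: list[int], rule: int = RULE) -> list[int]:
    """One update: each cell's new state is the rule bit indexed by (left, self, right)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * WIDTH
row[WIDTH // 2] = 1  # single live cell in the middle
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Swap `RULE = 30` for 110 and you get the other celebrated example; nothing about the definition changes.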
-
That may be true for high school math and some undergraduate topics that have a visual component, like Linear Algebra, Multivariable Analysis and Complex Analysis. However, for any more advanced topic like Algebraic Geometry, Algebraic Topology, Representation Theory and Functional Analysis, not only are there next to no videos, I fail to see how you would even represent infinite-dimensional spaces, sheaves and cohomology chains. The power of mathematics comes precisely from the ability to prove things about objects we can't even visualise.
-
There is indeed a very large variance in how textbooks are written. Especially for grad material, some professors just like being dicks and making everything excruciatingly dry. It's the same reason for those laughs. Mathematics gets such a bad name because of a few people who can't help their insecurities.
-
There is something to be said about being a catalyst. Instead of 'control', why not 'steer'? Carnot 'steered' Steam Power. Tesla was instrumental in the 'wave' of Electrification. Oppenheimer - nuclearization. Once a fission chain reaction is unleashed, little can be done to stop a wave that destructive, let alone 'surf' it. But under the right conditions, it becomes a source of energy. By finding the underlying mechanisms of the complex system, one can learn to orchestrate the emergent phenomena without micro-managing. Who is to say we won't be able to surgically engineer tectonic movement or climate currents? There are those who strive to Tame the Chaos. -
Pulled an 11h day working through the literature. Meeting with my advisor is scheduled for Wednesday. Current thesis proposal draft:

# Thesis Proposal: Heavy-tailed Random Matrices

## Introduction

Recent developments in the field of Deep Neural Networks (DNNs) have proven incredibly effective in extracting correlations from data. However, the current paradigm and methodology are still largely based on heuristics and lack the theoretical underpinnings necessary for both prescriptive and explanatory properties. Many approaches have been proposed with the purpose of alleviating this so-called 'black box problem' (i.e., lack of interpretability), ranging from the early attempts at using Vapnik-Chervonenkis (VC) theory [1] to subsequent applications of Statistical Mechanics [2-4]. Arguably, none have been as effective at predicting the quality of state-of-the-art trained models as Random Matrix Theory (RMT) [5,6], and more specifically, the recently established Theory of Heavy-Tailed Self-Regularization (HT-SR) by Martin and Mahoney [7-11]. Their empirical results have led to the creation of novel metrics, as well as a variety of interesting theoretical results with respect to the study of the generalization properties of stochastic gradient descent (SGD) under heavy-tailed noise [12,13].

## Background and Significance

### HT-SR Theory

Martin and Mahoney's approach is based on the study of the empirical spectral density (ESD) of layer weight matrices and their distributions [7]. More specifically, consider \( N \times M \), \( N \geq M \), real-valued weight matrices \( \mathbf{W}_l \) with singular value decomposition \( \mathbf{W} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^T \), where \( \nu_i = \mathbf{\Sigma}_{ii} \) is the \( i \)-th singular value and \( p_i = \nu_i^2/\sum_i \nu_i^2 \). They define the associated \( M \times M \) correlation matrix \( \mathbf{X}_l = \frac{1}{N}\mathbf{W}_l^T\mathbf{W}_l \) and compute its eigenvalues, i.e., \( \mathbf{X}\mathbf{v}_i = \lambda_i\mathbf{v}_i \), where \( \lambda_i = \nu_i^2 \) for \( i = 1, \cdots, M \). They subsequently categorize 5+1 phases in the training dynamics by modeling the elements of these matrices with Heavy-Tailed distributions, i.e., \( W_{ij}\sim P(x)\sim \frac{1}{x^{1+\mu}}, \ \mu>0 \), such that the ESD \( \rho_N(\lambda) \) likewise exhibits Heavy-Tailed properties. Excluding the two initial phases and that of over-training (+1), there are 3 phases of interest, characterized by their better generalization, namely:

- **Weakly Heavy-Tailed**: \( \mu > 4 \), with Marchenko-Pastur behavior in the finite limit and Power-Law statistics at the edge.
- **Moderately Heavy-Tailed**: \( 2 < \mu < 4 \), with \( \rho(\lambda)\sim \lambda^{-1-\mu/2} \) at finite size and \( \rho_N(\lambda)\sim \lambda^{-a\mu+b} \) at infinite size, where the parameters \( a, b \) are empirically fitted using linear regression. Maximum eigenvalues follow the Fréchet distribution.
- **Very Heavy-Tailed**: \( 0 < \mu < 2 \), where the ESD is Heavy-Tailed/PL for all finite \( N \) and converges for \( N\rightarrow\infty \) to a distribution with tails \( \rho(\lambda)\sim \lambda^{-1-\mu/2} \). The maximum eigenvalues again follow a Fréchet distribution.

### Significance

The theory of HT-SR has led to interesting results both for the sake of applicability and from a purely theoretical standpoint.
The practicality of this work has become apparent due to the development of more efficient training policies, such as temperature balancing [9], as well as real-time metrics like the Frobenius Norm, Spectral Norm, Weighted Alpha, and \( \alpha \)-Norm, which are calculated using HT-SR independently of the training and testing data [10]. On the other hand, the empirical observations have inspired the construction of stronger bounds for the generalization properties of SGD's trajectories via stochastic differential equations (SDEs) under heavy-tailed gradient noise [12]. These bounds have indicated a 'non-monotonic relationship between the generalization error and heavy tails,' and have been developed into a general class of objective functions based on the Wasserstein stability bounds for heavy-tailed SDEs and their discretization [13].

The aforementioned results support the claim that a more detailed study of the ESDs of various open-source models can lead to a refined understanding of the phenomenology and provoke interesting theoretical insights. It is also important to mention the feasibility of this work, as the empirical component does not require extensive computational resources. Using open-source models and their weights, the analysis can be performed on a local machine without significant overhead.

## Objectives & Methodology

The goals of this paper are two-fold:

- To present a theoretical exposition of the 'relatively new branch' of RMT [14], specifically that of heavy-tailed random matrices, by surveying the rapidly developing literature [15-18].
- To expand the empirical results of HT-SR by applying refined classification through the use of Maximum Likelihood Estimation (MLE) with respect to a range of heavy-tailed distributions, instead of linear regression for a Power-Law fit. Additionally, the paper aims to examine a wide array of open-source models varying in architecture and underlying symmetries.

### Empirical Study

The methodology proposed follows Martin and Mahoney's approach [7] - studying the ESD of layer weight matrices of DNNs. Their classification of training dynamics involves 5+1 phases determined by the deviation of Bulk Statistics from the standard Marchenko-Pastur Distribution towards a Heavy-Tailed distribution. Martin and Mahoney estimate the extent of heavy-tailed behavior through linear regression on the log-log plot of the empirically fitted Power-Law exponent \( \alpha \). While sufficient for their stated "aim for an operational theory that can guide practice for state-of-the-art DNNs, not for idealized models for which one proves theorems" [8], this approach is agnostic to the underlying heavy-tailed distribution and potentially misses valuable information. Studies of heavy tails have noted the unreliability of using linear regression for estimating the scaling parameter \( \alpha \) [19]. To address this issue, we propose using MLE with respect to different heavy-tailed distributions, such as the Pareto, Cauchy, Lévy, Weibull, or Fréchet distributions. The latter is particularly meaningful given the empirical observations in HT-SR [8]. This approach aims to refine the classification of underlying distributions by analyzing a broader array of models, such as the 16 open-source symmetry-informed geometric representation models of *Geom3D* [20].
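For intuition, here is a minimal sketch of the empirical pipeline described above (my own illustration, not the proposal's final method: the synthetic Pareto-distributed weights stand in for a real layer matrix, and the candidate distributions and tail cutoff are illustrative choices):

```python
# Minimal sketch (assumptions noted above): compute the ESD of a weight matrix
# and compare MLE fits of several heavy-tailed candidates by log-likelihood.
import numpy as np
from scipy import stats

def esd(W: np.ndarray) -> np.ndarray:
    """Eigenvalues of the correlation matrix X = (1/N) W^T W for an N x M weight matrix."""
    N, M = W.shape
    X = (W.T @ W) / N
    return np.linalg.eigvalsh(X)  # X is symmetric, so eigvalsh applies

def mle_fits(samples: np.ndarray) -> dict:
    """Fit several heavy-tailed candidates by MLE and report their log-likelihoods."""
    candidates = {
        "pareto": stats.pareto,
        "cauchy": stats.cauchy,
        "levy": stats.levy,
        "weibull": stats.weibull_min,
        "frechet": stats.invweibull,  # Frechet = inverse Weibull in scipy
    }
    results = {}
    for name, dist in candidates.items():
        params = dist.fit(samples)
        results[name] = dist.logpdf(samples, *params).sum()
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for a trained layer: i.i.d. heavy-tailed entries with mu ~ 2.5
    W = rng.pareto(2.5, size=(1024, 512)) * rng.choice([-1.0, 1.0], size=(1024, 512))
    lambdas = esd(W)
    tail = lambdas[lambdas > np.quantile(lambdas, 0.9)]  # fit the upper tail only
    for name, ll in sorted(mle_fits(tail).items(), key=lambda kv: -kv[1]):
        print(f"{name:8s} log-likelihood: {ll:.1f}")
```

In practice, packages like `weightwatcher` or `powerlaw` would handle the extraction and fitting for real open-source models; the sketch above only keeps the dependencies to numpy/scipy to show the shape of the computation.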
### Theoretical

The purpose of providing a theoretical exposition of heavy-tailed random matrices is, first and foremost, to consolidate what is currently a rich but largely disconnected body of literature [14-19]. With only a single dedicated chapter in the Oxford Handbook of RMT [14] and few theoretical surveys, it is hard to put the earlier work in context. The consequences of this can be seen through a single example, found in a paper predating the HT-SR theory by more than 11 years. More specifically, observe the following theorem (Theorem 1 [21]) for random matrices with i.i.d. heavy-tailed entries, i.e., $(a_{ij}), 1\leq i, j\leq n$ with $1-F(x)=\bar{F}(x)=\mathbb{P}\left(\left|a_{i j}\right|>x\right)=L(x) x^{-\alpha}$, where $0<\alpha<4$ and $\forall t>0, \ \lim_{x\rightarrow\infty}\frac{L(tx)}{L(x)}=1$. With the additional assumption of $\mathbb{E}(a_{ij})=0$ for $2\leq \alpha <4$, the theorem states that the random point process $\hat{\mathcal{P}}_n = \sum_{1\leq i\leq j\leq n}\delta_{b_n^{-1}|a_{ij}|}$ converges to a Poisson Point process with intensity $\rho(x) =\alpha\cdot x^{-1-\alpha}$.

This theoretical result in fact matches precisely the values used to classify one of the phase transitions in HT-SR [7] with respect to $\alpha$, as well as the power-law exponent of the linear regression fit. Furthermore, its corollary (Corollary 1 [21]) gives theoretical justification for what the authors of HT-SR [7] observe to be the Fréchet distribution fit for the maximum eigenvalues within that same heavy-tailed phase. Not only that, but its Poisson process phenomenology seems to agree with the underlying assumption behind one of the aforementioned theoretical results, namely that SDE trajectories of SGD are well-approximated by a Feller process [12]. This suggests that the latter results are exceptionally interesting due to their potential to serve as theoretical grounding for what is currently only an empirical theory. A more rigorous exposition of the material, paired with the aforementioned empirical analysis, has the potential to bring clarity to what is already being developed as a potential theory of learning for DNNs.

## References

[1] V. Vapnik, E. Levin, and Y. Le Cun. Measuring the VC-dimension of a learning machine. Neural Computation, 6(5):851–876, 1994.
[2] A. Engel and C. P. L. Van den Broeck. Statistical Mechanics of Learning. Cambridge University Press, New York, NY, USA, 2001.
[3] Y. Bahri, J. Kadmon, J. Pennington, S. S. Schoenholz, J. Sohl-Dickstein, and S. Ganguli. Statistical Mechanics of Deep Learning. Annual Review of Condensed Matter Physics, 11:501–528, 2020.
[4] S. S. Schoenholz, J. Pennington, and J. Sohl-Dickstein. A Correspondence Between Random Neural Networks and Statistical Field Theory, 2020.
[5] J. Pennington and P. Worah. Nonlinear random matrix theory for deep learning. NIPS 2017.
[6] J. Pennington and Y. Bahri. Geometry of Neural Network Loss Surfaces via Random Matrix Theory. PMLR 70, 2017.
[7] C. H. Martin and M. W. Mahoney. Implicit self-regularization in deep neural networks: Evidence from random matrix theory and implications for learning. Journal of Machine Learning Research, 22(165):1–73, 2021.
[8] C. H. Martin and M. W. Mahoney. Traditional and heavy-tailed self regularization in neural network models. In International Conference on Machine Learning, 2019.
[9] Y. Zhou, T. Pang, K. Liu, C. H. Martin, M. W. Mahoney, and Y. Yang. Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training. NIPS 2023.
[10] C. H. Martin, T. S. Peng, and M. W. Mahoney. Predicting trends in the quality of state-of-the-art neural networks without access to training or testing data. Nature Communications, 12(1):1–13, 2021.
[11] C. H. Martin and M. W. Mahoney. Heavy-tailed universality predicts trends in test accuracies for very large pre-trained deep neural networks. In SIAM International Conference on Data Mining, 2020.
[12] U. Şimşekli, O. Sener, G. Deligiannidis, and M. A. Erdogdu. Hausdorff dimension, heavy tails, and generalization in neural networks. Journal of Statistical Mechanics, 2021(12), 2021.
[13] A. Raj, L. Zhu, M. Gürbüzbalaban, and U. Şimşekli. Algorithmic Stability of Heavy-Tailed SGD with General Loss Functions, 2023.
[14] Z. Burda and J. Jurkiewicz. Heavy-tailed random matrices. The Oxford Handbook of Random Matrix Theory, 2011.
[15] J. Bouchaud and M. Potters. Financial applications of random matrix theory: a short review. The Oxford Handbook of Random Matrix Theory, 2011.
[16] A. Edelman, A. Guionnet, and S. Péché. Beyond Universality in Random Matrix Theory. The Annals of Applied Probability, 26(3), 2016.
[17] G. B. Arous and A. Guionnet. The Spectrum of Heavy Tailed Random Matrices. Springer, 2017.
[18] E. Rebrova. Spectral Properties of Heavy-Tailed Random Matrices. ProQuest Dissertations & Theses, 2018.
[19] J. Nair, A. Wierman, and B. Zwart. The fundamentals of heavy-tails: properties, emergence, and identification. Proceedings of ACM SIGMETRICS, 387–388, 2013.
[20] S. Liu, W. Du, Y. Li, Z. Li, Z. Zheng, C. Duan, Z. Ma, O. Yaghi, A. Anandkumar, C. Borgs, J. Chayes, H. Guo, and J. Tang. Symmetry-Informed Geometric Representation for Molecules, Proteins, and Crystalline Materials. NIPS 2023.
[21] A. Auffinger, G. B. Arous, and S. Péché. Poisson convergence for the largest eigenvalues of heavy tailed random matrices. Annales de l'I.H.P. Probabilités et Statistiques, 45(3):589–610, 2009.
-
Hey, brother, hope you are hanging in there. Since most people took the spiritual route with their answer, let me give you a man-to-man answer. I have had stretches of 4-5 consecutive months when I didn't talk to absolutely anybody. For 2 years of my life I had only nightmares. I've had months on end when the only thing I felt was depression so strong I had tearing physical pain in my chest. That is to say, I've been through some dark times and places, and the one thing that has kept me together throughout is physical exercise. I mean it. Whether it was running 6 miles/10k daily a few years ago, or powerlifting as of the last 2 years, the precise exercise doesn't really matter as long as I am pushing. As males, our hormone levels and aggression (or lack thereof) are directly tied to our body. Creating a baseline of discipline gives you something to fall back on. It is also incredibly healthy for your neurotransmitters/brain chemistry. Along that line, I also quit weed 8 months ago, so respect for your commitment, you are already taking steps in the right direction.
-
I just set up a Python script that creates my daily note with all the necessary structure, pulls my tasks from the whole database and syncs them with my Google Calendar. And all I have to do is type 'python daily_drive.py'. If you are a techie/nerd out on stuff like this, ain't no other option than Obsidian. P.S. - you can use your notes as a context window for AI agents to help you with projects. I set up the script above in 10 minutes using this fact.
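For anyone curious, a minimal sketch of just the daily-note half of such a script (the vault path, note template and task format are my own assumptions, and the Google Calendar sync is omitted):

```python
# Minimal sketch (assumptions: vault location, template, '- [ ]' task format).
# Creates today's note and pulls open tasks from every markdown file in the vault.
import datetime
from pathlib import Path

VAULT = Path.home() / "Obsidian" / "Vault"   # hypothetical vault location
DAILY_DIR = VAULT / "Daily"

TEMPLATE = """# {date}

## Tasks
{tasks}

## Notes
-
"""

def collect_tasks(vault: Path) -> list[str]:
    """Pull unfinished '- [ ]' tasks from every markdown note in the vault."""
    tasks = []
    for note in vault.rglob("*.md"):
        for line in note.read_text(encoding="utf-8").splitlines():
            if line.strip().startswith("- [ ]"):
                tasks.append(line.strip())
    return tasks

def create_daily_note() -> Path:
    today = datetime.date.today().isoformat()
    DAILY_DIR.mkdir(parents=True, exist_ok=True)
    note = DAILY_DIR / f"{today}.md"
    if not note.exists():
        tasks = "\n".join(collect_tasks(VAULT)) or "- [ ] (no open tasks)"
        note.write_text(TEMPLATE.format(date=today, tasks=tasks), encoding="utf-8")
    return note

if __name__ == "__main__":
    print(f"Daily note ready at {create_daily_note()}")
```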
-
@Null Simplex Thanks for sharing! I do agree that there are many deficiencies in our current model of teaching worldwide. Some of it has to do with lack of resources, but even in the Ivy League I've stumbled on really badly taught math classes, so I get your point. I agree mathematics can be taught in a much more intuitive way with visualizations like the ones 3Blue1Brown does, and I sure hope to contribute to the automation of such visualizations in the near future. My only point is that if you are really passionate about mathematics, university is a really good idea.
-
I would say both my personality and life purpose are in strong opposition to New Age beliefs/social dynamics, despite having extremely strong spiritual experiences, dozens of trips, and a very deep love for nature. I cannot stand conformity and couldn't care less about being part of a group. I am open to and have experienced the paranormal/siddhis. But as someone highly technical, the New Age thinking and rhetoric on these topics just makes me cringe. I seek understanding, so I can engineer the next paradigm. I work 9-10h days with passion and couldn't care less about “going with the flow”.
-
How far did you go into math beyond linear algebra and group theory? I am also a pure math student, and the truth is you can't find good online resources on anything above abstract algebra and real analysis. Have you ever spent 6h on a problem on elliptic functions? Or on singular cohomology? There are no actual “online courses” on topics like these, which are considered basic graduate topics. It is true you can just download the books for free, but the difference between studying by yourself (which I have) and in college is incomparable. Moreover, at that level, there's no cute video that can demonstrate the concepts you're working with. If anyone is genuinely serious about mathematics (which you would know by the end of high school), I mean it, college is the only way. You won't even know how to write a proof without that, and that means you won't be doing mathematics, only hand-waving.
-
Gimmicky how? OneNote doesn't have more than 3 layers in its hierarchy (journal, tab, page), you cannot do relational linking, and you can't write LaTeX/code. If you are a student, for example, OneNote simply won't work for you.
-
Newly discovered mathematician/program in the foundations of mathematics: Homotopy Type Theory, with primary author Vladimir Voevodsky. Similar to Barry Mazur, his only higher education is in fact a PhD from Harvard, having been expelled from Moscow State University for skipping and failing classes. Nonetheless, his research papers were so impressive that he got in without even applying. His formulation of motivic cohomology (the study of invariants of algebraic varieties and schemes) is believed to be the correct one, uniting previously distinct cohomology theories, such as singular (what I am studying now), de Rham, étale and crystalline cohomology. (Definitely a topic I need to consider for my PhD.) During this foundational work, for which he received a Fields Medal in 2002, he started realizing that 'human brains could not keep up with the ever-increasing complexity of mathematics. Computers were the only solution.' His work has in fact been foundational to the development of the *Coq* formal proof assistant. The main idea behind homotopy type theory is that types can be regarded as spaces in homotopy theory, or higher-dimensional groupoids in category theory. The higher (graded) homotopy groups form a weak ∞-groupoid with k-morphisms at level k of the filtration. Traversals (paths) of this multi-dimensional space represent proofs (e.g., for proofs of existence, the space is inhabited by elements of the satisfying set). An analogy given on reddit, which I really liked, is that Topos Theory is the computer architecture and hardware construction (the mathematical universe), whereas HoTT is the software and programs running in it (the proofs). Definitely have to read the original (and still the only) book on the topic - https://hott.github.io/book/hott-ebook-15-ge428abf.pdf
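To make the 'paths as proofs' idea a bit more tangible, here is a toy sketch in Lean 4 (my own illustration, not from the HoTT book): an equality between two terms acts like a path, which you can reverse and compose.

```lean
-- Toy illustration (my own, not from the HoTT book): equalities behave like paths.
-- `Eq.symm` reverses a path, `Eq.trans` glues two paths end-to-end.
example {α : Type} {a b : α} (p : a = b) : b = a :=
  p.symm

example {α : Type} {a b c : α} (p : a = b) (q : b = c) : a = c :=
  p.trans q
```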
-
I second that. Even philosophically brilliant people can fool themselves into thinking the big picture can substitute for the details. Would you rather be operated on by an MD grad or a reiki practitioner with only a high school diploma? I've taught myself large parts of math and physics, and have also taken grad-level classes in these topics. Truth is, whatever you think you can do in 5h, college will teach you (and force you) to do in 30 mins. It gives structure, rigour and, most of all, direction. You may think some idea is interesting and worth pursuing, whereas it could have been discovered to be a dead end years earlier - an advisor/professor is someone who can tell you that in advance. Not without its faults, but the critiques should come from above, not below, understanding first that the size and complexity of human knowledge requires an academic structure to begin with. If you are genuinely serious about STEM, college is the only way.
-
It is your first encounter with periodic functions (a single period). Wait till you discover elliptic functions, which are doubly periodic in the complex plane - they serve as the foundation for modular forms and many hyperbolic geometries - that's where you will find the aliens, having mastered the curvature tensors 👽
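For the curious, 'doubly periodic' has a one-line definition (standard textbook statement, with lattice periods \( \omega_1, \omega_2 \)):

\[ f(z + \omega_1) = f(z) = f(z + \omega_2) \quad \text{for all } z \in \mathbb{C}, \qquad \operatorname{Im}(\omega_2/\omega_1) \neq 0, \]

i.e., the function repeats along two independent directions of the complex plane, whereas \( \sin \) repeats along only one.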
-
+1 on Obsidian. I'm taking PhD classes in Math and Stats, and it's fair to say nothing even comes close to its functionality for text, LaTeX, code, visualisation boards, task management, etc. I've used paper, OneNote, LaTeX and Jupyter notebooks, and Notion, and I'm never going back. P.S. wait till you start using git and customizing plugins.
-
I have tried every knowledge management system imaginable: 700+ pages of text on paper, over 1,300 pages' worth in OneNote, 950+ in Notion, and I have to agree with @Joshe. Nothing even comes close to Obsidian.
-
Philosophically-informed science is more rewarding than pure philosophy, for some people. It is also harder, because you are not only trying to get the big picture, but also the details. An example of this is Steve Patterson - after your blog post I read his book and several of his posts, and reached out. We corresponded. While he has a very developed understanding of the limits of science, its social dynamics, etc., his actual ideas on how to fix the 'foundations of mathematics' are exceptionally ignorant, i.e., finitism, discreteness and an outright rejection of continuity.
-
I got my hands on a lot of Salvia basically for free. I am planning on smoking it with a pipe. Set and setting are all in order. I have about 30 trips behind me; the last one was 300 ug of LSD about a month ago. It was a top-3 experience - an alien awakening which set the foundation for what I am exploring now as a technical paradigm for my thesis: chaos, entropy, order. I am aware Salvia is quite different in that regard - rather than dialling up consciousness, it distorts and morphs it - but I am nonetheless committed to trying it. Any tips and directions from fellow psychonauts are welcome.
-
Smoking in the Balkans is really something else, I think it's close to 40% of us. Most of my friends smoke, I have been on and off since the age of 18. Stopping entirely is always the best idea, but when you are surrounded by it, it's a bitch. I am powerlifting, so I have dialed it back significantly, lighting a few only on social occasions. The good thing is I study in the US and here the packs are mad expensive and no one really smokes so I go for 8-9 months without. To answer your question, you are definitely all good if you stop now. I have also smoked for about 4 years on and off and have no issues with climbing, running and exercising at high intensity. It will start getting bad if it's a daily habit.
-
@Keryo Koffa Your video encapsulates what you just described so well, which just goes to show how meta you went. I think as a psychonaut you have charted territory comparable to the likes of Kilindi Iyi and the ultra-high-dose stuff, possibly even more complex due to the psychedelic-dissociative synergy. Glad you are still here to share. In my 6 years on this forum, this is definitely a highlight.
-
This.
-
You are just next level. Them alien holarchies in your brain be working 24/7 to translate relational and contextual meaning like it's no one's business.
-
🤯🤯🤯 I know I will be coming back to this video for years to come. Your 'double up' message was so on point, will have to follow the cosmic instructions next time, hah.