Carl-Richard

Moderator

Content count: 15,631

Everything posted by Carl-Richard

  1. She will now never speak to you again.
  2. Here is the one true answer to your problem: stop trying to relax, stop trying not to relax, stop trying to stay conscious, stop trying to not stay conscious, stop trying to do the right thing, stop trying to not do the right thing, stop trying, stop not trying, stop trying not to try, stop trying not to not try. Now you're relaxed. As for real-world practical things, whenever I sit in my office chair, I'm relaxed. I sit in such a way that I'm resting even while I'm working (left foot under my butt, like a kind of half-lotus, with the right foot touching the ground, arms resting parallel on the armrests and flush with the table). And when I get tired and need a break, I just switch focus to something else, e.g. YouTube or the forum (sometimes I change to a more reclined posture with my feet up on the heater on the wall under the table, depending on what is comfortable).
  3. The problem is never conformism or not conformism. It's always retardation. (@integral made this point I believe, I'm not calling him retarded).
  4. Every time you make a response to me it feels like gaslighting.
  5. Saying "not viewing the globe upside down" when you mean "not making globes upside down", that's an example of not conforming.
  6. If you use AI to solve practical problems or do work, AI makes you less procedurally adept and more verbally adept, because you have to describe to it what type of procedure you want. More verbally adept people therefore become disproportionately effective AI users who can benefit from AI more. So if you're verbally adept and you find a way to leverage AI for work, you'll strengthen your strengths and get ahead. AI is simply a medium for applying your intelligence, just like Google Search or the internet. If you always try your best to apply your intelligence, then AI can enhance it. If you don't apply your intelligence, then AI will take it.
  7. If you have a literal globe on your desk where Europe is on top and the text and labels are aligned for reading it that way, then turning the globe upside down is retarded.
  8. Some aspects of science-based lifting might work perfectly for some people. I like the deep stretch microreps/pause at the end of some sets. And this is reflected in how in virtually any measurement in an exercise science study, you have big variation among individuals. That's another point Lyle makes, that the modelled group means belie that individual variation (and we know mathematically that different measures of central tendency treat individual variation differently). So unless you find a study where you believe your particular individual sensitivities are adequately represented (and you believe the methods are not completely flawed), science can only serve as a probability space and a range of suggestions.
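On the central-tendency point above: a minimal sketch with entirely made-up response numbers shows how the mean and median of the same individual data can tell different stories when a few people respond much more strongly than the rest:

```python
# Hypothetical per-individual hypertrophy responses (% growth) to the same
# program: most respond modestly, two respond strongly. Invented numbers,
# purely to illustrate how group summaries treat individual variation.
responses = [0.5, 1.0, 1.0, 1.5, 2.0, 2.0, 2.5, 3.0, 9.0, 12.0]

mean = sum(responses) / len(responses)

ordered = sorted(responses)
n = len(ordered)
median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2  # n is even here

print(f"mean:   {mean:.2f}")   # pulled upward by the two high responders
print(f"median: {median:.2f}")  # closer to what the typical individual saw
```

Here the mean (3.45) nearly doubles the median (2.00), so "the group gained X on average" can describe almost nobody in the group.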
  9. It's like some of us are hitting the Piñata of Truth and sometimes we hit each other in the head at the same time.
  10. This is just the beginning 🫣
  11. Mine are too after sneezing and realizing I caught my third virus in 3-4 weeks (yes, I also got sick before the flu virus one but comparatively less). This vessel lacks immunological conformity.
  12. Ok. There is conformity and then there is retardation.
  13. I consider myself very open about what you can use science for (and I do think you can use science meaningfully; I'm not someone who has "deconstructed science" and thrown it all in the garbage). For example, I think studying psychic phenomena and far-out stuff like breatharianism are legitimate areas for science. However, curiously, these two examples are currently in the stage of merely proving that the phenomena are possible. The bar is set much lower there in terms of straightforwardness compared to comparing very minute differences in exercise volume or movement patterns (e.g. full stretch, slowed eccentric, etc.) or level of intensity and determining which configuration of those factors is best. It's an entirely different level of investigation that science might not be well-equipped for in any situation, and certainly much less so in the current situation, where we are essentially using bananas to measure acorns. Randomized controlled trials are not well-suited for exercise science. The ecological validity is just way too problematic, the lack of blinding of participants and scientists is problematic, the timespan (usually 8 weeks) is problematic, the study population (usually untrained individuals) is problematic, the list goes on. Once ESM for exercise science and objective measures of workout intensity are established, then we're approaching the level of research in other behavioral sciences, but we'll still be stuck with the "minute difference" problem. And there are other things worth looking into, like flow states and how to measure these objectively in an RCT (let alone acquiring participants who regularly enter flow). I believe flow is the number 1 metric for performance in hypertrophy training, as it is in strength training, as it is in general athletic performance. But this is 40 years too early.
  14. I took guitar lessons for about a year or so, and then I spent the next 15 years just improvising every time I play. If you ask me to play a song, I will actually struggle to find one I can remember in its entirety (although I learned essentially Metallica's whole discography the first year or so, then some Opeth, Tool and Death songs, but really, very few songs). I remember one of my cousins (he is much older) came to visit at my dad's place back when I had been playing for about two years. I showed him a song I had made as an assignment in music class, some kind of metalcore-inspired song where I took elements from a blues progression I had learned from my guitar teacher and integrated it with something that sounded like a rip-off of some Bullet For My Valentine song. But in the middle of the main riff, I went from an E scale to an A scale, and my cousin, who is a math nerd and a classically trained musician (and on the spectrum), told me "yeah, that was very wrong, but it sounds cool 😅". And my dad was like "hey ". And internally I was like "but teach me then". After that, I developed a severe complex around whether or not I was playing music "properly" (in terms of Western music theory), and it took seriously like 10 years before I realized that all my musical idols essentially didn't give a shit about that and just played whatever they thought sounded cool.
  15. I swear I did not read your comment before I left mine @Natasha Tori Maru Help :,<
  16. You're just thinking about "New Agers" again My experience was that my memory became much more accurate for things that are actually useful or relevant, while irrelevant things are basically just gone.
  17. That's what I'm saying. "Threshold of volume", without any specifics, is essentially "you have to lift to gain muscle", and nobody disagrees with that. It's the claim that as volume increases, gains increase, seemingly ad infinitum, that's what is disputed. If the 20 studies all use flawed methods that don't reflect the generality of their conclusions (which I think is very possible), then it doesn't matter how many studies you do or whether they agree or disagree. They are all flawed at the very foundation. This is not just a critique I'm making of exercise science. It's a critique people have made of all behavioral science. Any jump from a specific research design to a general conclusion could be problematic, and it's a difficult argument to argue against. But in exercise science, the problems are usually so severe that I think you're bound to run into them. That said, I think there are some research designs that could be much less problematic, but I don't see anybody doing them (e.g. correlating bodybuilding competition placement with self-reported training style). While correlational and self-report designs are generally considered lower quality than experimental designs (e.g. RCTs), the worst problems in exercise science are seemingly exclusively tied to the experimental design setup. Alternatively, if you can develop an Experience Sampling Method (ESM) for exercise science (allowing for better ecological validity) and objective measurements and monitors for workout intensity / proximity to muscular failure, then maybe you can circumvent some of those problems, but that of course won't happen overnight or without serious investment, if it's even possible. I'm still waiting for that one "serious scientific study". By the way, feel free to address any of Lyle's points as well. I really recommend the video I linked.
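The correlational design suggested above could be analyzed with something as simple as a rank correlation. A minimal sketch, with purely invented placements and volume numbers (real data would need tie handling, e.g. scipy.stats.spearmanr):

```python
# Sketch of the suggested design: rank competitors' placement against
# self-reported weekly sets and compute Spearman's rho. All numbers are
# hypothetical, purely to show the shape of the analysis.

placements  = [1, 2, 3, 4, 5, 6]        # competition placing (1 = best)
weekly_sets = [18, 25, 12, 30, 10, 22]  # self-reported weekly volume

def ranks(xs):
    # rank 1 = smallest value (no tie handling needed for this sketch)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(xs, ys):
    # Spearman's rho for untied data: 1 - 6*sum(d^2) / (n*(n^2 - 1))
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print(spearman_rho(placements, weekly_sets))
```

A rho near zero would mean self-reported volume tracks placement poorly; a strongly negative rho (better placing with more sets) is roughly what the "more volume, more gains" claim predicts in this design.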
  18. The dose-response relationship between resistance training volume and muscle hypertrophy: There are still doubts. https://www.researchgate.net/publication/376670037_The_dose-response_relationship_between_resistance_training_volume_and_muscle_hypertrophy_There_are_still_doubts
Critical Commentary on the Stimulus for Muscle Hypertrophy in Experienced Trainees https://www.researchgate.net/publication/342788680_Critical_Commentary_on_the_Stimulus_for_Muscle_Hypertrophy_in_Experienced_Trainees
Comment on: Volume for Muscle Hypertrophy and Health Outcomes: The Most Effective Variable in Resistance Training https://link.springer.com/article/10.1007/s40279-018-0865-9
It's a double bind: they list the limitations but still present a general conclusion that assumes the limitations are not important. It's like "yeah, so we used a banana to measure the length of an acorn, we put that in Limitations; in Conclusion, we put 'the results point to a length of less than one banana for an acorn'". And because these are standard practices in the field, you have to question the entire field to question the limitations, and the individual studies can slip past the critique with "we're just doing what is standard". So they keep doing it until a large enough number acknowledges the madness and stops doing it. Show me one "serious scientific study" and I'll point out the limitations, just like with the Schoenfeld study.
  19. In theory, yes, but that doesn't matter for the "dose-response relationship" claim if only one of the between-group comparisons was ever solid. The relationship was only ever solid for 1 set vs 5 sets; very low volume < very high volume (granting the host of other limitations with the study). The intermediate part of the supposed curve is at best tentative or unknown. If the latter, any conclusion about that part of the curve is speculative and hypothetical, not empirical, not "the science has shown that".
  20. Yep, Brad Schoenfeld's paper showing "a dose-response relationship of higher volume on hypertrophy" is absolute garbage (Brad is supposedly the number 1 expert in the field). The crucial between-group comparisons (3SET vs 5SET) were not significant at all using p-values, so they also used Bayesian hypothesis testing (most definitely post hoc, because Brad had allegedly never done it before). For those who know about the replication crisis, this type of behavior is one of its drivers ("but we gotta publish something") and is bottom of the barrel in that context (but it's not exactly uncommon, so it's not really a massive deal, but still, minus points). Most solid papers also use corrections for multiple comparisons for their p-values, which require even stronger numbers for significant results. And even then, the comparisons were not consistently convincing (or even significant) such that you could draw a conclusive straight line from 1SET < 3SET < 5SET, making "dose-response relationship" no longer a fitting description. It was at most "weak evidence" (seemingly BF₁₀ < 3, which is also described as "negligible evidence" by some scales, and Lyle disputed the evidence as "not meaningful") for 1SET < 3SET and 3SET < 5SET, and "positive" or "strong positive evidence" for 1SET < 5SET, but only for some muscle groups. An accurate description would be "for three out of four muscle groups, we saw evidence of 1SET < 5SET, with a (most definitely) not pre-registered/pre-stated statistical method chosen only due to a lack of significant differences with the standard method; 1SET < 3SET and 3SET < 5SET only showed weak/negligible evidence for one to two out of four muscle groups". And there was no blinding of the measurement taker in the ultrasound measurements (and ultrasound measurements are highly inaccurate anyway and a potent source of measurement bias). That could drive much of the differences seen.
And of course, the 5SET condition described a practically impossible workout program (you can't take that many sets at a static rep range and static rest times consistently to momentary muscular failure without basically decimating the weight at the end of each exercise, which in any reasonable scenario they didn't do, because essentially nobody trains like that). And this is on top of the other critiques I've made. @Enizeo This is the type of "science" we're dealing with from "serious people in the field".
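The multiple-comparisons point can be made concrete with a minimal sketch. The comparison counts and p-values below are invented for illustration, not taken from the Schoenfeld paper:

```python
# Why multiple comparisons matter: with several between-group contrasts
# (e.g. 1SET vs 3SET, 3SET vs 5SET, 1SET vs 5SET across muscle groups),
# a Bonferroni correction shrinks the per-test significance threshold,
# so each individual comparison needs a much stronger result.

alpha = 0.05
n_comparisons = 3 * 4  # 3 set-volume contrasts x 4 muscle groups (hypothetical)
bonferroni_alpha = alpha / n_comparisons  # 0.05 / 12 ≈ 0.0042

p_values = [0.04, 0.012, 0.003]  # hypothetical uncorrected p-values
for p in p_values:
    verdict = "significant" if p < bonferroni_alpha else "not significant"
    print(f"p = {p}: {verdict} at corrected alpha = {bonferroni_alpha:.4f}")
```

Under this correction, a p-value like 0.04 that looks "significant" at the naive 0.05 threshold no longer clears the bar; only the strongest result (0.003) survives.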
  21. Because science class in primary school, where you were taught that atoms are at the bottom of everything, was fun, while religious studies class was boring.
  22. Peace of mind forever probably only happens when you end all reincarnations.