An Advanced Mathematical Journey: The Structure of Operator Algebras

We are delighted to announce a new research initiative at Cheenta Academy led by Dr. Tattwamasi Amrutam, an accomplished mathematician whose work bridges deep areas of analysis and algebra.  

This intensive 8-week program offers a thorough introduction to the fundamental theory of C*-algebras. Beginning with the geometric framework of Hilbert spaces, participants will explore bounded linear operators before abstracting their key properties to define C*-algebras. The course's primary objective is to reach the two landmark Gelfand–Naimark theorems, which show that every C*-algebra can be represented concretely: either as an algebra of continuous functions on a topological space, or as an algebra of operators on a Hilbert space.

Dr. Amrutam earned his Bachelor’s degree in Mathematics and Computer Science from IMA, Bhubaneswar (2014), followed by a Master’s in Mathematics from IIT Bombay (2016). He completed his Ph.D. in Mathematics at the University of Houston in 2021 under the guidance of Dr. Mehrdad Kalantar. From 2021 to 2024, he was a postdoctoral researcher at Ben Gurion University of the Negev, working with Dr. Yair Hartman. He is currently an Adjunct Assistant Professor at the Institute of Mathematics, Polish Academy of Sciences. 

Prerequisites

Join the class on August 16, 2025 at 8:15 PM IST as a free trial before enrolling in the course.

About the Course

The course is structured in two parts, closely following the first three chapters of [M].

Part I: The Concrete World of Operators (Weeks 1-3)

Week 1: The Geometric Setting – Hilbert Spaces

  1. Inner products, completeness, and the definition of a Hilbert space.
  2. Orthogonality, orthonormal bases, and the Riesz Representation Theorem.
  3. The isomorphism of any separable Hilbert space with the sequence space \(l^2\).

Week 2: The Algebra of Bounded Operators

  1. Bounded linear operators on a Hilbert space; the operator norm.
  2. The algebra \(\mathbb{B}(\mathcal{H})\); the adjoint of an operator.
  3. The operator “zoo”: self-adjoint, normal, unitary, and projection operators.

Week 3: The Spectrum

  1. The resolvent set and the spectrum of an operator, \(\sigma(T)\).
  2. Key properties: the spectrum is non-empty and compact.
  3. Calculation of spectra for key examples (diagonal operators, the shift operator). 
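As a preview of the Week 3 computations, two standard results (stated here for orientation; they are not spelled out in the syllabus):

```latex
% Diagonal operator D e_n = \lambda_n e_n on \ell^2:
\sigma(D) = \overline{\{\lambda_n : n \geq 1\}}.
% Unilateral shift S(x_1, x_2, \dots) = (0, x_1, x_2, \dots):
\sigma(S) = \{\lambda \in \mathbb{C} : |\lambda| \le 1\},
% even though S has no eigenvalues at all.
```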

Part II: The Abstract and its Powerful Structure (Weeks 4-8)

Week 4: The Leap to Abstraction – C*-Algebras

  1. Banach algebras and the definition of a C*-algebra.
  2. The C*-identity: \(\|x^*x\| = \|x\|^2\).
  3. Two guiding examples: the non-commutative algebra \(\mathbb{B}(\mathcal{H})\) and the commutative algebra \(C_0(X)\) of continuous functions vanishing at infinity on a locally compact Hausdorff space \(X\).
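To see why the C*-identity is natural, here is the standard one-line verification that \(\mathbb{B}(\mathcal{H})\) satisfies it, using \(\langle x\xi, x\xi \rangle = \langle x^*x\,\xi, \xi \rangle\) and \(\|x^*\| = \|x\|\):

```latex
\|x\|^{2}
  = \sup_{\|\xi\| \le 1} \langle x\xi, x\xi \rangle
  = \sup_{\|\xi\| \le 1} \langle x^{*}x\,\xi, \xi \rangle
  \le \|x^{*}x\|
  \le \|x^{*}\|\,\|x\| = \|x\|^{2},
```

so every inequality is an equality and \(\|x^*x\| = \|x\|^2\).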

Week 5: The Commutative World – The Gelfand Transform

  1. Characters (multiplicative linear functionals) and the character space \(\Omega(A)\).
  2. The Gelfand transform \(\Gamma: A \rightarrow C(\Omega(A))\).
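For orientation, the transform itself is very concrete (standard definitions, stated in the unital case):

```latex
% For a \in A and a character \varphi \in \Omega(A):
\Gamma(a) = \hat{a}, \qquad \hat{a}(\varphi) = \varphi(a).
% The sup-norm of \hat{a} equals the spectral radius of a, so for
% self-adjoint a in a C*-algebra the Gelfand transform is isometric:
\|\hat{a}\|_{\infty} = r(a) = \|a\| \quad (a = a^{*}).
```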

Week 6: The Commutative Gelfand–Naimark Theorem

  1. Main result: Every commutative C*-algebra is isometrically *-isomorphic to \(C_0(X)\) for some locally compact Hausdorff space \(X\).
  2. The profound “algebra ⇔ topology” dictionary.

Week 7: States and Representations

  1. Positive elements, positive linear functionals, and states.
  2. The concept of a *-representation \(\pi: A \rightarrow \mathbb{B}(\mathcal{H})\).
  3. The Gelfand-Naimark-Segal (GNS) construction: turning a state into a representation.
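The heart of the GNS construction fits in three lines (standard formulas, stated in the unital case):

```latex
% A state \varphi on A yields a (semi-)inner product on A:
\langle a, b \rangle_{\varphi} = \varphi(b^{*}a).
% Quotient by N_{\varphi} = \{a \in A : \varphi(a^{*}a) = 0\} and complete
% to obtain a Hilbert space H_{\varphi}; A acts by left multiplication:
\pi_{\varphi}(a)[b] = [ab],
\qquad
\varphi(a) = \langle \pi_{\varphi}(a)\,\xi_{\varphi}, \xi_{\varphi} \rangle,
% where \xi_{\varphi} = [1] is the cyclic vector.
```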

Week 8: The General Gelfand–Naimark Theorem & Synthesis

  1. Main result: Every C*-algebra is isometrically *-isomorphic to a C*-subalgebra of \(\mathbb{B}(\mathcal{H})\) for some Hilbert space \(\mathcal{H}\).
  2. Course review: the full circle from concrete operators to abstract algebras and back. 

Why This Matters

Operator algebras form a unifying language for modern mathematics and physics, providing tools for understanding symmetry, quantum states, and the hidden structure of mathematical spaces. This project is designed not just to teach theory, but to develop the analytical skills and research mindset needed to engage with cutting-edge problems in pure mathematics.


Lung Tumor Detection using Computer Vision: Cheenta Research Program

How AI is Helping Detect Lung Cancer: A Young Researcher's Journey

Artificial intelligence (AI) is making significant advancements in medicine. One young researcher, Rushil Reddy, a 10th grader from Germantown Academy, is contributing to this progress with an AI project designed to improve lung cancer detection. His work earned him second place at the Pennsylvania Junior Academy of Science (PJAS) competition, marking the beginning of an exciting journey.

Why This Project?

Rushil has always been passionate about science, especially medicine, inspired by his father, a doctor. He combined this interest with machine learning to develop a project that enhances lung cancer detection using AI.

Lung cancer remains one of the leading causes of cancer-related deaths in the U.S. Early detection significantly increases survival rates. Doctors typically use CT and PET scans to identify abnormalities. However, determining whether a tumor is cancerous or benign requires time and expertise. Rushil aimed to speed up this process and improve accuracy with AI.

How Does It Work?

Rushil’s project optimizes machine learning models to detect lung tumors. His approach improves the screening process, which involves:

Rushil’s AI model automates the final step. Instead of manually analyzing scans, the AI extracts key features and predicts the probability of cancer. This allows doctors to make faster and better-informed decisions.

The AI Model Behind the Project

A key tool in this project is the Brock Model, a widely used predictive model for lung cancer. This model evaluates several factors, including:

Rushil’s AI model extracts these features from CT and PET scans. It then applies this data to the Brock Model to determine the likelihood of cancer. As a result, doctors can decide whether additional tests, such as a biopsy, are necessary.
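The Brock model is, at its core, a logistic regression over patient and nodule features. The sketch below shows only that structure: the feature set is abbreviated and the coefficients are illustrative placeholders, not the published Brock values or anything from Rushil's implementation.

```python
import math

# Illustrative placeholder coefficients -- NOT the published Brock values.
COEFFS = {
    "age": 0.03,             # patient age in years
    "nodule_size_mm": 0.12,  # nodule diameter measured on the CT scan
    "spiculation": 0.8,      # 1 if the nodule edge is spiculated
    "upper_lobe": 0.6,       # 1 if the nodule sits in an upper lobe
}
INTERCEPT = -6.5  # placeholder

def malignancy_probability(features: dict) -> float:
    """Logistic model: p = 1 / (1 + exp(-(intercept + sum(coef * x))))."""
    z = INTERCEPT + sum(COEFFS[k] * features.get(k, 0.0) for k in COEFFS)
    return 1.0 / (1.0 + math.exp(-z))

# A hypothetical nodule with features extracted from a scan:
p = malignancy_probability(
    {"age": 63, "nodule_size_mm": 14, "spiculation": 1, "upper_lobe": 1}
)
```

With real, validated coefficients, a probability like `p` is what lets a clinician decide whether a biopsy is warranted.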

Challenges and Future Work

Although the project has already shown promising results, Rushil plans to make further improvements. His next steps include:

Beyond building an AI model, Rushil’s work focuses on making AI decisions more transparent for doctors. Many medical professionals hesitate to rely on AI-based diagnoses because they do not fully understand how models reach conclusions. By aligning AI with well-established medical methods, Rushil ensures greater trust in its results.

Final Thoughts

This project highlights how young innovators contribute to medical advancements. Rushil’s work bridges the gap between AI and healthcare, making cancer detection faster and more accessible. As research continues, AI is set to revolutionize early cancer detection and diagnosis.

Shonku - Math Messenger and Micro Blogging App

Shonku math messenger has the following features:

  1. Chat using math equations with real-time preview. Write in LaTeX.
  2. Do micro-blogging with math. Follow other users.
  3. Make groups with your connections. Make selected users invisible or muted.

Screenshots

Sign up for the test version.

Optimizing Urban Accessibility: Building a 15-Minute City with Steiner Tree Approximation

A Research Paper by Prishaa Shrimali (USA, Grade 10)

Introduction

Urban planning is increasingly focused on creating sustainable, accessible cities where essential services are within easy reach. The 15-minute city (15-MC) model is an innovative approach aimed at structuring urban spaces so that residents can access key services, like healthcare, shopping, and recreational facilities, within a short walking or biking distance. In the study Optimizing Urban Accessibility: Constructing a 15-Minute City Using Steiner Tree Approximation, researchers introduce a method of applying graph theory—particularly the Steiner tree problem—to efficiently design 15-minute cities.

Methodology

The study employs the Steiner tree problem, which seeks to find the minimum-weight network that connects selected key points, called terminals (e.g., service locations). Using this graph-based approach, the model minimizes travel time between key amenities by optimizing the pathways that connect them. Unlike models that place a focus on residential areas, this approach prioritizes service locations, making it computationally efficient.

The model is applied to Manhattan, using the city's pedestrian network to highlight service accessibility. Here, amenities such as pharmacies, post offices, and supermarkets serve as the Steiner tree's terminals. The OSMnx Python library is used to pull data from OpenStreetMap, allowing for a practical analysis of service accessibility within a 15-minute walking radius.
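The approach can be sketched with NetworkX's built-in 2-approximation for the Steiner tree problem. The grid graph and terminal coordinates below are made-up stand-ins for the Manhattan pedestrian network and amenity locations that the paper pulls via OSMnx:

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Stand-in for a pedestrian network: a 10x10 weighted grid graph.
# (The paper builds the real Manhattan walking network with OSMnx.)
G = nx.grid_2d_graph(10, 10)
nx.set_edge_attributes(G, 1.0, "weight")  # uniform block lengths

# Terminals: hypothetical amenity locations (pharmacy, post office, market).
terminals = [(0, 0), (9, 3), (4, 9)]

# 2-approximate minimum-weight tree connecting the terminals.
T = steiner_tree(G, terminals, weight="weight")

total_length = sum(d["weight"] for _, _, d in T.edges(data=True))
```

Because only the service locations are terminals, the problem stays small even on a city-scale graph, which is the computational advantage the paper emphasizes.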

Watch the Video

Key Highlights

  1. Efficient Service Connectivity: By focusing on connecting service points, this model minimizes computational complexity and offers a feasible layout for urban planners to improve walkability.
  2. Dense Network Coverage in Manhattan: The analysis reveals that central and southern Manhattan already supports a high level of walkability, with the Steiner tree model indicating most residents in these areas can reach essential services within a short walk.
  3. Areas for Improvement: The study highlights gaps in the northern parts of Manhattan, suggesting areas where pedestrian access to amenities could be enhanced.
  4. Digital City Models: The study's approach yields detailed digital models that serve as practical tools for urban planners to optimize mobility, service placement, and sustainable design.

Inference

The Steiner tree-based method for designing a 15-minute city provides urban planners with an actionable framework to improve urban accessibility. While central areas of Manhattan demonstrate a high density of accessible services, regions like northern Manhattan could benefit from increased service points or better connectivity. This graph-based approach also shows promise for future expansions, such as multi-criteria optimization considering factors like environmental impact and cost.

In sum, the paper underscores the effectiveness of leveraging graph theory in urban planning and establishes a solid foundation for implementing sustainable, accessible city models that can adapt to the unique needs of various urban landscapes.

Research in School: Epidemiological Modelling and Outbreak Prediction using Hyperbolic Geometry | by Raghav Pai and Shreyas Vivek

Schedule: 26th October 2024 (Saturday)

Time: 5:00 PM IST

About the Presenters:

Raghav Pai: He is a Grade 11 student based in Mumbai, Maharashtra. He has been part of Cheenta for 8 months.

Shreyas Vivek: He is a Grade 11 student based in Dubai, UAE.

Abstract:

This paper introduces a novel approach to modeling disease transmission using hyperbolic geometry, specifically the Poincaré disk model. Traditional models like Susceptible-Infected-Recovered (SIR) assume homogeneous populations, which oversimplifies real-world interactions. By incorporating hyperbolic distance, the Poincaré disk model captures spatial clustering and irregular social interactions, offering a more realistic framework for studying epidemics. Simulations of the first wave of COVID-19 in India were performed using both the Poincaré disk and SIR models. Results show that the Poincaré disk model better captures localized transmission patterns and spatial dynamics, providing deeper insights into how diseases spread through structured populations. This approach highlights the importance of accounting for social network structures in epidemic modeling, offering valuable guidance for targeted public health interventions such as localized lockdowns and vaccination strategies. Our findings demonstrate the advantages of hyperbolic geometry in epidemiological modeling, with potential applications for improving future outbreak predictions and interventions.
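The classical SIR baseline the abstract compares against can be simulated in a few lines. This is a generic textbook sketch with illustrative parameters, not the calibration the authors fitted to the Indian COVID-19 data:

```python
def sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Euler integration of the SIR equations with S, I, R as
    population fractions: dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
    dR/dt = gamma*I."""
    s, i, r = s0, i0, r0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # newly infected this step
        new_rec = gamma * i * dt      # newly recovered this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

# Illustrative parameters with basic reproduction number R0 = beta/gamma = 2.5.
s, i, r = sir(beta=0.25, gamma=0.1, s0=0.999, i0=0.001, r0=0.0, days=365)
```

Because every individual here mixes with every other, the model has no notion of clusters or local geometry, which is exactly the limitation the Poincaré disk approach targets.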

This session highlighted how mathematical concepts can be applied to understand and predict the spread of infectious diseases more accurately.

Watch the session here
Modeling Epidemics in Hyperbolic Space

Traditional epidemiological models use Euclidean geometry, which may not capture the complex structure of real-world social networks. Hyperbolic geometry, with its curved spaces, better represents these networks by accounting for high clustering and varying levels of social interactions. The model presented maps social interactions onto a hyperbolic plane, visualizing the spread of disease as expanding waves through a network.
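A minimal sketch of the distance function underlying such a model, with points in the Poincaré disk represented as complex numbers of modulus less than 1 (this is the standard formula, not code from the paper):

```python
import math

def poincare_distance(u: complex, v: complex) -> float:
    """Hyperbolic distance in the Poincare disk model:
    d(u, v) = arcosh(1 + 2|u - v|^2 / ((1 - |u|^2)(1 - |v|^2)))."""
    num = 2 * abs(u - v) ** 2
    den = (1 - abs(u) ** 2) * (1 - abs(v) ** 2)
    return math.acosh(1 + num / den)

# Distances blow up near the boundary: two points a fixed Euclidean
# distance apart are hyperbolically much farther when close to |z| = 1,
# which is how the model encodes tight peripheral clusters.
d_center = poincare_distance(0.0 + 0j, 0.1 + 0j)
d_edge = poincare_distance(0.85 + 0j, 0.95 + 0j)
```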

Enhancing Prediction Accuracy

The hyperbolic approach allows for the identification of critical clusters where outbreaks may intensify. Compared to traditional methods, these predictions can be more precise, helping to target interventions like vaccinations or lockdowns in specific high-risk zones.

Applications Beyond Epidemiology

While the primary focus was on epidemic modeling, the use of hyperbolic geometry extends to other areas, such as analyzing information spread in social media, enhancing cybersecurity, and understanding financial network risks.

Research Training Program and Research Projects in Cheenta in 2024

Cheenta has outstanding research programs for school and university students.

These programs offer a unique learning experience and are a great addition to a learner's academic portfolio. University applications greatly value research projects.

  1. Research Training Program - Duration 6 months - Meets once weekly

    The research training program at Cheenta helps school and college students gain the skills needed to conduct research. In particular, there is a weekly workshop for 6 months where they learn theoretical and practical tools from a particular area of one of these subjects: Mathematics, Physics, Computer Science, Statistics, Machine Learning, or Artificial Intelligence.
  2. Research Project Program - Duration 8 months to 1 year - Meets once weekly

    A research project program at Cheenta helps students work closely with advisors. The goal is to produce and publish expository or original work in some area of mathematical science.
Srijit Mukherjee

PhD candidate at Pennsylvania State University. BStat and MStat from the Indian Statistical Institute

Dr. Ashani Dasgupta

PhD from University of Wisconsin-Milwaukee

Dr. Sourayan Banerjee

PhD from Indian Institute of Science Education and Research

Swarnabja Bhowmick

Computer Scientist (University of Calcutta)

In 2024, Cheenta is offering research opportunities in the following areas:

  1. Mathematics (Pure) - Hyperbolic Geometry, Topology, Group Theory
  2. Statistical Analysis
  3. Machine Learning - Computer Vision
  4. Artificial Intelligence
  5. Mathematics (Pure) - Algebraic Geometry

Research Seminar: Searching for giants and dwarves: searches for compact objects

Schedule

Saturday, 27th January, 2024.
10:15 PM IST

About Speaker
Dr. Debnandini Mukherjee

Dr. Debnandini Mukherjee is a postdoctoral researcher at the Center for Space Plasma and Aeronomic Research (CSPAR). She works at NASA's Marshall Space Flight Center in Huntsville, Alabama, with Tyson Littenberg's group, in the area of gravitational-wave data analysis and astrophysics. Her work involves looking for gravitational waves from inspiralling compact-object binaries comprising neutron stars, black holes, or both. She has been working with the LIGO-Virgo-KAGRA (LVK) Collaboration, using its data to look for signatures of gravitational waves and gleaning the astrophysical implications of such observations. Her focus has been on the search for intermediate-mass black-hole (IMBH) binaries, an interesting astrophysical source for the LISA mission as well. She is also involved in developing searches for gravitational waves for the LISA mission. In particular, she is interested in developing early-warning (pre-merger) searches, aimed at sending out early alerts for gravitational waves. Debnandini completed her PhD at the University of Wisconsin-Milwaukee in 2018. Before joining CSPAR, she was a postdoctoral scholar at the Pennsylvania State University.

Abstract

The discovery of gravitational waves in 2015 added a new channel for multi-messenger observations of powerful astrophysical phenomena. Besides telescope observations using the pre-existing electromagnetic channels like X-rays, gamma rays, and optical light, many such observations can also be supplemented and corroborated using gravitational waves. On the more massive end of the mass spectrum of compact objects, the intermediate-mass black holes (IMBHs) are expected to have masses in the range of 100 to 100,000 solar masses, filling the mass space between the stellar-mass and the supermassive black holes. GW190521, the heaviest black hole binary coalescence seen by the end of the last observing run in data from LIGO-Virgo, with its total mass of about 150 solar masses, has been the first clear observation of an IMBH. The rates of observation of gravitational-wave sources with at least one IMBH component, to which the detectors are currently sensitive, would help constrain their formation channel, which so far remains uncertain. Their observations could also point to a missing link between stellar-mass and supermassive black holes. On the other end of the mass spectrum, GW170817 was not only the first observed binary neutron star (BNS) event in gravitational waves but also started a new era in multi-messenger astronomy through its observation and detection in other channels. Such multi-channel observations can lead to a more robust understanding of the physics that can be gleaned from BNS mergers. Such BNSs are expected to spend several minutes in LIGO-Virgo's sensitive band before merging, at design sensitivity. This can be leveraged to send out early alerts to multi-messenger partners, to enable observation of such events in multiple bands.

The space-based laser interferometer LISA, expected to be operational in the coming decades, will be able to probe the millihertz frequency band. This will make it sensitive to a vast array of compact object mergers, including the massive black holes, or MBHs. These black holes, straddling the intermediate and supermassive classes, have masses extending above a minimum of about 1,000 solar masses. They are expected to be observable within the LISA band for several weeks to months before they merge. This makes them excellent candidates for low-latency, pre-merger observations. Also, some mergers of MBHs are expected to have electromagnetic counterparts due to the presence of gas or disks. Pre-merger alerts with sky-location information from LISA data analysis, sent out to the astronomy community, would enable early detections of such mergers in electromagnetic bands. Such multi-messenger observations stand to further our knowledge of astrophysics, including that of black hole formation and evolution.

In my talk, I will discuss the search for presently observable gravitational-wave sources in LIGO-Virgo data and the possibility of future observations of more massive sources using LISA, and explore the possibility of sending out pre-merger alerts for electromagnetically observable sources to enable multi-messenger observations.

Sign Up for the Live Session

Frieze isometries in Bengal Temples - Student Research Project at Cheenta

Indus Inscriptions - Research Seminar at Cheenta


28th August, 2021

7 PM IST

Internal students, researchers, and faculty members may join via Google Meet.

Bahata Ansumali Mukhopadhyay

This presentation seeks to demonstrate how multi-disciplinary approaches are indispensable for understanding the semantic role of the Indus valley inscriptions, one of the most enigmatic aspects of the most expansive Bronze Age civilization of the world (c. 2600 BC to 1900 BC).

Interrogating Indus inscriptions to unravel their mechanisms of meaning conveyance

https://www.nature.com/articles/s41599-019-0274-1

Ancestral Dravidian languages in Indus Civilization: ultraconserved Dravidian tooth-word reveals deep linguistic ancestry and supports genetics

https://www.nature.com/articles/s41599-021-00868-w

Bahata Ansumali Mukhopadhyay is a software technologist and an independent researcher originally from Bengal, presently settled in Bangalore. She researches the structural and semantic aspects of Indus script inscriptions and explores the linguistic identities of the people of the Indus Valley civilization. Her first paper, titled “Interrogating Indus inscriptions to unravel their mechanisms of meaning conveyance” (https://www.nature.com/articles/s41599-019-0274-1), problematizes more than 90% of existing decipherment efforts, as it claims that Indus script inscriptions were mostly written using logographic and/or semasiographic signs, and thus any attempt to read them by treating those signs as phonological units must be flawed. Her second article, titled “Ancestral Dravidian languages in Indus Civilization: ultraconserved Dravidian tooth-word reveals deep linguistic ancestry and supports genetics” (https://www.nature.com/articles/s41599-021-00868-w), published in the Nature group journal Humanities and Social Sciences Communications, seeks to partly resolve one of the most debated questions of South Asian prehistory, the linguistic identities of the Indus valley population. Ms. Mukhopadhyay continues to research the semantic aspects of Indus script inscriptions, and her latest research paper, which claims to have decoded certain signs of the Indus script, is under peer review. Bahata A. Mukhopadhyay is also a prolific and widely published Bengali poet, whose first book of poetry, ‘Ṭhung Śobdo Holei Kobitā’, came out at the 44th International Kolkata Book Fair 2020.

Traditionally, the study of inscriptions, i.e. epigraphy, was known to demand a thorough knowledge of linguistics, ancient languages, numismatics, palaeography, and history. However, much like how the Linear B script of ancient Greece was finally deciphered based on the grid-based statistical analyses done by Michael Ventris, the yet undeciphered Indus valley inscriptions too have benefited immensely from the work of various mathematicians, physicists, computer professionals, and others, who have employed their skills, building on incisive analyses done by linguists, archaeologists, and historians. The methods used encompass a broad spectrum of scientific tools and techniques: using n-gram Markov models to explore the correlation between co-occurring signs; calculating the conditional entropy of sign sequences to predict their linguistic nature; clustering Indus signs based on their frequency distributions; and applying different linguistic rules to tease out the underlying language used in the inscriptions. The author of this paper has drawn on the role of aerodynamic factors in the phonetics of natural languages, the distinction between phonological and semantic co-occurrence restriction patterns, and comparisons between various formalized data carriers, with their coexisting document-specific and linguistic syntaxes, to understand certain aspects of the Indus script and its nature.
This presentation would also very briefly discuss a few points from another upcoming paper, currently under peer review, in which the author explores the semantics of certain Indus inscriptions using archaeological, linguistic, and historical evidence. A theoretical payoff from this presentation would be a demonstration of the extent to which fluid movement between different branches of science can aid the understanding of inscriptions that have obstinately resisted traditional decipherment methods for the 150 years since their discovery.
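One of the statistical techniques mentioned above, the conditional entropy of sign sequences, is easy to sketch. The toy "inscriptions" below are invented for illustration: a rigidly ordered sign system is perfectly predictable (entropy 0 bits), while a freer ordering is not.

```python
import math
from collections import Counter

def conditional_entropy(sequences):
    """H(next sign | current sign) over a corpus of sign sequences, in bits."""
    pair_counts = Counter()
    first_counts = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            pair_counts[(a, b)] += 1
            first_counts[a] += 1
    total = sum(pair_counts.values())
    h = 0.0
    for (a, b), n in pair_counts.items():
        p_pair = n / total             # joint probability of the bigram
        p_next = n / first_counts[a]   # P(b | a)
        h -= p_pair * math.log2(p_next)
    return h

# Toy corpora of "inscriptions" over a two-sign alphabet:
h_rigid = conditional_entropy(["ABAB", "ABABAB", "AB"])  # fully predictable
h_free = conditional_entropy(["AABB", "ABBA", "BABA"])   # freer ordering
```

Comparing such entropies for a real corpus against random and rigid baselines is one way researchers have argued about whether the Indus signs behave like a linguistic system.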

Bayes' in-sanity || Cheenta Probability Series

One of the most controversial approaches to statistics: this post deals mainly with the fundamental objections to Bayesian methods and the Bayesian school of thinking. Fisher put forward a vehement objection to Bayesian inference, describing it as "fallacious rubbish".

However, ironically enough, it’s interesting to note that Fisher’s greatest statistical failure, fiducialism, was essentially an attempt to “enjoy the Bayesian omelette without breaking any Bayesian eggs" !

Ronald Fisher - Objections to Bayesian theory

Inductive Logic

An inductive logic is a logic of evidential support. In a deductive logic, the premises of a valid deductive argument logically entail the conclusion, where logical entailment means that every logically possible state of affairs that makes the premises true must make the conclusion true as well. Thus, the premises of a valid deductive argument provide total support for the conclusion. An inductive logic extends this idea to weaker arguments. In a good inductive argument, the truth of the premises provides some degree of support for the truth of the conclusion, where this degree of support might be measured via some numerical scale.

If a logic of good inductive arguments is to be of any real value, the measure of support it articulates should be up to the task. Presumably, the logic should at least satisfy the following condition:

Criterion of Adequacy (CoA):
The logic should make it likely (as a matter of logic) that as evidence accumulates, the total body of true evidence claims will eventually come to indicate, via the logic’s measure of support, that false hypotheses are probably false and that true hypotheses are probably true.

One practical example of an easy inductive inference is the following:

"Every bird in a random sample of 3200 birds is black. This strongly supports the following conclusion: All birds are black."

This kind of argument is often called an induction by enumeration. It is closely related to the technique of statistical estimation.
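The link to statistical estimation can be made concrete with a small Bayesian version of the black-birds example. The uniform prior here is my choice for illustration:

```python
# Beta-binomial update: prior Beta(a, b), observe k successes in n trials
# -> posterior Beta(a + k, b + n - k).
a, b = 1, 1        # uniform prior on the proportion of black birds
k, n = 3200, 3200  # every sampled bird was black
a_post, b_post = a + k, b + n - k

posterior_mean = a_post / (a_post + b_post)  # = 3201/3202
# Note that the posterior never puts probability 1 on "all birds are black",
# which is exactly the leap that induction by enumeration glosses over.
```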

Critique of Inductive Logic

Non-trivial calculi of inductive inference are shown to be incomplete. That is, it is impossible for a calculus of inductive inference to capture all inductive truths in some domain, no matter how large, without resorting to inductive content drawn from outside that domain. Hence inductive inference cannot be characterized merely as inference that conforms with some specified calculus.
A probabilistic logic of induction is unable to separate cleanly neutral support from disfavoring evidence (or ignorance from disbelief). Thus, the use of probabilistic representations may introduce spurious results stemming from its expressive inadequacy. That such spurious results arise in the Bayesian "doomsday argument" is shown by a re-analysis that employs fragments of inductive logic able to represent evidential neutrality. Further, the improper introduction of inductive probabilities is illustrated with the "self-sampling assumption."

Objections to Bayesian Statistics

While Bayesian analysis has enjoyed notable success with many particular problems of inductive inference, it is not the one true and universal logic of induction. Some of the reasons arise at the global level through the existence of competing systems of inductive logic. Others emerge through an examination of the individual assumptions that, when combined, form the Bayesian system: that there is a real valued magnitude that expresses evidential support, that it is additive and that its treatment of logical conjunction is such that Bayes' theorem ensues.

The fundamental objections to Bayesian methods are twofold. On one hand, Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience. The second objection comes from the opposite direction and concerns the subjective strand of Bayesian inference.

Andrew Gelman, a staunch Bayesian, pens an interesting criticism of the Bayesian ideology in the voice of a hypothetical anti-Bayesian statistician.

Here is the list of objections from a hypothetical or paradigmatic non-Bayesian, and I quote:

"Bayesian inference is a coherent mathematical theory but I don’t trust it in scientific applications. Subjective prior distributions don’t transfer well from person to person, and there’s no good objective principle for choosing a non-informative prior (even if that concept were mathematically defined, which it’s not). Where do prior distributions come from, anyway? I don’t trust them and I see no reason to recommend that other people do, just so that I can have the warm feeling of philosophical coherence. To put it another way, why should I believe your subjective prior? If I really believed it, then I could just feed you some data and ask you for your subjective posterior. That would save me a lot of effort!"

Andrew Gelman

In 1986, a statistician as prominent as Brad Efron restated these concerns:

"I like unbiased estimates and I like confidence intervals that really have their advertised confidence coverage. I know that these aren’t always going to be possible, but I think the right way forward is to get as close to these goals as possible and to develop robust methods that work with minimal assumptions. The Bayesian approach—to give up even trying to approximate unbiasedness and to instead rely on stronger and stronger assumptions—seems like the wrong way to go. The priors I see in practice are typically just convenient conjugate forms. What a coincidence that, of all the infinite variety of priors that could be chosen, it always seems to be the normal, gamma, beta, etc., that turn out to be the right choices?"

Well that really sums up every frequentist's rant about Bayes' 😀 !

And the torrent of complaints never ceases....

Some frequentists believe that in the old days, Bayesian methods at least had the virtue of being mathematically clean. Nowadays, they all seem to be computed using Markov chain Monte Carlo, which means that not only can you not realistically evaluate the statistical properties of the method, you can’t even be sure it has converged, adding one more item to the list of unverifiable (and unverified) assumptions in Bayesian belief.

As the applied statistician Andrew Ehrenberg wrote:

" Bayesianism assumes:

(a) Either a weak or uniform prior, in which case why bother?,

(b) Or a strong prior, in which case why collect new data?,

(c) Or, more realistically, something in between, in which case Bayesianism always seems to duck the issue."

Many are skeptical about the newfound empirical approach of Bayesians, which always seems to rely on the assumption of "exchangeability", something that is almost impossible to obtain in practical scenarios.

Finally Peace!!!

No doubt, some of these are strong arguments worthy enough to be taken seriously.

There is an extensive literature, which sometimes seems to overwhelm that of Bayesian inference itself, on the advantages and disadvantages of Bayesian approaches. Bayesians’ contributions to this discussion have included defense (explaining how our methods reduce to classical methods as special cases, so that we can be as inoffensive as anybody if needed).

Obviously, Bayesian methods have filled many loopholes in classical statistical theory.

And always remember that you are subjected to mass criticism only when you have done something truly remarkable, walking against the tide of popular opinion.

Hence: "All Hail the iconoclasts of Statistical Theory: the Bayesians"

N.B. The above quote is mine XD

Wait for our next dose of Bayesian glorification!

Till then,

Stay safe and cheers!

References

1. "Critique of Bayesianism" - John D. Norton
2. "Bayesian Informal Logic and Fallacy" - Kevin Korb
3. "Bayesian Analysis" - Andrew Gelman
4. "Statistical Rethinking" - Richard McElreath
