
2019-06-01

No More Fat Shaming: Turning Breast Cancer Patients into Full Body Models Might Just Work

Researchers in the Department of Biomedicine at the University of Basel, Switzerland, have succeeded in converting breast cancer cells into fat cells. In trials of a combination therapy in mice, the converted tumor cells were unable to form metastases. The hope is that the method can now be carried into human clinical studies.

Tumor cells are extremely versatile: they turn into "nomads" that leave the primary tumor, migrate to other parts of the body on the highway provided by the circulation, and then turn stationary again and start forming metastases. An important role in the conversion of stationary into mobile cancer cells is played by a process that normally takes place in embryonic development and facilitates the formation of organs. This cellular program is called epithelial-mesenchymal transition (EMT), and researchers headed by Gerhard Christofori have used it to develop a novel therapeutic approach. EMT and the inverse process, mesenchymal-epithelial transition (MET), are implicated in cancer's ability to metastasize.

Relying on two known active ingredients, the study published in Cancer Cell controlled the EMT program in mice in such a way that metastasizing breast cancer cells ultimately turned into fat cells. The converted cells cannot multiply and are barely distinguishable from ordinary fat cells; above all, they can no longer metastasize. Cells undergoing EMT or MET are in a highly changeable state, providing a window of opportunity for therapeutic targeting. The active ingredients used were rosiglitazone, a drug used against type 2 diabetes, and trametinib, used to restrict the growth and spread of cancer cells. In combination with conventional chemotherapy, these agents may be able to suppress growth of the primary tumor and, at the same time, the formation of metastases: forcing a critical mass of cancerous cells to differentiate into fat cells could deplete a tumor's ability to fight off conventional chemotherapy. Of course, this will have to be tested in human clinical studies where, unfortunately, rosiglitazone has shown apparent associations with an increased risk of heart attacks and death, calling for a careful and complex risk assessment that will largely center on how long the combination therapy must be given to secure against metastases.

2019-05-26

Is l'embarras du choix Another Axiom of Choice in Neuroscience?


Are supermarkets doing their customers – and ultimately themselves – a favor by presenting dozens of varieties of honey, pasta, shower gel and other products that are almost indistinguishable from one another? And is it a good idea for a restaurant to offer its guests a menu the size of a novel?
 
Not really, according to the Paradox of Choice, a thesis formulated in 2004 by American psychologist Barry Schwartz.[1] It looks as if a limitation of choices is needed to save us from ourselves. These thoughts were developed further in Richard Thaler and Cass Sunstein's Nudge,[2] a theory of choice architectures to be designed in light of human agents' bounded rationality.
 
This was confirmed by a recent Caltech study, and by some of its precursors, on the phenomenon of "choice overload": being offered so many options that the result is fatigue and disinterest. The same effect was found in Tinder users, who grew "emotionally exhausted" from "swiping left" after a fairly short while. The impact of assortment size and variety on consumer satisfaction has been studied from a variety of angles.
 
And neuroscience shows that this effect is not limited to trivial little things like a menu. Caltech behavioral economist Colin Camerer points to the example of Sweden, where, as part of a partial privatization of the social security system, people were offered a long list of private funds – a few hundred overall – in which to invest their retirement savings. Absent a choice, investors were assigned to a default fund. Initially, close to 70 percent of eligible citizens actively participated in the selection. Ten years later, the rate had come down to one percent, which was not the program's intention. But it reflects a very common reflexive reaction to hard choices: taking the "safe" option.
 
To see what happens in the brain during a selection process, Camerer and his colleagues conducted an experiment. They had their subjects make decisions while being observed by functional magnetic resonance imaging – a method previously used to "localize" brain activity involved in learning and choice. The scans revealed high activity in two regions: the striatum, where rewards are assessed, and the anterior cingulate cortex, where costs and benefits are weighed against each other.
 
In this specific case, subjects could choose among different themes that would be printed on a personalized coffee cup for them. The selection included either 6, 12 or 24 different themes. The experiment showed that a sample of 12 triggered the most intense brain activity.
 
Camerer's interpretation of the results is that humans prefer a larger selection – but beyond a certain ideal size, the factor of time expenditure interferes. Apparently applying a lesson in decreasing marginal utility, the brain concludes that the best of 12 is already pretty good – the best of 24 would not be enough of an improvement to make doubling the effort worthwhile. The nature of mental effort and the cost of thinking are still poorly understood.
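A back-of-the-envelope model – my own illustration, not part of the Caltech study – makes that intuition concrete. If the appeal of each option is treated as an independent draw from a uniform distribution, the expected quality of the best of n options is n/(n+1): it keeps rising as the assortment doubles, but by ever smaller amounts, while the effort of comparing the options grows roughly in proportion to n.

# Illustrative sketch (assumed uniform preferences, not data from the study):
# the expected value of the best of n independent Uniform(0, 1) draws is n / (n + 1).

def expected_best(n: int) -> float:
    """Expected maximum of n independent Uniform(0, 1) draws."""
    return n / (n + 1)

for n in (6, 12, 24):
    gain = expected_best(n) - expected_best(n // 2)
    print(f"options: {n:2d}  expected best: {expected_best(n):.3f}  gain from doubling: {gain:.3f}")

# options:  6  expected best: 0.857  gain from doubling: 0.107
# options: 12  expected best: 0.923  gain from doubling: 0.066
# options: 24  expected best: 0.960  gain from doubling: 0.037

Going from 12 to 24 options roughly doubles the search effort for an expected gain of only a few percent – consistent with the brain treating the best of 12 as "already pretty good."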
 
The study demonstrates that 12 is not a magic number but a consequence of the experimental setup. Still, Camerer believes that the order of magnitude is reasonably close: somewhere between 8 and 15 choices lies the golden mean. Individual factors would then determine where the optimum lies in each particular case.

The fact that we are still overwhelmed in the supermarket and elsewhere with an abundance of choices – or even want to be overwhelmed – is because "our eyes are bigger than our stomach": we appreciate a wide range of choices, at least at first glance, but only because, at this stage, we do not calculate how much effort the selection process will cost us. The consequences of this behavior were shown in an earlier study cited by the authors: in one store, customers were offered a stand with either 24 or just six varieties of marmalade for tasting. The table with the 24 varieties attracted significantly more interested parties, but the cash register told a very different story in the end: the probability that someone actually bought marmalade after tasting was ten (!) times higher at the booth with only six varieties than at the one with 24. It would be difficult to find more cogent arguments for limiting choice.



[1] Barry Schwartz, The Paradox of Choice – Why More Is Less (2004).
[2] Richard Thaler & Cass R. Sunstein, Nudge: Improving Decisions about Health, Wealth and Happiness (2008).

2019-04-01

Elementary, Watson

Theoretically, the perfect crime could be committed. But every mistake leaves a trace. The problem with most crimes is that they are not perfectly planned, if at all, and that too many interpersonal relationships hover in the background. Criminals often leave traces because they are stressed or get surprised. Besides, rarely will crimes be resolved based on physical traces only. Questioning, review of bank accounts or e-mails, as well as apparently random discoveries also play an important role. This ex post facto combination of at first seemingly unrelated information turns increasingly into a Big Data task of recognizing and relating patterns that resolve crimes. The perfect crime remains possible in theory, but, procedural failings of investigation and trial aside, becomes increasingly difficult to commit.

We leave traces all the time, stemming from virtually every activity we undertake. We shed DNA in an amazing variety of forms. Just one example is "touch DNA," left when we touch a surface such as a table, much as fingerprints are left on surfaces we touch without gloves. Both DNA and fingerprints are often found at crime scenes. They can identify and place an individual and are usually considered part of the gold standard of evidence. But there are also other traces, such as footwear impression evidence. While footwear impressions usually lead to a shoe rather than to an individual, they are especially important in serial crimes such as burglaries: burglars often wear the very same shoes to different crime scenes, which allows investigators to make critical connections between cases. A less well-known type of evidence is fiber traces, stemming from fibers transferred through contact between fabrics. They allow conclusions to be drawn about the presence of an individual in a room.
Of course, among the challenges that remain is figuring out what traces are relevant to solving a case.
Another question is how perpetrators cover their traces. They do so primarily by trying not to leave any. Shoe and finger marks, hair, clothing fibers, voice or surveillance camera records, and DNA are typically left at the crime scene by any individual who had contact with the room in the first place. It is theoretically possible not to leave such traces if a person wears gloves and a jumpsuit or, rather, a full-body suit donned without leaving external DNA and fingerprints. Still, such an individual would have to act with very professional timing so as not to leave any traces at all. Even very careful people often overlook trifles: wearing a glove eliminates the risk of touch DNA, but touching one's face with the glove and then some object in the room still leaves that individual's DNA behind. So, where traces exist, technologically savvy criminals seek to clean them up. Such cleanup attempts are often fruitless because scientific methods now exist to visualize concealed traces. Furthermore, most crimes are neither calculated nor planned but occur out of a poorly controlled impulse. Mistakes happen easily. Each of them leaves a trail.

Even "removed" traces can be made visible again, depending somewhat on the circumstances. As an example: blood on a red carpet is not easily noticed at first sight because of poor contrast. But different light sources with different wavelengths can be used to illuminate the carpet from different angles. Blue light, white light, UV light and infrared light are often used to highlight easily overlooked traces. Dried blood absorbs blue light and appears very dark; at a minimum, it is darker than the surface it is on. Where light tools are not enough, chemicals may be brought in. As far as traces of blood are concerned, luminol is a well-known chemical that, when sprayed on potential blood stains, reacts with the hemoglobin in blood so that luminescence becomes visible: after treatment with luminol, blood traces shine bright blue. In principle, of course, forensics first uses optical methods, which do not alter the chemistry of the trace. Only in a second or subsequent round are chemicals resorted to.

Except in identical twins, DNA differs from person to person. It exists in every cell of the human body. Everything we touch, and every bodily fluid we emit – blood, semen, sweat – contains our DNA. It enables identification of a specific person. This is why DNA evidence is often considered the safest avenue and why it enjoys a reputation superior to virtually every other type of evidence.

But DNA can also mislead an investigation. One needs to be careful in assessing the significance of traces. DNA can lead to a particular person, but it does not automatically mean that the individual whose DNA ended up at a place of interest is the perpetrator of a crime.

Speaking of DNA as a key to Pandora's box, the mere availability of answers may not always make it wise to ask the question. One feels reminded of Jack Nicholson's immortal dictum in A Few Good Men: "You want the truth? You can't handle the truth!" Especially when it comes to questions of legitimacy, DNA evidence has raised questions about the descent of the House of Plantagenet. Multiple-edged sword that it is, it appears to have raised no fewer questions about the House of Windsor, which has long been beleaguered by compelling genetic arguments disputing the descent of Queen Victoria from Prince Edward, Duke of Kent and Strathearn: hemophilia suddenly appeared in Victoria's descendants but did not exist in the royal family before, while porphyria was prevalent in the family before Victoria (such as in George III) but never afterwards.[1] Of course, the mere thought of DNA evidence transferring residency at Buckingham Palace from Elizabeth II to a character like Ernst August of Hanover would likely terminate popular acceptance of legitimacy-based monarchy in Britain.

Another frequently encountered challenge is determining the age of fingerprints. Fingerprints consist mainly of water and lipids such as cholesterol. To identify a person based on fingerprints, the papillary lines we see on fingertips, which form the fingerprint, need to be analyzed very precisely: where do two lines cross, where does a line end? These details serve to compare a trace with evidence in databases. But we can also analyze a fingerprint's chemical composition. In that process, we learn which substances are present and in what quantity. These substances change with time, and from this change one can derive hypotheses about the age of a fingerprint.

Current research examines how collaboration and communication work between the different actors in the criminal justice system – expert witnesses, crime scene investigators, detectives, prosecutors and judges – especially regarding DNA, fingerprints and handwriting. Communication between them is important for interpreting the information obtained from secured traces. DNA and fingerprints have a strong reputation. Handwriting, however, has come to be considered rather unreliable evidence. This is so in part because handwriting analysis is frequently confused with graphology. Graphology attempts to infer the writer's character from handwriting, but it is not based on scientific principles. Handwriting analysis, on the other hand, compares the handwriting in several documents to determine whether they were written by the same individual. This process results in a probability statement that can appear unreliable because the public generally expects an analysis of evidence to provide close to 100 percent reliability. It is often and conveniently forgotten that no analysis offers 100 percent reliability. Even DNA testing is based on probability statements.
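To make the notion of a "probability statement" concrete, here is a minimal sketch – my own illustration with hypothetical numbers, not drawn from the research described above – of the likelihood-ratio framing commonly used for forensic evidence: a finding does not prove a hypothesis, it merely updates its odds.

# Minimal sketch with hypothetical numbers: forensic findings are commonly
# reported as a likelihood ratio that updates the prior odds of a hypothesis
# (Bayes' rule in odds form), never as absolute certainty.

def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds: float) -> float:
    return odds / (1 + odds)

# Suppose the handwriting match is 1,000 times more likely if the suspect wrote
# the document than if someone else did (LR = 1000), and the prior odds from the
# rest of the case are 1 to 100.
post = posterior_odds(prior_odds=1 / 100, likelihood_ratio=1000)
print(f"posterior probability: {odds_to_probability(post):.3f}")  # about 0.909 – strong, but not certain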

Handwriting itself is not the only salient aspect of a written piece; the formation of sentences is no less important. Analysis also starts with sheet design. For example, handwriting experts pay attention to where an individual starts to write on a sheet of paper, what the top and side margins are, what line spacing is used, whether the entire sheet is used, and other similar factors. It is true that handwriting appears to be losing significance in a digitized world, but it retains an important application in assessing the authenticity of holographic wills.

The scope and methods of forensic analysis have changed greatly in recent years as technology has taken on far greater importance. Developments in photography, computer science, robotics and analytical chemistry have simplified many processes. DNA analysis revolutionized forensics and police investigations at the end of the 20th century, and forensic applications of technology will continue to evolve. But while technology development is important, the evaluation and significance of analytic results must remain the principal focus of every investigation.

The educational relationship between forensic science and police investigation is in flux as well. Switzerland has its own forensic study program at the University of Lausanne. It conducts international research in forensics and offers an academic education in which students deal with chemistry, mathematics, physics, and the securing and evaluation of evidence. Graduates are qualified as "general practitioners of forensics." Forensics is a composite science that supports police investigations. Most European universities do not offer comparable training. In the U.S., forensics is likewise largely confined to second- and third-tier colleges. This correlates somewhat with the low priority accorded to Evidence in the curriculum of leading American as well as European law schools – and even in bar exams – very likely because the field does not lend itself much to academic theorizing but is extremely important in practice. Yet it is an indispensable sequitur from the burden of proof. Crime scene work and forensics are police tasks, so research in this area is mainly police driven, and crime lab staff are notoriously overworked and underpaid. It would be beneficial if forensics could be established more broadly as a scientific discipline. One area where forensics is a firmly established subject of analysis and scholarship only marginally related to the physical sciences is forensic accounting and valuation, a function on which I plan to reflect in a different post at a later time.

Another 21st-century facet is emerging with great speed: digital evidence and digital forensics. As with other types of evidence, courts do not presume that digital evidence is reliable without some evidence of empirical testing of the theories and techniques associated with its production. This concern for reliability means that courts pay close attention to the manner in which electronic evidence has been obtained, and in particular to the process by which the data was captured and stored. Earlier process models have tended to focus on one particular area of digital forensic practice, such as law enforcement, and have not incorporated a formal description. The authors of the more recent Advanced Data Acquisition Model contend that this has prevented the establishment of the generally accepted standards and processes that are urgently needed in digital forensics. Their model is a generic process model intended as a step towards such a generally accepted standard for a fundamental digital forensic activity – the acquisition of digital evidence.
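To illustrate why courts scrutinize how data was captured and stored, here is a minimal sketch – my own example, not a procedure prescribed by the Advanced Data Acquisition Model, and the file name is hypothetical – of one widely used safeguard in acquisition: recording a cryptographic hash of the captured image so that any later alteration of the evidence can be detected.

# Illustrative sketch only: record a SHA-256 digest of an acquired disk image at
# capture time, then recompute it before analysis; matching digests support the
# claim that the evidence has not changed in the meantime.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage with a made-up image file name:
digest_at_acquisition = sha256_of_file("evidence_image.dd")
digest_before_analysis = sha256_of_file("evidence_image.dd")
print("integrity verified" if digest_at_acquisition == digest_before_analysis else "image has changed")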



[1] A. N. Wilson, The Victorians 25 (2002).