We have a new scientific article out. It discusses how bone mineral density correlates with age and can therefore be used to model age at death from human skeletal remains. We employed artificial neural networks, a simple machine learning technique, to learn patterns in femur densitometric data gathered from 100 female individuals of the Coimbra Identified Skeletal Collection. The mean error of the method, depending on the variables used, ranged from 9.19 to 13.49 years. It is also the first publication about the DXAGE app, developed in cooperation with team members from the Laboratory of Forensic Anthropology, UC. Though preliminary, and indicated only for the skeletal remains of adult females, it offers a very original approach for bioarchaeologists and forensic anthropologists to assess age at death.
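To give a flavour of the kind of model described above, here is a minimal sketch of a small feed-forward neural network regressing age at death from femur densitometric variables. Everything here is made up for illustration: the data are synthetic, the variable count and network size are arbitrary, and this is not the DXAGE model itself.

```python
# Minimal, self-contained sketch: a one-hidden-layer neural network trained by
# gradient descent to regress age from (fake) bone mineral density variables.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 100 individuals, 4 densitometric variables.
n, d = 100, 4
X = rng.uniform(0.4, 1.2, size=(n, d))                 # fake BMD values (g/cm^2)
age = 90 - 55 * X.mean(axis=1) + rng.normal(0, 5, n)   # fake ages: BMD drops with age
y = age.reshape(-1, 1)

# Standardize inputs; one hidden layer of 8 tanh neurons.
Xs = (X - X.mean(0)) / X.std(0)
W1 = rng.normal(0, 0.5, (d, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.01

for _ in range(2000):
    h = np.tanh(Xs @ W1 + b1)            # hidden activations
    pred = h @ W2 + b2                   # predicted age
    err = pred - y
    # Backpropagate the mean squared error.
    gW2 = h.T @ err / n; gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = Xs.T @ dh / n; gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mae = np.abs(pred - y).mean()            # mean absolute error in years
print(f"training MAE: {mae:.2f} years")
```

In the real study the reported mean errors come from the actual Coimbra data and the published model, not from a toy like this one.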
I am very pleased to announce that the HOT team has a new site:
The coolest thing I learned while developing HOT BONES was how to create these beauties using only the R programming language and a simple Excel sheet.
Dead Weight paper is out
Also, together with some of the team members, we recently published a new paper, Dead Weight: validation of mass regression equations on experimentally burned skeletal remains to assess skeleton completeness, in Science and Justice. It tests the MassReg app on the experimentally burned skeletons of the CEIXXI, available at the Laboratory of Forensic Anthropology, UC. Although MassReg was initially developed for non-burned skeletal remains, it seems to perform as well on burned remains as other techniques (e.g. comparison with references). It is thus a promising and useful new tool for assessing skeletal completeness.
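The general idea behind mass-based completeness assessment can be sketched as: a regression predicts the expected total skeletal mass from measured bone masses, and completeness is the recovered mass as a fraction of that prediction. The coefficients and variables below are entirely hypothetical; they are not the MassReg equations.

```python
# Hedged sketch of a mass-based completeness index. Coefficients are invented
# for illustration only and do NOT reproduce the published MassReg equations.
def predict_total_mass(femur_g: float, humerus_g: float) -> float:
    """Hypothetical linear equation: expected total skeletal mass in grams."""
    return 250.0 + 4.1 * femur_g + 5.3 * humerus_g

def completeness(recovered_g: float, predicted_g: float) -> float:
    """Recovered mass as a proportion of the predicted total (0 to ~1)."""
    return recovered_g / predicted_g

predicted = predict_total_mass(femur_g=420.0, humerus_g=180.0)
print(f"predicted total mass: {predicted:.0f} g")
print(f"completeness: {completeness(1500.0, predicted):.2f}")
```

A completeness value well below 1 would suggest that a substantial part of the skeleton was not recovered; the validation question in the paper is whether such predictions hold up when the remains have been burned.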
So this is one of those really cool things that crossed my mind when I was a kid. Luckily, it is a dream I can now tick off the checklist! I appeared in a documentary on biological anthropology, more specifically about a recently discovered archaeological skeletal collection from Lagos, Portugal. This collection is unique worldwide in how much it can tell us about slavery during the Age of Discovery.
Die Sklaven von Lagos ("The Slaves of Lagos")
The documentary is split into 6 episodes:
The skeletons from Algarve
Bioarchaeology: working with the bones
History meets CSI
What the teeth tell us
A piece of history for the nameless
The general interview with Maria Teresa Ferreira
All very interesting and available in German and English. My 15 seconds of fame are mostly in the fifth one, but it's best to watch them all!
I have finally finished a full tree of knowledge on Duolingo. This means there are no new exercises left for me to try for this particular language on the platform. Still, Danish sure is a difficult language: while I can somewhat read it now, it is still really hard to understand by ear. I have watched a few series and listened to a few audiobooks to improve that, yet…
Well, I just wanted to share this little achievement. By the way, you can add me as a friend and check my overall scores in all languages:
Today I was trying to catch up on the state of the art in deep learning, a machine learning paradigm that is exploding in popularity. In very few words, it trains artificial neural networks with very deep architectures (i.e. many hidden layers, thousands or millions of neurons) in an attempt to match or even beat human performance in a variety of tasks, for example recognizing objects in pictures. And somehow I ended up watching this video:
It is work from the Evolving AI Lab, with Jeff Clune giving the lecture. But as he notes, most of the work presented was done by his student, Anh Nguyen, who has some awesome publications in the field of deep learning.
Now, the coolest thing is that near the end of the video Clune starts talking about how deep learning can be used in a generative sense. For example, you can feed hundreds of pictures of a flower or a bus to your model, and then use the model to create new, never-before-seen pictures of what it understands a flower, or a bus, to be. Then he shows some images of the results, and it's very interesting.
Now the crazy thing. When I think about machine learning in general, it is hard not to see connections between how these abstract algorithms work and how we humans - a big-brained ape - think. With most algorithms, it is quite a stretch, really! Ultimately, it is hard to conceive of concepts like creativity coming from the side of computers. But deep neural networks are a really strange metaphor for “understanding” in its general sense, and if you go too deep down the rabbit hole, the experience of thinking about all this stuff gets a bit psychedelic.
What showed up in the video you can see here: evolvingai.org/synthesizing. Near the bottom there are some examples with ladybugs, cardoon flowers, fountains and other objects. I was not looking at anything in particular, but for a few seconds I got this very paranoid feeling of “something is really wrong!”. It took me a few minutes to realize what it was. But then it hit me… there was a picture taken by me there!
So it seems I had this picture on my old Flickr account. And somehow, as they explain, “we show the top 9 validation set images that highest activate a given neuron (left) and 9 synthetic images produced by our method (right)”, this makes my picture a very ladybug-ish picture to the “ladybug neurons” that were trained with thousands of ladybug pictures, or so I guess. That is so strange and hard to grasp.
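The core trick behind this synthesizing work is often called activation maximization: keep the network fixed and do gradient ascent on the input image itself, so the image drifts toward whatever makes a chosen neuron fire hardest (the real method also uses a deep generator network as an image prior to keep the results natural-looking). Here is a toy, numpy-only sketch where a single linear "neuron" with a fixed preferred pattern stands in for a trained ladybug neuron; all shapes and values are invented for illustration.

```python
# Toy activation maximization: gradient ascent on the INPUT of a fixed,
# linear "neuron" until the input matches the neuron's preferred pattern.
import numpy as np

rng = np.random.default_rng(1)

# The "neuron": activation = sum(w * x), a template matcher over an 8x8 "image".
w = rng.normal(size=(8, 8))
w /= np.linalg.norm(w)

x = rng.normal(scale=0.01, size=(8, 8))    # start from a near-noise input

for _ in range(100):
    grad = w                               # d(activation)/dx for a linear neuron
    x += 0.1 * grad                        # gradient ascent on the input
    x /= max(np.linalg.norm(x), 1.0)       # keep the "image" bounded

# After ascent, the input aligns with the neuron's preferred pattern.
cosine = np.sum(w * x) / np.linalg.norm(x)
print(f"cosine similarity to preferred pattern: {cosine:.3f}")
```

In a deep network the gradient is computed by backpropagation through many nonlinear layers rather than being a fixed template, which is exactly why the synthesized images end up looking like the network's strange internal idea of a ladybug.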