Divergent Association Task: Fast creativity test (datcreativity.com)
142 points by amichail on May 1, 2022 | 149 comments


Two observations:

1) people who seem to have gotten the highest scores are not following directions properly (e.g. someone on Twitter who got >98 used decidedly technical terms like "codec", despite clear instructions not to use technical jargon)

2) this seems to measure how extensive your vocabulary is (e.g. obscure, uncommon words seem to score much higher than common words, even though the common words may be quite diverse, seemingly simply because common words are... well, more common, and so more likely to be mentioned in a sentence together, just owing to English sentence structure).

I don't understand how knowledge of a broad amount of words maps to creativity? The person who would do best on this test is someone who knows a variety of very obscure nouns. In other words, someone who memorizes a wordlist. How does this map onto creativity?

Case in point: young children are extremely creative, and yet I predict would score very low on this test owing to a very limited vocabulary.
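For reference, the published write-up of the DAT describes the score as the average pairwise semantic distance between word embeddings, scaled to roughly 0-200. A minimal sketch of that scheme, using made-up 3-d toy vectors (the real test reportedly uses pretrained GloVe embeddings, not these values):

```python
import itertools
import math

# Toy 3-d vectors, invented purely for illustration; the real test
# reportedly uses pretrained GloVe word embeddings instead.
toy_embeddings = {
    "dog":     [0.9, 0.1, 0.0],
    "cat":     [0.8, 0.2, 0.1],  # semantically close to "dog"
    "tuesday": [0.0, 0.9, 0.3],  # far from both animals
}

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def dat_score(words, emb):
    # average pairwise cosine distance over all word pairs, scaled by 100
    pairs = list(itertools.combinations(words, 2))
    return 100.0 * sum(cosine_distance(emb[a], emb[b]) for a, b in pairs) / len(pairs)

print(dat_score(["dog", "cat", "tuesday"], toy_embeddings))
```

Under this scheme, swapping "cat" for something distant from "dog" raises the average, which is consistent with the observation that the scoring rewards pairwise distance rather than vocabulary breadth per se.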


In my biology class, we used to have a quiz on whether students could name five organisms as different from each other as possible. (Feel free to try this yourself before reading further.)

Some students would name all sorts of exotic and obscure animals -- axolotl, vampire squid -- but these would be scored poorly. All animals are actually pretty close, no matter how "obscure." The best score would simply pick organisms from different kingdoms: Dog, maple, portobello mushroom, amoeba, e. coli.

I tried this tactic in the above test, picking common words as categorically distinct as possible (dog, thought, kindness, Tuesday, zero), and received a very low score.

It seems that this system prefers the "axolotl, vampire squid..." version of distinctness instead.


One justification for this might be that the greater the number of words in your vocabulary, the more "colors" with which you can paint your linguistic constructs. More complex words can have a subtlety of meaning that distinguishes them from one another, so you can "pack" your sentences with more meaning. The more nuance in a word, the more potential cognitive distance between it and another word.

  E.g.
  He had a green car.
  versus
  He leased an olive drab Jeep.
Surely a distinguished artist could produce a masterpiece with three crayons, but most would not choose to.


> The more nuance in a word, the more potential cognitive distance between it and another word.

I don't see that. What is the cognitive distance between a more common color and a more obscure color? They're both colors. In your example, describing the color as olive instead of just green is certainly more specific, and maybe more accurate, but how is it more creative? If anything, it is less creative because it's more analytical and merely descriptive.

For instance, saying something is '.01 centimeters' long is not more creative than saying something is 'small' or saying that something is '.1 centimeters', it's just more precise.


If you're writing a story and trying to flesh out a character, someone who leases an olive drab Jeep is likely (a) someone who has money and (b) someone who likes to go off-roading, or at least give the appearance of being rough-and-tumble. Olive drab is also a color commonly associated with military usage, so that plays into it.

On the other hand, you could have a character who owns a lime Honda Insight. That is an old, unstylish hybrid, so that character is likely frugal, square, and perhaps more environmentally conscious.

And yet both characters could be said to have a green car. But the more detailed word choices can help give the story more life.


Is exposition-by-possessions really that powerful though? I would not infer all of the military off-roading stuff from a car, even if it was a camouflage painted Humvee.

I'd argue the creative writer would put this person in a situation where they make choices in a way that reveals their affluence and fondness of terrain driving.

Unless maybe this part of their personality is a shallow veneer and not really who they are at their core, in which case I guess possessions might work because it's also not as critical that the reader understand it.


> And yet both characters could be said to have a green car. But the more detailed word choices can help give the story more life.

Given that 'green' encompasses _all_ of the scenarios you describe, whereas olive/lime narrows the scenario, that would then seemingly be an argument that more specific words actually inhibit creativity by limiting the potential scenarios you can imagine with just a 'green' car.


I believe one of the ideas behind the evaluation is being creative enough to be able to imagine the scenario with those details, and then being able to select the words that best fit the scenario you've created. If you know words that encompass more disparate concepts, then the "cognitive distance" between them is greater. If you haven't told us which shade of green the car is, how do we know that you're not imagining everything to be the same shade?

If you're the most creative person in the world, but are incapable of expressing yourself, are you actually creative? It could be the same logic as "I've never played this game, therefore I have never lost, therefore I am the best in the world."

All this being said, I'm not trying to necessarily defend the usefulness of this survey. By their nature, none of these tests for "creativity", "intelligence", or "personality" will ever be able to perfectly capture or categorize or numerically label what is at heart a subjective quality.


> inhibit creativity by limiting the potential scenarios

Constraints are a typical tool for augmenting creativity, by restricting the easy path to a solution.



> I don't understand how knowledge of a broad amount of words maps to creativity?

It's not exactly, but from first principles, using it as a proxy does make some sense.

A robust finding in the educational literature is that creativity is generally limited by how many things you know that are relevant to the topic at hand, e.g. people who've never programmed before are not actually good at writing creative programs (there's a subtle distinction here where the end goal of the program may be creative, but how they go about getting there is unlikely to be creative).

So, very roughly, if you consider someone knowing more words as a proxy for them knowing both more concepts and having more connections between those concepts, you would expect them to have more creative output on topics of general interest (on average).

source: former educator


Using it as a proxy would make more sense if the instructions said to try to choose the most obscure words you know.


> It's not exactly, but from first principles, using it as a proxy does make some sense.

How so? If I asked you to take a test in Mandarin Chinese, I would assume you would know fewer words, but wouldn't you be just as creative in your mind?

(I admittedly did not take the test.)


I got 82 using rather simple vocabulary (cat, milk, office plant, chess, rock, range).

Those aren’t random and I have connections in my mind between them, but I did make a “stretch” to make them different without a really huge effort (like 3 seconds max). E.g. plant is, well, a plant, but it can also mean putting something in place, and that could be a game piece, and chess is a game where game pieces are used, etc.

This can absolutely be gamed, and that’s why I believe the authors want test takers to first take it and only later read about it.


FWIW, I tried a representative word from each category in the Dewey Decimal System and got a significantly below average score. I feel like that should be at least an average spread for concepts, so I agree that it's probably biased toward more obscure words. The restriction on "technical" terms (What even is that?) probably comes from the people running it realizing it's easy to game if you use high-brow words.


I noticed that as you go down the list, thinking of divergent terms gets harder because of the suggestive power of the previous items. I suspect it measures the ability to "think outside the box", or the ability to ignore constraints to find solutions. Interesting, but as the site says, it's a limited definition of creativity.


Also to get a good score you need to think about multiple meanings of each word - for instance I entered gas and can as separate words, however I was thinking of gas (as in fart gas) and can (as in can of beans) and that was chosen as an example of being close (because a gas can is a petrol container).

But during the test I realised this about halfway through and was then checking for double meanings. I almost wrote pot to describe marijuana, for instance, then decided to go for Sativa because I thought pot would cross with Flower.

Having said all this, I did score very well anyway, although I do improv, and listing unassociated words or “a to c” connections is a pretty common exercise.


I was in the top 2.1% and I used all non-technical words. The most obscure words I used were trimmed off (it only uses 7) and the ones it counted seemed to be words I expect all people of average intelligence to know.

It honestly seems like a rather poorly executed test. Valid nouns that are too obscure or "technical" to be in the word list get dropped but the person I know that got the highest score (95.07 raw score, 99.4 percentile) used "titties" (not even a real word) and at least two of the other words he used weren't even nouns (nouns are stated as a requirement).


With 2) I got 91.28 with happiness, basement, electron, cricket, morphine, artichoke, philosophy. Nothing notably advanced there. (My last 3, which weren't included, were: pop [like soda pop], discretion, moon.)


My guess: High vocabulary is correlated with high Openness to Experience (a personality trait in the 5-factor model). Openness in turn correlates with creativity.

I will note that Openness to Experience can be broken down into two aspects: Intellect* and simple Openness. The former is more closely correlated with vocabulary, whereas the latter is more associated with creativity. Thus I also have some doubts about the quality of this test.

*Intellect here does not mean intelligence (though there is a modest correlation). In this case it just means a preference for intellectual activity.


Creativity and openness also correlate with lateral thinking, which is the tendency to quickly think of lots of concepts with low relatedness. Lateral thinking naturally correlates with facility in rapidly thinking of words with low relatedness. So if you can quickly think of a lot of words with low relatedness, that serves as a loose measure of openness and creativity.


> I don't understand how knowledge of a broad amount of words maps to creativity?

I can see how it maps to complexity, but not creativity. Perhaps, I am not creative enough to think of a way to piece the two concepts together.

I would like to see how this test would map to languages with more/less distinct nouns than English. Are native speakers of said languages less creative? I doubt it.


I found myself starting to use obscure words from different time periods as a way of "randomizing" my thoughts before I knew what the test was measuring. It seems like it could be a good technique, but it implies nothing about your real creativity.


Creativity isn't necessarily useful or profound. Children are naturally quite creative, and we find their paintings to be pleasantly unique for example. The reason little kids don't have their creative works in the Louvre is that creativity is not significant in itself.


I didn't say anything about creativity being useful or profound. My point was that neither is it measured by vocabulary, which this seems to be testing.


> used decidedly technical terms like "codec", despite clear instructions not to use technical jargon

Words that are too technical are invalid and not used. Either “codec” is not too technical or it was not used in the score calculation.


creativity I think is fundamentally synthesis. a large, varied, vocabulary of concepts is important for this, but I think more important is a high drive for novelty. such a drive probably leads to larger 'vocabularies.'


I got 98.63% using what I think were pretty fair words:

Witchcraft Legumes Pagers Uranium Regret Showers Archaeology


I don't buy it. Seems to measure vocabulary + problem-solving-specific-to-this-question.

I bet plenty of "uncreative" people could easily do well if you just told them to pick specific things from different topics (e.g. an obscure food, an obscure job, etc.), which is all I did, and I scored in the top 1%. You don't have to be creative at all to do that. You only have to think of such a strategy, and then pick some obscure things.

Also, there seems to be some cultural bias - it wouldn't accept chow mein, but I bet pizza would have worked.

Also, the instructions seem silly - the more close to "technical" the terms were that I picked, the higher they seemed to score.


I got 90th percentile by rhyming my way through - box, fox, socks, locks, rocks, etc.


But that, I would say, is a display of creativity. Instead of using mental associations between concepts to generate words, you deconstructed them into symbols and found associations through symbols that have little to no connection to the underlying word.


Well put. The cultural bias may be approximated by comparing results obtained from English-as-the-mother-language speakers to those produced by people with less and less command of the language.


They should really limit the test to the most common 10,000 words or something.

I tried in good faith to not select obscure words (knowing from previous experience how the exercise works), but then I ended up typing in autoclave anyway, starting to wonder if it was too obscure or not. In hindsight, it was. It would have been nice to get that feedback before I submitted.


The task states: “Only single words in English.”

Chow mein is not a single word; pizza is a single word.


Very interesting. I might actually be wrong here.

So I assumed the problem is not cultural bias, but a tokenizer that does not have the concept of open compound words. For example, a naive implementation that just splits the string into chunks of printable characters would be susceptible to this problem.

To test this I tried it with various open compound words. My theory was that it would reject all of them. Turns out that is not the case!

The site rejected “sweet tooth”, “apple tree”, “chow mein”, “banana split”, “school bus”, “life jacket”, “mobile phone”, but curiously accepted “rain forest”. This means that it has the ability to sometimes recognise open compound words. Curiouser and curiouser!


rainforest is a single word. it probably removes all spaces before checking for words.
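That guess could be sketched as a normalize-then-lookup check. Everything here is an assumption about the site's behavior, not its actual code, and the vocabulary set is made up; it merely shows why "rain forest" would survive (because "rainforest" is a closed compound in dictionaries) while "chow mein" would not (there is no "chowmein"):

```python
# Hypothetical validator: strip spaces and hyphens, lowercase, then look the
# result up in a fixed vocabulary. The VOCAB set is invented for illustration.
VOCAB = {"rainforest", "pizza", "dog", "castle"}

def normalize(entry: str) -> str:
    # "rain forest" -> "rainforest", "chow mein" -> "chowmein"
    return entry.replace(" ", "").replace("-", "").lower()

def is_valid(entry: str) -> bool:
    return normalize(entry) in VOCAB

print(is_valid("rain forest"))  # True:  "rainforest" is in the vocabulary
print(is_valid("chow mein"))    # False: "chowmein" is not
```

If the site does something like this, it would explain why only open compounds with a closed-compound spelling get through.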


> chow mein

That's two words.


>> chow mein

> That's two words.

No, it is a single open compound word. You can check this by consulting a dictionary, which will list 'chow mein' as "a noun".


Just because it says "a noun" doesn't make it one word. That's like saying "brown pants" is one word. If you know Chinese you'd know it's two words, because mein means noodles and chow means fried (as opposed to lo, which means tossed).


> Just because it says "a noun" doesn't make it one word.

In fact, it literally does, because "a" refers to a single entity.

> That's like saying "brown pants" is one word.

No, it is not at all like saying "brown pants" because in order for a word to be an open compound word the modifying adjective has to create a new noun. For example, 'living room' is an open compound word, while "brown room" is not. If you have trouble differentiating between open compound words and just words with modifying adjectives, once again, consult a dictionary.

> If you know Chinese you'd know it's two words

We are discussing English. In English, chow mein is what is called an open compound word, which is when two words come together to form a new single word. There are open compound words, like chow mein, and closed compound words, like notebook. But in both cases, open compound and closed compound words are singular words.

Edit: and also, the site explicitly tells you to select "nouns". So your objection that chow mein is "two words" (it's not), is irrelevant.


I'm aware of what an open compound is, and I still say chow mein isn't one of them. Show me a dictionary that actually refers to it as such.

Furthermore, since it's a borrowed word, the "compoundness" usually comes with the borrowing, and it's most definitely two words in Cantonese.

> the site explicitly tells you to select "nouns"

It says to select 10 nouns. It says "Only single words in English". It also doesn't allow "Living room", so as you say, your objection is irrelevant because it doesn't allow open compounds.


> Show me a dictionary that actually refers to it as such.

Here you go: https://www.dictionary.com/browse/chow-mein

Note how it is listed as "noun", not nouns.

Just like living room: https://www.dictionary.com/browse/living-room

Note that there is no entry for "brown pants". Do you understand why both living room and chow mein are listed, but brown pants isn't?

> and it's most definitely two words in English.

No, it is not. Again, consult a dictionary.

> It says to select 10 nouns.

Yes, and "chow mein" is a noun. That the site doesn't seem to allow open compounds, but allows highly technical wording (despite saying to not use technical wording) would seemingly be a bug.


Your condescension is not necessary, nor is it correct. But I don't feel like continuing to debate someone who is so rude.


He is not being rude by asking you to consult a reference text.


> Do you understand why both living room and chow mein are listed, but brown pants isn't?

I already told him I know what an open compound is. This is just rude.


I apologize if my tone came off as rude. I try to match the tenor of the person I am responding to. My question was also asked in good faith. If you understand what an open compound is, but don't consider chow mein to be an open compound and have earlier compared it to being the equivalent of 'brown pants', then how do you in your understanding reconcile chow mein being in a standard English dictionary, but not brown pants?


I think naming this "Creativity Test" instead of "Max Word Distance Test" had gotten everyone in here really defensive about their scores.


Having taken the test, scoring an average 73 or so, and then looking at the Twitter high rankings, this is definitely rigged against simple terms. Yet the criterion “No specialised vocabulary (e.g., no technical terms)” should rule out a lot of words.

Words like moon and elephant will get you an average score, but using those words is following the criteria closely.

Is it really an accurate test when you can look at other results and then score 90+ 5 minutes later?


I scored 90.88 points on my first and only try with the following words: frenzy ghoul protocol concept autocracy hymn nephew screwdriver bail molecule. (Of which only the first seven words are scored when all are valid.) “Autocracy” is arguably the least commonly used word of those. After seeing the resulting scoring matrix, I realized that some of the words were too close to each other like frenzy with ghoul and protocol with concept. There was certainly some kind of priming effect going on when I selected those.

My point is, while the test can certainly be gamed, I don’t think that you need to pick highly unusual words to score high (90+).


I scored 85.86 first try with a surprisingly similar mix: lion, asteroid, easel, nostalgia, monarch, dilemma, paradigm, molecule, angst, corn.

Likewise I didn't appreciate the possibility that lion may have primed me for a monarch, dilemma for angst etc.

The order of our 7 may have changed the score.

We seem to each have a Tool (Screwdriver/Easel), Political Construct (Autocracy/Monarch), Abstraction (Concept/Protocol/Paradigm), "Animal" (Ghoul/Lion), Emotion (Frenzy/Nostalgia/Angst), and Scientific Object (Molecule/Asteroid/Molecule).


> Is it really an accurate test when you can look at other results and then score 90+ 5 minutes later?

It lets you repeat the test but only the first one is used in the study.


There's nothing wrong with using words like moon and elephant; it's a relative measure between words, not absolute.


Considering I got >95 on my first try and I have been frequently told that I find obscure associations, I think it is.


Yes, definitely this. I do not see how this test claims to measure creativity. It is essentially a vocabulary test.


The argument behind this test is that the words we generate are highly correlated, and therefore a person who is capable of generating a diverse set of words also exhibits a divergent train of thought.


My score is better than average, but looking at the highest scoring sets they seem to not follow the rules, because they use specialised terms like "trichrome" or "pustule".

I'm not a native English speaker, but I don't suppose the mentioned words are used in normal conversation.


Pustule isn't a common word, unless you work in a medical context perhaps, but I'd expect a native speaker to know it.


My work group uses it all the time, but always preceded by “festering”.


I kind of wish they had asked whether respondents had come across semantic similarity before, or had previous experience with linguistics, NLP, or sentiment analysis. I don’t think I’m particularly creative, but guessing at how it must have been implemented under the covers, I have to believe that contributed a bunch to the score I received. It’d be interesting to see how much of one’s score can be explained by those past experiences.


Interesting study. However, I rather disagree with the premise that "creativity" is akin to random juxtaposition of nouns. I consider that "imagination"...

Years ago, I saw a small poster which read (paraphrasing): "Imagination is making the simple complicated, while creativity is making the complicated simple"

The idea being that one can imagine almost anything (and "random" nouns is really just that), but creative results are "elegant", "simple" or even "obvious". It's a perspective I keep in mind when hearing and/or seeing "creative" things.


It seems the way this works is that you get a higher score if the 2-gram frequencies for all pairs in your set are low. But this can easily be gamed just by choosing very specific words, since they will naturally occur rarely with other niche words, regardless of whether or not their meanings are "related" in any sense.
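Under that 2-gram hypothesis (which is this commenter's guess, not the site's documented method), the gaming strategy is easy to sketch. The co-occurrence counts below are invented for illustration:

```python
# Sketch of the gaming concern: if the pair score is just inverse
# co-occurrence frequency, words that are merely *rare* beat words that are
# genuinely unrelated. All counts here are made up.
cooccurrence = {
    frozenset(["chair", "mountain"]): 120,  # common words, occasionally co-occur
    frozenset(["string", "quark"]):    15,  # niche words, rarely co-occur in text
}

def pair_score(a, b, counts, smoothing=1):
    # higher score = fewer observed co-occurrences
    return 100 / (counts.get(frozenset([a, b]), 0) + smoothing)

print(pair_score("chair", "mountain", cooccurrence))  # lower score
print(pair_score("string", "quark", cooccurrence))    # higher, despite string theory
```

Any frequency-based distance will have this bias: niche words win by rarity alone, regardless of conceptual relatedness.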


Ok, do it and report back. Remember, you need to think of the words yourself.


Done. Moved my score from the 54th percentile to 98th percentile.

My words:

Cannabinol, idempotence, median, mitosis, infrared, soufflé, synonym, genomics, tortoiseshell, crankshaft.


The site doesn't police the rules completely, so some self-policing is necessary to avoid things like specialised vocabulary.


I managed 76.4% with the super unrelated words

flag, buttress, tower, stables, kitchen, dungeon, portcullis, throne, keep, castle

[edit] tweaked it a bit to make the score higher after I realized the form always drops the 10th word :-)


> super unrelated words

Did you mean to say that these were all super related, not unrelated, words?

Crenel, buttress, hearth, stables, dungeon, gargoyle, throne, flag are all words relating to castles - some like crenel, explicitly so.

Edit: and in your updated wordlist, you just swapped one castle-specific term, 'crenel', for another, 'portcullis'...


perhaps the /s was needed. I swapped out crenel for portcullis because I figured more people would know what it is (crenel was an eliminated word) Also moved flag to the front because I noticed it was always eliminated and added castle to the end because the 10th word was always eliminated and it's funny to have castle eliminated from a list of parts of a castle.


I'm not so sure this is a good measure. From running it a few times, it's easy to get a very high score if you pick extremely specific words that are not often used together. For example, the words "Chair" and "Mountain" have a score of 78, while the words "String" and "Quark" have a score of 86. "String" and "Quark" are much, much more related than a chair and a mountain, though, via string theory.

Basically if you know many obscure words you will have an easy time getting a much higher score.


1) No specialised vocabulary 2) Only the first score is valid, as they recognize that gaming the test is much easier after the first go.

So maybe instead of “string” or “quark” you go with “physics”. I may be wrong, but I think having very general knowledge over different topics will spur more creative thinking, because specific knowledge will be less likely to be helpful in a completely different domain (not 100% of the time, obviously, but just a personal thought and generalization).


Your score is 97.52, higher than 99.83%: paternity, chocolate, snowblower, troglodytes, sediment, dictionary, ricochet.

This could be a fun wordle type game that one could play everyday.


That's really well done because those are common enough words.


You might enjoy Semantle: https://semantle.novalis.org/


Or better yet, Pimantle: https://semantle.pimanrul.es/


Out of curiosity, are you ESL or a native English speaker? It really seems like choosing uncommon words is the way to go.


I am an ESL speaker. That said, I did learn English at a very young age and it was the language of instruction at school.


> no jargon

> troglodytes


Takes a troglodyte to think "troglodyte" is jargon.


The site states that this tests only a sliver of the creative process, but the comments seem to be taking it as a very hard metric (with a lot of negativity).

Creativity can come from being able to fuse unrelated ideas from different topics into a novel outcome, and it seems like this test just tries to boil that down into a 10-word puzzle, which is obviously not going to be a foolproof method, but could be interesting when data is aggregated.


96th percentile with:

- city

- paddle

- fingernail

- magma

- sphere

- birdseed

- game

I can definitely understand the criticisms of how the specificity of a word influences its 'distance'. That's a difficult problem to solve, but maybe the study could calculate some kind of 'accuracy' metric based on how common the used words are.


My score was low, and oddly the word "nothing" was mostly the cause of it (if I'm understanding the table right). "Nothing" paired with nearly every other word (dog, thought, kindness, wind...) scored a high 50 or low 60.


This seems to be somewhat biased in favor of vocabulary. It would be interesting to see if there are similar methods for estimating breadth of vocabulary from 10 words and checking for correlation. You could perhaps use this to scale the score. The cynic in me feels like this would just result in headlines like "Knowing more words makes you creative"


95.14 in less than 3 minutes.

Eucalyptus, Zircon, Pistol, clay, anarchy, ovaries, interface, noodle, octopus, fan.

The process in my head was more or less mental navigation of the world + filtering. The problem with random search over our vocabulary is that our recall is highly correlated so to find something that is very unrelated we need to go through 5+ mental hops. Given sufficient working memory, it is feasible.

Sex also matters. In this case, I exploited my visuospatial abilities to become a passthrough and walk through the world quickly to observe as many items as possible, with the idea being that given sufficient exploration, I'd find things that are uncorrelated.

Somebody whose primary sex hormone is oestrogen will have better odds of doing what I described two paragraphs above, since their language centre sees far more activations than mine.

I had tried this task before, and the highest score I have gotten was around 110.


Judging by the other comments, it is creative to break the rules.


Apparently, this is how the world operates.


It's interesting to know the possible difference for native and non-native English speakers for this test.

I got "higher than 94.3%" and wonder if being non-native puts me at an advantage for the task.


I can't imagine it would. I'm fluent in 4 languages and with each language I learnt the most common phrases and words first.


I think it’s easier because while you indeed learn the most common words first, the learning gap between common and non-common words is much lower and the associations in the brain are less developed.

For reference, I got 92.8 and English is my second language.


I got 94.54 and I'm a native speaker


It’s a pretty random and unrepeatable test.

I think we as a society need to have a profound discussion about averages and how useless they are for understanding people or helping with every day life.


I literally just reacted to my surrounding environment, because I’m sound sensitive and I was curious what would happen if I just used what I pick up from my environment. Maybe creative? I was just wondering how the thing works. 90-something% and I didn’t bother to keep a reference. I was actively regurgitating other thoughts expressed around me, but with no intention other than understanding what’s being measured. Again maybe that’s creative, but it wasn’t intentional.


The rules explicitly say:

> Think of the words on your own (e.g., do not just look at objects in your surroundings).

I guess technically they don't say to not listen, just to not look.


Is it creative to think of a loophole to those rules, or is it a lack of creativity to assume that the rules are intended exactly as written?


I would have adopted those rules without being told, but like I said I’m noise sensitive. It was loud, I couldn’t really think anything other than what I was hearing. So I just picked out what felt interesting and riffed on it.


Random sources probably have a higher disparity than our own thoughts, which are probably coherent in some way.


Seems like a lot of people aren't following the rules.

This test seems to me a good way of illustrating how a diverse group of people can be more creative.


Aren't we all different? Pretty sure none of us are carbon copies of each other. So a more correct, non-political statement would be that collaboration can breed creativity.


I suppose I cheated, since I read the comments here before taking the test. I tried to follow the rules and avoid technical terms, but I cannot say whether I was primed by others' posts...

Your score is 91.32, higher than 97.69% of the people who have completed this task

asteroid rhubarb dishwasher nomenclature virus philosopher plateau


I highly disagree with their definition of different from each other. Usage together does not make the words similar. If they'd said something about words you would not find together in a sentence then I'm certain you'd get very different results.


Yes, agreed. I used the word hose, thinking garden hose, and vacuum, thinking of absence of matter. That specific pairing came to 47. I now see why based on the explanation of the test.

However, even if I had meant vacuum like the cleaning tool, I would not associate it with hose naturally because I use a stick vacuum.


A garden hose and vacuum are still very close semantically though, at least in the entire domain of nouns. They're both household objects.


Understood, but the point I was making is that I would not associate the word hose with vacuum unless someone else makes the connection for me.

I guess the broader point I’m making is the same as others have said. It’s a poor measure of creativity as context is important. A word on its own doesn’t necessarily mean the same thing to you as it does to me.


I think the idea is, actually, valid. But I don't think the measurement is. I think this plagues all creativity measures: you're asking the subject to be very open in their thinking, but the evaluation method must be narrow and objective.


I would agree: first attempt I scored 74, and analyzing their distance matrix it looks like their measure is broken. E.g., hammer and time are relatively close because "hammer time" is a common phrase, so their co-occurrence frequency is high, not because the words are semantically related. Second attempt was 89 after a trivial change to account for the broken measure. Nice idea for an experiment, but the execution is lacking.


I got a 86.44 with the following words that were selected: boat, space, charitable, acid, anger, revolution, code. Some are still a bit close but I'm satisfied with this. I don't like how it throws away 3 words though.


I can’t help but feel that intimately knowing the models behind this test helps you game it.

I consider myself “above average” creatively, but I credit scoring much higher than expected by approximating how the models behind the test actually work.


My words: crap cookware tether crumpet bisector pretender sky

Seemed to score pretty high. Though I’m not sure if I cheated or not. I wasn’t quite sure what all entailed “specialized vocabulary” and if something like “bisector” falls in there.


Assuming you don't misspell your words or use jargon, they only take the first 7 (rather than all 10), so put your best ones up front I guess (playing with it, apparently this cost me 5 points, haha).


Don't like this, since it's probably based on GPT-3. I can see how the results can easily be gamed, and how unrelated nouns can sometimes have a higher correlation based on artifacts in GPT-3.


89.48

I felt myself chaining the words together, so I think it was still associative, but I skipped 3 or 4 until I found myself hopping to something that felt pretty different from the previous word.


98.4% by just thinking of words that i could not easily draw a mental connection between, then iterating over the list and double checking that no pair of two words had a noticeable link between them. if they did, i swapped one or both of them out and repeated.

my final list of seven was: dolphin, dentures, bitterness, genocide, envelope, crampon, and slime


Same here. I don't think I or these words are particularly creative, but they got me to 99.17%:

  bacon trumpet motorcycle galaxy decimeter phosphorus plumber
Maybe "decimeter" is slightly technical, but the others, definitely not.


dentures and crampon are arguably technical


Did anyone else realize they probably take some sort of distance function on word embeddings then immediately get bored and leave?

To be honest, you probably already know if you're creative or not.
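For anyone curious what that distance function might look like: a minimal sketch, assuming the score is the average pairwise cosine distance between word vectors (scaled up, as the site's per-pair numbers suggest). The three-dimensional toy vectors below are made up for illustration; the real test presumably uses pretrained embeddings such as GloVe, with far more dimensions.

```python
from itertools import combinations
from math import sqrt

# Hypothetical toy embeddings, purely for illustration.
vectors = {
    "cat":    [0.9, 0.1, 0.0],
    "dog":    [0.8, 0.2, 0.1],
    "galaxy": [0.1, 0.9, 0.3],
}

def cosine_distance(u, v):
    """1 - cosine similarity: 0 for identical directions, up to 2 for opposite."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return 1 - dot / norm

def dat_score(words):
    """Average pairwise cosine distance over all word pairs, scaled by 100."""
    pairs = list(combinations(words, 2))
    total = sum(cosine_distance(vectors[a], vectors[b]) for a, b in pairs)
    return 100 * total / len(pairs)

# Near-synonyms like cat/dog drag the average down; "galaxy" pulls it up.
print(dat_score(["cat", "dog", "galaxy"]))
```

With seven words this averages over 21 pairs, which would also explain why the site only scores the first seven entries.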


I sorta guessed that the algorithm was words that had nothing to do with each other so I guess the 85.53 percent I got was probably more confirmation of a guess than anything.


? That's not guessing the algorithm, those were the instructions.

"Please enter 10 words that are as different from each other as possible, in all meanings and uses of the words."


This guy is so smart he followed the instructions without knowing them!


I feel like this is testing my English skills rather than my creativity. Maybe native English speakers can easily beat non-native speakers with smaller vocabularies?


I scored 90.56 with: cake, car, amoeba, rhombus, dew, atmosphere, heartbreak

The words closest to each other were amoeba/rhombus and the most distant were rhombus/car


I think that tells you a lot about their metric... dew and atmosphere are a lot more related than amoebae and rhombi...


Ugh, I overthought this and had a really basic word in the mix of my super silly SAT-vocab words. By my estimate this was different, but not by their metric.


I got 83, but I notice that some of my words were not included in the summary on the result page, including 'fright' and 'stupor'.


I got 96.14 with singularity, earwax, toilet, verbatim, popular, palladium, and stegosaurus. Though palladium maybe breaks the technical rule?


verbatim and popular break the noun rule.


I suppose that I did relatively well.

This is most interesting as a test as such. That is: how is it implemented?

Getting too concerned about the results may be hasty.


This FAQ by them is quite nice:

https://www.datcreativity.com/faq


This isn't really a creativity test. I'd love to see their baseline and how they show this stacking up against other creativity tests.

I've taken real creativity tests before (the ones that movie studios give for entrance to their internship programs). They are hard to grade but I think far more accurate. They have questions like:

- Name as many uses for a brick as possible in 10 minutes.

- You've been told that a scene calls for a visual of water flowing uphill. How do you accomplish this?


- You're in a desert, walking along when you look down and see a tortoise. It's crawling toward you. You reach down and flip it over on its back, its belly baking in the hot sun, beating its legs trying to turn itself over. But it can't. Not without your help. But you're not helping. Why is that?


I don't know if those are better. They sound like the same superstitious interviewing techniques that led Google to ask engineers to estimate the number of ping pong balls that would fill up a Boeing 747.


I think that's actually a pretty good question. It shows how someone tries to tackle a very nebulous problem with no clear answer.

And keep in mind that these questions were for internships in making TV and film, where exactly this kind of creativity is necessary.


Except that extensive internal tracking showed that the question was useless. The whole point of why I brought it up is because it sounds smart to the interviewer, but doesn't actually measure anything. Actual, validated psychometric tests are typically quite boring and seem intuitively insufficient, while the creative and fun questions never pass validation.


> uses for a brick

unrelated, but how many can you come up with? I’m struggling after only 4 or so uses...


- in a wall

- as a paperweight

- to keep an elevator open

- as a murder weapon

- for weight training

- as a miniature grave stone

- for practicing long throws

- as a reminder (place it somewhere where you want to be reminded of something, like a knot in a handkerchief)

- as a step-stone to reach the upper shelf

- to keep a button pushed down

- to test how deep the chasm is (drop it in and hear it hit the bottom)

- to prop up whatever

- as a counterweight

- as a fraudulent shipment for something sold on eBay

- as an anchor for a toy boat

- to train your karate chops

- for cracking nuts open

- as a slingshot (with a rope)

- as a pendulum (same)

- as a hammer (for nails, tenderizing meat, making crushed ice from ice cubes in a bag, …)

- as a digging or carving tool

- practicing to balance it on my finger

- raising the height of my monitor

- for blocking a mousehole

- for milling grains

- for opening a capped bottle

- as a bookend

…I could go on.

I now feel like bragging, but it seems like there’s almost no end to the possible uses.


Excellent. I think you would have gotten the internship. :)


I got stuck too, which is probably why I never got the internship. :)

Here's some odd ones though:

- Keep a car going by putting it on the accelerator

- Doorstop

- Counterweight

- Stairs for a mouse


Crimping an IDC cable.

(It works very poorly for this, and I speak from recent personal experience)


Got above 98th percentile using: microbe, annihilation, annuity, junta, sash, polka, transition, epidermis, tome, token.


If it makes anyone feel better my creativity peaks at about doing paint by number. Somehow I got a 92 (98.6 percentile)


Your score is 89.18, higher than 95.29% of the people who have completed this task.

cannon

terrier 85

tundra 93 84

vasculature 97 90 94

ghost 66 79 85 98

arcana 96 99 101 97 93

trickster 95 89 88 97 73 74

cannon terrier tundra vasculature ghost arcana trickster
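Incidentally, the triangular matrix above lets you sanity-check the scoring: averaging the 21 rounded pairwise distances essentially reproduces the headline score, suggesting the score is just the mean pairwise distance. (The tiny discrepancy is presumably because the site rounds each pair to an integer for display but averages unrounded values.)

```python
# The 21 pairwise distances transcribed from the triangular matrix above.
pairwise = [
    85,                      # terrier vs cannon
    93, 84,                  # tundra vs cannon, terrier
    97, 90, 94,              # vasculature vs earlier words
    66, 79, 85, 98,          # ghost vs earlier words
    96, 99, 101, 97, 93,     # arcana vs earlier words
    95, 89, 88, 97, 73, 74,  # trickster vs earlier words
]

mean_distance = sum(pairwise) / len(pairwise)
print(round(mean_distance, 2))  # 89.19, versus the reported score of 89.18
```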


Fun one. Got a 90.21 (96.62th percentile) with stupidity, alcohol, iceberg, mollusk, candlewick, umami, rendezvous.


87.31

dog cup vision abstraction green covariance practicality

I don't think I'm that creative though so I don't trust the results.


Your score is 83.15, higher than 77.96% of the people who have completed this task


85. Used Libertine and Coquette. My diet of early American literature is showing.


I guess this is done by vectorizing words and measuring distance between them?


Looks like a game created by wordcels to feel good about themselves. I wonder if someone could do the same but for shapes.


i got 102. way more creative than you.


99.74% by mixing simple and extremely specific technical words


The instructions specifically ask the user not to enter technical terms.


So if we just look at the score, is the conclusion of this test that if you limit yourself to respecting the rules, you are not creative?

Honestly, I don't see any justification why this score should be correlated to creativity.


haha missed that. words like "plenum", "scoliosis" etc. i dunno if they qualify



