construct-tracker

Track and measure constructs, concepts, or categories in text documents. Build interpretable lexicon models quickly using LLMs. Built on top of the OpenRouterAI package so you can use most generative AI models.

Why build lexicons?

They can be used to build models that are:

  • interpretable: understand why the model outputs a given score, which helps avoid biases and guarantees the model will detect certain phrases (important in high-risk scenarios, used in tandem with LLMs)
  • lightweight: no GPU needed (unlike LLMs)
  • private and free: run them on your local computer instead of submitting data to a cloud API (e.g., OpenAI), which may not be secure
  • high in content validity: measure what you actually want to measure (unlike existing lexicons or models that measure something only slightly related)

If you use this package, please cite

Low DM, Rankin O, Coppersmith DDL, Bentley KH, Nock MK, Ghosh SS (2024). Using Generative AI to create lexicons for interpretable text models with high content validity. PsyarXiv.


Installation

pip install construct-tracker

Measure 49 suicide risk factors in text data

Highlight matches

Tutorial Open in Google Colab

We have created a lexicon with 49 risk factors for suicidal thoughts and behaviors (plus one construct for kinship) validated by clinicians who are experts in suicide research.

from construct_tracker import lexicon

srl = lexicon.load_lexicon(name = 'srl_v1-0') # Load lexicon

documents = [
    "I've been thinking about ending it all. I've been cutting. I just don't want to wake up.",
    "I've been feeling all alone. No one cares about me. I've been hospitalized multiple times. I just want out. I'm pretty hopeless",
]

# Extract
counts, matches_by_construct, matches_doc2construct, matches_construct2doc = srl.extract(documents, normalize = False)

counts
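Conceptually, extraction reduces to counting how many lexicon tokens from each construct appear in each document. A minimal stdlib-only sketch of that idea (hypothetical, not the package's implementation; the real `extract` also handles lemmatization, normalization, and match bookkeeping):

```python
import re

def extract_counts(documents, constructs):
    """Count occurrences of each construct's lexicon tokens in each document."""
    counts = []
    for doc in documents:
        text = doc.lower()
        row = {}
        for construct, tokens in constructs.items():
            # Word-boundary match so "out" does not fire inside "without"
            row[construct] = sum(
                len(re.findall(r"\b" + re.escape(tok.lower()) + r"\b", text))
                for tok in tokens
            )
        counts.append(row)
    return counts

constructs = {"Hopelessness": ["hopeless", "want out"],
              "Loneliness": ["alone", "no one cares"]}
docs = ["I've been feeling all alone. No one cares about me. I'm pretty hopeless"]
counts = extract_counts(docs, constructs)
print(counts)  # [{'Hopelessness': 1, 'Loneliness': 2}]
```

Each row is a feature vector over constructs, which is why the resulting model stays fully interpretable.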

Highlight matches

You can also access the Suicide Risk Lexicon in CSV and JSON formats.


Create your own lexicon using generative AI

Open in Google Colab

Create a lexicon: keywords prototypically associated to a construct

We want to know whether these documents contain mentions of a certain construct, "insight":

documents = [
    "Every time I speak with my cousin Bob, I have great moments of clarity and wisdom",  # mention of insight
    "He meditates a lot, but he's not super smart",  # related to mindfulness, only somewhat related to insight
    "He is too competitive",  # not very related
]

Choose a model here and obtain an API key from that provider. Cohere offers a free trial API key (5 requests per minute). We'll choose GPT-4o:

import os

os.environ["api_key"] = 'YOUR_OPENAI_API_KEY'  # This one might work for free models if no submissions have been tested: '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'
model = 'gpt-4o'

Two lines of code to create a lexicon

l = lexicon.Lexicon()         # Initialize lexicon
l.add('Insight', section = 'tokens', value = 'create', source = model)

See results:

print(l.constructs['Insight']['tokens'])
['acuity', 'acumen', 'analysis', 'apprehension', 'awareness', 'clarity', 'comprehension', 'contemplation', 'depth', 'discernment', 'enlightenment', 'epiphany', 'foresight', 'grasp', 'illumination', 'insightfulness', 'interpretation', 'introspection', 'intuition', 'meditation', 'perception', 'perceptiveness', 'perspicacity', 'profoundness', 'realization', 'recognition', 'reflection', 'revelation', 'shrewdness', 'thoughtfulness', 'understanding', 'vision', 'wisdom']
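Generated token lists often benefit from light post-processing before counting, such as lowercasing, deduplication, and dropping unwanted entries. A stdlib sketch of that step (`clean_tokens` is a hypothetical helper; the package provides its own add/remove utilities):

```python
def clean_tokens(tokens, remove=()):
    """Lowercase, strip, and deduplicate tokens while preserving order."""
    seen, cleaned = set(), []
    for tok in tokens:
        tok = tok.strip().lower()
        if tok and tok not in seen and tok not in remove:
            seen.add(tok)
            cleaned.append(tok)
    return cleaned

tokens = clean_tokens(["Clarity", "clarity ", "Wisdom", "meditation"],
                      remove={"meditation"})
print(tokens)  # ['clarity', 'wisdom']
```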

We'll repeat this for other constructs ("Mindfulness", "Compassion"). Now count whether tokens appear in each document:

feature_vectors, matches_counter_d, matches_per_doc, matches_per_construct = lexicon.extract(
    documents,
    l.constructs,
    normalize=False)

display(feature_vectors)

Lexicon counts

This traditional approach is perfectly interpretable. The first document contains three matches related to insight. Let's see which ones with highlight_matches():

lexicon.highlight_matches(documents, 'Insight', matches_construct2doc, max_matches = 1)

Highlight matches
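Highlighting can be sketched in a few lines of stdlib Python: wrap each matched token in a marker so a reader sees exactly which phrases drove the score (a hypothetical helper, not the package's `highlight_matches` implementation):

```python
import re

def highlight(document, tokens, marker="**"):
    """Wrap each lexicon token found in the document with a marker."""
    for tok in sorted(tokens, key=len, reverse=True):  # longest first so phrases win
        pattern = r"\b(" + re.escape(tok) + r")\b"
        document = re.sub(pattern, marker + r"\1" + marker,
                          document, flags=re.IGNORECASE)
    return document

highlighted = highlight("Great moments of clarity and wisdom", ["clarity", "wisdom"])
print(highlighted)  # Great moments of **clarity** and **wisdom**
```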



We provide many features to add/remove tokens, generate definitions, validate with human ratings, and much more (see tutorials/construct_tracker.ipynb). Open in Google Colab


Structure of the lexicon.Lexicon() object

# Save general info on the lexicon
my_lexicon = lexicon.Lexicon()  # Initialize lexicon
my_lexicon.name = 'Insight'     # Set lexicon name
my_lexicon.description = 'Insight lexicon with constructs related to insight, mindfulness, and compassion'
my_lexicon.creator = 'DML'      # Your name or initials, for transparency in logging who made changes
my_lexicon.version = '1.0'      # Set version. Over time, others may modify your lexicon, so it's good to keep track: MAJOR.MINOR (MAJOR: new constructs or big changes to a construct; MINOR: small changes to a construct)

# Each construct is a dict. You can save a lot of metadata depending on what you provide for each construct, for instance:
print(my_lexicon.constructs)
{
 'Insight': {
	'variable_name': 'insight', # a name that is not sensitive to case with no spaces
	'prompt_name': 'insight',
	'domain': 'psychology', 	 # to guide Gen AI model as to sense of the construct (depression has different senses in psychology, geology, and economics)
	'examples': ['clarity', 'enlightenment', 'wise'], # to guide Gen AI model
	'definition': "the clarity of understanding of one's thoughts, feelings and behavior", # can be used in prompt and/or human validation
	'definition_references': 'Grant, A. M., Franklin, J., & Langford, P. (2002). The self-reflection and insight scale: A new measure of private self-consciousness. Social Behavior and Personality: an international journal, 30(8), 821-835.',
	'tokens': ['acknowledgment',
	'acuity',
	'acumen',
	'analytical',
	'astute',
	'awareness',
	'clarity',
	...],
	'tokens_lemmatized': [], # when counting you can lemmatize all tokens for better results
	'remove': [], #which tokens to remove
	'tokens_metadata': {
		'gpt-4o-2024-05-13, temperature-0, ...': {
			'action': 'create',
			'tokens': [...],
			'prompt': 'Provide many single words and some short phrases ...',
			'time_elapsed': 14.21},
		'gpt-4o-2024-05-13, temperature-1, ...': { ... },
		}
	},
'Mindfulness': {...},
'Compassion': {...},
}
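Because each construct is a plain dict, the whole lexicon serializes cleanly with the standard json module, for example to share it as a plain file (a sketch with abridged, hypothetical field values; the package may provide its own save/load helpers):

```python
import json

# A minimal constructs dict mirroring the structure above (abridged)
constructs = {
    "Insight": {
        "variable_name": "insight",
        "definition": "the clarity of understanding of one's thoughts, feelings and behavior",
        "tokens": ["acuity", "awareness", "clarity"],
    }
}

# Round-trip through JSON
serialized = json.dumps(constructs, indent=2)
restored = json.loads(serialized)
print(restored["Insight"]["tokens"])  # ['acuity', 'awareness', 'clarity']
```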

Contributing

See docs/contributing.md
