About the Role
We are seeking a candidate with deep expertise in operational psychometrics, experience researching AI applications in measurement, and the ability to develop proof-of-concept software in Python or similar languages. The role balances rigorous scientific research, practical application, and the implementation of innovative solutions to enhance our assessment systems.
Location: Remote. Working hours follow the U.S. Central Time zone.
About the Company:
Abstra is a fast-growing Nearshore Tech Talent services company that provides top Latin American tech talent to U.S. companies and beyond. Founded by U.S.-bred engineers with over 15 years of experience, Abstra specializes in sourcing skilled professionals across a wide range of technologies to meet our clients’ needs, driving innovation and efficiency.
Job Description
Key Responsibilities:
- Conduct and support psychometric analyses related to test development, calibration, scaling, equating, and validation studies.
- Research AI-related applications in psychometrics, including machine learning techniques and automated assessment models.
- Develop and implement proof-of-concept software solutions in Python or other relevant programming languages.
- Collaborate with cross-functional teams, including data scientists, engineers, and exam development specialists, to integrate psychometric models into operational processes.
- Ensure the delivery of high-quality outcomes through rigorous methodology, innovative thinking, and attention to detail.
- Adapt to evolving research and business priorities while maintaining a flexible and goal-oriented approach.
Required Qualifications:
- Master’s or PhD in psychometrics, I/O psychology, educational measurement, statistics, or an aligned field.
- Strong programming skills in Python or similar languages.
- Strong analytical and statistical programming skills in R.
- Experience with AI-related research in measurement, such as machine learning applications in assessment.
- Proficiency in psychometric methodologies such as classical test theory (CTT), item response theory (IRT), equating, test form assembly, standard setting, item/form analysis, automated item generation, test security analysis, report writing, and bias and fairness analysis.
- Demonstrated ability to work both independently and collaboratively in a distributed, dynamic team environment.
- Strong problem-solving skills, attention to detail, customer focus, and commitment to high-quality outcomes.
Preferred Qualifications:
- A strong desire to build a future in which assessments are high-quality and unbiased, as well as highly efficient and accessible.
- 2–5 years of experience in applied psychometrics, preferably in an operational testing environment.
- Experience in researching and/or implementing AI-driven innovations in assessment.
- Familiarity with large-scale assessment programs and operational constraints.
- Experience implementing automated scoring algorithms or adaptive testing models.
What We Offer
- Flexible working hours and remote work options.
- Opportunities for professional growth and development.
- A collaborative and inclusive work environment.
- The chance to work on impactful projects with a talented team.
- Excellent compensation in USD.
- Hardware and software setup provided.
Job Features
Job Category: Data Science, Machine Learning/AI
Type: Remote
Time Zone: US Central Time