
AI Research Scientist

About the Role

CareerVest AI Research is seeking world-class researchers and engineers in machine learning (ML) and deep learning to join a high-impact team driving innovation in AI for next-generation applications. You’ll be part of a multidisciplinary research group collaborating across software, hardware, and systems to deliver state-of-the-art, power-efficient AI solutions in domains such as mobile, automotive, cloud, and IoT.

Key Responsibilities

Your responsibilities may include applied and fundamental research in the following areas:

  • Research and development of novel deep learning models (LLMs, diffusion models, VAEs, transformers, SSMs, etc.)
  • Advancing model efficiency via compression, quantization, sparsity, and hardware-aware optimization
  • Innovation in ML system design, including federated learning, on-device learning, edge-cloud collaboration, and quantum/causal ML
  • Application of deep learning in computer vision, speech, NLP, power/wireless systems, and chip design
  • Design and implementation of machine learning frameworks and compilers optimized for on-device and accelerator-backed deployment
  • Training and deploying deep learning/reinforcement learning models using modern frameworks

Minimum Qualifications

One of the following:

  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or related field and 2+ years of related work experience
  • Master’s degree in Computer Science, Engineering, Information Systems, or related field and 1+ year of related work experience
  • PhD in Computer Science, Engineering, Information Systems, or related field

Preferred Qualifications

  • PhD in AI, Computer Science, Engineering, or related field, or Master’s with 4+ years of machine learning experience
  • Strong foundations in ML, deep learning, and computer science
  • Proficient in Python and PyTorch; experienced in building complex training/evaluation pipelines
  • Hands-on experience with LLMs, LMMs, LVMs, and transformer-based architectures
  • Knowledge of compiler development and ML model optimization for hardware acceleration
  • Experience with edge deployment and mobile/embedded ML
  • Publications as first author in top-tier AI/ML conferences (e.g., NeurIPS, ICML, ICLR)
  • Experience in:
    • ML for hardware-aware optimization (e.g., quantization, pruning)
    • Generalized AI systems (e.g., agentic systems, retrieval-based frameworks)
    • Reasoning acceleration for LLMs and GenAI
    • ML + Generative AI research and deployment workflows
  • Strong debugging, analytical, and problem-solving skills
  • Excellent communication and collaboration abilities

Additional Information

This role bridges the gap between fundamental research and real-world deployment. It is ideal for candidates passionate about transforming cutting-edge research into high-impact, efficient AI applications at scale.