
Experts in
ADVANCED SOFTWARE ENGINEERING

Building a diverse future, one placement at a time

We aim to build exceptional teams in the tech space, focusing on two core engineering specialisms: Advanced Software Engineering and AI, ML, & Data Engineering.

As a multi-award-winning global staffing specialist, we offer both permanent and contract recruitment solutions and are dedicated to nurturing the tech community. Having started out nine years ago as the leading niche staffing experts in Scala, we've since expanded our expertise to support clients and candidates in building tech teams across a wider range of specialisms.

From our offices in LA, Austin, and London, we support Fortune 100 companies as well as startups on their hiring journeys.

Certified as a minority-owned business by MSDUK, we are dedicated to diversifying and evolving the tech industry.

Find Out More

Industry
Insights.

Our Industry Census provides tailored insights into the trends, salaries, and work environments that matter most to you.

It's our mission to empower employers and employees with data to make informed decisions. Whether you're a job seeker, a hiring manager, or a policy maker, our Industry Census is a valuable tool for understanding the current landscape and anticipating future shifts in our industry.

These insights help shape the future, ensuring it reflects the needs and ambitions of professionals like you.


AI Product Hiring Trends

AI Product Managers are scarce, hiring is fierce, and salaries are soaring.


DOWNLOAD GUIDE

Gain access to the most comprehensive data on industry trends, salaries, and work environments

Trusted by

Latest
Jobs.

Senior Software Engineer – Capital Markets Technology (Aladdin Specialist)
Location
Milwaukee, United States
Artisan Partners Limited Partnership is seeking a hands-on Senior Software Engineer to join the Trading and Trade Operations Technology team. This role blends solution design, end-user engagement with traders and operations, and day-to-day production support. You will own Aladdin integrations and adjacent systems, drive reliability and automation, and mentor junior engineers to build sustainable capability across the trading lifecycle. The ideal candidate has 7+ years of software development experience within investment management and a deep understanding of the trading lifecycle. Positioned at the forefront of shaping and advancing technology solutions for our trading and trade operations teams, the role is central to driving efficiency, scalability, and innovation across enterprise applications.

Location: San Francisco, CA | Boston, MA | New York, NY
Base Salary Range: $150,000 - $200,000
Specific placement within the provided range will be determined by an individual's geographic location as well as relevant experience and skills for the role. Base salary is only one component of our total compensation package. Associates may be eligible for a discretionary bonus, which is determined upon Firm and individual performance.

Responsibilities
The candidate is expected to:
- Lead design and delivery of Aladdin integrations, including trade capture, allocations, positions/P&L feeds, confirmations, and reconciliations
- Engage directly with traders, middle office, risk, and operations to translate business requirements into technical solutions
- Provide day-to-day production support, including incident management, RCA, fixes, post-mortems, and continuous improvement
- Build and operate resilient services such as APIs, ETL pipelines, data models, adapters, and monitoring solutions
- Maintain clear and comprehensive documentation for designs, code, processes, and system changes; ensure runbooks and operational playbooks remain current
- Coordinate with vendors (BlackRock/Aladdin) and internal platform teams for updates, patches, and environment management
- Mentor and coach junior engineers; participate in hiring, onboarding, and knowledge transfer
- Automate repetitive operational tasks (deployment, testing, data validation) and enforce secure, compliant design patterns

Qualifications
The successful candidate will possess strong analytical skills and attention to detail. Additionally, the ideal candidate will possess:
- A bachelor's degree in Computer Science, Engineering, Finance, or equivalent experience
- 7+ years of professional software engineering experience, with at least 3 years in capital-markets, trading, or middle-office technology
- A proven track record integrating with BlackRock Aladdin, or substantial experience implementing large enterprise trading platform integrations
- Investment management industry experience (required)
- Strong communication skills with demonstrated stakeholder engagement (traders, operations, risk)
- Strong knowledge of the trading lifecycle and related operational workflows
- Experience owning production systems, managing incidents, and delivering post-incident improvements

Technical Skills
- Languages: Python, Java, or C# (Python/Java preferred)
- Data and ETL: SQL, Snowflake, and experience designing robust pipelines and data validation
- Cloud/Infrastructure: AWS, GCP, and Azure fundamentals; Docker and Kubernetes (or proven cloud experience)
- Security and Compliance: Awareness of security, regulatory, and data governance requirements relevant to trading systems
Sr Backend Engineer
Location
Slovakia, Europe
About the job
We are Ataccama, and we are on a mission to power a better future with data. Our product enables both technical and less technical 'data people' across their organizations to create high-quality, governed, safe, and reusable data products. It's what made us a Leader in the Gartner Magic Quadrant® for Data Quality Solutions™, and what inspired Bain Capital Tech Opportunities to invest in our future growth.

Our vision is to be the leading AI-powered cloud data management company, and to do that, we're making Ataccama a great place to work and grow. Our people are located across the globe. They succeed by collaborating as a team and thrive in our company culture defined by these core values:
- Challenging Fun
- ONE Team
- Customer Centric
- Candid and Caring
- Aim High

We're building the backbone of data trust. Our product reveals the hidden paths data takes across systems, bringing clarity, compliance, and confidence to data-driven companies. As we grow, we're looking for a Senior Backend Engineer who thrives on complexity and wants to shape a product that's anything but ordinary.

Your challenge
- Build data lineage scanners for a wide range of technologies, from various databases to data integration tools and business intelligence platforms.
- Design and evolve backend systems that go far beyond typical REST APIs. You'll model metadata-rich flows and scalable ingestion pipelines.
- Collaborate closely with product managers to turn user problems into pragmatic, technically sound features.
- Own features end-to-end, from design to production deployment and operations.
- Mentor teammates with empathy, supporting juniors and constructively challenging seniors through thoughtful code reviews and design conversations.
- Play a hands-on role in architectural decisions, balancing innovation with simplicity.
- Contribute to our AI-driven efforts - we're actively exploring how to embed AI, LLMs, and MCP into lineage scanning.

You might be a great fit if you...
- Are fluent in Java and curious about or experienced with Kotlin (we ❤️ Kotlin)
- Enjoy deep backend work - you've moved beyond standard APIs and get excited by metadata models, SQL parsing, or platform-like architectures
- Have touched data lineage, SQL parsers like ANTLR, or metadata systems before (very nice to have)
- Have any data engineering experience - it is more than welcome
- Have experience building and operating distributed, multi-tenant SaaS systems
- Know your way around Kubernetes or are eager to learn it
- Are comfortable with event-driven architectures and message queues (Kafka, RabbitMQ)
- Want to work in a collaborative engineering culture where mentorship, feedback, and shared ownership are the norm

Why join us?
- You'll help shape a complex, valuable product at the core of modern data governance.
- You'll work with sharp minds who care about code quality, design thinking, and continuous learning.
- We balance autonomy with collaboration, giving you space to innovate and grow.
- Your work will power the next wave of AI-aware data infrastructure.
Tech we use
- Languages: Kotlin (preferred), Java (21)
- Frameworks & Tools: Spring Boot, Gradle, jOOQ, GitLab CI/CD
- Infra: Kubernetes, ArgoCD, Helm, S3, Elastic, Aurora/Postgres
- Observability: Grafana stack + Prometheus
- Cloud: AWS and Azure (multi-deployment SaaS)

Work equipment
- Company laptop
- Company mobile phone + SIM card

Perks & Benefits
- Long-Term Incentive Program
- 2 sick days and 25-30 days of vacation, with the option to request additional Flexible Time-Off days when needed
- The Global Family Support Program - a paid leave program to help all parents focus on the new addition to their family
- Flexible working hours & hybrid work setup
- Benefit Plus - flexible employee benefit platform (incl. Multisport card)
- Annual package for mental health support
- "Bring Your Friend" referral program
- Online courses & company access to Udemy to hone your skills
- Conference tickets to the best industry events of the year
- Company library, where you can even suggest the best educational books for us to order
- Kitchen stocked with fresh fruit and juice, teas, and the best coffee
- Meal vouchers
Sr ETL Developer
Location
New York, New York
Senior ETL Software Engineer (would consider other locations)

FIA Tech is the leading technology provider to the exchange-traded derivatives industry. Owned by a consortium of thirteen leading clearing firms and the Futures Industry Association (FIA), FIA Tech is committed to serving the industry and launching innovative solutions to improve market infrastructure across the listed and cleared derivatives industry. FIA Tech works in close partnership with the broader industry, including exchanges, clearinghouses, clearing firms and other intermediaries, as well as independent software vendors, buy-side firms and end users, to bring efficiency to the exchange-traded and cleared derivatives industry.

FIA Tech is seeking an ETL (Extract, Transform, Load) expert to join our Connectivity team during a period of exciting growth. The team is responsible for integrating CCPs, execution platforms, back-office providers, brokers and buy-side firms with FIA Tech applications.

What you'll do
- Design & Develop ETL Workflows: build robust, scalable data pipelines using CloverDX to extract, transform, and load data across systems
- Data Integration & Transformation: handle complex data sources (e.g., derivatives trade data, financial systems) and ensure alignment with business and regulatory requirements
- Transform raw data into structured formats using business rules and logic
- Load data into target databases, data lakes, or warehouses
- Optimize performance of ETL processes for scalability and efficiency
- Conduct data quality checks and validation to ensure accuracy and consistency
- Collaborate with data analysts, engineers, and business stakeholders
- Maintain and troubleshoot ETL pipelines and resolve data-related issues
- Document ETL processes and maintain technical specifications

Qualifications
- Experience using CloverDX to extract, transform, and load data across systems is a plus
- Proficiency in SQL and ETL tools (e.g., Informatica, Talend, Apache NiFi, SSIS)
- Experience with data warehousing concepts and data modeling
- Strong understanding of database systems (e.g., Oracle, MySQL, PostgreSQL)
- Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud)
- Knowledge of scripting languages like Python or Shell
- Attention to detail and problem-solving skills
- Bachelor's degree in Computer Science, Information Systems, or related field
- Over 5 years of experience in ETL development or data engineering
- Certifications in ETL tools or cloud platforms are a plus

Salary range: $130,000 to $150,000, depending on experience
Sr. PostgreSQL Database Engineer
Location
New York, New York
Job Title: Sr. PostgreSQL Database Engineer
Corporate Title: VP
Department: Technology
Location: New York
Reporting To: Linux Systems Director
Base Salary Range: $175k - $225k

Job Purpose:
BTIG is seeking a highly skilled and experienced Senior PostgreSQL Database Administrator to join our team. In this role, you will be responsible for managing and optimizing our PostgreSQL databases, ensuring their performance, security and reliability. All deployments are done on Linux, so the candidate must have hands-on daily work experience with Linux OS. You should be comfortable wearing multiple hats and supporting various aspects of our technology infrastructure as necessary.

Duties and Responsibilities:
- Administration: own and maintain various PostgreSQL clusters and instances across different environments (prod/dev/UAT) on Linux (bare-metal, virtualized and containerized)
- Design: design and implement PostgreSQL database systems to meet business requirements (performance, HADR, etc.); design, deploy and support appropriate data models and patterns
- Performance Tuning: optimize database performance across hardware, configuration, indexing, partitioning, etc.
- Query Optimization: write and optimize complex SQL queries for high-volume environments
- Backup and Recovery: develop, deploy and support backup and disaster recovery procedures
- Monitoring and Alerting: set up monitoring, alerting and proactive measures to maintain database health and performance
- Troubleshooting: diagnose and resolve database-related issues and provide timely solutions; be able to reach below the database tier and troubleshoot OS, hardware, and network issues as well
- Collaboration: work closely with development and infrastructure teams to support database usage

Qualifications and Desirable Experience:
- 5+ years working as a PostgreSQL DBA
- Hands-on, daily work with Linux operating systems
- Expert knowledge of PostgreSQL management: high availability (esp. Patroni), CDC (Change Data Capture), security, replication, maintenance (backups, restores, vacuum, re-indexing), version upgrades, disaster recovery, legacy migrations, configuration, monitoring, alerting, etc.
- Strong experience in PostgreSQL performance tuning and query optimization
- PostgreSQL user best practices and the ability to evangelize/train
- Excellent communication and collaboration skills; the ability to work with developers as well as other infrastructure team members
- Skill with containerization technologies (Docker, Kubernetes) and storage management
- Strong Linux skills
- An engineering mindset: robustness, debugging, troubleshooting, root-cause analysis
- Experience with other RDBMS/NoSQL technologies (SQL Server, MySQL, MongoDB, ClickHouse, Kafka)
- Scripting/coding: shell script, Python, PowerShell
AI Developer
Location
New Jersey, Philadelphia
1. Basic Role Information
• Job Title: AI Developer
• Department/Team: AI Development team, responsible for everything Technology
• Location (On-site / Remote / Hybrid): Preference is onsite, but open to anywhere in the US
• Hiring Manager Name & Contact: Matthew Friebis and his Dev Manager
• Contract Type (e.g., Permanent, Contract, Temp-to-Perm): Meant to be full time, but open to C2H (12-month contract-to-hire)

2. Essential Skills & Experience
Top 3–5 technical skills required:
- Python programming: PyTorch, FastAPI (their AI team is called Visdom)
- Knowledge of and experience with vector-based testing and RAG system concepts
- Prompt engineering for LLMs
- PostgreSQL
- NLP
Top 3–5 soft skills or behaviours required:
- Taking ownership
- Leadership / driving the team
- Strong communicator
Minimum years of experience expected: ideally someone with 4-5 years of experience across these different technologies

3. Daily Rate / Compensation
• Hourly/Daily Rate: No hourly rate right now; will need to figure it out
• Is there flexibility in the rate? (Yes / No) – Yes. If yes, under what conditions?

4. Role Feasibility
• What could prevent this role from going ahead? – The role has been signed off. Where they would need to get approval is us as a supplier and what would have to be done to get us onto it.
• What's the impact if this role remains unfilled? (e.g., delivery delays, team overload, client risk) – Delivery delays

5. Ideal Candidate Profile
• What would make a candidate truly stand out?
• Any certifications, companies, or achievements you view as a "bonus"?

6. Resume & Sourcing Checkpoint
• How many resumes have you received so far? – 0
• How many candidates have been interviewed? – 0
• What's been lacking or missing so far? – 0
• What can we improve in our search or messaging to attract better candidates? – 0

7. Availability & Urgency
• When do you need someone to start? – December start
• How urgent is this hire (1–5)? – 5
• Do you have placeholder interview slots booked? – Yes (If not, can you block time proactively this week?)

8. Interview Process
• What does the current interview process look like? – Technical assignment, then a phone call with the Development Manager, possibly Matthew and a Senior Developer
• How many stages are there? – 1 round
• Any assessments or test tasks? (Yes / No) – Yes. If yes, please describe.
• Could the process be shortened if I (the recruiter) fully vet the candidate beforehand? (Yes / No) If yes, which stages could be skipped or streamlined?

9. Interview Panel
• Who else will be involved in the interview process? (Names, roles, involvement by stage) –
• Any specific focus areas per interviewer? (e.g., technical skills, leadership, team fit)

10. Collaboration & Exclusivity
• Are you happy to work with me exclusively on this role for an agreed period? (Yes / No) – Yes. If yes, how long? (e.g., 5 working days from go-live) – Next Wednesday
• Are you comfortable giving feedback within 24–48 hours of candidate submission? (Yes / No) –
ML Engineer
Location
Chicago, United States
URGENT ML contractor
Term: 3 months
Openings: 1
Location: Remote (U.S. time zones; must overlap at least 4 hours with U.S. Central Time)
Start: ASAP

Overview
We are seeking a senior Python ML engineer to lead the migration of multiple analytics and machine learning applications from a legacy SQL environment to Amazon Redshift. In addition, the codebases need to be standardized on a modern Python architecture that supports best practices for deployment, testing, and maintainability. This role combines hands-on work with mentoring, ensuring sustainable practices across the team.

Key Responsibilities
- Review existing Python applications to map dependencies, data access patterns, configuration, and deployment processes.
- Transition data pipelines to pull from Redshift while eliminating legacy SQL dependencies.
- Standardize code organization, packaging, configuration, logging, and containerization according to a modern reference framework.
- Develop unit and integration tests for data ingestion, transformations, and model outputs, integrating them into CI/CD pipelines.
- Document code, add clear type hints, improve readability, and produce operational runbooks for all applications.
- Update deployment pipelines using containerization and orchestration tools to ensure repeatable, automated releases.
- Provide guidance and training to engineers on modern development standards, testing practices, and Redshift integration.

Expected Deliverables
- Week 1: Conduct application inventory, define architecture targets, and begin updating the first application (data layer, tests, documentation).
- Week 2: Complete the first app migration, validate in a staging environment, and begin work on a second application.
- Week 3+: Continue migrating ~2 applications per week, including code standardization, testing, documentation, and deployment automation, until all applications are fully transitioned.

Required Skills and Experience
- 7+ years of professional experience developing production ML or analytics applications in Python.
- Strong knowledge of Python project structures, dependency management, and packaging tools (pip, poetry, conda).
- Experience migrating applications from legacy SQL databases to cloud data warehouses (Redshift, Snowflake, BigQuery), ensuring data consistency.
- Proficiency in SQL and experience optimizing queries for cloud warehouses.
- Demonstrated ability to write robust tests (pytest/unittest) and integrate them with CI/CD pipelines.
- Familiarity with containerization, orchestration, and workflow tools such as Docker, Kubernetes, Airflow, or Step Functions.
- Strong documentation skills and ability to coach other engineers on sustainable development practices.

Preferred Skills
- Experience with dbt-modeled data warehouses and collaboration with analytics engineers.
- Knowledge of MLOps tools, model validation frameworks, and feature stores.
- Ability to implement automated testing frameworks and data quality checks for ML pipelines.

Success Metrics
- All Python ML and analytics applications migrated to Redshift with verified parity.
- Applications updated to a modern architecture, complete with testing, documentation, and deployment automation.
- Team empowered with guidance, processes, and runbooks to maintain the applications independently after the engagement.
Product Lead - Inference
Location
San Francisco, United States
Salary
$230-280K Base + Equity - Per Annum
Product Lead – Inference
Location: San Francisco (Onsite)
Employment Type: Full Time

A well-funded and fast-growing AI company is building a next-generation platform for safe, performant, and cost-efficient AI agent deployment across enterprise environments. With a team of top researchers, engineers, and product leaders, they've developed proprietary multi-model architectures designed to reduce hallucinations and improve reliability at scale.

They've recently closed a major funding round from leading institutional investors, bringing total funding to over $400M and valuing the business north of $3B. They're now expanding their platform team to continue scaling their custom LLMs, inference infrastructure, and cloud-native agent tooling.

About the Role
As Product Lead for the Inference Platform, you'll own the roadmap and execution for the infrastructure powering model deployment, orchestration, and usage across multiple cloud environments. This is a highly cross-functional IC role with visibility across engineering, research, and go-to-market.

You'll be responsible for defining scalable, high-performance systems that support rapid model experimentation, SaaS application launches, and cloud cost optimization. Ideal for someone who thrives in a technically complex environment and wants to shape the underlying foundation of production-grade AI products.

What You'll Do
- Own product strategy for the multi-cloud inference platform and agent hosting systems
- Collaborate with research and infra eng to forecast and scale model and application capacity
- Monitor and optimize usage, latency, and cost across LLM and voice inference workloads
- Drive decisions around GPU allocation, cloud cost efficiency, and workload orchestration
- Define internal tools to support evaluation, logging, and performance observability
- Work closely with GTM and operations to align platform performance with business goals
- Partner with finance and leadership on pricing and margin strategy across the agent stack

Must Have:
- 7+ years of product management experience
- 2+ years building AI/ML platform, LLMOps, or infra products
- Deep understanding of inference, training, and cloud compute (AWS/GCP/Azure)
- Experience aligning Eng, Research, and GTM around complex technical products
- Familiarity with cloud cost modeling, GPU orchestration, and workload optimization
- Analytical mindset, strong execution, and bias toward measurable outcomes

Nice to Have:
- Background in distributed systems, model evaluation, or GPU infra
- Experience launching dev tooling or internal platforms for AI teams
- Prior work with LLMs, voice agents, or AI-native applications
- Strong technical intuition with hands-on engineering exposure a plus
Domo Migration Engineer
Location
Chicago, United States
USA contract, remote role, 3 months duration, ASAP start

Seeking an experienced Domo professional to support a major BI migration from an existing SQL Server environment to Amazon Redshift. This role focuses on rebuilding datasets, refactoring data transformations, validating dashboard accuracy, and ensuring a seamless cutover for business users with no interruption in reporting, security, or data accessibility.

Key Responsibilities
- Rebuild Domo datasets to source data from Redshift using ODBC/JDBC connections, Workbench configurations, and federated query options while preserving scheduling, row-level security, and governance controls.
- Configure and optimize Redshift federated connections in Domo, including authentication, data pipelines, refresh cadence, and dependency orchestration.
- Refactor Magic ETL, DataFlows, DataFusion logic, and Beast Mode calculations to align with new warehouse structures, data models, and transformation conventions.
- Recreate and validate row-level access policies (PDP) so that authorized audiences retain correct filtered visibility after migration.
- Perform full dashboard parity checks (including KPIs, filters, drill paths, alerts, and visual layouts) to confirm the Redshift-backed versions match the legacy system.
- Identify, document, and resolve discrepancies by collaborating with engineering and analytical stakeholders.
- Improve dashboard performance through tuning methods such as caching strategies, optimized dataflows, and partitioned dataset designs.
- Manage a structured migration tracker covering approximately 150 datasets and 50 dashboards, including status, validation evidence, issue logs, and sign-off checkpoints.
- Facilitate user acceptance testing, gather feedback, and deliver clear cutover documentation such as data lineage, support instructions, and release notes.
Lead AI Engineer
Location
US Remote, United States
Job title: Lead AI Engineer
Job type: Contract
Contract Length: 6 months (extension and permanent possible)
Rate: $80 - $100
Role Location: US Remote

The company:
Our client is a nationwide supplier of industrial safety equipment, construction supplies, environmental products, and disaster response solutions. Established over 40 years ago, the company has grown into a nationally recognized provider serving industries such as general construction, industrial safety, petrochemical, energy, environmental, and hospital sectors. With over 15 distribution centers strategically located across the United States, the firm prides itself on a bespoke approach and emphasizes "Service Meets Supply," delivering high-quality products and exceptional customer service. Committed to innovation, they leverage advanced technology to enhance their custom-built software platforms, addressing industry challenges and driving efficiency for clients.

Role and Responsibilities:
- Lead the development and implementation of AI agents, including agent-to-agent communication systems, to automate and optimize processes within the firm's custom-built software platforms.
- Collaborate with the existing engineering team to integrate AI capabilities directly into software development workflows, enabling AI-driven enhancements and self-improving systems.
- Advance multiple AI initiatives quickly and efficiently, focusing on innovative solutions that address industry needs for advanced technology in safety, construction, and environmental management.
- Design, build, and deploy AI models that enhance operational efficiency, such as predictive analytics for inventory management, automated disaster response tools, or intelligent supply chain optimizations.
- Conduct research on emerging AI technologies and recommend integrations to complement the firm's traditional software setups, ensuring seamless scalability and security.
- Provide technical leadership, including code reviews, mentoring junior team members, and documenting AI architectures for future maintenance and expansion.
- Troubleshoot and resolve complex AI-related issues, ensuring high performance, reliability, and compliance with industry standards.
- Work cross-functionally with stakeholders to align AI projects with business goals, delivering measurable improvements in efficiency and user experience.

Job Requirements:
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field (or equivalent experience).
- 7+ years of experience in AI engineering, with a proven track record in building AI agents, multi-agent systems, and integrating AI into software development processes.
- Strong proficiency in programming languages such as Python, Java, or C++, and frameworks like TensorFlow, PyTorch, or LangChain for agentic AI development.
- Experience with cloud platforms (e.g., AWS, Azure, or Google Cloud) for deploying AI solutions, including containerization (Docker/Kubernetes) and API integrations.
- Demonstrated ability to lead AI projects from ideation to deployment, with a focus on efficiency and innovation in fast-paced environments.
- Excellent problem-solving skills, with experience in debugging complex AI systems and optimizing for performance.
- Familiarity with ethical AI practices, data privacy regulations (e.g., GDPR), and industry-specific applications (e.g., safety or supply chain) is a plus.

Accessibility Statement:
Read and apply for this role in the way that works for you by using our Recite Me assistive technology tool.
Click the circle at the bottom right side of the screen and select your preferences.

We make an active choice to be inclusive towards everyone every day. Please let us know if you require any accessibility adjustments through the application or interview process.

Our Commitment to Diversity, Equity, and Inclusion:
Signify's mission is to empower every person, regardless of their background or circumstances, with an equitable chance to achieve the careers they deserve. Building a diverse future, one placement at a time.

Check out our DE&I page here.

Leave A Review
