About iBusiness Funding
iBusiness Funding is a leader in providing innovative Software as a Service (SaaS) solutions for banks and lenders, with a specialization in SBA lending. We build scalable lending platforms that streamline the business lending process, allowing lenders to efficiently deliver capital to small and medium-sized businesses.
To date, we’ve processed over $7 billion in SBA loans and handle more than 1,000 business loan applications daily. Our team is driven by our core values of innovation, integrity, enjoyment, and family.
As a top five SBA 7(a) preferred lender, our parent company offers SBA Express and small loan capabilities. Join us and be part of a team that’s transforming the finance industry and empowering businesses to thrive!
Position Description:
*Please note this position is not eligible for candidates located in the following U.S. States: CA, CO, WA, or NY. This position is U.S. Based Remote Only*
Do you have deep expertise in the end-to-end development of large datasets across a variety of platforms? Are you great at designing data systems and redefining best practices with a cloud-based approach to scalability and automation? Join iBusiness Funding as a Database Engineer, where you will be integral in enabling data-driven solutions for our internal and external clients.
We need your experience to scale our existing infrastructure, incorporate new data sources, and build robust data pipelines. Collaborating with product and business/operations teams, you will work backwards from our business questions to drive scalable solutions. Be part of a collaborative team where you can bring your database administration expertise and passion for working with data to play a pivotal role in our company’s data-driven initiatives!
Key Job Responsibilities:
In this role, you will have the opportunity to display and develop your skills in the following areas:
- Evolve our data models and architecture to support our business’s demanding self-service needs
- Develop and support ETL pipelines with robust monitoring and alarming
- Develop data models that are optimized and aggregated for business needs
- Develop and optimize data tables using best practices for partitioning, compression, parallelization, etc.
- Build robust and scalable data integration (ETL) pipelines using SQL, Python, and AWS services such as Glue, Lambda, and Step Functions
- Implement data structures using best practices in data modeling, ETL/ELT processes, and SQL/Redshift
- Interface with business customers to gather requirements and deliver complete reporting solutions
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Work cross-functionally with teams such as risk, analytics, and product to develop data-driven products and tools
What You Will Need:
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related technical discipline
- Experience working with large-scale data repositories such as data lakes or dimensional data warehouses
- Ability to write high-quality, maintainable, and robust code, often in SQL and Python
- 3+ years of data warehouse experience with Oracle, Redshift, Postgres, Snowflake, etc., with demonstrated strength in SQL, Python, data modeling, ETL development, and data warehousing
- Extensive experience with cloud services (AWS, Azure, or GCP) and a strong understanding of cloud databases (e.g., Redshift, Aurora, DynamoDB), compute engines (e.g., EMR, EC2), data streaming (e.g., Kinesis), and storage (e.g., S3)
- Fundamental understanding of version control software such as Git
- Experience with CI/CD, automated testing, and DevOps best practices
- Candidates must be authorized to work in the U.S.
What Would Be Nice To Have:
- Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases and data stores (object storage, document or key-value stores, graph databases, column-family databases)
- Master's degree in computer science, mathematics, statistics, economics, or another quantitative field
- 5+ years of experience in a data engineering-related field at a company with large, complex data sources
- Experience building and operating highly available, distributed systems for extraction, ingestion, and processing of large data sets
- Experience working with AWS (Redshift, S3, EMR, Glue, Airflow, Kinesis, Step Functions)
- Hands-on experience with a scripting language (Bash, C#, Java, Python, TypeScript)
- Hands-on experience with ETL tools (SSIS, Alteryx, Talend)
- Background in non-relational databases or OLAP
- Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations
- Strong analytical skills, 5+ years of experience with Python or Scala, and an interest in real-time data processing
- Proven success in communicating with users, other technical teams, and senior management to collect requirements and describe data modeling decisions and data engineering strategy
Conclusion:
This job description is intended to convey information essential to understanding the scope of the job and the general nature and level of work performed by job holders within this job. This job description is not intended to be an exhaustive list of qualifications, skills, efforts, duties, responsibilities, or working conditions associated with the position.
The company is an equal opportunity employer and will consider all applications without regard to race, sex, age, color, religion, national origin, veteran status, disability, genetic information, or any other characteristic protected by law.