A lightweight ORM framework for Kotlin with strong-typed SQL DSL and sequence APIs.

Overview

Ktorm


What's Ktorm?

Ktorm is a lightweight and efficient ORM framework for Kotlin built directly on pure JDBC. It provides a strongly-typed, flexible SQL DSL and convenient sequence APIs to reduce duplicated effort on database operations. All SQL statements, of course, are generated automatically. Ktorm is open source and available under the Apache 2.0 license. Please leave a star if you've found this library helpful!

For more documentation, go to our site: https://www.ktorm.org.

🇺🇸 English | 🇨🇳 简体中文 | 🇯🇵 日本語

Features

  • No configuration files, no XML, no annotations, not even third-party dependencies; lightweight and easy to use.
  • Strongly-typed SQL DSL, exposing low-level bugs at compile time.
  • Flexible queries, with fine-grained control over the generated SQL.
  • Entity sequence APIs, letting you write queries via sequence functions such as filter, map, and sortedBy, just like using Kotlin's native collections and sequences.
  • Extensible design: write your own extensions to support more operators, data types, SQL functions, database dialects, etc.

Quick Start

Ktorm is published to Maven Central, so if you are using Maven you just need to add a dependency to your pom.xml file:

<dependency>
    <groupId>org.ktorm</groupId>
    <artifactId>ktorm-core</artifactId>
    <version>${ktorm.version}</version>
</dependency>

Or Gradle:

implementation("org.ktorm:ktorm-core:${ktorm.version}")

Firstly, create Kotlin objects to describe your table schemas:

object Departments : Table<Nothing>("t_department") {
    val id = int("id").primaryKey()
    val name = varchar("name")
    val location = varchar("location")
}

object Employees : Table<Nothing>("t_employee") {
    val id = int("id").primaryKey()
    val name = varchar("name")
    val job = varchar("job")
    val managerId = int("manager_id")
    val hireDate = date("hire_date")
    val salary = long("salary")
    val departmentId = int("department_id")
}

Then, connect to your database and write a simple query:

fun main() {
    val database = Database.connect("jdbc:mysql://localhost:3306/ktorm", user = "root", password = "***")

    for (row in database.from(Employees).select()) {
        println(row[Employees.name])
    }
}

Now you can run this program: Ktorm generates the SQL select * from t_employee, selects all employees in the table, and prints their names. You can use the for-each loop here because the query object returned by the select function overloads the iteration operator.
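Since the query is iterable, collection-style extensions such as map also work on it. Here is a minimal sketch (assuming the same Employees table defined above): indexing a row with a column object yields a typed, nullable value, so the element types are known at compile time:

```kotlin
// Indexing a row with a column object returns a typed, nullable value
// (nullable because a SQL column may contain null).
val pairs: List<Pair<Int?, String?>> = database
    .from(Employees)
    .select(Employees.id, Employees.name)
    .map { row -> row[Employees.id] to row[Employees.name] }
```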

SQL DSL

Let's add some filter conditions to the query:

database
    .from(Employees)
    .select(Employees.name)
    .where { (Employees.departmentId eq 1) and (Employees.name like "%vince%") }
    .forEach { row -> 
        println(row[Employees.name])
    }

Generated SQL:

select t_employee.name as t_employee_name 
from t_employee 
where (t_employee.department_id = ?) and (t_employee.name like ?) 

That's the magic of Kotlin: writing a query with Ktorm is easy and natural, and the generated SQL corresponds exactly to the original Kotlin code. Moreover, it's strongly typed: the compiler checks your code before it runs, and you benefit from the IDE's intelligent code completion.

A dynamic query that applies different filter conditions in different situations:

val query = database
    .from(Employees)
    .select(Employees.name)
    .whereWithConditions {
        if (someCondition) {
            it += Employees.managerId.isNull()
        }
        if (otherCondition) {
            it += Employees.departmentId eq 1
        }
    }

Aggregation:

val t = Employees.aliased("t")
database
    .from(t)
    .select(t.departmentId, avg(t.salary))
    .groupBy(t.departmentId)
    .having { avg(t.salary) greater 100.0 }
    .forEach { row -> 
        println("${row.getInt(1)}:${row.getDouble(2)}")
    }

Union:

val query = database
    .from(Employees)
    .select(Employees.id)
    .unionAll(
        database.from(Departments).select(Departments.id)
    )
    .unionAll(
        database.from(Departments).select(Departments.id)
    )
    .orderBy(Employees.id.desc())

Joining:

data class Names(val name: String?, val managerName: String?, val departmentName: String?)

val emp = Employees.aliased("emp")
val mgr = Employees.aliased("mgr")
val dept = Departments.aliased("dept")

val results = database
    .from(emp)
    .leftJoin(dept, on = emp.departmentId eq dept.id)
    .leftJoin(mgr, on = emp.managerId eq mgr.id)
    .select(emp.name, mgr.name, dept.name)
    .orderBy(emp.id.asc())
    .map { row -> 
        Names(
            name = row[emp.name],
            managerName = row[mgr.name],
            departmentName = row[dept.name]
        )
    }

Insert:

database.insert(Employees) {
    set(it.name, "jerry")
    set(it.job, "trainee")
    set(it.managerId, 1)
    set(it.hireDate, LocalDate.now())
    set(it.salary, 50)
    set(it.departmentId, 1)
}
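If you also need the auto-generated primary key back, Ktorm provides insertAndGenerateKey, which performs the insert and returns the key generated by the database. A sketch under the same schema (the concrete key type depends on the primary key column and dialect):

```kotlin
// Insert a row and fetch the database-generated id back.
val id = database.insertAndGenerateKey(Employees) {
    set(it.name, "jerry")
    set(it.job, "trainee")
    set(it.hireDate, LocalDate.now())
    set(it.salary, 50)
    set(it.departmentId, 1)
}
```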

Update:

database.update(Employees) {
    set(it.job, "engineer")
    set(it.managerId, null)
    set(it.salary, 100)
    where {
        it.id eq 2
    }
}

Delete:

database.delete(Employees) { it.id eq 4 }

Refer to the detailed documentation for more SQL DSL usage.

Entities and Column Binding

In addition to the SQL DSL, entity objects are also supported, just like in other ORM frameworks. We first define entity classes and then bind table objects to them. In Ktorm, entity classes are defined as interfaces extending Entity<E>:

interface Department : Entity<Department> {
    companion object : Entity.Factory<Department>()
    val id: Int
    var name: String
    var location: String
}

interface Employee : Entity<Employee> {
    companion object : Entity.Factory<Employee>()
    val id: Int
    var name: String
    var job: String
    var manager: Employee?
    var hireDate: LocalDate
    var salary: Long
    var department: Department
}

Modify the table objects above, binding database columns to entity properties:

object Departments : Table<Department>("t_department") {
    val id = int("id").primaryKey().bindTo { it.id }
    val name = varchar("name").bindTo { it.name }
    val location = varchar("location").bindTo { it.location }
}

object Employees : Table<Employee>("t_employee") {
    val id = int("id").primaryKey().bindTo { it.id }
    val name = varchar("name").bindTo { it.name }
    val job = varchar("job").bindTo { it.job }
    val managerId = int("manager_id").bindTo { it.manager.id }
    val hireDate = date("hire_date").bindTo { it.hireDate }
    val salary = long("salary").bindTo { it.salary }
    val departmentId = int("department_id").references(Departments) { it.department }
}

Naming Strategy: It's highly recommended to name entity classes with singular nouns and table objects with plurals (e.g. Employee/Employees, Department/Departments).

Now that the column bindings are configured, we can use the sequence APIs to perform many operations on entities. Let's first add two extension properties to Database. These properties return newly created sequence objects via sequenceOf and help improve the readability of our code:

val Database.departments get() = this.sequenceOf(Departments)
val Database.employees get() = this.sequenceOf(Employees)

The following code uses the find function to obtain an employee by its name:

val employee = database.employees.find { it.name eq "vince" }

We can also filter the sequence by the function filter. For example, obtaining all the employees whose names are vince:

val employees = database.employees.filter { it.name eq "vince" }.toList()

The find and filter functions both accept a lambda expression and generate a select SQL statement with the condition returned by the lambda. The generated SQL automatically left joins the referenced table t_department:

select * 
from t_employee 
left join t_department _ref0 on t_employee.department_id = _ref0.id 
where t_employee.name = ?

Save entities to database:

val employee = Employee {
    name = "jerry"
    job = "trainee"
    hireDate = LocalDate.now()
    salary = 50
    department = database.departments.find { it.name eq "tech" }
}

database.employees.add(employee)

Flush property changes in memory to database:

val employee = database.employees.find { it.id eq 2 } ?: return
employee.job = "engineer"
employee.salary = 100
employee.flushChanges()
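As a sketch of an alternative, the sequence APIs also offer an update function that writes an entity's bound properties back to the database by its primary key, without tracking individual property changes:

```kotlin
// Write all bound columns of the entity back to the row with the same id.
val employee = database.employees.find { it.id eq 2 } ?: return
employee.job = "engineer"
database.employees.update(employee)
```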

Delete an entity from the database:

val employee = database.employees.find { it.id eq 2 } ?: return
employee.delete()

Detailed usage of the entity APIs can be found in the documentation on column binding and entity queries.

Entity Sequence APIs

Ktorm provides a set of APIs named Entity Sequence, which can be used to obtain entity objects from databases. As the name implies, their style and usage patterns are highly similar to the sequence APIs in the Kotlin standard library: many extension functions share the same names, such as filter, map, reduce, etc.

Most of the entity sequence APIs are provided as extension functions, which can be divided into two groups: intermediate operations and terminal operations.

Intermediate Operations

These functions don't execute the internal queries; instead, they return newly created sequence objects with some modifications applied. For example, the filter function creates a new sequence object with the filter condition given by its parameter. The following code obtains all the employees in department 1 using filter:

val employees = database.employees.filter { it.departmentId eq 1 }.toList()

We can see that the usage is almost the same as with kotlin.sequences, the only difference being that the == in the lambda is replaced by the eq function. The filter function can also be chained, with all filter conditions combined using the and operator.

val employees = database.employees
    .filter { it.departmentId eq 1 }
    .filter { it.managerId.isNotNull() }
    .toList()

Generated SQL:

select * 
from t_employee 
left join t_department _ref0 on t_employee.department_id = _ref0.id 
where (t_employee.department_id = ?) and (t_employee.manager_id is not null)

Use sortedBy or sortedByDescending to sort the entities in a sequence:

val employees = database.employees.sortedBy { it.salary }.toList()

Use drop and take for pagination:

val employees = database.employees.drop(1).take(1).toList()
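Intermediate operations compose, so a typical paged query chains sorting with drop and take. A sketch (note that drop and take translate to pagination clauses, so the connected database's dialect must support them):

```kotlin
// Page 3 with a page size of 10, highest salaries first.
val pageSize = 10
val page = database.employees
    .sortedByDescending { it.salary }
    .drop(2 * pageSize)
    .take(pageSize)
    .toList()
```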

Terminal Operations

Terminal operations of entity sequences execute the queries immediately, obtain the results, and perform some calculations on them. The for-each loop is a typical terminal operation, and the following code uses it to print all employees in the sequence:

for (employee in database.employees) {
    println(employee)
}

Generated SQL:

select * 
from t_employee 
left join t_department _ref0 on t_employee.department_id = _ref0.id

The toCollection functions (including toList, toSet, etc.) are used to collect all the elements into a collection:

val employees = database.employees.toCollection(ArrayList())

The mapColumns function is used to obtain the results of a column:

val names = database.employees.mapColumns { it.name }

Additionally, if we want to select two or more columns, we just wrap the selected columns with tupleOf in the closure, and the function's return type becomes List<TupleN<C1?, C2?, .. Cn?>>.

database.employees
    .filter { it.departmentId eq 1 }
    .mapColumns { tupleOf(it.id, it.name) }
    .forEach { (id, name) ->
        println("$id:$name")
    }

Generated SQL:

select t_employee.id, t_employee.name
from t_employee 
where t_employee.department_id = ?

Other familiar functions are also supported, such as fold, reduce, forEach, etc. The following code calculates the total salary of all employees:

val totalSalary = database.employees.fold(0L) { acc, employee -> acc + employee.salary }

Sequence Aggregation

The entity sequence APIs not only allow us to obtain entities from databases just like using kotlin.sequences, but they also provide rich support for aggregations, so we can conveniently count the columns, sum them, or calculate their averages, etc.

The following code obtains the max salary in department 1:

val max = database.employees
    .filter { it.departmentId eq 1 }
    .aggregateColumns { max(it.salary) }

Also, if we want to aggregate two or more columns, we just wrap the aggregate expressions with tupleOf in the closure, and the function's return type becomes TupleN<C1?, C2?, .. Cn?>. The example below obtains the average and the range of salaries in department 1:

val (avg, diff) = database.employees
    .filter { it.departmentId eq 1 }
    .aggregateColumns { tupleOf(avg(it.salary), max(it.salary) - min(it.salary)) }

Generated SQL:

select avg(t_employee.salary), max(t_employee.salary) - min(t_employee.salary) 
from t_employee 
where t_employee.department_id = ?

Ktorm also provides many convenient helper functions implemented on top of aggregateColumns: count, any, none, all, sumBy, maxBy, minBy, and averageBy.
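With these helpers, common aggregations read almost like kotlin.collections. A short sketch using count and sumBy (note that sumBy returns a nullable value, since the filtered set may be empty):

```kotlin
// Number of employees in department 1.
val n = database.employees.count { it.departmentId eq 1 }

// Total salary in department 1; Long? because the sum over zero rows is null.
val total = database.employees.filter { it.departmentId eq 1 }.sumBy { it.salary }
```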

The following code obtains the max salary in department 1 using maxBy instead:

val max = database.employees
    .filter { it.departmentId eq 1 }
    .maxBy { it.salary }

Additionally, grouping aggregations are also supported; we just need to call groupingBy before aggregateColumns. The following code obtains the average salary for each department. Here, the result type is Map<Int?, Double?>, in which the keys are the departments' IDs and the values are their average salaries.

val averageSalaries = database.employees
    .groupingBy { it.departmentId }
    .aggregateColumns { avg(it.salary) }

Generated SQL:

select t_employee.department_id, avg(t_employee.salary) 
from t_employee 
group by t_employee.department_id

Ktorm also provides many convenient helper functions for grouping aggregations: eachCount(To), eachSumBy(To), eachMaxBy(To), eachMinBy(To), and eachAverageBy(To). With these functions, we can write the code below to obtain the average salary for each department:

val averageSalaries = database.employees
    .groupingBy { it.departmentId }
    .eachAverageBy { it.salary }

Other familiar functions are also supported, such as aggregate, fold, reduce, etc. They have the same names as the extension functions on kotlin.collections.Grouping, and their usage is exactly the same. The following code calculates the total salary for each department using fold:

val totalSalaries = database.employees
    .groupingBy { it.departmentId }
    .fold(0L) { acc, employee -> 
        acc + employee.salary 
    }

Detailed usages of entity sequence APIs can be found in the documentation of entity sequence and sequence aggregation.

Comments
  • Join query problems

    Join query problems

    Case 1:

    val q = database.sequenceOf(Employees).query.where { Employees.department.id.greater(0) }
    
    // Generated SQL (correct result)
    SELECT `t_employee`.`id` AS `t_employee_id`, `t_employee`.`name` AS `t_employee_name`, `t_employee`.`job` AS `t_employee_job`, `t_employee`.`manager_id` AS `t_employee_manager_id`, `t_employee`.`hire_date` AS `t_employee_hire_date`, `t_employee`.`salary` AS `t_employee_salary`, `t_employee`.`department_id` AS `t_employee_department_id`, `_ref0`.`id` AS `_ref0_id`, `_ref0`.`name` AS `_ref0_name`, `_ref0`.`location` AS `_ref0_location`, `_ref0`.`mixedCase` AS `_ref0_mixedCase` 
    FROM `t_employee` 
    LEFT JOIN `t_department` `_ref0` ON `t_employee`.`department_id` = `_ref0`.`id` 
    WHERE `_ref0`.`id` > ? 
    

    Case 2:

    val q = database.from(Employees).select().where { Employees.department.id.greater(0) }
    
    // Generated SQL (join not generated automatically)
    SELECT * 
    FROM `t_employee` 
    WHERE `_ref0`.`id` > ? 
    

    Case 3:

    val q = database.from(Employees)
        .leftJoin(Departments, on = Departments.id.eq(Employees.departmentId)) // wrong reference
        .select()
        .where { Employees.department.id.greater(0) }
    
    // Generated SQL (join present, but the table alias is wrong)
    SELECT * 
    FROM `t_employee` 
    LEFT JOIN `t_department` ON `t_department`.`id` = `t_employee`.`department_id`
    WHERE `_ref0`.`id` > ? 
    

    Case 4:

    val q = database.from(Employees)
        .leftJoin(Employees.department, on = Employees.department.id.eq(Employees.departmentId)) // correct reference
        .select()
        .where { Employees.department.id.greater(0) }
    
    // Generated SQL (join present, correct result)
    SELECT * 
    FROM `t_employee` 
    LEFT JOIN `t_department` `_ref0` ON `_ref0`.`id` = `t_employee`.`department_id` 
    WHERE `_ref0`.`id` > ? 
    

    Summary:

    1. Cases 1 and 4 use Ktorm correctly; the pattern in case 4 doesn't seem to have an example on the official site;
    2. Case 3 is incorrect usage, but I think most people would intuitively try to write it this way;
    3. Case 2 is similar to case 1; as a newcomer, it's what I intuitively tried first.

    Thoughts and suggestions

    1. If possible, please add support for case 2;
    2. Please improve the documentation around these easily confused constructs; if possible, reject the non-standard usages;
    enhancement 
    opened by ymind 17
  • Support for PostgreSQL SKIP LOCKED

    Support for PostgreSQL SKIP LOCKED

    This code allows for SKIP LOCKED to be used in the postgres dialect.

    I had to upgrade some libraries for the Gradle build to run green on my machine; these upgrades are also included here.

    Please let me know if something here isn't up to your standards, and I'll try to fix it.

    opened by sigmanil 15
  • Potential bug with references

    Potential bug with references

    I think I found a bug with entity references/foreign keys. I have the following database schema (specified with DBML, https://www.dbml.org/home/):

    Table users {
      id integer [pk, unique, increment, not null]
      username varchar(10) [not null, unique]
      full_name varchar(20) [not null, unique]
      email varchar(20) [not null, unique]
      password_hash varchar(60) [not null]
    }
    
    Table projects {
      id integer [pk, unique, increment, not null]
      name varchar(30) [not null, unique]
      description text [not null]
      project_owner_id integer [ref: - users.id, not null]
    }
    
    Table project_membership {
      id integer [pk, unique, increment, not null]
      user_id integer [not null]
      project_id integer [not null]
    }
    
    Ref: users.id - project_membership.user_id [delete: cascade]
    Ref: projects.id - project_membership.project_id [delete: cascade]
    
    Table variables {
      id integer [pk, unique, increment, not null]
      name varchar(20) [not null]
      type variable_type [not null]
      unit varchar(5) [not null]
      dimension text [not null]
      objective objective [not null]
      exogenous boolean [not null]
      system_descriptive boolean [not null]
      project_id integer [not null]
    }
    
    enum variable_type {
      EFFORT
      FLOW
      CONNECTING
      DISPLACEMENT
      MOMENTUM
      TYPELESS
    }
    
    Ref: projects.id - variables.project_id [delete: cascade]
    
    Table component_specifications {
      id integer [pk, unique, increment, not null]
      name varchar(20) [unique, not null]
      type component_type [not null]
      project_id integer [not null]
    }
    
    enum component_type {
      RESISTOR
      CAPACITOR
      INERTIA
      SOURCE_OF_FLOW
      SOURCE_OF_EFFORT
      TRANSFORMER
      GYRATOR
      FLOW_JUNCTION
      EFFORT_JUNCTION
      SINK
    }
    
    Ref: projects.id - component_specifications.project_id [delete: cascade]
    
    Table component_specification_internal_variables {
      id integer [pk, unique, increment, not null]
      component_id integer [not null]
      variable_id integer [not null]
    }
    
    Ref: component_specifications.id - component_specification_internal_variables.component_id [delete: cascade]
    Ref: variables.id - component_specification_internal_variables.variable_id [delete: cascade]
    
    Table component_specification_additional_internal_variables {
      id integer [pk, unique, increment, not null]
      component_id integer [not null]
      variable_id integer [not null]
    }
    
    Ref: component_specifications.id - component_specification_additional_internal_variables.component_id [delete: cascade]
    Ref: variables.id - component_specification_additional_internal_variables.variable_id [delete: cascade]
    

    The component_specification_internal_variables and component_specification_additional_internal_variables tables are mapping tables for many-to-many references between a component specification's additional internal and internal variables. Both tables are virtually identical, with only the name being different, so I would expect them to behave identically in Ktorm.

    Here is the corresponding Ktorm schema:

    interface User : Entity<User> {
        companion object : Entity.Factory<User>()
    
        val id: Int
        var username: String
        var fullName: String
        var email: String
        var passwordHash: String
    }
    
    object Users : Table<User>("users") {
        val id by int("id").primaryKey().bindTo { it.id }
        val username by varchar("username").bindTo { it.username }
        val fullName by varchar("full_name").bindTo { it.fullName }
        val email by varchar("email").bindTo { it.email }
        val passwordHash by varchar("password_hash").bindTo { it.passwordHash }
    }
    
    interface Project : Entity<Project> {
        companion object : Entity.Factory<Project>()
    
        val id: Int
        var name: String
        var description: String
        var owner: User
    }
    
    object Projects : Table<Project>("projects") {
        val id by int("id").primaryKey().bindTo { it.id }
        val name by varchar("name").bindTo { it.name }
        val description by text("description").bindTo { it.description }
        val projectOwnerId by int("project_owner_id").references(Users) { it.owner }
    }
    
    interface ProjectMembershipMapping : Entity<ProjectMembershipMapping> {
        companion object : Entity.Factory<ProjectMembershipMapping>()
    
        val id: Int
        var user: User
        var project: Project
    }
    
    object ProjectMembership : Table<ProjectMembershipMapping>("project_membership") {
        val id by int("id").primaryKey().bindTo { it.id }
        val userId by int("user_id").references(Users) { it.user }
        val projectId by int("project_id").references(Projects) { it.project }
    }
    
    interface ComponentSpecificationEntity : Entity<ComponentSpecificationEntity> {
        companion object : Entity.Factory<ComponentSpecificationEntity>()
    
        val id: Int
        var name: String
        var type: ComponentType
        var project: Project
    }
    
    object ComponentSpecifications : Table<ComponentSpecificationEntity>("component_specifications") {
        val id by int("id").primaryKey().bindTo { it.id }
        val name by varchar("name").bindTo { it.name }
        val type by componentType("type").bindTo { it.type }
        val projectId by int("project_id").references(Projects) { it.project }
    }
    
    interface VariableEntity : Entity<VariableEntity> {
        companion object : Entity.Factory<VariableEntity>()
    
        val id: Int
        var name: String
        var type: VariableType
        var unit: String
        var dimension: String
        var objective: Objective
        var exogenous: Boolean
        var systemDescriptive: Boolean
        var project: Project
    }
    
    object Variables : Table<VariableEntity>("variables") {
        val id by int("id").primaryKey().bindTo { it.id }
        val name by varchar("name").bindTo { it.name }
        val type by variableType("type").bindTo { it.type }
        val unit by varchar("unit").bindTo { it.unit }
        val dimension by varchar("dimension").bindTo { it.dimension }
        val objective by objective("objective").bindTo { it.objective }
        val exogenous by boolean("exogenous").bindTo { it.exogenous }
        val systemDescriptive by boolean("system_descriptive").bindTo { it.systemDescriptive }
        val projectId by int("project_id").references(Projects) { it.project }
    }
    
    interface ComponentSpecificationInternalVariableMapping : Entity<ComponentSpecificationInternalVariableMapping> {
        companion object : Entity.Factory<ComponentSpecificationInternalVariableMapping>()
    
        val id: Int
        var component: ComponentSpecificationEntity
        var variable: VariableEntity
    }
    
    object ComponentSpecificationInternalVariables :
        Table<ComponentSpecificationInternalVariableMapping>("component_specification_internal_variables") {
        val id by int("id").primaryKey().bindTo { it.id }
        val componentId by int("component_id").references(ComponentSpecifications) { it.component }
        val variableId by int("variable_id").references(Variables) { it.variable }
    }
    
    interface ComponentSpecificationAdditionalInternalVariableMapping :
        Entity<ComponentSpecificationAdditionalInternalVariableMapping> {
        companion object : Entity.Factory<ComponentSpecificationAdditionalInternalVariableMapping>()
    
        val id: Int
        var component: ComponentSpecificationEntity
        var variable: VariableEntity
    }
    
    object ComponentSpecificationAdditionalInternalVariables :
        Table<ComponentSpecificationAdditionalInternalVariableMapping>("component_specification_additional_internal_variables") {
        val id by int("id").primaryKey().bindTo { it.id }
        val componentId by int("component_id").references(ComponentSpecifications) { it.component }
        val variableId by int("variable_id").references(Variables) { it.variable }
    }
    

    So, for the following test code:

    database.useTransaction {
      database.sequenceOf(Users).add(
        User {
          username = "testuser"
          fullName = "Test Tester"
          email = "[email protected]"
          passwordHash = passwordHashForTestUsers
        }
      )
    
      val ownerUser: User =
        database.sequenceOf(Users).find { it.username eq "testuser" }
          ?: fail("Couldn't find recently created user")
    
      database.sequenceOf(Projects).add(
        Project {
          name = "Test Project"
          description = "Some description"
          owner = ownerUser
        }
      )
    
      val project: Project =
        database.sequenceOf(Projects).find { it.name eq "Test Project" }
          ?: fail("Couldn't find recently created project")
    
      database.sequenceOf(Variables).add(
        VariableEntity {
          name = "T1"
          type = VariableType.EFFORT
          unit = "T^2"
          dimension = "T.T"
          objective = Objective.NONE
          exogenous = false
          systemDescriptive = false
          this.project = project
        }
      )
    
      val variable: VariableEntity =
        database.sequenceOf(Variables).find { it.name eq "T1" }
          ?: fail("Couldn't find recently created variable")
    
      database.sequenceOf(ComponentSpecifications).add(
        ComponentSpecificationEntity {
          name = "SOE"
          type = ComponentType.SOURCE_OF_EFFORT
          this.project = project
        }
      )
      database.sequenceOf(ComponentSpecifications).add(
        ComponentSpecificationEntity {
          name = "SNK"
          type = ComponentType.SINK
          this.project = project
        }
      )
    
      val sourceOfEffortComponent: ComponentSpecificationEntity =
        database.sequenceOf(ComponentSpecifications).find { it.name eq "SOE" }
          ?: fail("Couldn't find recently created component specification")
      val sinkComponent: ComponentSpecificationEntity =
        database.sequenceOf(ComponentSpecifications).find { it.name eq "SNK" }
          ?: fail("Couldn't find recently created component specification")
    
      database.sequenceOf(ComponentSpecificationInternalVariables).add(
        ComponentSpecificationInternalVariableMapping {
          component = sourceOfEffortComponent
          this.variable = variable
        }
      )
      database.sequenceOf(ComponentSpecificationAdditionalInternalVariables).add(
        ComponentSpecificationAdditionalInternalVariableMapping {
          component = sinkComponent
          this.variable = variable
        }
      )
    
      val internalVariableMapping: ComponentSpecificationInternalVariableMapping =
        database
          .sequenceOf(ComponentSpecificationInternalVariables)
          .find { it.componentId eq sourceOfEffortComponent.id }
          ?: fail("Couldn't find recently created internal variable mapping")
      val additionalInternalVariableMapping: ComponentSpecificationAdditionalInternalVariableMapping =
        database
          .sequenceOf(ComponentSpecificationAdditionalInternalVariables)
          .find { it.componentId eq sinkComponent.id }
          ?: fail("Couldn't find recently created additional internal variable mapping")
    
      println("Internal variable name: ${internalVariableMapping.variable.name}")
      println("Additional internal variable name: ${additionalInternalVariableMapping.variable.name}")
    }
    

    I get the following results:

    Internal variable name: T1
    Additional internal variable name:

    As you can see, the variable loses its name information when referenced through additional internal variables, even though both tables and their Ktorm schemas are virtually identical.

    Here are the SQL queries generated by ktorm: For internal variables: select component_specification_internal_variables.id as component_specification_internal_variables_id, component_specification_internal_variables.component_id as component_specification_internal_variables_component_id, component_specification_internal_variables.variable_id as component_specification_internal_variables_variable_id, _ref0.id as _ref0_id, _ref0.name as _ref0_name, _ref0.type as _ref0_type, _ref0.project_id as _ref0_project_id, _ref1.id as _ref1_id, _ref1.name as _ref1_name, _ref1.description as _ref1_description, _ref1.project_owner_id as _ref1_project_owner_id, _ref2.id as _ref2_id, _ref2.username as _ref2_username, _ref2.full_name as _ref2_full_name, _ref2.email as _ref2_email, _ref2.password_hash as _ref2_password_hash, _ref3.id as _ref3_id, _ref3.name as _ref3_name, _ref3.type as _ref3_type, _ref3.unit as _ref3_unit, _ref3.dimension as _ref3_dimension, _ref3.objective as _ref3_objective, _ref3.exogenous as _ref3_exogenous, _ref3.system_descriptive as _ref3_system_descriptive, _ref3.project_id as _ref3_project_id, _ref4.id as _ref4_id, _ref4.name as _ref4_name, _ref4.description as _ref4_description, _ref4.project_owner_id as _ref4_project_owner_id, _ref5.id as _ref5_id, _ref5.username as _ref5_username, _ref5.full_name as _ref5_full_name, _ref5.email as _ref5_email, _ref5.password_hash as _ref5_password_hash from component_specification_internal_variables left join component_specifications _ref0 on component_specification_internal_variables.component_id = _ref0.id left join projects _ref1 on _ref0.project_id = _ref1.id left join users _ref2 on _ref1.project_owner_id = _ref2.id left join variables _ref3 on component_specification_internal_variables.variable_id = _ref3.id left join projects _ref4 on _ref3.project_id = _ref4.id left join users _ref5 on _ref4.project_owner_id = _ref5.id where component_specification_internal_variables.component_id = ? limit ?

    For additional internal variables: select component_specification_additional_internal_variables.id as component_specification_additional_internal_variables_id, component_specification_additional_internal_variables.component_id as component_specification_additional_internal_variables_component_id, component_specification_additional_internal_variables.variable_id as component_specification_additional_internal_variables_variable_id, _ref0.id as _ref0_id, _ref0.name as _ref0_name, _ref0.type as _ref0_type, _ref0.project_id as _ref0_project_id, _ref1.id as _ref1_id, _ref1.name as _ref1_name, _ref1.description as _ref1_description, _ref1.project_owner_id as _ref1_project_owner_id, _ref2.id as _ref2_id, _ref2.username as _ref2_username, _ref2.full_name as _ref2_full_name, _ref2.email as _ref2_email, _ref2.password_hash as _ref2_password_hash, _ref3.id as _ref3_id, _ref3.name as _ref3_name, _ref3.type as _ref3_type, _ref3.unit as _ref3_unit, _ref3.dimension as _ref3_dimension, _ref3.objective as _ref3_objective, _ref3.exogenous as _ref3_exogenous, _ref3.system_descriptive as _ref3_system_descriptive, _ref3.project_id as _ref3_project_id, _ref4.id as _ref4_id, _ref4.name as _ref4_name, _ref4.description as _ref4_description, _ref4.project_owner_id as _ref4_project_owner_id, _ref5.id as _ref5_id, _ref5.username as _ref5_username, _ref5.full_name as _ref5_full_name, _ref5.email as _ref5_email, _ref5.password_hash as _ref5_password_hash from component_specification_additional_internal_variables left join component_specifications _ref0 on component_specification_additional_internal_variables.component_id = _ref0.id left join projects _ref1 on _ref0.project_id = _ref1.id left join users _ref2 on _ref1.project_owner_id = _ref2.id left join variables _ref3 on component_specification_additional_internal_variables.variable_id = _ref3.id left join projects _ref4 on _ref3.project_id = _ref4.id left join users _ref5 on _ref4.project_owner_id = _ref5.id where 
component_specification_additional_internal_variables.component_id = ? limit ?

    Sorry for this issue being so long, but I don't really know how else to convey the information about the potential bug.

    bug 
    opened by JoonasC 13
  • How do I aggregate data in the SQL DSL?

    I have a SQL query that fetches an order list:

    database.from(OrderDao)
                .leftJoin(OrderDetailsDao, on = OrderDetailsDao.orderNo eq OrderDao.orderNo)
                .select()
    

    The query returns a flat list of joined rows (screenshot omitted). Which operators should I use to transform the result into a structure like { orderNo: xx, orderTime: xxx, goodsList: [ ] }? I suddenly can't figure out how to write it...
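One common answer (a sketch, not from the thread; the row shape and field names are hypothetical stand-ins for what the join returns) is to run the joined query as-is, then group the flat rows in memory with Kotlin's groupBy:

```kotlin
// One flat row per joined (order, order-detail) pair, as returned by the left join.
data class GoodsItem(val goodsName: String, val quantity: Int)
data class OrderRow(val orderNo: String, val orderTime: String, val goods: GoodsItem)

// Target shape: { orderNo, orderTime, goodsList: [...] }
data class OrderView(val orderNo: String, val orderTime: String, val goodsList: List<GoodsItem>)

fun assemble(rows: List<OrderRow>): List<OrderView> =
    rows.groupBy { it.orderNo to it.orderTime }
        .map { (key, group) -> OrderView(key.first, key.second, group.map { it.goods }) }

fun main() {
    val rows = listOf(
        OrderRow("A1", "2020-01-01", GoodsItem("pen", 2)),
        OrderRow("A1", "2020-01-01", GoodsItem("ink", 1)),
        OrderRow("B7", "2020-01-02", GoodsItem("pad", 3)),
    )
    println(assemble(rows).size) // 2: detail rows collapsed under their orders
}
```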

    opened by luckykelan 12
  • Adding bulk inserts or updates for PostgreSQL

    Adding support for PostgreSQL bulk insert or update, also allowing conflicts to be ignored in bulk.

             database.bulkInsert(Employees) {
                    item {
                        set(it.id, 1)
                        set(it.name, "vince")
                        set(it.job, "engineer")
                        set(it.salary, 1000)
                        set(it.hireDate, LocalDate.now())
                        set(it.departmentId, 1)
                    }
                    item {
                        set(it.id, 5)
                        set(it.name, "vince")
                        set(it.job, "engineer")
                        set(it.salary, 1000)
                        set(it.hireDate, LocalDate.now())
                        set(it.departmentId, 1)
                    }
    
                    onDuplicateKey(Employees.id) {
                            set(it.salary, it.salary + 900)
                    }
                }
    

    (This will add 900 to the salary of each employee that already exists.)

             database.bulkInsert(Employees) {
                    item {
                        set(it.id, 1)
                        set(it.name, "vince")
                        set(it.job, "engineer")
                        set(it.salary, 1000)
                        set(it.hireDate, LocalDate.now())
                        set(it.departmentId, 1)
                    }
                    item {
                        set(it.id, 5)
                        set(it.name, "vince")
                        set(it.job, "engineer")
                        set(it.salary, 1000)
                        set(it.hireDate, LocalDate.now())
                        set(it.departmentId, 1)
                    }
    
                    onDuplicateKey(Employees.id) {
                    }
                }
    

    (This will skip any employee that already exists and continue with the rest of the statement.)

    opened by PedroD 11
  • How to deserialize from JSON?

    Say you are exposing some REST endpoint and the client POSTs you a modified entity. Is it possible to deserialize it directly into a Ktorm entity object? If so, can you update the entity in the database as well?
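A sketch of one possible approach (assuming the ktorm-jackson module's KtormModule and Ktorm 3.x package names; Employee and database are placeholders in the spirit of the Quick Start example, and requestBody is a hypothetical JSON string):

```kotlin
import com.fasterxml.jackson.databind.ObjectMapper
import org.ktorm.jackson.KtormModule

// Register KtormModule so Jackson knows how to instantiate
// Ktorm's interface-based entities.
val mapper = ObjectMapper().registerModule(KtormModule())

// Deserialize the client's JSON payload directly into a Ktorm entity...
val employee = mapper.readValue(requestBody, Employee::class.java)

// ...then persist it; update() writes the entity's non-null
// properties back to the database by primary key.
database.employees.update(employee)
```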

    question 
    opened by lamba92 11
  • How do I write dynamic SQL with the EntitySequence API?

    Code: this.database.employees.find { it.name eq request.name!! }. In the code above I have to add !! for it to compile, but the name parameter is optional, so this code throws when it is null. I'd like the condition to simply not apply when name is absent. In other words, how do I write dynamic SQL using the EntitySequence API?
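A common pattern (a sketch, not from the thread; request.departmentId is a hypothetical second optional parameter) is to build the sequence conditionally, adding a filter only when the parameter is present; successive filter calls compose into one WHERE clause:

```kotlin
// Start from the full sequence and add conditions only for non-null parameters.
var seq = database.employees
request.name?.let { name ->
    seq = seq.filter { it.name eq name }
}
request.departmentId?.let { depId ->
    seq = seq.filter { it.departmentId eq depId }
}
val results = seq.toList()
```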

    opened by kamjin1996 10
  • datetime column throws exception

    I have database columns with the MySQL type datetime. If I declare the Ktorm members as LocalDate it works, but of course I get no time information. Declaring them as LocalDateTime and using the datetime extension function causes this exception:

    Caused by: java.lang.IllegalArgumentException: Class should have a single no-arg constructor: class java.time.LocalDateTime
    	at kotlin.reflect.full.KClasses.createInstance(KClasses.kt:281)
    	at me.liuwj.ktorm.schema.ColumnBindingHandlerKt.getDefaultValue(ColumnBindingHandler.kt:109)
    	at me.liuwj.ktorm.entity.EntityImplementation.getDefaultValue(EntityImplementation.kt:111)
    
    opened by clydebarrow 10
  • mariadb timestamp issue

    When trying to update a timestamp column using an entity with type Instant, the timestamp value looks something like: "2019-09-05T00:00:56.287169Z". Looking at the logged SQL, everything looks fine (no errors), but the db is not updated.

    Manually running the sql shows an error like: "Incorrect datetime value: '2019-09-05T00:00:56.287169Z' for column..." (but the column has type "timestamp")

    Shouldn't ktorm report an error when the query fails? I found that if I remove the 'Z' from the end of the value, the query works. What is the recommended way to correctly format a timestamp value from an Instant?

    UPDATE: I just tried using a datetime/LocalDateTime in ktorm (even though the db column is still a timestamp) and the value doesn't have the Z at the end, so it works when I run it manually. But for some reason, the flushChanges() is not working even though I am getting the entity from the findOne() method.
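For the Instant formatting question, a sketch (assuming Ktorm's built-in timestamp column type, which binds columns to java.time.Instant; the table and property names are hypothetical): declaring the column this way lets the JDBC driver handle the conversion, so values like 2019-09-05T00:00:56.287169Z never need to be formatted by hand.

```kotlin
import java.time.Instant
import org.ktorm.entity.Entity
import org.ktorm.schema.Table
import org.ktorm.schema.int
import org.ktorm.schema.timestamp

interface Event : Entity<Event> {
    val id: Int
    var createdAt: Instant
}

object Events : Table<Event>("t_event") {
    val id = int("id").primaryKey().bindTo { it.id }
    // timestamp(...) goes through java.sql.Timestamp, so the driver formats
    // the value and no ISO-8601 'Z' suffix ever appears in the SQL text.
    val createdAt = timestamp("created_at").bindTo { it.createdAt }
}
```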

    opened by schmeic 10
  • JsonMappingException when returning the response

    I created a new project based on ktorm-example-spring-boot, and it works well.

    But when I tried to integrate Ktorm into my old project with a simple HTTP API (screenshot omitted), the list can be queried from the DB successfully, but an exception is thrown after it is returned.

    This is my Maven config (other libs omitted); the Kotlin version is 1.3.41:

      <dependency>
                <groupId>org.jetbrains.kotlin</groupId>
                <artifactId>kotlin-stdlib-jdk8</artifactId>
                <version>${kotlin.version}</version>
            </dependency>
            <dependency>
                <groupId>org.jetbrains.kotlin</groupId>
                <artifactId>kotlin-reflect</artifactId>
                <version>${kotlin.version}</version>
            </dependency>
      <dependency>
                <groupId>mysql</groupId>
                <artifactId>mysql-connector-java</artifactId>
            </dependency>
    
            <dependency>
                <groupId>com.fasterxml.jackson.module</groupId>
                <artifactId>jackson-module-kotlin</artifactId>
                <version>2.9.9</version>
            </dependency>
            <dependency>
                <groupId>me.liuwj.ktorm</groupId>
                <artifactId>ktorm-core</artifactId>
                <version>2.4</version>
            </dependency>
            <dependency>
                <groupId>me.liuwj.ktorm</groupId>
                <artifactId>ktorm-jackson</artifactId>
                <version>2.4</version>
            </dependency>
            <dependency>
                <groupId>me.liuwj.ktorm</groupId>
                <artifactId>ktorm-support-mysql</artifactId>
                <version>2.4</version>
            </dependency>
    

    The error log:

    17:06:55.071 default [http-nio-8090-exec-3] ERROR c.z.p.d.c.ExceptionHandlerAdvice - 错误发生在:/txTest/test3
    17:06:55.072 default [http-nio-8090-exec-3] ERROR c.z.p.d.c.ExceptionHandlerAdvice - handleBadRequest StackTrace error 
    org.springframework.http.converter.HttpMessageNotWritableException: Could not write JSON: org.jetbrains.kotlin.name.ClassId; nested exception is com.fasterxml.jackson.databind.JsonMappingException: org.jetbrains.kotlin.name.ClassId (through reference chain: java.util.ArrayList[0]->com.sun.proxy.$Proxy191["entityClass"])
    	at org.springframework.http.converter.json.AbstractJackson2HttpMessageConverter.writeInternal(AbstractJackson2HttpMessageConverter.java:296)
    	at org.springframework.http.converter.AbstractGenericHttpMessageConverter.write(AbstractGenericHttpMessageConverter.java:103)
    	at org.springframework.web.servlet.mvc.method.annotation.AbstractMessageConverterMethodProcessor.writeWithMessageConverters(AbstractMessageConverterMethodProcessor.java:290)
    	at org.springframework.web.servlet.mvc.method.annotation.RequestResponseBodyMethodProcessor.handleReturnValue(RequestResponseBodyMethodProcessor.java:180)
    	at org.springframework.web.method.support.HandlerMethodReturnValueHandlerComposite.handleReturnValue(HandlerMethodReturnValueHandlerComposite.java:82)
    	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:122)
    	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:892)
    	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:797)
    	at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
    	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1039)
    	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:942)
    	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1005)
    	at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:908)
    	at javax.servlet.http.HttpServlet.service(HttpServlet.java:660)
    	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:882)
    	at javax.servlet.http.HttpServlet.service(HttpServlet.java:741)
    	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
    	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    	at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
    	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200)
    	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
    	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202)
    	at org.apache.catalina.core.StandardContextValve.__invoke(StandardContextValve.java:96)
    	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:41002)
    	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490)
    	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
    	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
    	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
    	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
    	at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408)
    	at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
    	at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:853)
    	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1587)
    	at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: com.fasterxml.jackson.databind.JsonMappingException: org.jetbrains.kotlin.name.ClassId (through reference chain: java.util.ArrayList[0]->com.sun.proxy.$Proxy191["entityClass"])
    	at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:394)
    	at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:353)
    	at com.fasterxml.jackson.databind.ser.std.StdSerializer.wrapAndThrow(StdSerializer.java:316)
    	at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:727)
    	at com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:155)
    	at com.fasterxml.jackson.databind.ser.std.CollectionSerializer.serializeContents(CollectionSerializer.java:145)
    	at com.fasterxml.jackson.databind.ser.std.CollectionSerializer.serialize(CollectionSerializer.java:107)
    	at com.fasterxml.jackson.databind.ser.std.CollectionSerializer.serialize(CollectionSerializer.java:25)
    	at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider._serialize(DefaultSerializerProvider.java:480)
    	at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:400)
    	at com.fasterxml.jackson.databind.ObjectWriter$Prefetch.serialize(ObjectWriter.java:1392)
    	at com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:913)
    	at org.springframework.http.converter.json.AbstractJackson2HttpMessageConverter.writeInternal(AbstractJackson2HttpMessageConverter.java:287)
    	... 41 common frames omitted
    Caused by: java.lang.ClassNotFoundException: org.jetbrains.kotlin.name.ClassId
    	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
    	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    	at kotlin.reflect.jvm.internal.KDeclarationContainerImpl.parseType(KDeclarationContainerImpl.kt:277)
    	at kotlin.reflect.jvm.internal.KDeclarationContainerImpl.loadReturnType(KDeclarationContainerImpl.kt:292)
    	at kotlin.reflect.jvm.internal.KDeclarationContainerImpl.findMethodBySignature(KDeclarationContainerImpl.kt:223)
    	at kotlin.reflect.jvm.internal.KPropertyImplKt.computeCallerForAccessor(KPropertyImpl.kt:223)
    	at kotlin.reflect.jvm.internal.KPropertyImplKt.access$computeCallerForAccessor(KPropertyImpl.kt:1)
    	at kotlin.reflect.jvm.internal.KPropertyImpl$Getter$caller$2.invoke(KPropertyImpl.kt:156)
    	at kotlin.reflect.jvm.internal.KPropertyImpl$Getter$caller$2.invoke(KPropertyImpl.kt:147)
    	at kotlin.reflect.jvm.internal.ReflectProperties$LazyVal.invoke(ReflectProperties.java:62)
    	at kotlin.reflect.jvm.internal.ReflectProperties$Val.getValue(ReflectProperties.java:31)
    	at kotlin.reflect.jvm.internal.KPropertyImpl$Getter.getCaller(KPropertyImpl.kt)
    	at kotlin.reflect.jvm.ReflectJvmMapping.getJavaMethod(ReflectJvmMapping.kt:62)
    	at com.fasterxml.jackson.module.kotlin.KotlinAnnotationIntrospector.getCorrespondingGetter(KotlinAnnotationIntrospector.kt:101)
    	at com.fasterxml.jackson.module.kotlin.KotlinAnnotationIntrospector.hasRequiredMarker(KotlinAnnotationIntrospector.kt:66)
    	at com.fasterxml.jackson.module.kotlin.KotlinAnnotationIntrospector.access$hasRequiredMarker(KotlinAnnotationIntrospector.kt:23)
    	at com.fasterxml.jackson.module.kotlin.KotlinAnnotationIntrospector$hasRequiredMarker$1.invoke(KotlinAnnotationIntrospector.kt:33)
    	at com.fasterxml.jackson.module.kotlin.KotlinAnnotationIntrospector$hasRequiredMarker$1.invoke(KotlinAnnotationIntrospector.kt:23)
    	at com.fasterxml.jackson.module.kotlin.ReflectionCache.javaMemberIsRequired(KotlinModule.kt:92)
    	at com.fasterxml.jackson.module.kotlin.KotlinAnnotationIntrospector.hasRequiredMarker(KotlinAnnotationIntrospector.kt:26)
    	at com.fasterxml.jackson.databind.introspect.AnnotationIntrospectorPair.hasRequiredMarker(AnnotationIntrospectorPair.java:307)
    	at com.fasterxml.jackson.databind.introspect.AnnotationIntrospectorPair.hasRequiredMarker(AnnotationIntrospectorPair.java:307)
    	at com.fasterxml.jackson.databind.introspect.POJOPropertyBuilder$4.withMember(POJOPropertyBuilder.java:655)
    	at com.fasterxml.jackson.databind.introspect.POJOPropertyBuilder$4.withMember(POJOPropertyBuilder.java:652)
    	at com.fasterxml.jackson.databind.introspect.POJOPropertyBuilder.fromMemberAnnotations(POJOPropertyBuilder.java:1143)
    	at com.fasterxml.jackson.databind.introspect.POJOPropertyBuilder._findRequired(POJOPropertyBuilder.java:652)
    	at com.fasterxml.jackson.databind.introspect.POJOPropertyBuilder.getMetadata(POJOPropertyBuilder.java:220)
    	at com.fasterxml.jackson.databind.ser.BeanSerializerFactory._constructWriter(BeanSerializerFactory.java:771)
    	at com.fasterxml.jackson.databind.ser.BeanSerializerFactory.findBeanProperties(BeanSerializerFactory.java:583)
    	at com.fasterxml.jackson.databind.ser.BeanSerializerFactory.constructBeanSerializer(BeanSerializerFactory.java:368)
    	at com.fasterxml.jackson.databind.ser.BeanSerializerFactory.findBeanSerializer(BeanSerializerFactory.java:279)
    	at com.fasterxml.jackson.databind.ser.BeanSerializerFactory._createSerializer2(BeanSerializerFactory.java:231)
    	at com.fasterxml.jackson.databind.ser.BeanSerializerFactory.createSerializer(BeanSerializerFactory.java:165)
    	at com.fasterxml.jackson.databind.SerializerProvider._createUntypedSerializer(SerializerProvider.java:1388)
    	at com.fasterxml.jackson.databind.SerializerProvider._createAndCacheUntypedSerializer(SerializerProvider.java:1336)
    	at com.fasterxml.jackson.databind.SerializerProvider.findPrimaryPropertySerializer(SerializerProvider.java:668)
    	at com.fasterxml.jackson.databind.ser.impl.PropertySerializerMap.findAndAddPrimarySerializer(PropertySerializerMap.java:64)
    	at com.fasterxml.jackson.databind.ser.BeanPropertyWriter._findAndAddDynamic(BeanPropertyWriter.java:897)
    	at com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:705)
    	at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:719)
    	... 50 common frames omitted
    
    question 
    opened by JsonSong89 10
  • Is it possible to map handwritten SQL results to a custom Entity<?> class?

    When I write and execute SQL myself, is there a convenient way to map the results to a custom Entity<?> class?

     var testResult = db.useConnection { conn ->
                var sql ="select `Tid`,`Key`,`Value` from config where tid = ? limit 1" 
                conn.prepareStatement(sql).use { statement->
                    statement.setLong(1,2)
                    statement.executeQuery().iterable().map { it.getString(1) }
                }
            }
    For the code above, is there a convenient way to map the result set to the class below?
    
    interface Test : Entity<Test> {
        val Tid: Long
        var Value:String
        var Key:String
    }
    
    
    question 
    opened by yuzd 10
  • How to deal with Parent Child relationship in query

    I have difficulty understanding how to recursively go through all the children while doing multiple joins and conditions at the same time. I often get this error: Parameter specified as non-null is null: method org.ktorm.schema.ReferenceBinding., parameter referenceTable, or this one: null cannot be cast to non-null type ch.skyfy.ghuperms.db.Groups

    my tables look like this:

    open class Groups(alias: String?) : Table<Group>("group", alias) {
        companion object : Groups(null)
        override fun aliased(alias: String) = Groups(alias)
    
        val id = int("id").primaryKey().bindTo { it.id }
        val identifier = varchar("identifier").bindTo { it.identifier }
        val weight = int("weight").bindTo { it.weight }
        val description = varchar("description").bindTo { it.description }
        val parentId = int("parent_id").bindTo { it.parent?.id }
    //    val parentId = int("parent_id").references(Groups) { it.parent }
    
        val parent: Groups get() = parentId.referenceTable as Groups
    }
    
    
    interface Group : org.ktorm.entity.Entity<Group> {
        companion object : org.ktorm.entity.Entity.Factory<Group>()
    
        val id: Int
        var identifier: String
        var weight: Int
        var description: String
        var parent: Group?
    }
    
        // get all permissions For all groups (and child groups) of which the player is a member
        val g = Groups.aliased("g")
        val gp = GroupPermissions.aliased("gp")
        val p = Permissions.aliased("p")
        val pg = PlayerGroups.aliased("pg")
        val tplayer = Players.aliased("tplayer")
    
        val f = DatabaseManager.db.from(g)
            .leftJoin(gp, on = g.id eq gp.groupId)
            .leftJoin(p, on = p.id eq gp.permissionId)
            .leftJoin(pg, on = g.id eq pg.groupId)
            .leftJoin(tplayer, on = pg.playerId eq tplayer.id)
    
        val permsList = mutableListOf<Perm>()
    
        val list = f.select(g.weight, p.identifier, g.identifier, g.parentId, p.value).where { (tplayer.name like player.name) }.orderBy(g.weight.desc(), p.value.asc()).map { row ->
            permsList.add(Perm(row[g.weight], row[p.value]))
            if(row[g.parentId] != null){
                println()
            }
    //        DatabaseManager.db.from(g)
    //            .leftJoin(p, on = p.id eq gp.permissionId).select(g.weight, p.value).where { g.parentId eq g.parent.parentId }.map { row2 ->
    //                println()
    //            }
    
            row[p.value]
        }.filterNotNull()
    

    If someone has already dealt with this or knows something, it would help me a lot.

    opened by AmibeSkyfy16 0
  • Equals does not work correctly on entities with nullable properties

    Currently equals does not work correctly on entities with nullable properties.

    This is a side-effect of inconsistency when populating the values map in EntityImplementation:

    1. Loading the entity does not populate null values ( does not create property: value entry ) in EntityImplementation.values
    2. Initializing a new entity via the Entity factory companion does populate null values ( creates property: value entry ) in EntityImplementation.values
    3. EntityImplementation delegates equals() to the LinkedHashMap holding the values.

    Therefore, for an entity declared as E(A, B, C?), if we do:

    • e1 = E { a1, b1, null }
    • database.sequenceOf(E).add(e1)

    Then we get

    • e1 != database.sequenceOf(E).find { it.a eq a1 }
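The inconsistency can be modeled without Ktorm at all (a pure-Kotlin sketch of the described behavior, since the issue says equals is delegated to the underlying LinkedHashMap of values):

```kotlin
fun main() {
    // Factory-style initialization stores an explicit null entry for C.
    val created = linkedMapOf<String, Any?>("a" to "a1", "b" to "b1", "c" to null)
    // Loading from the database skips null columns entirely.
    val loaded = linkedMapOf<String, Any?>("a" to "a1", "b" to "b1")

    // Map equality compares entry sets, so a missing key is not the same
    // as a key explicitly mapped to null.
    println(created == loaded) // false: the two "equal" entities compare unequal
}
```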
    opened by IvanGeorgiev89 0
  • Postgres bulkInsertOrUpdate should have similar item() parameters as batchInsert

    The item() of batchInsert looks like this: fun item(block: AssignmentsBuilder.(T) -> Unit)

    The item() of bulkInsertOrUpdate in the Postgres module looks like this: fun item(block: AssignmentsBuilder.() -> Unit)

    Note the missing "T" type parameter on the block parameter in bulkInsertOrUpdate.

    To us, this means the bulkInsertOrUpdate can't be used with the same convenience assignment methods we use for our other insertion methods as the types don't match. A fairly minor annoyance, but it does force us to duplicate some code. Is there a good reason these two methods don't work the same?

    opened by sigmanil 1
  • groupingBy multiple columns?

    Is it possible to use the native entity DSL to perform a group by on multiple columns?

    database.sales.groupingBy {
      it.salesRegion, it.productFamily, it.month, it.channel
    }.aggregateColumns {
      sum(it.salesVolume)
    }
    

    I thought tupleOf() would work for the above, since groupingBy and aggregateColumns have similar parameters (below) but this does not work.

    groupingBy: keySelector: (T) -> ColumnDeclaring<K> // does not accept tupleOf()

    aggregateColumns: aggregationSelector: (T) -> ColumnDeclaring<C> // accepts tupleOf()
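As a workaround (a sketch, not an official answer from the thread), the SQL DSL's groupBy accepts multiple columns, so the same aggregation can be written at the query level using the column names from the snippet above:

```kotlin
val totalVolume = sum(Sales.salesVolume)

database
    .from(Sales)
    .select(Sales.salesRegion, Sales.productFamily, Sales.month, Sales.channel, totalVolume)
    .groupBy(Sales.salesRegion, Sales.productFamily, Sales.month, Sales.channel)
    .forEach { row ->
        // Columns are read back by position here, as in the README's DSL examples.
        println("${row.getString(1)} / ${row.getString(2)}: ${row.getInt(5)}")
    }
```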

    enhancement 
    opened by 2x2xplz 1
  • [Feature] Add support for window functions

    ANSI SQL2003 allows for a window_clause in aggregate function calls, the addition of which makes those functions into window functions. Nowadays, many mainstream relational databases, including MySQL, SQLite, Postgres, have supported window functions syntax. However, it is still rare to see an ORM framework that supports it. Ktorm, as a lightweight but powerful ORM framework, has the ability to be a pioneer. This pull request mainly focuses on implementing window function expression for MySQL dialect.

    opened by michaelfyc 0
Releases(v3.5.0)
  • v3.5.0(Jun 5, 2022)

    • Upgrade Kotlin version to 1.5.32, upgrade min JDK version to 1.8.
    • New operator functions gt, gte, lt, lte, neq, synonyms for greater, greaterEq, less, lessEq, notEq.
    • Compatible with JVM default methods for entity non-abstract members.
    • Support using inline classes as column values and entity properties. #253
    • Support PostgreSQL cube & earthdistance data type and their extension functions, by @KocproZ in #365
    • Support SQLite bulk insert statement and some utility functions, by @2938137849 in #370
    • Fix stack overflow bug when using large where clauses, by @ecopoesis in #328
    • Deprecate the ktorm-global module which will be completely removed in the future.
    Source code(tar.gz)
    Source code(zip)
  • v3.4.1(May 10, 2021)

  • v3.4.0(May 9, 2021)

    Support Locking Clause for MySQL & PostgreSQL #247

    Now Ktorm supports locking clauses such as for update and for share for both MySQL & PostgreSQL, for example:

    val employee = database.employees
        .filter { it.name eq "vince" }
        .locking(LockingMode.FOR_UPDATE, wait = LockingWait.SKIP_LOCKED)
        .firstOrNull()
    

    Generated SQL:

    SELECT *
    FROM t_employee
    WHERE t_employee.name = ? 
    LIMIT ?, ? 
    FOR UPDATE SKIP LOCKED
    

    Refer to the API docs of these two functions for detailed usage.

    Support insert ... returning ... for PostgreSQL #233

    With an insert ... returning ... statement, we can insert records to the database, and at the same time, retrieve some generated columns. For example:

    val id = database.insertReturning(Employees, Employees.id) {
        set(it.name, "pedro")
        set(it.job, "engineer")
        set(it.salary, 1500)
        set(it.hireDate, LocalDate.now())
        set(it.departmentId, 1)
    }
    

    Returning multiple columns is also supported:

    val (id, job) = database.insertReturning(Employees, Pair(Employees.id, Employees.job)) {
        set(it.name, "vince")
        set(it.job, "engineer")
        set(it.salary, 1000)
        set(it.hireDate, LocalDate.now())
        set(it.departmentId, 1)
    }
    

    Generated SQL:

    insert into t_employee (name, job, salary, hire_date, department_id) 
    values (?, ?, ?, ?, ?) returning id, job
    

    There are also some other versions of the xxxReturning functions; check the API docs for details.

    Other Optimizations & Bug Fixes

    • PostgreSQL: support onConflict { doNothing() } for insertOrUpdate & bulkInsertOrUpdate #255
    • ~~PostgreSQL: Fix type mismatch error for JsonSqlType #268~~
    • Value semantics for Entity: add default equals & hashCode function #242
    • Auto transformation between JSR-310 classes and JDBC date & time #252
    • Support using unsigned integers as column types #253
    • Fix null-value-ignoring bug for add function #273
    Source code(tar.gz)
    Source code(zip)
  • v3.3.0(Jan 9, 2021)

    • The sortedBy function supports lambda varargs to allow sorting by multiple columns, e.g. sortedBy({ it.salary.desc() }, { it.hireDate.asc() }).
    • Added bulkInsert & bulkInsertOrUpdate for PostgreSQL. #226
    • Added more overloaded functions to allow specifying offset & limit separately. #198
    • Fixed bug of missing catalog & schema in generated insert SQLs. #207, #209
    • Changed the default transaction isolation to null (stands for the default isolation level of the underlying datastore), not REPEATABLE_READ anymore. #231
    • Upgraded Kotlin version to 1.4.21.
    Source code(tar.gz)
    Source code(zip)
  • v3.2.0(Oct 8, 2020)

    Package Name Changed to org.ktorm

    Finally, we have our own domain and the official website becomes https://www.ktorm.org

    Accordingly, starting from Ktorm 3.2, we have changed our group ID to org.ktorm, so you need to modify your Maven dependency:

    <dependency>
        <groupId>org.ktorm</groupId>
        <artifactId>ktorm-core</artifactId>
        <version>3.2.0</version>
    </dependency>
    

    Or Gradle:

    compile "org.ktorm:ktorm-core:3.2.0"
    

    Package names are also changed to org.ktorm.*, so you also need to modify your import statements:

    import org.ktorm.database.*
    import org.ktorm.dsl.*
    import org.ktorm.entity.*
    import org.ktorm.schema.*
    

    With only these two steps, you have completed all the migration work to version 3.2. Everything should work well, as there are no other changes apart from this.

    Source code(tar.gz)
    Source code(zip)
  • v3.1.0(Sep 19, 2020)

    Upgrade to Kotlin 1.4

    To use the new features of Kotlin, we've upgraded its version to 1.4.10, which brings some changes to us:

    • Explicit API mode to ensure we don't expose some internal things accidentally.

    • Contract support for database.useConnection { .. } & database.useTransaction { .. } to let the compiler know our callback functions are called in place exactly once.

    • Deprecation of the mapColumnsN & aggregateColumnsN functions. As Kotlin 1.4 supports overload resolution by lambda return type, we don't need these functions anymore; mapColumns & aggregateColumns are enough.

      database.employees
          .filter { it.departmentId eq 1 }
          .mapColumns { tupleOf(it.id, it.name) }
          .forEach { (id, name) ->
              println("$id:$name")
          }
      

    Entity Based Update Function

    In Ktorm 3.1, we provide an update function that can update all the non-null properties of an entity object to the database. Using this function, the entity object is not required to be associated with a table first. That means, compared to flushChanges, we don't have to obtain an entity object from the database before performing the update. The usage is as follows:

    val employee = Employee {
        id = 5
        job = "engineer"
        salary = 100
    }
    
    database.employees.update(employee)
    

    Generated SQL:

    update t_employee set job = ?, salary = ? where id = ?
    

    Syntax Refactoring of Insert & Update DSL

    Previously, we inserted a record into the table like this:

    database.insert(Employees) {
        it.name to "jerry"
        it.job to "trainee"
        it.managerId to 1
        it.hireDate to LocalDate.now()
        it.salary to 50
        it.departmentId to 1
    }
    

    Here, we used it.name to "jerry" to set the name to jerry in the closure. This to function is a member of AssignmentsBuilder, not the to function from the Kotlin standard library that creates Pair instances.

    It is very easy for users to confuse these two functions, so in Ktorm 3.1 we provide a set function as an alternative. The to function is marked as deprecated and will be removed in the future. You can learn the new syntax here: https://ktorm.liuwj.me/en/dml.html

    database.insert(Employees) {
        set(it.name, "jerry")
        set(it.job, "trainee")
        set(it.managerId, 1)
        set(it.hireDate, LocalDate.now())
        set(it.salary, 50)
        set(it.departmentId, 1)
    }
    

    Other Optimizations and Bug Fixes

    • Add ShortSqlType. #160
    • Add MySQL IF function. #163
    • Support mixed-case column names & auto case transform. #175
    • Allow specifying PostgreSQL insertOrUpdate conflict columns. #181
    • Support select .. for update. #69
    • Support catalog & schema in table definition. #89, #154, #183
    • Check max column name length. #122
    • Fix timestamp fraction bug. #130
    • Update the syntax of defining JSON columns from json("foo", typeRef<List<Int>>) to json<List<Int>>("foo").
  • v3.0.0(Jun 16, 2020)

    Some breaking changes in Ktorm 3.0:

    • Completely remove the deprecated global database APIs and provide equivalent things in a new module ktorm-global.
    • Use = instead of property delegation to define columns.
    • Query doesn't implement Iterable anymore.
    • Support compound primary keys.
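
    As a sketch of the last two items (table and column names here are illustrative): in 3.0, columns are defined with plain assignment instead of the by delegate, and a compound primary key is declared by calling primaryKey() on each participating column:

    ```kotlin
    // Ktorm 2.x style: property delegation
    object EmployeesV2 : Table<Nothing>("t_employee") {
        val id by int("id").primaryKey()
    }

    // Ktorm 3.0 style: plain assignment; calling primaryKey() on
    // two columns declares a compound primary key
    object EmployeeRoles : Table<Nothing>("t_employee_role") {
        val employeeId = int("employee_id").primaryKey()
        val roleId = int("role_id").primaryKey()
    }
    ```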

    In addition to the breaking changes above, there are also many updates from enthusiasts in the open source community. Thanks for their contributions:

    • The MySQL bulkInsert function supports on duplicate key update. Thanks to @hangingman
    • PostgreSQL hstore data type and a series of operators for it. Thanks to @arustleund
    • ktorm-jackson supports simple Jackson annotations, like @JsonProperty, @JsonAlias, @JsonIgnore. Thanks to @onXoot

    For more details about the new version, see https://ktorm.liuwj.me/en/break-changes-in-ktorm-3.0.html

  • v2.7.2(Feb 22, 2020)

  • v2.7.1(Feb 10, 2020)

  • v2.6.1(Feb 10, 2020)

  • v2.7(Feb 2, 2020)

    In Ktorm 2.7, we did a refactoring of the code. This refactoring deprecated Database.global and a series of functions implemented based on it, making users explicitly specify the Database instances to use while performing database operations, instead of implicitly using Database.global. More details can be found at https://ktorm.liuwj.me/en/about-deprecating-database-global.html

    Note that these APIs are still available in version 2.7, but they have been marked as @Deprecated and will be completely removed in the future.

    In previous versions, although Database.connect returns a newly created Database object, we usually ignored it because Ktorm automatically saved it to an internal global variable. Now, we have to define a variable ourselves to hold the return value:

    val database = Database.connect("jdbc:mysql://localhost:3306/ktorm?user=root&password=***")
    

    We used to create queries via the extension function Table.select:

    // Old API
    for (row in Employees.select()) {
        println(row[Employees.name])
    }
    

    This query uses Database.global to obtain all records from the Employees table, which is very implicit as you can see. Now we have to specify the database instance explicitly and use the database.from(..).select(..) syntax to create queries:

    for (row in database.from(Employees).select()) {
        println(row[Employees.name])
    }
    

    Here is another example:

    val t = Employees.aliased("t")
    database
        .from(t)
        .select(t.departmentId, avg(t.salary))
        .groupBy(t.departmentId)
        .having { avg(t.salary) greater 100.0 }
        .forEach { row -> 
            println("${row.getInt(1)}:${row.getDouble(2)}")
        }
    

    As for the sequence APIs, we used to create sequence objects via asSequence; now we just change it to sequenceOf. For example:

    val employees = database.sequenceOf(Employees).toList()
    

    Another example using sequenceOf:

    val employees = database
        .sequenceOf(Employees)
        .filter { it.departmentId eq 1 }
        .filter { it.managerId.isNotNull() }
        .sortedBy { it.salary }
        .toList()
    

    These are the two most significant changes in this refactoring. The documents on Ktorm's official website have now been updated for version 2.7. You can refer to the latest documents for what you are interested in. https://ktorm.liuwj.me/

    Feel free to raise issues on GitHub if you have any questions.

  • v2.6(Nov 2, 2019)

    Support Running on Android Devices (#22)

    Now, Ktorm is available for Android SQLite with the support of the SQLDroid driver. Technically, any other JDBC driver is also supported (if you really need it running on Android).

    Update JVM Target to 1.6

    For maximum compatibility, we updated the compiler option -jvm-target to 1.6. This option specifies the version of the generated JVM bytecode. Moreover, to support running on Android and JDK 1.6, we added three SqlType implementations; they support java.sql.Timestamp, java.sql.Date and java.sql.Time, because JSR-310 is not available on those platforms.
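
    For instance, assuming the corresponding column definition functions are named jdbcTimestamp, jdbcDate and jdbcTime (a sketch; the table is illustrative), a schema targeting those platforms could look like:

    ```kotlin
    object Events : Table<Nothing>("t_event") {
        val id by int("id").primaryKey()
        val createdAt by jdbcTimestamp("created_at")  // mapped to java.sql.Timestamp
        val eventDate by jdbcDate("event_date")       // mapped to java.sql.Date
        val eventTime by jdbcTime("event_time")       // mapped to java.sql.Time
    }
    ```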

    Support Multiple Bindings on One Column

    Now, we can bind a column to multiple properties by chaining calls to the bindTo or references functions. In this way, when an entity object is retrieved from the database, the value of this column is filled into each property it binds to.

    interface Config : Entity<Config> {
        val key: String
        var value1: String
        var value2: String
    }
    
    object Configs : Table<Config>("t_config") {
        val key by varchar("key").primaryKey().bindTo { it.key }
        val value by varchar("value").bindTo { it.value1 }.bindTo { it.value2 }
    }
    

    In the example above, we bound the value column to both value1 and value2, so the values of these two properties would be the same in an entity object obtained from the database.

    Please note that multiple bindings are only available for query operations. When we are inserting or updating an entity, the first binding will prevail, and other bindings will be ignored.

    Support Column Alias DSL (by @waluo, #37)

    Now, we can assign aliases to the selected columns of a query and use them in subsequent clauses such as group by and having, just like the as keyword in SQL. Here is an example. This query selects departments whose average salary is greater than 100, then returns the average salaries along with their department IDs.

    val deptId = Employees.departmentId.aliased("dept_id")
    val salaryAvg = avg(Employees.salary).aliased("salary_avg")
    
    Employees
        .select(deptId, salaryAvg)
        .groupBy(deptId)
        .having { salaryAvg greater 100.0 }
        .forEach { row ->
            println("${row[deptId]}:${row[salaryAvg]}")
        }
    

    Generated SQL:

    select t_employee.department_id as dept_id, avg(t_employee.salary) as salary_avg 
    from t_employee 
    group by dept_id 
    having salary_avg > ?
    

    Other Optimizations and Bug Fixes

    • Refactored the implementation of QueryRowSet.
    • Support SQL Server datetimeoffset data type.
    • Max/min aggregation functions' type arguments should be Comparable instead of Number (#46).
  • v2.5(Aug 24, 2019)

    • Support defining entities as any kind of class, such as data classes or POJOs, see https://ktorm.liuwj.me/en/define-entities-as-any-kind-of-classes.html
    • Fix bug #33

    Thanks to @waluo for the support and suggestions.

  • v2.4(Jun 26, 2019)

    • Upgrade Kotlin version to 1.3.40.
    • Auto-detect the third-party logging framework from the classpath and delegate Ktorm's logs to it. #15
    • Use JDK ServiceLoader to find a dialect. Now we don't have to specify the dialect parameter explicitly while creating Database instances. #5
    • Add match and against functions for MySQL fulltext search, translated to its match ... against ... syntax. #25
    • Add insertOrUpdate function for PostgreSQL upserts, translated to its on conflict (key) do update set syntax. #26
    • Other optimizations and bug fixes.
  • v2.3(Jun 9, 2019)

    • Documents for all public classes and functions.
    • Throw DialectFeatureNotSupportedException when a feature is not supported by a dialect.
    • Other optimizations and bug fixes.
  • v2.2(May 5, 2019)

    Added a logging facade and removed the slf4j dependency. (#4) Documents can be found at https://ktorm.liuwj.me/en/connect-to-databases.html#Logging

  • v2.1(Apr 26, 2019)

    • Add function asSequenceWithoutReferences to disable the auto-joining of reference tables.
    • Add function mapColumns to select particular columns from tables.
    • Add dateDiff function for MySQL.
    • Rename function aggregate to aggregateColumns.
    • Mark some internal APIs as @PublishedApi.
  • v2.0(Apr 12, 2019)

    • Add EntitySequence APIs (#7), obtaining entities just like using kotlin.Sequence, document: https://ktorm.liuwj.me/en/entity-sequence.html
    • New column binding syntax (#8)
    • Other optimizations and bug fixes.
  • v1.3(Mar 28, 2019)

  • v1.2(Dec 27, 2018)

  • v1.1.final(Dec 14, 2018)

  • 1.1(Dec 14, 2018)
