Powerful, elegant and flexible test framework for Kotlin with additional assertions, property testing and data-driven testing

Overview

Kotest


Kotest is a flexible and comprehensive testing tool for Kotlin with multiplatform support.

To learn more about Kotest, visit kotest.io or see our quick start guide. Looking to upgrade? See the changelog.
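For a flavour of the syntax, here is a minimal spec (FunSpec style; the imports below come from the core framework and matchers modules):

    import io.kotest.core.spec.style.FunSpec
    import io.kotest.matchers.shouldBe

    // A minimal Kotest spec: tests are lambdas registered in the class body.
    class MyFirstSpec : FunSpec({
        test("String.length should return the number of characters") {
            "hello".length shouldBe 5
        }
    })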

Community


YourKit supports open source projects with innovative and intelligent tools for monitoring and profiling Java and .NET applications. YourKit is the creator of YourKit Java Profiler, YourKit .NET Profiler, and YourKit YouMonitor.

Comments
  • Question: one instance per test with nested context

I know there's a blurb about why this was yanked in the changelog, but I'm loving being able to use nested contexts when describing my tests, and I really miss having the established context reset between each test; I end up with a bunch of code trying to re-establish the state for a set of tests.

Is there any hope of this returning? It sounds like it wasn't so much that a single bug was hit as that the feature added quite a bit of complexity, but I'd be interested in playing around with re-adding it if you think it's feasible.
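For context, a sketch of the pattern under discussion (the test content is hypothetical; IsolationMode.InstancePerLeaf is the existing setting): the hope is that with per-test isolation, setup inside a nested context is re-run for each leaf test instead of being shared between them.

    import io.kotest.core.spec.IsolationMode
    import io.kotest.core.spec.style.DescribeSpec
    import io.kotest.matchers.shouldBe

    class NestedContextExample : DescribeSpec({
        isolationMode = IsolationMode.InstancePerLeaf
        describe("a stack") {
            // with one instance per leaf, this setup would run fresh for each `it` below
            val stack = ArrayDeque<Int>()
            it("starts empty") { stack.isEmpty() shouldBe true }
            it("counts pushes") {
                stack.addLast(1)
                stack.size shouldBe 1
            }
        }
    })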

    enhancement 
    opened by bbaldino 58
  • Tests have huge delay, and "Default test timeout: 600000ms" shows

    I'm trying Kotest for the first time (version 4.0.2). I created a new project in Android Studio 3.6.2, using Gradle 5.6.4. I added the basic example tests:

    class MyTests : StringSpec({
        "length should return size of string" {
            "hello".length shouldBe 5
        }
        "startsWith should test for a prefix" {
            "world" should startWith("wor")
        }
    })
    

    I ran the tests, and they passed, but they took several seconds to run. The following was printed in the Android Studio 'Run' window:

    ~~~ Kotest Configuration ~~~
    -> Parallelism: 1 thread(s)
    -> Default test timeout: 600000ms
    -> Default test order: TestCaseOrder
    -> Default isolation mode: IsolationMode
    -> Global soft assertations: False
    -> Write spec failure file: False
    -> Fail on ignored tests: False
    -> Spec execution order: LexicographicSpecExecutionOrder
    -> Extensions
      - io.kotest.core.extensions.SystemPropertyTagExtension
      - io.kotest.core.extensions.RuntimeTagExtension
      - io.kotest.core.extensions.IgnoredSpecDiscoveryExtension
      - io.kotest.core.extensions.TagFilteredDiscoveryExtension
    

    There is no exception message; just the above. I get the same output if I run gradlew test from the command line.

    I tried both the JUnit 5 runner and the JUnit 4 runner, with the same results each time. I also tried Gradle 6.3, with the same results.

    The following SO question seems to be the same issue:

    https://stackoverflow.com/questions/60351366/kotest-freezes-after-migrating-to-4-0-0-beta1-on-testdebugunittest-task

    bug android 
    opened by tom-jb 55
  • Property Test Discussion

I am working on overhauling our property testing as part of the upcoming 4.0 release. To this end, I have gathered the requirements (based on existing tickets in this tracker) and come up with a basic design. I would like feedback on this design before I fully implement it. There are also some questions that I don't yet have answers to that I would like to discuss. At this stage everything is open to change.

    Property Test Requirements

    Deterministic Re-runs

If a test fails, it is useful to be able to re-run it with the same values, especially in cases where shrinking is not available. Therefore, the test functions accept a seed value which is used to create the Random instance used by the tests. This seed can then be programmatically set to re-run the tests with the same random instance.

    By default the seed is null, which means the seed changes each time the test is run.
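The mechanism is just a seeded Random: two instances created from the same seed produce identical value streams, which is what makes a failure replayable. A minimal stdlib sketch of that idea (not the proposed API):

    import kotlin.random.Random

    fun main() {
        val seed = 918273645L
        // two Randoms built from the same seed generate the same values in the same order
        val run1 = Random(seed).let { r -> List(5) { r.nextInt(1, 1001) } }
        val run2 = Random(seed).let { r -> List(5) { r.nextInt(1, 1001) } }
        println(run1 == run2) // true, so the failing inputs can be regenerated exactly
    }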

    Exhaustive

The generators are passed an Exhaustivity value which determines how values are generated.

    • Random - all values should be randomly generated
    • Exhaustive - every value should be generated at least once. If, for the given iteration count, all values cannot be generated, then the test should fail immediately.
    • Auto - Exhaustive if possible and supported, otherwise random.

    By default Auto mode is used.
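A sketch of the Auto rule as I read it (illustrative, not library code): enumerate the whole domain when it fits within the iteration budget, otherwise fall back to random sampling.

    import kotlin.random.Random

    fun intValues(lower: Int, upper: Int, iterations: Int, random: Random): Sequence<Int> {
        val domainSize = upper.toLong() - lower.toLong() + 1
        // exhaustive when every value fits in the budget, otherwise random
        return if (domainSize <= iterations) (lower..upper).asSequence()
        else generateSequence { (lower..upper).random(random) }.take(iterations)
    }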

    Question - do we want to be able to specify exhaustivity per parameter?

    In #1101 @EarthCitizen talks about Series vs Gen. I do like the nomenclature, but we would need another abstraction (Arbitrary?) on top which would then be able to provide a Gen or Series as required based on the exhaustive flag.

Question - do we want to implement it this way, as opposed to the way I have outlined in the code below?

    Min and Max Successes

    These values determine bounds on how many tests should pass. Typically min and max success would be equal to the iteration count, which gives the forAll behavior. For forNone behavior, min and max would both be zero. Other values can be used to mimic behavior like forSome, forExactly(n) and so on.

    By default, min and max success are set to the iteration count.
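A single pass-count check subsumes all of these behaviors; a sketch with hypothetical names:

    // forAll        -> minSuccess = maxSuccess = iterations
    // forNone       -> minSuccess = maxSuccess = 0
    // forExactly(n) -> minSuccess = maxSuccess = n
    fun <T> checkProperty(
       values: Sequence<T>,
       minSuccess: Int,
       maxSuccess: Int,
       property: (T) -> Boolean
    ) {
       val passes = values.count(property)
       check(passes in minSuccess..maxSuccess) {
          "Property passed $passes times; expected between $minSuccess and $maxSuccess"
       }
    }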

    Distribution

It is quite common to want to generate values across a large number space, but have a bias towards certain values. For example, when testing a function that handles emails, it is probably more useful to generate shorter strings than very long ones, since most emails are < 50 characters.

    The distribution mode can be used to bias values by setting the bound from which each test value is generated.

    • Uniform - values are distributed evenly across the space. For an integer generator of values from 1 to 1000 with 10 runs, a random value would be generated from 1..100, another from 101..200 and so on.
    • Pareto - values are biased towards the lower end on a roughly 80/20 rule.

    By default the uniform distribution is used.

    The distribution mode may be ignored by a generator if it has no meaning for the types produced by that generator.

    The distribution mode has no effect if the generator is acting in exhaustive mode.
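To make the Uniform example concrete, this is the block arithmetic described above (a worked sketch, not the final implementation):

    // iteration k draws from the k-th block, so 10 iterations over 1..1000
    // use the blocks 1..100, 101..200, ..., 901..1000
    fun uniformBlock(k: Int, iterations: Int, range: LongRange): LongRange {
       val step = (range.last - range.first + 1) / iterations
       return (range.first + step * k) until (range.first + step * (k + 1))
    }

    fun main() {
       repeat(10) { k -> println(uniformBlock(k, 10, 1L..1000L)) }
    }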

Question - it would be nice to be able to specify particular "biases" for specific generators. For example, a generator of A-Z chars may choose to bias towards vowels. How do we specify this when distribution is a sealed type? Use an interface and allow generators to create their own implementations?

    Shrinking Mode

    The ShrinkingMode determines how failing values are shrunk.

    • Off - Shrinking is disabled for this generator
    • Unbounded - shrinking will continue until the minimum case is reached as determined by the generator
    • Bounded(n) - the number of shrink steps is capped at n. After this, the shrinking process will halt, even if the minimum case has not been reached. This mode is useful to avoid long running tests.

    By default shrinking is set to Bounded(1000).
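A sketch of what a bounded shrink loop looks like (an assumed shape; candidates is a hypothetical function producing smaller values): it stops at a local minimum or after bound steps, whichever comes first.

    // returns the smallest failing value found within `bound` shrink steps
    fun <T> shrink(initial: T, candidates: (T) -> List<T>, fails: (T) -> Boolean, bound: Int): T {
       var current = initial
       repeat(bound) {
          // move to the first smaller candidate that still fails; stop if none do
          current = candidates(current).firstOrNull(fails) ?: return current
       }
       return current // bound reached; an even smaller failing case may exist
    }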

Question 1 - do we want to be able to control shrinking per parameter? Turn it off for some parameters, and not others?

When mapping on a generator, shrinking becomes tricky. If you have a mapper from Gen<T> to Gen<U> and a value u fails, you need to turn that u back into a t so you can feed that t into the original shrinker. So you can either keep the association between the original value and the mapped value, or precompute (lazily?) the shrinks along with the value.

Question 2 - which is the best approach?

    Gen design

The gens accept the Random instance used for this run. They accept an iterations parameter so they know the sample space when calculating based on a distribution. They accept the exhaustivity mode and the distribution mode. Question - move the iteration count into the distribution parameter itself?

    Note that the gens no longer specify a shrinker, but should provide the shrinks along with the value (see shrinker section for discussion).

    /**
     * A Generator, or [Gen], is responsible for generating data to be used in property testing.
     * Each generator will generate data for a specific type <T>.
     *
     * The idea behind property testing is that the testing framework will automatically test a range
     * of different values, including edge cases and random values.
     *
     * There are two types of values to consider.
     *
     * The first are values that should usually be included on every test run: the edge case values
     * which are common sources of bugs. For example, a function using [Int]s is more likely to fail
     * for common edge cases like zero, minus 1, positive 1, [Int.MAX_VALUE] and [Int.MIN_VALUE]
     * than for random values like 159878234.
     *
     * The second set of values are random values, which are used to give us greater breadth over the
     * test cases. In the case of a function using [Int]s, these random values could be drawn from
     * across the entire integer number line.
     */
    interface Gen<T> {
    
       /**
        * Returns the values that are considered common edge case for this type.
        *
        * For example, for [String] this may include the empty string, a string with white space,
        * a string with unicode, and a string with non-printable characters.
        *
        * The result can be empty if for type T there are no common edge cases.
        *
        * @return the common edge cases for type T.
        */
       fun edgecases(): Iterable<T>
    
       /**
        * Returns a sequence of values to be used for testing. Each value should be provided together
        * with a [Shrinker] to be used if the given value failed to pass.
        *
        * This function is invoked with an [Int] specifying the nth test value.
        *
        * @param random the [Random] instance to be used for random values. This random instance is
        * seeded using the seed provided to the test framework so that tests can be deterministically rerun.
        *
        * @param iterations the number of values that will be required for a successful test run.
        * This parameter is provided so generators know the sample space that will be required and can thus
        * distribute values accordingly.
        *
        * @param exhaustivity specifies the [Exhaustivity] mode for this generator.
        *
        * @param distribution specifies the [Distribution] to use when generating values.
        *
        * @return the test values as a lazy sequence.
        */
       fun generate(
          random: Random,
          iterations: Int,
          exhaustivity: Exhaustivity = Exhaustivity.Auto,
          distribution: Distribution
       ): Sequence<Pair<T, Shrinker<T>>>
    
       companion object
    }
    
    fun Gen.Companion.int(lower: Int, upper: Int) = object : Gen<Int> {
       private val literals = listOf(Int.MIN_VALUE, Int.MAX_VALUE, 0)
       override fun edgecases(): Iterable<Int> = literals
       override fun generate(
          random: Random,
          iterations: Int,
          exhaustivity: Exhaustivity,
          distribution: Distribution
       ): Sequence<Pair<Int, Shrinker<Int>>> {
    
          val randomized = infiniteSequence { k ->
             val range = distribution.get(k, iterations, lower.toLong()..upper.toLong())
             random.nextLong(range).toInt()
          }
    
          val exhaustive = generateInfiniteSequence {
             require(iterations <= upper - lower)
             (lower..upper).iterator().asSequence()
          }.flatten()
    
          val seq = when (exhaustivity) {
             Exhaustivity.Auto -> when {
                iterations <= upper - lower -> exhaustive
                else -> randomized
             }
             Exhaustivity.Random -> randomized
             Exhaustivity.Exhaustive -> exhaustive
          }
          return seq.map { Pair(it, IntShrinker) }
       }
    }
    
    fun <T, U> Gen<T>.map(f: (T) -> U): Gen<U> {
       val outer = this
       return object : Gen<U> {
          override fun edgecases(): Iterable<U> = outer.edgecases().map(f)
          // Note: this sketch returns precomputed shrink values (a Sequence<U>) rather
          // than a Shrinker<U>, illustrating the second option from the shrinking
          // question above; a mapped Shrinker would need a way to invert f.
          override fun generate(
             random: Random,
             iterations: Int,
             exhaustivity: Exhaustivity,
             distribution: Distribution
          ): Sequence<Pair<U, Sequence<U>>> =
             outer.generate(random, iterations, exhaustivity, distribution)
                .map { (value, shrinks) ->
                   Pair(f(value), shrinks.map { f(it) })
                }
       }
    }
    
    sealed class Distribution {
    
       abstract fun get(k: Int, iterations: Int, range: LongRange): LongRange
    
       /**
        * Splits the range into discrete "blocks" to ensure that random values are distributed
        * across the entire range in a uniform manner.
        */
       object Uniform : Distribution() {
       override fun get(k: Int, iterations: Int, range: LongRange): LongRange {
          // block k of the range: offset by range.first so that 1..1000 over
          // 10 iterations yields 1..100, 101..200, ..., 901..1000
          val step = (range.last - range.first + 1) / iterations
          return (range.first + step * k) until (range.first + step * (k + 1))
       }
       }
    
       /**
        * Values are distributed according to the Pareto distribution.
        * See https://en.wikipedia.org/wiki/Pareto_distribution
        * Sometimes referred to as the 80-20 rule
        *
        * tl;dr - more values are produced at the lower bound than the upper bound.
        */
       object Pareto : Distribution() {
          override fun get(k: Int, iterations: Int, range: LongRange): LongRange {
             // this isn't really the pareto distribution so either implement it properly, or rename this implementation
             val step = (range.last - range.first) / iterations
             return 0..(step * k + 1)
          }
       }
    }
    
    sealed class Exhaustivity {
    
       /**
        * Uses [Exhaustive] where possible, otherwise defaults to [Random].
        */
       object Auto : Exhaustivity()
    
       /**
        * Forces random generation of values.
        */
       object Random : Exhaustivity()
    
       /**
        * Forces exhaustive mode.
        */
       object Exhaustive : Exhaustivity()
    }
    
    sealed class ShrinkingMode {
    
       /**
        * Shrinking disabled
        */
       object Off : ShrinkingMode()
    
       /**
        * Shrinks until no smaller value can be found. May result in an infinite loop if shrinkers are not coded properly.
        */
       object Unbounded : ShrinkingMode()
    
       /**
        * Shrink a maximum number of times
        */
       data class Bounded(val bound: Int) : ShrinkingMode()
    }
    
    enhancement discussion property-testing 
    opened by sksamuel 49
  • New logo

As we move very close to releasing 4.0, it's time to look at the logo.

We can either go with the familiar or do something funky. Please add your own designs too.

(candidate logo images attached)

    discussion 
    opened by sksamuel 46
  • 3.2 Release Plan

    • [x] Kotlin 1.3
    • [x] Isolation Levels #379
    • [x] Co-variant Gens #471
    • [x] Package selectors in discovery #461
    • [x] Upgrade from reflections to classgraph #459
    • [x] output to show the test cases generated in property testing or table testing #411
    • [x] Better support for comparing multi-line strings #402
    • [x] Arrow to 0.80 #464
    • [x] Remember previous failing test cases #388
    • [x] New matchers for 3.2 #325
    • [x] Test Listener / Extension rework #494
    • [x] shouldThrow fix #479
    • [x] Customisable location of project config #470
    • [x] New matchers #393 #325 #435
    • [x] Failure first spec ordering #388
    • [x] BehaviorSpec doesn't allow config #495
    • [x] co-routines #386
    opened by sksamuel 45
  • Make matchers more consistent

should be... vs shouldBe should be unified. I would also prefer a consistent syntax, in either the usual function-call style or the DSL style (without dots and parentheses), but not both. See my post.
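For readers unfamiliar with the two styles in question, both of these compile against the current matcher modules:

    import io.kotest.matchers.should
    import io.kotest.matchers.string.shouldStartWith
    import io.kotest.matchers.string.startWith

    fun bothStyles() {
        "hello" should startWith("he")  // DSL style: no dots or parentheses on the subject
        "hello".shouldStartWith("he")   // function-call style
    }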

    enhancement 
    opened by helmbold 40
  • Clarify on the use of useJUnitPlatform()

The changelog states that even with Gradle 4.6, applying the JUnit Platform plugin is still required despite it being deprecated by the JUnit team. However, my hunch is that this is only the case if no

    test {
        useJUnitPlatform()
    }
    

is used. Could you please clarify (in the changelog) whether the note even applies with useJUnitPlatform()?

    question 
    opened by sschuberth 38
  • Multiplatform - only one common test gets executed on JS and time reporting for JVM tests doesn't work

Which version of Kotest are you using? 5.0.0

    I have 4 common tests: SyntaxCheckerTests, TokenizerTests, ConnectionTests and ObjectCreationUtilsTests, for some reason with a multiplatform setup similar to https://github.com/kotest/kotest-examples-multiplatform only the ConnectionTests test gets run on browser JS and all the other tests get ignored.

    All of the tests are StringSpec tests, so I don't see why the others get ignored.

Also, time reporting doesn't work properly for JVM tests: all the tests in SyntaxCheckerTests take at least 500 ms each, yet IDEA reports 0 ms for them.

    opened by JoonasC 34
  • Test not found when run from gradle

Using Gradle 4.6 and KotlinTest 3.0.1, executing gradlew clean test returns a bunch of warnings from org.reflections, but does not run any tests. When run using the JUnit runner within IntelliJ, it works fine.

    I'm currently trying to run a spring-boot integration test, shown below. I've also listed my build.gradle for reference.

    test class:

    @SpringBootTest
    class SampleRepositoryIntegrationSpec : BehaviorSpec() {
    
    	override fun listeners(): List<TestListener> = listOf(SpringListener)
    
    	@Autowired
    	lateinit var repository: SampleRepository
    
    	init {
    		given("A valid instance of Sample") {
    			val expected = Sample(comment = UUID.randomUUID().toString())
    
    			When("it is persisted") {
    				val persisted = repository.save(expected)
    
    				then("a valid uuid has been generated") {
    					persisted.uuid shouldNotBe null
    				}
    
    				When("it is queried by uuid") {
    					val found = repository.findOne(persisted.uuid)
    
    					then("is found") {
    						found shouldNotBe null
    						found.uuid shouldBe persisted.uuid
    					}
    				}
    			}
    		}
    	}
    }
    

    Build.gradle:

    buildscript {
        ext {
            spring_boot_version = '1.5.10.RELEASE'
            spring_version = '4.3.14.RELEASE'
            kotlin_version = '1.2.31'
            junit_version = '5.1.0'
            junit_plugin_version = '1.1.0'
            kotlin_test_version = '3.0.1'
        }
        repositories {
            mavenCentral()
        }
        dependencies {
            classpath "org.springframework.boot:spring-boot-gradle-plugin:$spring_boot_version"
            classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
            classpath "org.jetbrains.kotlin:kotlin-allopen:$kotlin_version"
            classpath "org.jetbrains.kotlin:kotlin-noarg:$kotlin_version"
            classpath "org.junit.platform:junit-platform-gradle-plugin:$junit_plugin_version"
        }
    }

    apply plugin: 'org.springframework.boot'
    apply plugin: 'war'
    apply plugin: 'java'
    apply plugin: 'kotlin'
    apply plugin: 'kotlin-spring'
    apply plugin: 'kotlin-jpa'
    apply plugin: 'org.junit.platform.gradle.plugin'

    sourceCompatibility = 1.8

    repositories { mavenCentral() }

    configurations { providedRuntime }

    springBoot { buildInfo() }

    dependencies {
        // Kotlin
        compile "org.jetbrains.kotlin:kotlin-stdlib-jdk8:$kotlin_version"
        compile "org.jetbrains.kotlin:kotlin-reflect:$kotlin_version"

        // Load core spring-boot starters:
        compile "org.springframework.boot:spring-boot-starter:$spring_boot_version"
        compile "org.springframework:spring-web:$spring_version"

        // Load data-access dependencies:
        compile "org.springframework.boot:spring-boot-starter-data-jpa:$spring_boot_version"
        runtime 'mysql:mysql-connector-java'

        providedCompile "org.springframework.boot:spring-boot-starter-tomcat:$spring_boot_version"

        // Load test dependencies
        testCompile 'org.springframework:spring-context'
        testCompile 'org.springframework:spring-test'
        testCompile "org.springframework.boot:spring-boot-test:$spring_boot_version"

        testCompile "org.junit.platform:junit-platform-launcher:$junit_plugin_version"
        testCompile "org.junit.platform:junit-platform-runner:$junit_plugin_version"
        testCompile "io.kotlintest:kotlintest-runner-junit5:$kotlin_test_version"
        testCompile "io.kotlintest:kotlintest-extensions-spring:$kotlin_test_version"
    }

    compileKotlin { kotlinOptions.jvmTarget = java_version }
    compileTestKotlin { kotlinOptions.jvmTarget = java_version }

    bootRun {
        systemProperties["spring.profiles.active"] = System.properties["spring.profiles.active"] ?: "development"
    }

    test {
        useJUnitPlatform()
        systemProperties["spring.profiles.active"] = System.properties["spring.profiles.active"] ?: "test"
        testLogging { exceptionFormat = 'full' }
    }

    bug 
    opened by bondpp7 33
  • Error compiling iOS tests with 5.0.0.M2 and compiler plugin

Which version of Kotest are you using? 5.0.0.M2

I'm trying out the 5.0.0.M2 preview version with the Gradle and compiler plugins. It works for JS and JVM, but I'm having trouble getting it working for iOS. Trying to run the iosX64Test task results in the following error during the linking step:

    The /Applications/Xcode-12.5.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld command returned non-zero exit code: 1.
    output:
    Undefined symbols for architecture x86_64:
      "_kfun:io.kotest.core.spec.DisplayName#<get-name>(){}kotlin.String", referenced from:
          _kfun:io.kotest.engine.test.names.DefaultDisplayNameFormatter#format(kotlin.reflect.KClass<*>){}kotlin.String in result.o
    ld: symbol(s) not found for architecture x86_64
    

Am I missing a dependency specific to iOS? I have the kotest-framework-engine dependency in my commonTest source set.

    bug external 
    opened by drampelt 32
  • withClue() fails with EmptyStackException if a coroutine switches threads

    Version: kotest-assertions-core:4.6.2

    As described in the coroutine docs on thread-local data, a coroutine may switch threads at suspension points.

    withClue() relies on thread-local data via ThreadLocalErrorCollector, without taking coroutine thread-switching into account. It fails if its block contains a coroutine resuming on a different thread.

    Example

    import io.kotest.assertions.withClue
    import kotlinx.coroutines.Dispatchers
    import kotlinx.coroutines.delay
    import kotlinx.coroutines.runBlocking
    import org.junit.jupiter.api.Test
    
    class TestCase {
        @Test
        fun `coroutine changing threads`() = runBlocking(Dispatchers.Unconfined) {
            withClue("Hello") {
                Thread.currentThread().run { println("withClue() block begins on $name, id $id") }
                delay(10)  // First suspension makes the Unconfined dispatcher resume on a different thread
                Thread.currentThread().run { println("withClue() block ends   on $name, id $id") }
            }
        }
    }
    
    Build script
    import org.jetbrains.kotlin.gradle.tasks.KotlinCompile
    
    plugins {
        kotlin("jvm") version "1.5.30"
        application
    }
    
    group = "me.oliver"
    version = "1.0-SNAPSHOT"
    
    repositories {
        mavenCentral()
    }
    
    dependencies {
        implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.5.1")
        testImplementation(kotlin("test"))
        testImplementation("io.kotest:kotest-assertions-core:4.6.2")
    }
    
    tasks.test {
        useJUnitPlatform()
    }
    
    tasks.withType<KotlinCompile>() {
        kotlinOptions.jvmTarget = "11"
    }
    
    application {
        mainClass.set("MainKt")
    }
    
    Result
    > Task :test FAILED
    withClue() block begins on Test worker @coroutine#1, id 12
    withClue() block ends   on kotlinx.coroutines.DefaultExecutor @coroutine#1, id 15
    
    java.util.EmptyStackException
    	at java.base/java.util.Stack.peek(Stack.java:102)
    	at java.base/java.util.Stack.pop(Stack.java:84)
    	at io.kotest.assertions.ThreadLocalErrorCollector.popClue(ErrorCollector.kt:32)
    	at TestCase$coroutine changing threads$1.invokeSuspend(TestCase.kt:23)
    
    bug assertions 
    opened by OliverO2 32
  • shouldBeEqualToIgnoringFields does not check types of 'other' and 'this'

Which version of Kotest are you using? 5.5.4

    Bug description

    import io.kotest.matchers.equality.shouldBeEqualToIgnoringFields
    import org.junit.jupiter.api.Test
    
    data class Sample(val field1: String, val field2: String)
    class CompletelyDifferent(numberField: Int) {
        private val numberField: Int = numberField
    }
    
    internal class MyTest {
        @Test
        fun `test passes, but should fail`() {
            val output = CompletelyDifferent(1)
            val expectedOutput = Sample("a", "b")
    
            output.shouldBeEqualToIgnoringFields(expectedOutput, Sample::field1)
        }
    }
    

    The above test passes, even though output and expectedOutput are instances of completely different classes.

    To make the test fail if something wrong is returned, an additional assertion is needed:

            output shouldBeSameInstanceAs expectedOutput
            output.shouldBeEqualToIgnoringFields(expectedOutput, Sample::field1)
    

This reduces the simplicity, and it is something that you can easily forget to add (as you assume the second assertion covers this as well).
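One alternative guard (a suggestion, not taken from the issue) is an explicit type assertion, which fails fast before the field comparison:

    import io.kotest.matchers.types.shouldBeTypeOf

    // fails immediately if `output` is not a Sample
    output.shouldBeTypeOf<Sample>()
    output.shouldBeEqualToIgnoringFields(expectedOutput, Sample::field1)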

    opened by Byte27 0
  • Node JS tests do not report failures correctly

When writing tests using kotlin-test, a failure is properly reported in the console/XML file. For example, the following code

    class Test {
        @Test
        fun failTest() {
            assertEquals("foo", "bar")
        }
    }
    

    gives the following output

    > Task :nodeTest
    
    AssertionError: Expected <foo>, actual <bar>.
    AssertionError: Expected <foo>, actual <bar>.
    	at DefaultJsAsserter.assertTrue_5alkc2(/.../kotest-test-reporting/jsMainSources/main/kotlin/kotlin/test/JsImpl.kt:23)
    	at ......
    
    Test.failTest FAILED
        AssertionError at /.../kotest-test-reporting/src/test/kotlin/Test.kt:7
    
    1 test completed, 1 failed
    There were failing tests
    > Task :test FAILED
    

    But if I use Kotest, the following code

    class Test : FunSpec({
        test("failTest") {
            "foo" shouldBe "bar"
        }
    })
    

    The only output I get is

    > Task :nodeTest FAILED
    FAILURE: Build failed with an exception.
    * What went wrong:
    Execution failed for task ':nodeTest'.
    > command '/.../.gradle/nodejs/node-v16.13.0-darwin-x64/bin/node' exited with errors (exit code: 1)
    * Try:
    > Run with --stacktrace option to get the stack trace.
    > Run with --info or --debug option to get more log output.
    > Run with --scan to get full insights.
    * Get more help at https://help.gradle.org
    

After a bit of investigation: Kotlin had a similar issue with 1.7.20 (fix commit), so there might be some pointers in there. It should be noted that the tests themselves run correctly; if you run the Node JS command manually, you get the Mocha test output and can see the failures reported correctly.

Versions used:

    • Kotlin 1.7.21
    • Kotest 5.5.4
    opened by SeekDaSky 0
  • IJ language injection

    Partial fix for #2916

Re-implements @Language to be Kotlin-multiplatform, using a workaround suggested by JetBrains.

    This injection makes the assertions easier to use, as IntelliJ will provide highlighting, autocompletion, and quick fixes (read more about language injection).

    TODO

    • [ ] Verify that there is no clash on JVM between Kotest's @Language and IntelliJ's implementation


    Further improvements

Languages aren't injected into:

    • receivers https://youtrack.jetbrains.com/issue/KTIJ-12951/
    • infix parameters

Perhaps the Kotest IJ plugin could correctly implement the language injections to work with Kotlin.

    Related

    • https://github.com/kotest/kotest-intellij-plugin/pull/218 - this PR tried to add the language injections on a per-function basis, but I don't think it worked once the plugin was released.
    opened by aSemy 0
  • how to mock stdin and stdout correctly?

Studying competitive programming with Kotest. Since the problems are evaluated via stdin and stdout, I've been using this mock to test my functions.

    import io.kotest.matchers.shouldBe
    import java.io.ByteArrayOutputStream
    import java.io.PrintStream

    fun testSolution(input: String, output: String, fn: () -> Unit) =
        mock(input.trimIndent()) { fn() } shouldBe output.toOutput()

    fun mock(input: String, block: () -> Unit): String =
        captureSystemOut { mockSystemIn(input, block) }

    fun String.toOutput() = this.trimIndent() + '\n'

    private fun mockSystemIn(input: String, block: () -> Unit) {
        val old = System.`in`
        System.setIn(input.byteInputStream())
        try {
            block()
        } finally {
            // restore stdin even if the block throws
            System.setIn(old)
        }
    }

    private fun captureSystemOut(block: () -> Unit): String {
        val old = System.out
        val newOut = ByteArrayOutputStream()
        System.setOut(PrintStream(newOut))
        try {
            block()
        } finally {
            // flush captured output and restore stdout even if the block throws
            System.out.flush()
            System.setOut(old)
        }
        return newOut.toString()
    }
    

However, with this approach, error messages aren't very helpful.

    opened by scarf005 3
  • Fix for issue-3306 - Removing default location for htmlReporter and using the default value from the constructor

    Context

    #3306

    1. The outputDir customizable field in HTMLReporter does not work if the gradle.build.dir system property is not set.

2. Running tests using the kotest sidebar menu will produce reports in the top-level directory even when the gradle.build.dir system property is set in build.gradle and a different outputDir is used (probably because the kotest sidebar execution context doesn't have the gradle.build.dir system property set).

    Root Cause

In HTMLReporter, if the gradle.build.dir system property is not set, a DefaultLocation is used, which ignores the outputDir field.

    Changes Made

Removing the DefaultLocation variable and replacing its usage with outputDir.

    Expected Result

    When a user does not specify an outputDir, the default string from the constructor will be used and there is no change in logic.

When a user does specify an outputDir, the HTML report will be placed within the specified outputDir.
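For example, a usage sketch based on the names in this PR (the HTMLReporter constructor call and its import are assumed, not verified):

    import io.kotest.core.config.AbstractProjectConfig
    // import for HTMLReporter omitted; the class name is taken from this PR's description

    object ProjectConfig : AbstractProjectConfig() {
       // the custom outputDir should now be honoured even when the
       // gradle.build.dir system property is not set
       override fun extensions() = listOf(HTMLReporter(outputDir = "reports/kotest-html"))
    }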

    opened by arvarik 0
  • Where are the Android matchers?

    opened by christophehenry 0