Asynchronous Http and WebSocket Client library for Java

Overview


Follow @AsyncHttpClient on Twitter.

The AsyncHttpClient (AHC) library allows Java applications to easily execute HTTP requests and asynchronously process HTTP responses. The library also supports the WebSocket Protocol.

It's built on top of Netty. It's currently compiled on Java 8 but runs on Java 9 too.

New Roadmap RFCs!

Well, not really RFCs, but as I am ramping up to release a new version, I would appreciate comments from the community. Please add an issue, label it RFC, and I'll take a look!

This Repository is Actively Maintained

@TomGranot is the current maintainer of this repository. You should feel free to reach out to him in an issue here or on Twitter for anything regarding this repository.

Installation

Binaries are deployed on Maven Central.

Import the AsyncHttpClient Bill of Materials (BOM) to add dependency management for AsyncHttpClient artifacts to your project:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.asynchttpclient</groupId>
            <artifactId>async-http-client-bom</artifactId>
            <version>LATEST_VERSION</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Add a dependency on the main AsyncHttpClient artifact:

<dependencies>
    <dependency>
    	<groupId>org.asynchttpclient</groupId>
    	<artifactId>async-http-client</artifactId>
    </dependency>
</dependencies>

The async-http-client-extras-* and other modules can also be added without having to specify the version for each dependency, because they are all managed via the BOM.
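
For example, assuming the BOM above is imported, an extras module can be declared without a version (the artifactId below is illustrative; check Maven Central for the exact module names):

```xml
<dependencies>
    <dependency>
        <groupId>org.asynchttpclient</groupId>
        <!-- illustrative module name; the version is managed by the imported BOM -->
        <artifactId>async-http-client-extras-rxjava</artifactId>
    </dependency>
</dependencies>
```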

Version

AHC doesn't use SEMVER, and won't.

  • MAJOR = huge refactoring
  • MINOR = new features and minor API changes; upgrading should require no more than 1 hour of work to adapt sources
  • FIX = no API change, just bug fixes; only these are source and binary compatible within the same minor version

Check CHANGES.md for migration path between versions.

Basics

Feel free to check the Javadoc or the code for more information.

Dsl

Import the Dsl helpers to use convenient methods to bootstrap components:

import static org.asynchttpclient.Dsl.*;

Client

import static org.asynchttpclient.Dsl.*;

AsyncHttpClient asyncHttpClient = asyncHttpClient();

AsyncHttpClient instances must be closed (call the close method) once you're done with them, typically when shutting down your application. If you don't, you'll experience threads hanging and resource leaks.

AsyncHttpClient instances are intended to be global resources that share the same lifecycle as the application. AHC will usually underperform if you create a new client for each request, as it will create new threads and connection pools each time. It's possible to create shared resources (EventLoop and Timer) beforehand and pass them to multiple client instances in the config. You'll then be responsible for closing those shared resources.
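
As a sketch of the recommended lifecycle (AsyncHttpClient implements java.io.Closeable, so try-with-resources works for short-lived tools; a long-running application would instead call close() on shutdown):

```java
import static org.asynchttpclient.Dsl.*;

import org.asynchttpclient.AsyncHttpClient;
import org.asynchttpclient.Response;

// One client per application; try-with-resources guarantees close() runs
// even on failure, releasing the IO threads and the connection pool.
try (AsyncHttpClient client = asyncHttpClient()) {
    Response response = client.prepareGet("http://www.example.com/").execute().get();
    System.out.println(response.getStatusCode());
}
```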

Configuration

Finally, you can also configure the AsyncHttpClient instance via its AsyncHttpClientConfig object:

import static org.asynchttpclient.Dsl.*;

AsyncHttpClient c = asyncHttpClient(config().setProxyServer(proxyServer("127.0.0.1", 38080)));

HTTP

Sending Requests

Basics

AHC provides two APIs for defining requests: bound and unbound. AsyncHttpClient and Dsl provide methods for standard HTTP methods (POST, PUT, etc.), but you can also pass a custom one.

import org.asynchttpclient.*;

// bound
Future<Response> whenResponse = asyncHttpClient.prepareGet("http://www.example.com/").execute();

// unbound
Request request = get("http://www.example.com/").build();
Future<Response> whenResponse = asyncHttpClient.executeRequest(request);

Setting Request Body

Use the setBody method to add a body to the request.

This body can be of type:

  • java.io.File
  • byte[]
  • List<byte[]>
  • String
  • java.nio.ByteBuffer
  • java.io.InputStream
  • Publisher<io.netty.buffer.ByteBuf>
  • org.asynchttpclient.request.body.generator.BodyGenerator

BodyGenerator is a generic abstraction that lets you create request bodies on the fly. Have a look at FeedableBodyGenerator if you're looking for a way to pass request chunks on the fly.
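
For example (a minimal sketch using the unbound Dsl; the URL and JSON payload are placeholders):

```java
import static org.asynchttpclient.Dsl.*;

import org.asynchttpclient.Request;

// A POST with a String body; any of the types listed above
// (File, byte[], InputStream, ...) could be passed to setBody instead.
Request request = post("http://www.example.com/resource")
        .setHeader("Content-Type", "application/json")
        .setBody("{\"hello\":\"world\"}")
        .build();
```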

Multipart

Use the addBodyPart method to add a multipart part to the request.

This part can be of type:

  • ByteArrayPart
  • FilePart
  • InputStreamPart
  • StringPart
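
For example (a sketch; the field names and file path are placeholders):

```java
import static org.asynchttpclient.Dsl.*;

import java.io.File;
import org.asynchttpclient.Request;
import org.asynchttpclient.request.body.multipart.FilePart;
import org.asynchttpclient.request.body.multipart.StringPart;

// A multipart POST mixing a plain form field and a file upload.
Request upload = post("http://www.example.com/upload")
        .addBodyPart(new StringPart("comment", "a text field"))
        .addBodyPart(new FilePart("file", new File("/tmp/report.pdf")))
        .build();
```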

Dealing with Responses

Blocking on the Future

execute methods return a java.util.concurrent.Future. You can simply block the calling thread to get the response.

Future<Response> whenResponse = asyncHttpClient.prepareGet("http://www.example.com/").execute();
Response response = whenResponse.get();

This is useful for debugging, but you'll most likely hurt performance or create bugs when running such code in production. The point of using a non-blocking client is to NOT BLOCK the calling thread!

Setting callbacks on the ListenableFuture

execute methods actually return a org.asynchttpclient.ListenableFuture similar to Guava's. You can configure listeners to be notified of the Future's completion.

ListenableFuture<Response> whenResponse = ???;
Runnable callback = () -> {
	try {
		Response response = whenResponse.get();
		System.out.println(response);
	} catch (InterruptedException | ExecutionException e) {
		e.printStackTrace();
	}
};
java.util.concurrent.Executor executor = ???;
whenResponse.addListener(callback, executor);

If the executor parameter is null, the callback will be executed in the IO thread. You MUST NEVER PERFORM BLOCKING operations there, such as sending another request and blocking on its future.

Using custom AsyncHandlers

execute methods can take an org.asynchttpclient.AsyncHandler to be notified of the different events, such as receiving the status, the headers, and body chunks. When you don't specify one, AHC will use an org.asynchttpclient.AsyncCompletionHandler.

AsyncHandler methods let you abort processing early (return AsyncHandler.State.ABORT) and return a computation result from onCompleted that will be used as the Future's result. See the AsyncCompletionHandler implementation as an example.

The sample below just captures the response status and skips processing the response body chunks.

Note that returning ABORT closes the underlying connection.

import static org.asynchttpclient.Dsl.*;
import org.asynchttpclient.*;
import io.netty.handler.codec.http.HttpHeaders;

Future<Integer> whenStatusCode = asyncHttpClient.prepareGet("http://www.example.com/")
.execute(new AsyncHandler<Integer>() {
	private Integer status;
	@Override
	public State onStatusReceived(HttpResponseStatus responseStatus) throws Exception {
		status = responseStatus.getStatusCode();
		return State.ABORT;
	}
	@Override
	public State onHeadersReceived(HttpHeaders headers) throws Exception {
		return State.ABORT;
	}
	@Override
	public State onBodyPartReceived(HttpResponseBodyPart bodyPart) throws Exception {
		return State.ABORT;
	}
	@Override
	public Integer onCompleted() throws Exception {
		return status;
	}
	@Override
	public void onThrowable(Throwable t) {
	}
});

Integer statusCode = whenStatusCode.get();

Using Continuations

ListenableFuture has a toCompletableFuture method that returns a CompletableFuture. Beware that canceling this CompletableFuture won't properly cancel the ongoing request. There's a very good chance we'll return a CompletionStage instead in the next release.

CompletableFuture<Response> whenResponse = asyncHttpClient
            .prepareGet("http://www.example.com/")
            .execute()
            .toCompletableFuture()
            .exceptionally(t -> { /* Something wrong happened... */ return null; })
            .thenApply(response -> { /* Do something with the Response */ return response; });
whenResponse.join(); // wait for completion

You may get the complete Maven project for this simple demo from org.asynchttpclient.example.

WebSocket

Async Http Client also supports WebSocket. You need to pass a WebSocketUpgradeHandler in which you register a WebSocketListener.

WebSocket websocket = c.prepareGet("ws://demos.kaazing.com/echo")
      .execute(new WebSocketUpgradeHandler.Builder().addWebSocketListener(
          new WebSocketListener() {

          @Override
          public void onOpen(WebSocket websocket) {
              websocket.sendTextFrame("...").sendTextFrame("...");
          }

          @Override
          public void onClose(WebSocket websocket) {
          }
          
          @Override
          public void onTextFrame(String payload, boolean finalFragment, int rsv) {
              System.out.println(payload);
          }

          @Override
          public void onError(Throwable t) {
          }
      }).build()).get();

Reactive Streams

AsyncHttpClient has built-in support for reactive streams.

You can pass a request body as a Publisher<ByteBuf> or a ReactiveStreamsBodyGenerator.

You can also pass a StreamedAsyncHandler<T> whose onStream method will be notified with a Publisher<HttpResponseBodyPart>.

See tests in package org.asynchttpclient.reactivestreams for examples.
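
A rough sketch of such a handler (interface and package names per AHC 2.x; treat the exact signatures as assumptions and check the Javadoc):

```java
import io.netty.handler.codec.http.HttpHeaders;
import org.asynchttpclient.*;
import org.asynchttpclient.handler.StreamedAsyncHandler;
import org.reactivestreams.Publisher;
import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

// Streams body parts as they arrive instead of buffering the whole
// response; your Subscriber controls the demand (backpressure).
StreamedAsyncHandler<Void> handler = new StreamedAsyncHandler<Void>() {
    @Override
    public State onStream(Publisher<HttpResponseBodyPart> publisher) {
        publisher.subscribe(new Subscriber<HttpResponseBodyPart>() {
            @Override public void onSubscribe(Subscription s) { s.request(Long.MAX_VALUE); }
            @Override public void onNext(HttpResponseBodyPart part) { /* consume bytes */ }
            @Override public void onError(Throwable t) { }
            @Override public void onComplete() { }
        });
        return State.CONTINUE;
    }
    @Override public State onStatusReceived(HttpResponseStatus status) { return State.CONTINUE; }
    @Override public State onHeadersReceived(HttpHeaders headers) { return State.CONTINUE; }
    @Override public State onBodyPartReceived(HttpResponseBodyPart bodyPart) { return State.CONTINUE; }
    @Override public void onThrowable(Throwable t) { t.printStackTrace(); }
    @Override public Void onCompleted() { return null; }
};
```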

WebDAV

AsyncHttpClient has built-in support for the WebDAV protocol. The API can be used the same way normal HTTP requests are made:

Request mkcolRequest = new RequestBuilder("MKCOL").setUrl("http://host:port/folder1").build();
Response response = c.executeRequest(mkcolRequest).get();

or

Request propFindRequest = new RequestBuilder("PROPFIND").setUrl("http://host:port").build();
Response response = c.executeRequest(propFindRequest, new AsyncHandler() {
  // ...
}).get();

More

You can find more information on Jean-François Arcand's blog. Jean-François is the original author of this library. Code is sometimes not up-to-date but gives a pretty good idea of advanced features.

User Group

Keep up to date on the library development by joining the Asynchronous HTTP Client discussion group.

Google Group

Contributing

Of course, Pull Requests are welcome.

Here are the few rules we'd like you to respect if you do so:

  • Only edit the code related to the suggested change, so DON'T automatically format the classes you've edited.
  • Use IntelliJ default formatting rules.
  • Regarding licensing:
    • You must be the original author of the code you suggest.
    • You must give the copyright to "the AsyncHttpClient Project".

Comments
  • Grizzly provider TimeoutException making async requests


    When making async requests using the Grizzly provider (from AHC 2.0.0-SNAPSHOT), I get some TimeoutExceptions that should not occur. The server is serving these requests very rapidly, and the JVM isn't GCing very much. The requests serve in a fraction of a second, but the Grizzly provider says they timed out after 9 seconds. If I set the Grizzly provider's timeout to a higher number of seconds, then it times out after that many seconds instead.

    Some stack trace examples:

    java.util.concurrent.TimeoutException: Timeout exceeded
        at org.asynchttpclient.providers.grizzly.GrizzlyAsyncHttpProvider.timeout(GrizzlyAsyncHttpProvider.java:485)
        at org.asynchttpclient.providers.grizzly.GrizzlyAsyncHttpProvider$3.onTimeout(GrizzlyAsyncHttpProvider.java:276)
        at org.glassfish.grizzly.utils.IdleTimeoutFilter$DefaultWorker.doWork(IdleTimeoutFilter.java:382)
        at org.glassfish.grizzly.utils.IdleTimeoutFilter$DefaultWorker.doWork(IdleTimeoutFilter.java:362)
        at org.glassfish.grizzly.utils.DelayedExecutor$DelayedRunnable.run(DelayedExecutor.java:158)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)


    another stack trace:

    java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Timeout exceeded
        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
        at java.util.concurrent.FutureTask.get(FutureTask.java:111)
        at org.asynchttpclient.providers.grizzly.GrizzlyResponseFuture.get(GrizzlyResponseFuture.java:165)
        at org.ebaysf.webclient.benchmark.NingAhcGrizzlyBenchmarkTest.asyncWarmup(NingAhcGrizzlyBenchmarkTest.java:105)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.ebaysf.webclient.benchmark.AbstractBenchmarkTest.doBenchmark(AbstractBenchmarkTest.java:168)
        at org.ebaysf.webclient.benchmark.NingAhcGrizzlyBenchmarkTest.testAsyncLargeResponses(NingAhcGrizzlyBenchmarkTest.java:84)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
        at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
        at org.junit.rules.RunRules.evaluate(RunRules.java:20)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:45)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:119)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:101)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.maven.surefire.booter.ProviderFactory$ClassLoaderProxy.invoke(ProviderFactory.java:103)
        at com.sun.proxy.$Proxy0.invoke(Unknown Source)
        at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:150)
        at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcess(SurefireStarter.java:91)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:69)
    Caused by: java.util.concurrent.TimeoutException: Timeout exceeded
        at org.asynchttpclient.providers.grizzly.GrizzlyAsyncHttpProvider.timeout(GrizzlyAsyncHttpProvider.java:485)
        at org.asynchttpclient.providers.grizzly.GrizzlyAsyncHttpProvider$3.onTimeout(GrizzlyAsyncHttpProvider.java:276)
        at org.glassfish.grizzly.utils.IdleTimeoutFilter$DefaultWorker.doWork(IdleTimeoutFilter.java:382)
        at org.glassfish.grizzly.utils.IdleTimeoutFilter$DefaultWorker.doWork(IdleTimeoutFilter.java:362)
        at org.glassfish.grizzly.utils.DelayedExecutor$DelayedRunnable.run(DelayedExecutor.java:158)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)

    Here's what my asyncWarmup() method looks like:

    public void asyncWarmup(final String testUrl) {
        List<Future<Response>> futures = new ArrayList<Future<Response>>(warmupRequests);
        for (int i = 0; i < warmupRequests; i++) {
            try {
                futures.add(this.client.prepareGet(testUrl).execute());
            } catch (IOException e) {
                System.err.println("Failed to execute get at iteration #" + i);
            }
        }
    
        for (Future<Response> future : futures) {
            try {
                future.get();
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        }
    }
    

    And here's how the client is initialized:

    @Override
    protected void setup() {
        super.setup();
    
        GrizzlyAsyncHttpProviderConfig providerConfig = new GrizzlyAsyncHttpProviderConfig();
        AsyncHttpClientConfig config = new AsyncHttpClientConfig.Builder()
                .setAsyncHttpClientProviderConfig(providerConfig)
                .setMaximumConnectionsTotal(-1)
                .setMaximumConnectionsPerHost(4500)
                .setCompressionEnabled(false)
                .setAllowPoolingConnection(true /* keep-alive connection */)
                // .setAllowPoolingConnection(false /* no keep-alive connection */)
                .setConnectionTimeoutInMs(9000).setRequestTimeoutInMs(9000)
                .setIdleConnectionInPoolTimeoutInMs(3000).build();
    
        this.client = new AsyncHttpClient(new GrizzlyAsyncHttpProvider(config), config);
    
    }
    
    Grizzly 
    opened by jbrittain 55
  • Allow DefaultSslEngineFactory subclass customization of the SslContext


    See #1170 for context.

    If you have ideas for how to usefully test this, I'm happy to write them up, but it wasn't obvious to me how to usefully test this change.

    Enhancement 
    opened by marshallpierce 38
  • FeedableBodyGenerator - LEAK: ByteBuf.release() was not called before it's garbage-collected.


    When I use custom FeedableBodyGenerator or SimpleFeedableBodyGenerator I see the next error message:

    [error] i.n.u.ResourceLeakDetector - LEAK: ByteBuf.release() was not called before it's garbage-collected. Enable advanced leak reporting to find out where the leak occurred. To enable advanced leak reporting, specify the JVM option '-Dio.netty.leakDetection.level=advanced' or call ResourceLeakDetector.setLevel() See http://netty.io/wiki/reference-counted-objects.html for more information.

    I guess the main problem is here: https://github.com/netty/netty/blob/4.0/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java#L990-L992 they just assign null to message, but should also call 'release' (ReferenceCountUtil.release(msg);) Am I right?

    Defect 
    opened by mielientiev 37
  • AsyncHttpClient does not close sockets under heavy load (1.9 only)


    If you create 1000 requests in a very short time frame and use connection pool with AsyncHttpClient 1.9.21 and Netty 3.10.1, then some sockets will leak and stay open even past the idle socket reaper. This was initially filed as https://github.com/playframework/playframework/issues/5215, but can be replicated without Play WS.

    Created a reproducing test case here: https://github.com/wsargent/asynchttpclient-socket-leak

    If you have 50 requests, they'll all be closed immediately. If you have 1000 requests, they'll stay open for a while. After roughly two minutes, AHC will close off all idle sockets, but up to 30 will never die and will always be established.

    To see the dangling sockets, run lsof and grep for the id of the Java process:

    sudo lsof -i | grep 31602
    

    You'll see

    java      31602       wsargent   89u  IPv6 0xe1b25a8062380645      0t0  TCP 192.168.1.106:58646->ec2-54-173-126-144.compute-1.amazonaws.com:https (ESTABLISHED)
    

    The client port number is your key into the application: if you search for "58646" in application.log, then you'll see that there's a connection associated with it:

    2015-11-02 20:41:38,496 [DEBUG] from org.jboss.netty.handler.logging.LoggingHandler in New I/O worker #1 - [id: 0x5650b318, /192.168.1.106:58646 => playframework.com/54.173.126.144:443] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=2357, cap=2357)
    

    You can see the lifecycle of a handle by using grep:

    grep "0x5650b318" application.log
    

    and what's interesting is that while most ids will have a CLOSE / CLOSED lifecycle associated with them:

    2015-11-02 20:41:45,878 [DEBUG] from org.jboss.netty.handler.logging.LoggingHandler in Hashed wheel timer #1 - [id: 0x34804fcc, /192.168.1.106:59122 => playframework.com/54.173.126.144:443] WRITE: BigEndianHeapChannelBuffer(ridx=0, widx=69, cap=69)
    2015-11-02 20:41:46,427 [DEBUG] from org.jboss.netty.handler.logging.LoggingHandler in New I/O worker #2 - [id: 0x34804fcc, /192.168.1.106:59122 => playframework.com/54.173.126.144:443] CLOSE
    2015-11-02 20:41:46,427 [DEBUG] from org.jboss.netty.handler.logging.LoggingHandler in New I/O worker #2 - [id: 0x34804fcc, /192.168.1.106:59122 :> playframework.com/54.173.126.144:443] DISCONNECTED
    2015-11-02 20:41:46,434 [DEBUG] from org.jboss.netty.handler.logging.LoggingHandler in New I/O worker #2 - [id: 0x34804fcc, /192.168.1.106:59122 :> playframework.com/54.173.126.144:443] UNBOUND
    2015-11-02 20:41:46,434 [DEBUG] from org.jboss.netty.handler.logging.LoggingHandler in New I/O worker #2 - [id: 0x34804fcc, /192.168.1.106:59122 :> playframework.com/54.173.126.144:443] CLOSED
    

    In the case of "0x5650b318", there's no CLOSE event happening here. In addition, there's a couple of lines that say it's a cached channel:

    2015-11-02 20:41:33,340 [DEBUG] from com.ning.http.client.providers.netty.request.NettyRequestSender in default-akka.actor.default-dispatcher-4 - Using cached Channel [id: 0x5650b318, /192.168.1.106:58646 => playframework.com/54.173.126.144:443]
    2015-11-02 20:41:33,340 [DEBUG] from com.ning.http.client.providers.netty.request.NettyRequestSender in default-akka.actor.default-dispatcher-4 - Using cached Channel [id: 0x5650b318, /192.168.1.106:58646 => playframework.com/54.173.126.144:443]
    

    So I think Netty is not closing cached channels even if they are idle, in some circumstances.

    Defect Netty Contributions Welcome! 
    opened by wsargent 37
  • IOException: Too many connections per host <#>


    After we set .setMaxConnectionsPerHost(64), our server seemed to work happily. We can pound it with traffic and see very few issues with connections, since it's so efficient at pooling connections that are in good condition.

    Lib: async-http-client 1.9.15 java version "1.7.0_67"

    After a while, however (about 24 hours), we start getting the above exception coming from the ChannelManager.

    Looking at the NettyResponseListener code, I noticed something odd.

    https://github.com/AsyncHttpClient/async-http-client/blob/b85d5b3505d9f6e80d278fef88876f6546e73079/providers/netty4/src/main/java/org/asynchttpclient/providers/netty4/request/NettyRequestSender.java

    In NettyRequestSender.sendRequestWithNewChannel(), there's this bit:

            boolean channelPreempted = false;
            String partition = null;
    
            try {            // Do not throw an exception when we need an extra connection for a
                // redirect.
                if (!reclaimCache) {
    
                    // only compute when maxConnectionPerHost is enabled
                    // FIXME clean up
                    if (config.getMaxConnectionsPerHost() > 0)
                        partition = future.getPartitionId();
    
                    channelManager.preemptChannel(partition);
                }
    
                if (asyncHandler instanceof AsyncHandlerExtensions)
                    AsyncHandlerExtensions.class.cast(asyncHandler).onOpenConnection();
    
                ChannelFuture channelFuture = connect(request, uri, proxy, useProxy, bootstrap, asyncHandler);
                channelFuture.addListener(new NettyConnectListener<T>(future, this, channelManager, channelPreempted, partition));
    
            } catch (Throwable t) {
                if (channelPreempted)
                    channelManager.abortChannelPreemption(partition);
    
                abort(null, future, t.getCause() == null ? t : t.getCause());
            }
    

    Notice that channelPreempted never gets written to. Isn't channelPreempted = true missing from the block where the channel is preempted?

    Shouldn't it be:

            boolean channelPreempted = false;
            String partition = null;
    
            try {            // Do not throw an exception when we need an extra connection for a
                // redirect.
                if (!reclaimCache) {
    
                    // only compute when maxConnectionPerHost is enabled
                    // FIXME clean up
                    if (config.getMaxConnectionsPerHost() > 0)
                        partition = future.getPartitionId();
    
                    channelManager.preemptChannel(partition);
                    channelPreempted = true;
                }
    
                if (asyncHandler instanceof AsyncHandlerExtensions)
                    AsyncHandlerExtensions.class.cast(asyncHandler).onOpenConnection();
    
                ChannelFuture channelFuture = connect(request, uri, proxy, useProxy, bootstrap, asyncHandler);
                channelFuture.addListener(new NettyConnectListener<T>(future, this, channelManager, channelPreempted, partition));
    
            } catch (Throwable t) {
                if (channelPreempted)
                    channelManager.abortChannelPreemption(partition);
    
                abort(null, future, t.getCause() == null ? t : t.getCause());
            }
    

    The same class for netty3 has the correct code:

    https://github.com/AsyncHttpClient/async-http-client/blob/b85d5b3505d9f6e80d278fef88876f6546e73079/providers/netty3/src/main/java/org/asynchttpclient/providers/netty3/request/NettyRequestSender.java

    Waiting for user 
    opened by yoeduardoj 27
  • Grizzly provider fails to handle HEAD with Content-Length header


    I am trying to use the Grizzly provider (v1.7.6), and noticed a timeout for a simple HEAD request. Since this is a local test with a 15-second timeout, it looks like this is due to blocking. The same does not happen with the Netty provider.

    My best guess at the underlying problem is that the Grizzly provider expects there to be content to read since Content-Length is returned. This would be an incorrect assumption, since the HTTP specification explicitly states that HEAD responses may contain a length indicator, but there is never a payload entity to return.

    Looking at the Netty provider code, I can see explicit handling for this use case, where the connection is closed and any content is flushed (in case the server did send something). I did not see similar handling in the Grizzly provider, but since the implementation's code structure is very different, it may reside somewhere else.

    opened by cowtowncoder 24
  • Spawning AHC 2.0 w/ Netty 4 instances very fast leads to fd/thread/memory starvation


    After updating the Play codebase to AHC 2.0-alpha9, I've started to experience issues with non-terminating tests because of OutOfMemoryException. Until today, I wrongly assumed this was caused by the changes I made in AHC to support reactive streams, but that turned out not to be the case. In fact, I can reproduce it even with AHC 2.0-alpha8, which doesn't include the reactive streams support.

    Here is a link to the truly long thread dump demonstrating that AHC 2.0-alpha8 is leaking threads https://gist.github.com/dotta/6e388962cf0d904e8170

    This issue is currently preventing https://github.com/playframework/playframework/pull/5082 from building successfully.

    Defect Netty 
    opened by dotta 23
  • Backpressure in AsyncHandler


    AsyncHandler provides no mechanism to send back pressure on receiving the body parts.

    Imagine you have a server that stores large files on Amazon S3, and streams them out to clients, using async http client to connect to S3. Now imagine you have a very slow client, that connects and downloads a file. The slow client pushes back on the server via TCP. However, async http client will keep on calling onBodyPartReceived as fast as S3 provides it with data. The AsyncHandler implementation will have three choices:

    1. Block. Then it's blocking a worker thread, preventing other concurrent operations from happening. This is not an option.
    2. Buffer. Eventually this will cause an OutOfMemoryError. This is not an option.
    3. Drop. Then the client gets a corrupted file. This is not an option.

    AsyncHandler therefore needs a mechanism to propagate back pressure when handling body parts. One possibility here is to provide a method to say whether you are interested in receiving more data or not. This would correspond to a Channel.setReadable(true/false) in the netty provider, which will push back via TCP flow control. This could either be provided by injecting some sort of "channel" object into the AsyncHandler, or, since HttpResponseBodyPart already provides mechanisms for talking back to the channel (eg closeUnderlyingConnection()), it could be provided there.

    Enhancement Contributions Welcome! 
    opened by jroper 23
  • Upgrade to Netty 4.1


    We've just released 2.0, which targets Netty 4.0, so we won't rush into this.

    This issue is more of a mind map of what's to do:

    • drop ChannelId backport
    • drop DNS backport
    • investigate Netty's ChannelPool
    Enhancement 
    opened by slandelle 22
  • wss through proxy can't connect


    I am unable to establish a wss connection using async-http-client with either the Netty or the Grizzly provider, when connecting through a proxy server. In the Netty case, what appears to happen is that NettyAsyncHttpProvider issues a CONNECT request to the proxy; however, the next request, which I would expect to be the upgrade request, is not correct.

    The logs look like

    DefaultHttpRequest(chunked: false)
    CONNECT 192.168.1.124:443 HTTP/1.0
    Upgrade: WebSocket
    Connection: Upgrade
    Origin: http://192.168.1.124:443
    Sec-WebSocket-Key: y3xU3BMOqCn6b3JBwKtEVA==
    Sec-WebSocket-Version: 13
    Host: 192.168.1.124
    Proxy-Connection: keep-alive
    Accept: /
    User-Agent: NING/1.0
    
    using Channel
    [id: 0x38827968]
    
    WebSocket Closed
    

    The important piece is that there is no GET between the CONNECT and the Upgrade: WebSocket. It's also not clear whether it reads the response from the proxy server before sending the https request. It appears that the websocket is closed when an HTTP/1.0 200 Connection established is received from the proxy server, which is interpreted as an invalid response.

    opened by peoplesmeat 22
  • StreamedResponsePublisher cancelled() does not close the channel properly.


    StreamedResponsePublisher does not cancel the channel properly.

    in https://github.com/AsyncHttpClient/async-http-client/blob/master/client/src/main/java/org/asynchttpclient/netty/handler/StreamedResponsePublisher.java

    We are using play-ws client which wraps AsyncHttpClient. When streams are cancelled (such as the downloader cancelling their download) we notice the following:

    1. this.logger.debug("Subscriber cancelled, ignoring the rest of the body"); is called, so the publisher does know that the stream is cancelled.

    However, this does not result in the connection being closed. The close callback is registered here (https://github.com/AsyncHttpClient/async-http-client/blob/90124a5caf414658d537799116c7d4f3d1ad45dd/client/src/main/java/org/asynchttpclient/netty/channel/ChannelManager.java#L407), but it is never invoked, so no channel-closed exception is ever raised.

    KeepAlive is false and we have the various connection pool timeouts set to 15 seconds.

    Also, it appears that the callback is set as an attribute on the channel - is that the correct approach, or could the callback be overwritten by more incoming data or another thread?

    Any thoughts?
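    As a workaround sketch until the publisher is fixed: returning `State.ABORT` from a plain `AsyncHandler` mid-body should make AHC discard the connection rather than drain it back into the pool. The URL and the `cancelled` flag below are hypothetical stand-ins for the subscriber's cancel signal:

    ```java
    import org.asynchttpclient.AsyncCompletionHandler;
    import org.asynchttpclient.AsyncHttpClient;
    import org.asynchttpclient.HttpResponseBodyPart;
    import org.asynchttpclient.Response;
    import java.util.concurrent.atomic.AtomicBoolean;
    import static org.asynchttpclient.Dsl.asyncHttpClient;

    public class AbortingDownload {
        public static void main(String[] args) throws Exception {
            AtomicBoolean cancelled = new AtomicBoolean(); // stands in for the cancel signal
            try (AsyncHttpClient client = asyncHttpClient()) {
                client.prepareGet("https://example.com/big-file")
                        .execute(new AsyncCompletionHandler<Response>() {
                            @Override
                            public State onBodyPartReceived(HttpResponseBodyPart content) throws Exception {
                                // ABORT mid-body should make AHC close the channel
                                // instead of returning it to the pool.
                                return cancelled.get() ? State.ABORT : super.onBodyPartReceived(content);
                            }
                            @Override
                            public Response onCompleted(Response response) {
                                return response;
                            }
                        }).get();
            }
        }
    }
    ```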

    Defect 
    opened by vsabella 21
  • java.io.IOException: Invalid Status code=200 text=OK

    Hello! I'm attempting to set up a websocket with AsyncHttpClient, similar to how it's set up here: https://www.baeldung.com/async-http-client-websockets

    But I always get the same error and the socket closes: java.io.IOException: Invalid Status code=200 text=OK. Any ideas of what I might try? I know the websocket server is working; I tested it with a browser plugin and I can see the messages streaming.
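    A 200 instead of the expected 101 Switching Protocols during the upgrade usually means the URL points at a plain HTTP endpoint. A minimal sketch of a working setup, assuming a `ws://` endpoint (the URL below is a placeholder):

    ```java
    import org.asynchttpclient.AsyncHttpClient;
    import org.asynchttpclient.ws.WebSocket;
    import org.asynchttpclient.ws.WebSocketListener;
    import org.asynchttpclient.ws.WebSocketUpgradeHandler;
    import static org.asynchttpclient.Dsl.asyncHttpClient;

    public class WsClient {
        public static void main(String[] args) throws Exception {
            try (AsyncHttpClient client = asyncHttpClient()) {
                // The scheme must be ws:// (or wss://), and the path must be
                // the one the server actually upgrades on.
                WebSocket ws = client.prepareGet("ws://localhost:8080/ws")
                        .execute(new WebSocketUpgradeHandler.Builder()
                                .addWebSocketListener(new WebSocketListener() {
                                    @Override public void onOpen(WebSocket websocket) { }
                                    @Override public void onClose(WebSocket websocket, int code, String reason) { }
                                    @Override public void onError(Throwable t) { }
                                    @Override public void onTextFrame(String payload, boolean finalFragment, int rsv) {
                                        System.out.println("received: " + payload);
                                    }
                                }).build())
                        .get();
                ws.sendCloseFrame();
            }
        }
    }
    ```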

    opened by supertick 2
  • [Question] Is it possible to reuse the http parser in the library?

    Hi all, as my question suggests, I am looking for a way to get the corresponding HTTP request (or response) object when given a byte array as input. Is this possible somehow?

    Thanks!
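    AHC does not expose its HTTP parser as a standalone public API, but since it is built on Netty you can drive Netty's decoder directly over a byte array. A sketch, assuming Netty 4.1 on the classpath:

    ```java
    import io.netty.buffer.Unpooled;
    import io.netty.channel.embedded.EmbeddedChannel;
    import io.netty.handler.codec.http.HttpRequest;
    import io.netty.handler.codec.http.HttpRequestDecoder;
    import java.nio.charset.StandardCharsets;

    public class ParseRawRequest {
        public static void main(String[] args) {
            byte[] raw = "GET /path HTTP/1.1\r\nHost: example.com\r\n\r\n"
                    .getBytes(StandardCharsets.US_ASCII);

            // Drive Netty's HTTP decoder over the bytes in an in-memory channel.
            EmbeddedChannel ch = new EmbeddedChannel(new HttpRequestDecoder());
            ch.writeInbound(Unpooled.wrappedBuffer(raw));

            HttpRequest request = ch.readInbound(); // decoded request line + headers
            System.out.println(request.method() + " " + request.uri());
            ch.finishAndReleaseAll();
        }
    }
    ```

    Use `HttpResponseDecoder` in the same way when the byte array holds a response instead of a request.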

    opened by DSimsek000 1
  • Request compression support in async-http-client

    Hi,

    Does async-http-client support sending compressed requests to a server (compression done by the algorithm named in the 'Content-Encoding' header)? Can you share a code snippet showing how to add request compression?
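    AHC has no built-in request compression, but you can compress the body yourself and set the `Content-Encoding` header. A sketch (the target URL is a placeholder):

    ```java
    import java.io.ByteArrayOutputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPOutputStream;

    public class GzipBody {
        // Compress a request body with gzip (java.util.zip, no extra deps).
        static byte[] gzip(String body) throws Exception {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
                gz.write(body.getBytes(StandardCharsets.UTF_8));
            }
            return buf.toByteArray();
        }

        public static void main(String[] args) throws Exception {
            byte[] payload = gzip("{\"hello\":\"world\"}");
            System.out.println(payload.length > 0);
            // Sending it with AHC (sketch; URL is a placeholder):
            // try (AsyncHttpClient client = Dsl.asyncHttpClient()) {
            //     client.preparePost("https://example.com/ingest")
            //           .setHeader("Content-Encoding", "gzip")
            //           .setHeader("Content-Type", "application/json")
            //           .setBody(payload)
            //           .execute().get();
            // }
        }
    }
    ```

    Note the server must actually accept `Content-Encoding` on requests; many reject or ignore it.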

    opened by chandrav723 4
  • Request times out after being sent in AHC

    Hi everyone, I am using org.asynchttpclient version 2.12.3 in a service that makes a lot of downstream API calls to different services. Initially I used a single AHC instance for all downstream calls, with maxConnectionPool=1000, maxConnectionsPerHost=500, and keep-alive=true. Requests were sent from my client service to the target services, but many of them timed out with the message "Request timeout to after ms". The responses from the services arrived within the timeout duration, yet the requests still timed out.

    As an experiment, I increased the number of AHC instances and used a separate instance for each downstream service, with the configuration otherwise unchanged. The timeouts decreased significantly and the overall response time of the downstream calls improved, but false timeouts still occur. I have also only seen this happen when reusing an open channel; with a new channel it never seems to happen. I suspect a thread is blocked for some reason and cannot process new responses arriving on the channel. Is this a known bug, or might there be an issue with my implementation? Can someone throw some light on it?
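    For reference, the knobs mentioned above expressed against the 2.x config builder. The values are the ones from this report, not recommendations; `setIoThreadsCount` is an extra assumption worth experimenting with when one shared instance serves many hosts:

    ```java
    import org.asynchttpclient.AsyncHttpClient;
    import static org.asynchttpclient.Dsl.*;

    public class TunedClient {
        public static void main(String[] args) throws Exception {
            try (AsyncHttpClient client = asyncHttpClient(config()
                    .setMaxConnections(1000)
                    .setMaxConnectionsPerHost(500)
                    .setKeepAlive(true)
                    .setRequestTimeout(60_000) // ms; tune to downstream SLAs
                    .setIoThreadsCount(Runtime.getRuntime().availableProcessors() * 2))) {
                // issue requests with the shared client here
            }
        }
    }
    ```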

    Defect 
    opened by saurabh782 4
  • Huge delay in request processing when using AsyncHttpClient

    I created a simple HTTP request using the AHC client. It takes around 680 ms for an endpoint that normally takes 50 ms (verified with curl). What is the reason for such a large response time with the AHC client? Do I need to set something before using it? Not sure what I am missing. Note that I am only sending a single request at a time.

    import java.util.concurrent.Future;
    import org.asynchttpclient.AsyncHttpClient;
    import org.asynchttpclient.Response;
    import static org.asynchttpclient.Dsl.asyncHttpClient;
    
    AsyncHttpClient client = asyncHttpClient();
    
    long startTime = System.nanoTime();
    Future<Response> responseFuture = client.prepareGet(healthCheckRequest.getUriString()).execute();
    Response res = responseFuture.get();
    long endTime = HealthCheckUtils.getMillisSince(startTime);
    System.out.println("End time: " + endTime);
    
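    Note that the first request on a fresh client pays for bootstrap, DNS resolution, and (over HTTPS) the TLS handshake. A sketch that measures a warmed-up request instead (the URL is a placeholder):

    ```java
    import org.asynchttpclient.AsyncHttpClient;
    import org.asynchttpclient.Response;
    import static org.asynchttpclient.Dsl.asyncHttpClient;

    public class WarmedUpTiming {
        public static void main(String[] args) throws Exception {
            try (AsyncHttpClient client = asyncHttpClient()) {
                String url = "https://example.com/health";
                client.prepareGet(url).execute().get(); // warm-up: opens the connection

                long start = System.nanoTime();
                Response res = client.prepareGet(url).execute().get(); // reuses pooled connection
                System.out.println("status=" + res.getStatusCode()
                        + " took=" + (System.nanoTime() - start) / 1_000_000 + "ms");
            }
        }
    }
    ```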
    opened by swasti7 0