Columns:
id (int64, values 0 to 5.38k)
issuekey (string, length 4 to 16)
created (string, length 19)
title (string, length 5 to 252)
description (string, length 1 to 1.39M)
storypoint (float64, values 0 to 100)
717
XD-2942
04/10/2015 13:33:44
Add ftp source to default source modules
It would be nice to have a simple ftp source. I have to do it for one of my projects. Same as XD-2139 but for source modules.
2
718
XD-2948
04/14/2015 14:35:34
Document how to specify the custom modules location via an environment variable.
It is possible to specify the location of custom modules via the environment variable {{XD_CUSTOMMODULE_HOME}}, which Spring Boot derives from the property key {{xd.customModule.home}}. This allows a user to specify a custom module location that survives a complete wipe of the Spring XD installation; see the sketch after this entry.
1
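A minimal usage sketch for this documentation item. The environment variable name comes from the report; the directory path and the singlenode launch command are illustrative assumptions only.
{code}
# hypothetical custom-module location; any writable path (or HDFS URI) works
export XD_CUSTOMMODULE_HOME=/opt/xd/custom-modules
# start Spring XD; the setting survives re-installation because it lives outside XD_HOME
xd/bin/xd-singlenode
{code}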
719
XD-2949
04/15/2015 07:02:14
Error Message for "Missing Job Description" needs to be updated
When using the REST interface to create a job with an empty description, the server used to generate the following exception: "Definition can not be empty". It now generates "XD112E:(pos 0): Unexpectedly ran out of input^". The correct error should be "definition cannot be blank or null".
2
720
XD-2984
04/23/2015 13:31:28
xd-admin script fails when providing --hadoopDistro option
XD-2837 added back the --hadoopDistro option for xd-admin scripts. However, if I try to use it I get an error message saying: "--hadoopDistro" is not a valid option
3
721
XD-3000
04/27/2015 13:23:51
Enhance TupleCodec performance
Profile TupleCodec and implement performance optimizations
5
722
XD-3015
04/30/2015 07:42:41
RemoteFileToHadoopTests fails on 1.1.x
This error surfaced recently as a result of a fix to a bug in HostNotWindowsRule which disabled this test in all environments. Now the test has been reactivated it is failing on the 1.1.x branch. The test runs OK on master. {noformat} Encountered an error executing step step1-master in job job org.springframework.messaging.MessageDeliveryException: failed to send Message to channel 'null'; nested exception is java.lang.IllegalStateException: ThreadPoolTaskExecutor not initialized at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:292) at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:239) at org.springframework.xd.dirt.integration.bus.local.LocalMessageBus$3.handleMessage(LocalMessageBus.java:262) at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116) at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:101) at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:97) at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77) at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:277) at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:239) at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115) at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45) at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:95) at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutput(AbstractMessageProducingHandler.java:248) at org.springframework.integration.handler.AbstractMessageProducingHandler.produceOutput(AbstractMessageProducingHandler.java:171) at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutputs(AbstractMessageProducingHandler.java:119) at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:105) at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:78) at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116) at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:101) at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:97) at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77) at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:277) at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:239) at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115) at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45) at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:95) at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:85) at 
org.springframework.batch.integration.partition.MessageChannelPartitionHandler.handle(MessageChannelPartitionHandler.java:224) at org.springframework.batch.core.partition.support.PartitionStep.doExecute(PartitionStep.java:106) at org.springframework.batch.core.step.AbstractStep.execute(AbstractStep.java:198) at org.springframework.batch.core.job.SimpleStepHandler.handleStep(SimpleStepHandler.java:148) at org.springframework.batch.core.job.flow.JobFlowExecutor.executeStep(JobFlowExecutor.java:64) at org.springframework.batch.core.job.flow.support.state.StepState.handle(StepState.java:67) at org.springframework.batch.core.job.flow.support.SimpleFlow.resume(SimpleFlow.java:165) at org.springframework.batch.core.job.flow.support.SimpleFlow.start(SimpleFlow.java:144) at org.springframework.batch.core.job.flow.FlowJob.doExecute(FlowJob.java:134) at org.springframework.batch.core.job.AbstractJob.execute(AbstractJob.java:304) at org.springframework.batch.core.launch.support.SimpleJobLauncher$1.run(SimpleJobLauncher.java:135) at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50) at org.springframework.batch.core.launch.support.SimpleJobLauncher.run(SimpleJobLauncher.java:128) at org.springframework.batch.integration.x.RemoteFileToHadoopTests.testSimple(RemoteFileToHadoopTests.java:161) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:73) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:82) at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:73) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:217) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:83) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61) at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:68) at org.springframework.xd.test.HostNotWindowsRule$1.evaluate(HostNotWindowsRule.java:38) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:163) at org.junit.runner.JUnitCore.run(JUnitCore.java:137) at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:74) at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:211) at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:67) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134) Caused by: java.lang.IllegalStateException: ThreadPoolTaskExecutor not initialized at org.springframework.util.Assert.state(Assert.java:385) at org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor.getThreadPoolExecutor(ThreadPoolTaskExecutor.java:221) at org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor.execute(ThreadPoolTaskExecutor.java:252) at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:89) at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77) at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:277) ... 76 more java.lang.AssertionError: Expected :exitCode=COMPLETED;exitDescription= Actual :exitCode=FAILED;exitDescription= <Click to see difference> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:118) at org.junit.Assert.assertEquals(Assert.java:144) at org.springframework.batch.integration.x.RemoteFileToHadoopTests.testSimple(RemoteFileToHadoopTests.java:162) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:73) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:82) at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:73) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:217) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:83) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61) at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:68) at org.springframework.xd.test.HostNotWindowsRule$1.evaluate(HostNotWindowsRule.java:38) at org.junit.rules.RunRules.evaluate(RunRules.java:20) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:163) at org.junit.runner.JUnitCore.run(JUnitCore.java:137) at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:74) at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:211) at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:67) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134) {noformat}
2
723
XD-3018
04/30/2015 12:17:44
Update to spring-data-hadoop 2.2.0.M1
We should update to use spring-data-hadoop 2.2.0.M1 in order to use the fixes available there for HDFS writing (syncable writes, timeout). A few things to keep in mind: - this updates Cloudera CDH to 5.3.3 - Kite version is now 1.0 - need to test the hdfs-dataset sink
2
724
XD-3022
04/30/2015 13:36:52
Kafka Message Bus ignores consumer concurrency when computing partition count
This is a combination of two issues: - the internal property `next.module.concurrency` is computed from `concurrency` when it should be computed from `consumer.concurrency` - even if `next.module.concurrency` is set, the KafkaMessageBus rejects it, since it is not listed in SUPPORTED_CONSUMER_PROPERTIES. As a result, the value used in the partition calculation is always 1. A workaround exists: set the `module.[moduleName].producer.minPartitionCount` property to the expected total value, as in the sketch after this entry.
3
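A hedged example of the workaround described above. The property key `module.[moduleName].producer.minPartitionCount` comes from the report; the stream definition, module names, and counts are illustrative assumptions.
{code}
xd:>stream create mystream --definition "http | log"
xd:>stream deploy mystream --properties "module.log.consumer.concurrency=4,module.http.producer.minPartitionCount=4"
{code}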
725
XD-3029
05/06/2015 05:39:36
SqoopRunner class not found error
We have installed the Spring XD 1.2 M1 release via the rpm and it seems that the sqoop-1.4.5-hadoop200.jar file is not part of the rpm. The sqoop jar file is not in the xd/lib directory. This causes a problem during custom module development if we include the sqoop-1.4.5-hadoop200 dependency in the pom file, and it forces us to deploy our jar as a separate deployment. Should we be referencing different dependencies, or should the sqoop-1.4.5-hadoop200.jar be part of the rpm definition so that it ends up in xd/lib? I currently have the following dependency in the pom file: {code} <!-- Sqoop --> <dependency> <groupId>org.apache.sqoop</groupId> <artifactId>sqoop</artifactId> <version>1.4.5</version> <classifier>hadoop200</classifier> </dependency> {code} It would be great if the sqoop jar were part of the rpm so we don't have to do any additional jar deployment. Thanks,
2
726
XD-3036
05/07/2015 12:45:05
Fix section headers in reference TOC
See: http://docs.spring.io/spring-xd/docs/current-SNAPSHOT/reference/html/#_introduction_26 There should be a chapter/section title before this.
1
727
XD-3047
05/11/2015 09:22:27
Complete Camera Ready DEBS submission
Complete and submit DEBS 2015 paper as described here: http://www.debs2015.org/camera-ready-instructions.html
5
728
XD-3048
05/11/2015 09:44:44
RabbitMQ queue cleanup uses wildcard unexpectedly
Calling the API to delete queues uses a wildcard-like behaviour unexpectedly. If I request to delete: {{test-1}} I expect it to delete streams named with the pattern: {{test-1.*}} For example, it would delete: {{test-1.0, test-1.1, etc}} In fact I believe it wildcards before and after the period, e.g.: {{test-1*.*}} And hence would delete: {{test-1.0, test-11.0, test-123.0, etc}} That way of working is potentially helpful, but it's also dangerous because it removes the ability to know that you're only deleting the exact queue you want to in all cases. For the record the commit (https://github.com/spring-projects/spring-xd/commit/2d5f3f706330a6ead8e91c9a7a23d4372715614d) implies that it should work in the more restricted way above, not the less restricted way. (Note: I've marked this as an improvement because, absent documentation, I don't know what the correct functionality is and hence can't say this is a bug)
1
729
XD-3051
05/13/2015 10:13:12
Gradle launch task is broken
Spring XD has a gradle task available in the build called launch that starts a single node instance. This is currently broken. The command I was using was: {code} $ ./gradlew clean build -x test -x javadoc launch {code}
1
730
XD-3056
05/14/2015 03:23:49
Add a new source module to capture video frame from camera or video files
This is a source module for video ingestion: the module captures video frames from a camera or from a video file. For a camera, the frames are grabbed from the RTSP video stream. This module will generate messages with the frame image (JPEG-encoded) as the payload.
8
731
XD-3063
05/15/2015 06:34:49
Add Property maxMessagesPerPoll to All Polled Sources
Polled message sources return only one message per poll by default. When polling, say, a file directory with many files, only one file will be emitted per {{fixedDelay}}. As a user I need to be able to configure a limit for the number of messages that will be emitted per poll; see the sketch after this entry.
3
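If the requested option were added, usage might look like the following sketch. The option name comes from the issue title; the stream name, directory, and values are hypothetical.
{code}
xd:>stream create filesin --definition "file --dir=/tmp/xd/input --fixedDelay=5 --maxMessagesPerPoll=50 | log" --deploy
{code}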
732
XD-3064
05/15/2015 07:25:23
HdfsMongoDB job failing because of missing ID in DefaultTuple
Looks to have been introduced by https://github.com/spring-projects/spring-xd/pull/1577 Deployment: single admin, 2 container deployment using +RabbitMQ+ as the transport. Below is a partial stacktrace (please check the log for the full stacktrace). Log is attached. {noformat} 2015-05-15 10:50:15,843 1.2.0.SNAP ERROR xdbus.job:ec2Job3-1 step.AbstractStep - Encountered an error executing step readResourcesStep in job ec2Job3 org.springframework.dao.InvalidDataAccessApiUsageException: Cannot autogenerate id of type java.util.UUID for entity of type org.springframework.xd.tuple.DefaultTuple! at org.springframework.data.mongodb.core.MongoTemplate.assertUpdateableIdIfNotSet(MongoTemplate.java:1153) at org.springframework.data.mongodb.core.MongoTemplate.doSave(MongoTemplate.java:882) at org.springframework.data.mongodb.core.MongoTemplate.save(MongoTemplate.java:837) at org.springframework.batch.item.data.MongoItemWriter.doWrite(MongoItemWriter.java:128) at org.springframework.batch.item.data.MongoItemWriter$1.beforeCommit(MongoItemWriter.java:156) at org.springframework.transaction.support.TransactionSynchronizationUtils.triggerBeforeCommit(TransactionSynchronizationUtils.java:95) at org.springframework.transaction.support.AbstractPlatformTransactionManager.triggerBeforeCommit(AbstractPlatformTransactionManager.java:928) at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:740) at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:726) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) {noformat}
3
733
XD-3066
05/15/2015 09:34:42
Make Enum Conversions for ModuleOptions more lenient
If you have an option *--mode=textLine*, presently the enum constant MUST be named *textLine*. I think it would improve the user experience if we allowed users to pass in values such as: * --mode=textLine * --mode=text_line * --mode=TEXT_LINE A sketch of this kind of lenient matching follows this entry.
3
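A minimal sketch of the kind of lenient matching proposed, not the actual ModuleOptions converter: normalize both the incoming text and the enum constant names by dropping underscores/dashes and comparing case-insensitively. The class, method, and Mode enum are hypothetical.
{code}
import java.util.Locale;

public final class LenientEnums {

    /** Returns the enum constant whose normalized name matches the normalized input. */
    public static <E extends Enum<E>> E lenientValueOf(Class<E> enumType, String text) {
        String wanted = normalize(text);
        for (E constant : enumType.getEnumConstants()) {
            if (normalize(constant.name()).equals(wanted)) {
                return constant;
            }
        }
        throw new IllegalArgumentException(
                "No enum constant of " + enumType.getSimpleName() + " matches '" + text + "'");
    }

    /** textLine, text_line and TEXT_LINE all collapse to "textline". */
    private static String normalize(String s) {
        return s.replace("_", "").replace("-", "").toLowerCase(Locale.ROOT);
    }

    // Hypothetical enum mirroring the --mode example from this report
    enum Mode { textLine, binary }

    public static void main(String[] args) {
        System.out.println(lenientValueOf(Mode.class, "TEXT_LINE")); // prints: textLine
    }
}
{code}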
734
XD-3070
05/18/2015 08:08:17
Spike: introduce xolpoc-admin to XD Admin
The POC for XD on Lattice uses the following interface for module deployment: https://github.com/markfisher/xolpoc-admin/blob/master/src/main/java/xolpoc/spi/ModuleDeployer.java {code} public interface ModuleDeployer { void deploy(ModuleDescriptor descriptor); void undeploy(ModuleDescriptor descriptor); ModuleStatus getStatus(ModuleDescriptor descriptor); } {code} This spike is to introduce this interface and the Lattice implementation in the XD admin. The goals are to: * Demo a POC showing simple stream deployment with the existing shell/admin to Lattice * Learn from the experience to help guide the re-architecture/splitting of stream/job repositories (especially in regard to {{AbstractDeployer}} and related classes). Note that this work will not necessarily be merged into XD itself, although some of the concepts may be included in a future PR.
5
735
XD-3078
05/19/2015 02:48:44
Spring XD admin fails to redeploy modules after Spring XD container successfully reconnects to Zookeeper
We are running Spring XD 1.1.1 in our production environment and Zookeeper 3.4.5. Zookeeper is running in failover mode and consists of three independent nodes set up on three separate VMs. From time to time we get "Connection to Zookeeper Suspended" event which causes one of the containers in the cluster to be removed from the SpringXD cluster. Modules being deployed on this removed node fail to be re-deployed to other containers in the cluster. Affected versions: - SpringXD 1.1.1 - Zookeeper 3.4.5 and 3.4.6 Cluster set up in PROD environment where error occurs: - 4 Spring-XD dedicated servers - 4 spring-xd containers (each running on designated server ) - 2 spring-xd admins ( each running alongside one spring-xd container) - 3 Zookeeper nodes ( 3 designated servers on PAITO environment ) Cluster set up in TEST environment where error also occurred: - 2 Spring-XD dedicated servers running one spring-xd container and one spring-xd admin each - 3 Zookeeper nodes running on 3 dedicated servers (PAITO Test environment) Cluster set up to reproduce error found in PROD environment: - 1 spring-xd admin - 3 spring xd-containers (each running on a designated VM ) - 3 zookeeper servers running on one VM Steps to reproduce: 1) Set up three node Zookeeper cluster. Attached is example zoo.cfg, we are using default configuration values. In this particular test case we run all Zookeeper nodes on a single VM as we were not testing network layer interruptions. 2) Set up one Spring XD admin node. Please note that we have also observed this on two node Spring XD admin cluster. 3) Set up three Spring XD container nodes. All of them belong to one group (SA) and two of them also belong to second group (HA1). This is configured in $XD_HOME/config/servers.yml however so far group configuration never influenced test outcome. 4) Create and deploy a test stream using following XD Shell commands: stream create --name test-zookeeper-failover --definition "syslog-udp --port=5140 | transform | file --dir='/opt/pivotal/spring-xd/xd/output'" stream deploy --name test-zookeeper-failover --properties "module.syslog-udp.criteria=groups.contains('HA1'),module.syslog-udp.count=2,module.file.criteria=groups.contains('SA'),module.file.count=3,module.transform.criteria=groups.contains('SA')" 5) Ensure that test stream works and handles traffic on UDP port 5140 6) Shutdown one of the Zookeeper nodes by issuing a stop command. 7) Two Spring XD containers were not affected and remained in Spring XD cluster. 8) One Spring XD container was kicked out of Spring XD cluster and was no longer visible on Spring XD admin Web UI. Modules previously deployed to this container were not redeployed to other cluster members. 9) On the failed Spring XD container we have observed CONNECTION_SUSPEND, CONNECTION_RECONECTED and CHILD_REMOVE Zookeeper events (attached is container-log.txt). Please note that Java process is still running and we see “ConnectionStateManager-0 server.ContainerRegistrar - Waiting for supervisor to clean up prior deployments” messages. 10) Spring XD admin failed with exception in DepartingContainerModuleRedeployer (attached is admin-log.txt). 11) We have observed that departing container node in Zookeeper (/sa/deployments/modules/allocated/1d3fd4cc-5a70-47ed-b4f3-22deef1f4d4f/) had no children. We did this after few minutes so we are not sure at which point it was cleared. 12) Restarting failed Spring XD container fixed the problem, modules were correctly redeployed. 
Exception from point 10 is very similar to XD-1983 and this code was rewritten in XD-2004.
8
736
XD-3079
05/19/2015 09:01:17
Create a new Kerberos ticket instead of renewing the current one
Running Spring-XD singlenode with a kerberized hadoop cluster on CDH 5.3.2. with JDK 1.7 and JCE 1.7. The kerberos ticket policies are: * expiration: 24 hours * renew: 7 days I need to keep the Spring XD server running constantly because my flows are always waiting for incoming files to be ingested into the HDFS, but the kerberos session expires if there aren't jobs to run before the expiration date. The expiration policies can't be changed due internal company policies. Is there a way which Spring XD can generate a new ticket instead of renew the current one when a job or stream start executing? The Spring XD server has configured the hadoop.properties like: # Use servers.yml to change URI for namenode # You can add additional properties in this file dfs.namenode.kerberos.principal=hdfs/_HOST@EDA.COMPANY.COM yarn.resourcemanager.principal=yarn/_HOST@EDA.COMPANY.COM yarn.application.classpath=/opt/cloudera/parcels/CDH/lib/hadoop/*,/opt/cloudera/parcels/CDH/lib/hadoop/lib/*,/opt/cloudera/parcels/CDH/lib/hadoop-hdfs/*,/opt/cloudera/parcels/CDH/lib/hadoop-hdfs/lib/*,/opt/cloudera/parcels/CDH/lib/hadoop-yarn/*,/opt/cloudera/parcels/CDH/lib/hadoop-yarn/lib/*,/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/*,/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/lib/* hadoop.security.authorization=true hadoop.security.authentication=kerberos spring.hadoop.userKeytab=file:///export/home/user/user.keytab spring.hadoop.userPrincipal=user@ERS.COMPANY.COM #Connecting to Kerberized Hadoop (Spring XD doc configuration Appendix D) spring.hadoop.security.authMethod=kerberos spring.hadoop.security.userKeytab=/export/home/user/user.keytab spring.hadoop.security.userPrincipal=user@ERS.COMPANY.COM spring.hadoop.security.namenodePrincipal=hdfs/_HOST@EDA.COMPANY.COM spring.hadoop.security.rmManagerPrincipal=yarn/_HOST@EDA.COMPANY.COM
5
737
XD-3081
05/19/2015 12:45:57
When using file as both a source and a sink, the user cannot use the file sink --mode option
Cluster Type: SingleNode Machine: Mac PR: https://github.com/spring-projects/spring-xd/pull/1624,https://github.com/spring-projects/spring-xd/pull/1626 Stream that reproduces the problem: {noformat} stream create foo --definition "filein: file --dir=/tmp/xd/a0180520-c7fa-4d9d-8cc3-e36fbf59496a --pattern=de59d1b8-f99c-4c43-a8c0-2f6043546689.out --mode=contents | fileout: file --binary=true --mode=replace " {noformat} Error Message: {noformat} Command failed org.springframework.xd.rest.client.impl.SpringXDException: Error with option(s) for module file of type sink: mode: Failed to convert property value of type 'java.lang.String' to required type 'org.springframework.xd.dirt.modules.metadata.FileSinkOptionsMetadata$Mode' for property 'mode'; nested exception is java.lang.IllegalStateException: Cannot convert value of type [java.lang.String] to required type [org.springframework.xd.dirt.modules.metadata.FileSinkOptionsMetadata$Mode] for property 'mode': no matching editors or conversion strategy found {noformat} Stacktrace: {noformat} 2015-05-19 14:30:56,329 1.2.0.SNAP ERROR qtp671416633-35 rest.RestControllerAdvice - Caught exception while handling a request org.springframework.xd.dirt.plugins.ModuleConfigurationException: Error with option(s) for module file of type sink: mode: Failed to convert property value of type 'java.lang.String' to required type 'org.springframework.xd.dirt.modules.metadata.FileSinkOptionsMetadata$Mode' for property 'mode'; nested exception is java.lang.IllegalStateException: Cannot convert value of type [java.lang.String] to required type [org.springframework.xd.dirt.modules.metadata.FileSinkOptionsMetadata$Mode] for property 'mode': no matching editors or conversion strategy found at org.springframework.xd.dirt.plugins.ModuleConfigurationException.fromBindException(ModuleConfigurationException.java:55) at org.springframework.xd.dirt.stream.XDStreamParser.buildModuleDescriptors(XDStreamParser.java:191) at org.springframework.xd.dirt.stream.XDStreamParser.parse(XDStreamParser.java:122) at org.springframework.xd.dirt.stream.AbstractDeployer.validateBeforeSave(AbstractDeployer.java:115) at org.springframework.xd.dirt.rest.XDController.save(XDController.java:260) at sun.reflect.GeneratedMethodAccessor191.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:221) at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:137) at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:110) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:776) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:705) at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:959) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:893) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:966) at 
org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:868) at javax.servlet.http.HttpServlet.service(HttpServlet.java:755) at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:842) at javax.servlet.http.HttpServlet.service(HttpServlet.java:848) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1496) at org.springframework.boot.actuate.autoconfigure.EndpointWebMvcAutoConfiguration$ApplicationContextHeaderFilter.doFilterInternal(EndpointWebMvcAutoConfiguration.java:291) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467) at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:77) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467) at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:87) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467) at org.springframework.boot.actuate.trace.WebRequestTraceFilter.doFilterInternal(WebRequestTraceFilter.java:102) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467) at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:186) at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467) at org.springframework.boot.actuate.autoconfigure.MetricFilterAutoConfiguration$MetricsFilter.doFilterInternal(MetricFilterAutoConfiguration.java:90) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:428) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116) at org.eclipse.jetty.server.Server.handle(Server.java:370) at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494) at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:982) at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1043) at 
org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:865) at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240) at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82) at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:667) at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) at java.lang.Thread.run(Thread.java:745) Caused by: org.springframework.validation.BindException: org.springframework.validation.BeanPropertyBindingResult: 1 errors Field error in object 'target' on field 'mode': rejected value [replace]; codes [typeMismatch.target.mode,typeMismatch.mode,typeMismatch.org.springframework.xd.dirt.modules.metadata.FileSinkOptionsMetadata$Mode,typeMismatch]; arguments [org.springframework.context.support.DefaultMessageSourceResolvable: codes [target.mode,mode]; arguments []; default message [mode]]; default message [Failed to convert property value of type 'java.lang.String' to required type 'org.springframework.xd.dirt.modules.metadata.FileSinkOptionsMetadata$Mode' for property 'mode'; nested exception is java.lang.IllegalStateException: Cannot convert value of type [java.lang.String] to required type [org.springframework.xd.dirt.modules.metadata.FileSinkOptionsMetadata$Mode] for property 'mode': no matching editors or conversion strategy found] at org.springframework.xd.module.options.PojoModuleOptionsMetadata.bindAndValidate(PojoModuleOptionsMetadata.java:205) at org.springframework.xd.module.options.PojoModuleOptionsMetadata.interpolate(PojoModuleOptionsMetadata.java:139) at org.springframework.xd.module.options.FlattenedCompositeModuleOptionsMetadata.interpolate(FlattenedCompositeModuleOptionsMetadata.java:152) at org.springframework.xd.module.options.EnvironmentAwareModuleOptionsMetadataResolver$ModuleOptionsMetadataWithDefaults.interpolate(EnvironmentAwareModuleOptionsMetadataResolver.java:168) at org.springframework.xd.dirt.stream.XDStreamParser.buildModuleDescriptors(XDStreamParser.java:188) ... 61 more {noformat}
3
738
XD-3090
05/21/2015 07:36:03
JdbcHdfsTests sporadically fail
Acceptance tests sporadically fail after https://github.com/spring-projects/spring-xd/pull/1623 was merged for XD-2309. Additional tests were added but used fixed timeouts. We will replace them with waitForJob.
2
739
XD-3092
05/21/2015 11:40:07
Synchronous deployment/undeployments
There are a number of issues (such as XD-3083, XD-2671) that are caused by asynchronous deployments issued via the REST API. The flow of events is: * deploy/undeploy request received by REST API * controller queues up request to be processed by supervisor * controller returns HTTP 2xx This proposal is to have the thread executing the deploy/undeploy request block until the request has been processed by the supervisor (see the sketch after this entry). This will have the side effect of deploys appearing to take longer, but when the HTTP request completes, the deployment/undeployment will have been fulfilled.
2
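A minimal sketch of the blocking behaviour proposed above, assuming the supervisor can invoke a completion callback; the class, method names, and timeout handling are hypothetical, not the actual XD admin code.
{code}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class BlockingDeploymentHandler {

    private final CountDownLatch processed = new CountDownLatch(1);

    /** Invoked by the supervisor once it has finished processing the queued request (assumed hook). */
    public void deploymentProcessed() {
        processed.countDown();
    }

    /** REST thread: queue the request as today, then block until the supervisor is done or we time out. */
    public boolean deployAndWait(String streamName, long timeout, TimeUnit unit) throws InterruptedException {
        queueRequestForSupervisor(streamName);
        return processed.await(timeout, unit); // only return HTTP 2xx when this yields true
    }

    private void queueRequestForSupervisor(String streamName) {
        // placeholder for the existing asynchronous hand-off to the supervisor
    }
}
{code}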
740
XD-3093
05/21/2015 12:18:02
Sqoop list-tables doesn't work oob
Commands from the docs: xd:>job create sqoopListTables --definition "sqoop --command=list-tables" --deploy xd:>job launch --name sqoopListTables 2015-05-21 19:12:36,211 1.2.0.M1 ERROR task-scheduler-1 sqoop.SqoopTasklet - Sqoop job for 'list-tables' finished with exit code: 1 2015-05-21 19:12:36,212 1.2.0.M1 ERROR task-scheduler-1 sqoop.SqoopTasklet - Sqoop err: Error: Required argument --connect is missing. Adding --connect results in: xd:>job create sqoopListTables --definition "sqoop --command=list-tables --connect=jdbc:hsqldb:hsql://localhost:9101/xdjob" --deploy Command failed org.springframework.xd.rest.client.impl.SpringXDException: Error with option(s) for module sqoop of type job: connect: option named 'connect' is not supported This is with singlenode.
1
741
XD-3100
05/26/2015 08:42:48
module.*.count > 1 duplicates messages on taps
Using module.name.count > 1 when deploying taps causes duplication of messages in those modules. This impacts balancing of the containers and modules in a cluster, as messages should not be duplicated across modules if the same module is deployed twice to two containers in order to spread the load. We use taps quite heavily in our project, mainly for analytics of the live feed in real time, but due to the issue we have discovered and described in this bug we are currently facing a limitation where heavy-processing modules cannot be load balanced across the cluster, as they cause duplication of the messages and therefore the same module deployed to two containers would still process the same message twice. To demonstrate the problem please see the test case scenario set up below: h4. 1. Environment - Spring-XD version 1.1.1-RELEASE - Running two spring-xd containers and one spring-xd admin h4. 2. Set up The stream definitions are as follows: {quote}stream create --name test-module-count --definition "syslog-udp --port=5140 | transform | log" stream deploy --name test-module-count --properties "module.*.count=2" stream create --name tap-test-module-count --definition "tap:stream:test-module-count.syslog-udp > transform --expression='payload.toString() + \"TAPPED\"' | log" stream deploy --name tap-test-module-count --properties "module.*.count=2"{quote} Please refer to the attached screenshots to see that after deploying those two streams we have: - streams successfully deployed ( module-count-spring-xd-streams.png ) - streams successfully deployed with count=2 to both containers ( module-count-spring-xd-containers.png ) - 5 queues created in Rabbit ( module-count-rabbit.png ), where two were created for the syslog-udp collector as a result of using module.syslog-udp.count=2 - this is causing messages to be duplicated. Normally the expectation would be to have only one queue for the tap. h4. 3. Test input data I have sent a very simple UDP message to the listening udp collector running on the second container: {quote}echo test-module-count >> /dev/udp/host02/5140{quote} h4. 4. Test output data in the logs ( module-count-container01.log and module-count-container02.log ) h5. Expected result: The message below logged on only 1 container (it does not matter which one) {quote}2015-05-26 09:52:21,630 1.1.1.RELEASE INFO xdbus.test-module-count.1-1 sink.test-module-count - {UNDECODED=test-module-count}{quote} The message below logged on only one container (it does not matter which one) {quote}2015-05-26 09:52:21,843 1.1.1.RELEASE INFO xdbus.tap-test-module-count.0-1 sink.tap-test-module-count - {UNDECODED=test-module-count }TAPPED{quote} h5. Actual result: The stream that has been created as a tap has duplicated the same message, and as a result the same message was processed twice on both containers by the same module ( transformer ) and logged twice to the console on both containers Container01: {quote}2015-05-26 14:52:21,143 1.1.1.RELEASE INFO xdbus.tap-test-module-count.0-1 sink.tap-test-module-count - {UNDECODED=test-module-count }TAPPED{quote} Container02: {quote}2015-05-26 09:52:21,630 1.1.1.RELEASE INFO xdbus.test-module-count.1-1 sink.test-module-count - {UNDECODED=test-module-count } 2015-05-26 09:52:21,843 1.1.1.RELEASE INFO xdbus.tap-test-module-count.0-1 sink.tap-test-module-count - {UNDECODED=test-module-count }TAPPED{quote}
5
742
XD-3102
05/26/2015 09:13:59
Benchmark XD RC1 using Kafka 0.8.2 as transport
As a developer, I'd like to rerun the _baseline_, _Tuple_, and _Serialized_ payloads, so I can compare the differences in performance between the 0.8.1 and 0.8.2 Kafka releases. Sinks to be included in the test: In-Memory Transport > Hdfs sink, Direct Binding Transport > Hdfs sink, Kafka > Hdfs sink.
8
743
XD-3109
05/26/2015 22:01:52
SFTP socket closed error. Infinite loop
Having the follow messages poping up on xd log. It seems they are being generated indefinitely. Log files getting huge. [2015-05-27 15:57:51.039] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: server: ssh-rsa,ssh-dss [2015-05-27 15:57:51.039] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: server: aes256-ctr,aes192-ctr,aes128-ctr,arcfour256 [2015-05-27 15:57:51.039] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: server: aes256-ctr,aes192-ctr,aes128-ctr,arcfour256 [2015-05-27 15:57:51.039] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: server: hmac-sha2-512,hmac-sha2-256,hmac-sha1,hmac-ripemd160 [2015-05-27 15:57:51.039] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: server: hmac-sha2-512,hmac-sha2-256,hmac-sha1,hmac-ripemd160 [2015-05-27 15:57:51.039] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: server: none,zlib@openssh.com [2015-05-27 15:57:51.039] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: server: none,zlib@openssh.com [2015-05-27 15:57:51.039] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: server: [2015-05-27 15:57:51.039] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: server: [2015-05-27 15:57:51.040] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: client: diffie-hellman-group1-sha1,diffie-hellman-group-exchange-sha1 [2015-05-27 15:57:51.040] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: client: ssh-rsa,ssh-dss [2015-05-27 15:57:51.040] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: client: aes128-ctr,aes128-cbc,3des-ctr,3des-cbc,blowfish-cbc [2015-05-27 15:57:51.040] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: client: aes128-ctr,aes128-cbc,3des-ctr,3des-cbc,blowfish-cbc [2015-05-27 15:57:51.040] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: client: hmac-md5,hmac-sha1,hmac-sha2-256,hmac-sha1-96,hmac-md5-96 [2015-05-27 15:57:51.040] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: client: hmac-md5,hmac-sha1,hmac-sha2-256,hmac-sha1-96,hmac-md5-96 [2015-05-27 15:57:51.040] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: client: none [2015-05-27 15:57:51.040] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: client: none [2015-05-27 15:57:51.040] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: client: [2015-05-27 15:57:51.040] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: client: [2015-05-27 15:57:51.040] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: server->client aes128-ctr hmac-sha1 none [2015-05-27 15:57:51.040] boot - 2774 INFO [task-scheduler-1] --- jsch: kex: client->server aes128-ctr hmac-sha1 none [2015-05-27 15:57:51.044] boot - 2774 INFO [task-scheduler-1] --- jsch: SSH_MSG_KEXDH_INIT sent [2015-05-27 15:57:51.044] boot - 2774 INFO [task-scheduler-1] --- jsch: expecting SSH_MSG_KEXDH_REPLY [2015-05-27 15:57:51.049] boot - 2774 INFO [task-scheduler-1] --- jsch: ssh_rsa_verify: signature true [2015-05-27 15:57:51.049] boot - 2774 INFO [task-scheduler-1] --- jsch: Host 'XX.XXX.XX.X' is known and mathces the RSA host key [2015-05-27 15:57:51.049] boot - 2774 INFO [task-scheduler-1] --- jsch: SSH_MSG_NEWKEYS sent [2015-05-27 15:57:51.049] boot - 2774 INFO [task-scheduler-1] --- jsch: SSH_MSG_NEWKEYS received [2015-05-27 15:57:51.050] boot - 2774 INFO [task-scheduler-1] --- jsch: SSH_MSG_SERVICE_REQUEST sent [2015-05-27 15:57:51.050] boot - 2774 INFO [task-scheduler-1] --- jsch: SSH_MSG_SERVICE_ACCEPT received [2015-05-27 15:57:51.052] boot - 2774 INFO [task-scheduler-1] --- jsch: Authentications that can continue: gssapi-with-mic,publickey,keyboard-interactive,password [2015-05-27 15:57:51.052] boot 
- 2774 INFO [task-scheduler-1] --- jsch: Next authentication method: gssapi-with-mic [2015-05-27 15:57:51.054] boot - 2774 INFO [task-scheduler-1] --- jsch: Authentications that can continue: publickey,keyboard-interactive,password [2015-05-27 15:57:51.054] boot - 2774 INFO [task-scheduler-1] --- jsch: Next authentication method: publickey [2015-05-27 15:57:51.086] boot - 2774 INFO [task-scheduler-1] --- jsch: Authentication succeeded (publickey). [2015-05-27 15:57:51.113] boot - 2774 INFO [task-scheduler-1] --- jsch: Disconnecting from 10.100.103.5 port 22 [2015-05-27 15:57:51.113] boot - 2774 INFO [Connect thread XX.XXX.XXX.X session] --- jsch: Caught an exception, leaving main loop due to Socket closed
2
744
XD-3133
06/01/2015 17:24:41
Update YARN deployment classpath settings for HDP 2.2 and PHD 3.0
Need to update classpath settings for PHD 3.0 and HDP 2.2
1
745
XD-3136
06/02/2015 14:59:53
Example hashtag-count MR job fails when running XD on YARN with PHD 3.0
Running XD on YARN on PHD 3.0 Ambari install. Uploading and submitting a custom job fails with the following: {code} 2015-06-02 16:54:15,580 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1433273561345_0009_m_000000_0: Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.springframework.xd.examples.hadoop.HashtagCount$TokenizerMapper not found at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2076) at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:186) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:742) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: java.lang.ClassNotFoundException: Class org.springframework.xd.examples.hadoop.HashtagCount$TokenizerMapper not found at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1982) at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2074) ... 8 more {code} Same example jar works fine when submitted from XD cluster.
5
746
XD-3147
06/04/2015 14:42:44
Composing Jobs via the DSL
h2. Narrative As a developer, I want to be able to construct jobs using a DSL similar to the current syntax for streams. h2. Back story Streams currently provide a DSL for assembling modules into flows (streams) that consist of a source, n processors, and a sink. While constructing the steps of jobs themselves would be difficult in this manner, creating flows of jobs (essentially a job that consists only of job steps) would be very useful. It would allow a developer to create something like the following: {code} filejdbc | mycustomjob | jdbchdfs {code} This approach also allows the existing packaging/module registry/etc. to work out of the box. This gets us closer to what Oozie provides out of the box without the need to create custom jobs to do the orchestration.
8
747
XD-3150
06/05/2015 06:59:25
the 'filepollhdfs' job fails on second submission
Definitions: >job create pollHdfs --definition "filepollhdfs --names=name,age" --deploy true >stream create csvStream --definition "file --mode=ref --dir=/Users/trisberg/Test/files --pattern=*.csv > queue:job:pollHdfs" --deploy Here is the exception: {code} org.springframework.data.hadoop.store.StoreException: Error while flushing stream; nested exception is java.nio.channels.ClosedChannelException at org.springframework.xd.batch.item.hadoop.HdfsTextItemWriter.update(HdfsTextItemWriter.java:135) at org.springframework.batch.item.support.CompositeItemStream.update(CompositeItemStream.java:74) at org.springframework.batch.core.step.tasklet.TaskletStep.doExecute(TaskletStep.java:250) at org.springframework.batch.core.step.AbstractStep.execute(AbstractStep.java:198) at org.springframework.batch.core.job.SimpleStepHandler.handleStep(SimpleStepHandler.java:148) at org.springframework.batch.core.job.flow.JobFlowExecutor.executeStep(JobFlowExecutor.java:64) at org.springframework.batch.core.job.flow.support.state.StepState.handle(StepState.java:67) at org.springframework.batch.core.job.flow.support.SimpleFlow.resume(SimpleFlow.java:165) at org.springframework.batch.core.job.flow.support.SimpleFlow.start(SimpleFlow.java:144) at org.springframework.batch.core.job.flow.FlowJob.doExecute(FlowJob.java:134) at org.springframework.batch.core.job.AbstractJob.execute(AbstractJob.java:304) at org.springframework.batch.core.launch.support.SimpleJobLauncher$1.run(SimpleJobLauncher.java:135) at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50) at org.springframework.batch.core.launch.support.SimpleJobLauncher.run(SimpleJobLauncher.java:128) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) at org.springframework.batch.core.configuration.annotation.SimpleBatchConfiguration$PassthruAdvice.invoke(SimpleBatchConfiguration.java:127) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207) at com.sun.proxy.$Proxy54.run(Unknown Source) at org.springframework.batch.integration.launch.JobLaunching {code}
3
748
XD-3161
06/08/2015 14:40:09
Add CI Acceptance Test for 1.2.x
Need acceptance tests to run on the 1.2.x branch. Needs to be set up as a child of the Publish 1.2.x
3
749
XD-3164
06/08/2015 20:04:52
Kafka bus defaults configurable at producer/consumer level
As a developer, I want to be able to override Kafka bus defaults for module consumers and producers, so that I can finely tune performance and behaviour. Such properties should include: - autoCommitEnabled, queueSize, maxWait, fetchSize for consumers - batchSize, batchTimeout for producers. See the sketch after this entry.
3
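A hedged sketch of what per-module overrides might look like as deployment properties, mirroring the existing `module.[moduleName].producer.*` convention seen elsewhere in this dataset. The property names are the ones listed in the request; the exact key prefixes, module names, stream name, and values are assumptions, not a confirmed API.
{code}
xd:>stream deploy mystream --properties "module.transform.consumer.fetchSize=1048576,module.transform.consumer.maxWait=200,module.http.producer.batchSize=16384,module.http.producer.batchTimeout=50"
{code}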
750
XD-3176
06/10/2015 12:59:23
Using HDFS for custom module home doesn't work with Kerberized Hadoop cluster
I tried setting the xd.customModule.home property to point to a Kerberized Hadoop cluster with all usual security config settings provided. It failed with the following exception: {code} org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'moduleRegistry' defined in class path resource [META-INF/spring-xd/internal/repositories.xml]: Cannot create inner bean 'org.springframework.xd.dirt.module.CustomModuleRegistryFactoryBean#19f459aa' of type [org.springframework.xd.dirt.module.CustomModuleRegistryFactoryBean] while setting constructor argument with key [1]; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.xd.dirt.module.CustomModuleRegistryFactoryBean#19f459aa' defined in class path resource [META-INF/spring-xd/internal/repositories.xml]: Invocation of init method failed; nested exception is org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS] at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:313) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:122) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveManagedList(BeanDefinitionValueResolver.java:382) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:157) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.resolveConstructorArguments(ConstructorResolver.java:648) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:140) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1139) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1042) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:504) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:476) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:303) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:299) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at 
org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:755) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:757) ~[spring-context-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:480) ~[spring-context-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:686) [spring-boot-1.2.3.RELEASE.jar:1.2.3.RELEASE] at org.springframework.boot.SpringApplication.run(SpringApplication.java:320) [spring-boot-1.2.3.RELEASE.jar:1.2.3.RELEASE] at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:139) [spring-boot-1.2.3.RELEASE.jar:1.2.3.RELEASE] at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:129) [spring-boot-1.2.3.RELEASE.jar:1.2.3.RELEASE] at org.springframework.xd.dirt.server.admin.AdminServerApplication.run(AdminServerApplication.java:95) [spring-xd-dirt-1.2.0.BUILD-SNAPSHOT.jar:1.2.0.BUILD-SNAPSHOT] at org.springframework.xd.dirt.server.admin.AdminServerApplication.main(AdminServerApplication.java:79) [spring-xd-dirt-1.2.0.BUILD-SNAPSHOT.jar:1.2.0.BUILD-SNAPSHOT] Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.xd.dirt.module.CustomModuleRegistryFactoryBean#19f459aa' defined in class path resource [META-INF/spring-xd/internal/repositories.xml]: Invocation of init method failed; nested exception is org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1574) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:539) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:476) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:299) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] ... 22 common frames omitted Caused by: org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. 
Available:[TOKEN, KERBEROS] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.7.0_67] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) ~[na:1.7.0_67] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.7.0_67] at java.lang.reflect.Constructor.newInstance(Constructor.java:526) ~[na:1.7.0_67] at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) ~[hadoop-common-2.6.0.jar:na] at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) ~[hadoop-common-2.6.0.jar:na] at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2755) ~[hadoop-hdfs-2.6.0.jar:na] at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724) ~[hadoop-hdfs-2.6.0.jar:na] at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870) ~[hadoop-hdfs-2.6.0.jar:na] at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866) ~[hadoop-hdfs-2.6.0.jar:na] at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.6.0.jar:na] at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866) ~[hadoop-hdfs-2.6.0.jar:na] at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859) ~[hadoop-hdfs-2.6.0.jar:na] at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1817) ~[hadoop-common-2.6.0.jar:na] at org.springframework.xd.dirt.module.ExtendedResource$HdfsExtendedResource.mkdirs(ExtendedResource.java:127) ~[spring-xd-dirt-1.2.0.BUILD-SNAPSHOT.jar:1.2.0.BUILD-SNAPSHOT] at org.springframework.xd.dirt.module.WritableResourceModuleRegistry.afterPropertiesSet(WritableResourceModuleRegistry.java:123) ~[spring-xd-dirt-1.2.0.BUILD-SNAPSHOT.jar:1.2.0.BUILD-SNAPSHOT] at org.springframework.xd.dirt.module.CustomModuleRegistryFactoryBean.afterPropertiesSet(CustomModuleRegistryFactoryBean.java:79) ~[spring-xd-dirt-1.2.0.BUILD-SNAPSHOT.jar:1.2.0.BUILD-SNAPSHOT] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1633) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1570) ~[spring-beans-4.1.6.RELEASE.jar:4.1.6.RELEASE] ... 25 common frames omitted Caused by: org.apache.hadoop.ipc.RemoteException: SIMPLE authentication is not enabled. 
Available:[TOKEN, KERBEROS] at org.apache.hadoop.ipc.Client.call(Client.java:1468) ~[hadoop-common-2.6.0.jar:na] at org.apache.hadoop.ipc.Client.call(Client.java:1399) ~[hadoop-common-2.6.0.jar:na] at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) ~[hadoop-common-2.6.0.jar:na] at com.sun.proxy.$Proxy79.mkdirs(Unknown Source) ~[na:na] at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539) ~[hadoop-hdfs-2.6.0.jar:na] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_67] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_67] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_67] at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_67] at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.6.0.jar:na] at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.6.0.jar:na] at com.sun.proxy.$Proxy80.mkdirs(Unknown Source) ~[na:na] at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753) ~[hadoop-hdfs-2.6.0.jar:na] ... 37 common frames omitted 2015-06-10T14:49:20-0400 1.2.0.SNAP ERROR main boot.SpringApplication - Application startup failed {code}
3
751
XD-3184
06/16/2015 06:59:32
Update spring-xd-yarn servers.yml with settings for HDP 2.2.6.0
We need to add the settings needed to run XD on YARN when using Hortonworks HDP 2.2.6.0, which is the version you now get when installing with Ambari.
1
752
XD-3188
06/18/2015 09:00:46
FileDeletionListener resolves resources once
In the {{filejdbc}} job, there is an option to delete the imported files. This functionality is implemented by a listener, the {{FileDeletionStepExecutionListener}}. When you run the job the first time with {{--deleteFiles=true}}, everything works as expected. The second time you run the job, the files are not deleted. I believe the issue is that since the {{FileDeletionStepExecutionListener}} is a singleton, the resources are resolved only once (the first time the job runs), so it works the first time, but if the job is run again later and new files match the expression, they are not picked up. I believe the fix is to make the {{FileDeletionStepExecutionListener}} used in this job step-scoped.
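A minimal sketch of that fix, assuming a Java-config style registration and a usable default constructor for the listener (the real module wires this in its own configuration, so the config class name here is illustrative only):
{code:java}
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// (import for FileDeletionStepExecutionListener omitted - it is the existing XD listener class)
@Configuration
public class FileJdbcListenerConfig {

	// @StepScope means a fresh listener is created for every step execution,
	// so the file resources are re-resolved on each run instead of once when
	// the singleton is created.
	@Bean
	@StepScope
	public FileDeletionStepExecutionListener fileDeletionStepExecutionListener() {
		return new FileDeletionStepExecutionListener();
	}
}
{code}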
1
753
XD-3189
06/18/2015 13:42:27
Testers need ability to wait for a file to be created in XD directory
Users need the ability to wait a user-specified time in milliseconds for a file to be created in the XD directory. If the file is not created in the allotted time, return false; otherwise return true. Also check whether a file exists in the XD directory.
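A minimal sketch of the requested helper, assuming plain java.nio polling (the class and method names are illustrative, not an existing test-framework API):
{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public final class XdFileWait {

	// Polls the XD directory until the named file appears or the timeout elapses.
	public static boolean waitForFile(String xdDir, String fileName, long timeoutMillis)
			throws InterruptedException {
		Path file = Paths.get(xdDir, fileName);
		long deadline = System.currentTimeMillis() + timeoutMillis;
		while (System.currentTimeMillis() < deadline) {
			if (Files.exists(file)) {
				return true;
			}
			Thread.sleep(100); // poll interval
		}
		return Files.exists(file);
	}
}
{code}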
3
754
XD-3206
06/24/2015 02:28:42
An error message occurs about the shortDescription (header-enricher)
Here is an error I got using the header-enricher from spring-xd-modules: {code:java} Field error in object 'info' on field 'shortDescription': rejected value [A Header Enricher to set message headers in a stream]; codes [Pattern.info.shortDescription,Pattern.shortDescription,Pattern.java.lang.String,Pattern]; arguments [org.springframework.context.support.DefaultMessageSourceResolvable: codes [info.shortDescription,shortDescription]; arguments []; default message [shortDescription],[Ljavax.validation.constraints.Pattern$Flag;@11eeec65,^\p{IsUppercase}.*\.$]; default message [Short description must start with a capital letter and end with a dot] {code} And if I look at the config properties, indeed, the short description doesn't end with a dot. {code:java} info.shortDescription = A Header Enricher to set message headers in a stream {code}
1
755
XD-3208
06/24/2015 11:07:01
Change in file source breaks backward compatibility
With version 1.2.0 the {{ref}} option of the file source was removed and a new {{mode}} option was introduced; see XD-2850 and PR https://github.com/spring-projects/spring-xd/pull/1624. This means you have to destroy all streams using the {{ref}} option before you upgrade. It would have been much better to leave the {{ref}} option in the code and emit a deprecation warning if it is still used. That way an upgrade would be possible without interruption.
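A sketch of the suggested backward-compatible behaviour, assuming the old {{ref}} option were kept and simply translated onto the new {{mode}} option with a warning (the exact mapping is an assumption for illustration, not the module's actual code):
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class FileSourceOptionCompat {

	private static final Logger logger = LoggerFactory.getLogger(FileSourceOptionCompat.class);

	// If the deprecated ref option is still set, warn and map it onto the new mode option
	// so existing stream definitions keep deploying across the upgrade.
	static String resolveMode(Boolean ref, String mode) {
		if (ref != null) {
			logger.warn("The file source 'ref' option is deprecated; use 'mode' instead.");
			return ref ? "ref" : mode;
		}
		return mode;
	}
}
{code}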
1
756
XD-3214
07/02/2015 06:21:13
Enabling security breaks Jobs page in Admin UI
After enabling Spring XD security in {{XD_HOME/config/servers.yml}}: {code} spring: profiles: admin security: basic: enabled: true realm: SpringXD xd: security: authentication: file: enabled: true users: user: password, ROLE_VIEW admin: password, ROLE_VIEW, ROLE_CREATE, ROLE_ADMIN {code} after logging in as {{user}} with only the {{ROLE_VIEW}} privilege, the Jobs admin page is broken and does not display data. A 403 error code is returned for the following URLs: {code} http://localhost:9393/jobs/configurations.json?page=0&size=10 http://localhost:9393/jobs/definitions.json?page=0&size=10 {code} It looks like the {{/jobs/configurations.\*}} and {{/jobs/definitions.\*}} URLs are not covered in the security section of the applications.yml file.
2
757
XD-3216
07/02/2015 15:27:26
On specific shutdown scenarios, the stream resumes from the start of the bus topic
https://github.com/spring-projects/spring-xd/issues/1727
2
758
XD-3222
07/07/2015 09:09:22
Find a way to connect Sqoop job to Teradata
As a user I would like to connect the Sqoop batch job to Teradata for import jobs. I have tried the Teradata JDBC driver directly using: {code}job create tdTest --definition "sqoop --command=import --args='--table Frequent_Flyers --connect jdbc:teradata://tdexpress/DATABASE=transportation --driver com.teradata.jdbc.TeraDriver --username dbc --password dbc --target-dir=/xd/teradata --num-mappers 1'" {code} but that results in an NPE. The only way so far is to use the Hortonworks Connector for Teradata - http://public-repo-1.hortonworks.com/HDP/tools/2.2.4.2/hdp-connector-for-teradata-1.3.4.2.2.4.2-2-distro.tar.gz That one allows me to use the following: {code}job create tdTest --definition "sqoop --command=import --args='--table Frequent_Flyers --connect jdbc:teradata://tdexpress/DATABASE=transportation --connection-manager org.apache.sqoop.teradata.TeradataConnManager --username dbc --password dbc --target-dir=/xd/teradata --num-mappers 1'" {code}
3
759
XD-3234
07/08/2015 09:01:03
Remove XML REST Endpoints
The XML REST endpoints: * are not working correctly * interfere with security * are not used
3
760
XD-3240
07/09/2015 04:02:50
Add better support for using control file with gpfdist
Currently only database connection info can be read from a control file in YAML format. The rest of the missing options should be added to align with how the native format works.
2
761
XD-3241
07/09/2015 04:04:12
Add support for update in gpfdist sink
Currently we can only do plain inserts; we should follow the same logic as the native gpfdist sink and add upserts.
1
762
XD-3261
07/16/2015 09:35:36
Update Groovy to 2.4.4
There is a vulnerability in Groovy that is fixed in 2.4.4: CVE-2015-3253: Remote execution of untrusted code See: http://groovy-lang.org/security.html http://mail-archives.apache.org/mod_mbox/incubator-groovy-users/201507.mbox/%3CCADQzvmmYC7RbZnsQ8O63XN4HCMYh9RGRdMiuWupVt=u=pjH8+g@mail.gmail.com%3E
1
763
XD-3262
07/16/2015 13:22:10
UI: Add Pagination to Containers Page
Add Pagination to Containers Page
2
764
XD-3263
07/16/2015 13:27:39
Pagination for containers, it is limited to only 20
Hi, the customer has 48 containers, but the UI only shows 20 containers. We need pagination to browse all containers.
2
765
XD-3266
07/20/2015 02:47:57
No pagination for Jobs / Deployments page in Admin UI
After successfully deploying 12 jobs the Jobs / Deployments page still shows only 10 results. It looks like {{http://localhost:9393/jobs/configurations.json?page=0&size=10}} always returns {{content.page.totalPages}} of 1 regardless of the {{size}} parameter.
2
766
XD-3295
07/24/2015 12:26:50
Spike: Determine options for configuring shared module dependencies
h2. Narrative As a developer, I'd like to be able to configure common dependencies for the entire environment. An example could be that I use MySql for my databases. I want to be able to configure the MySql driver once and have all modules use it. h2. Back story Spring Batch uses a database to store job state (the job repository). This is a shared resource across all jobs (both custom developed and OOTB). In order to support OOTB jobs, we'll need to have a way for users to provide the db driver to each module. Ideally this would be possible without requiring that each of our OOTB modules be repackaged.
8
767
XD-3296
07/24/2015 12:36:56
Spike: Design a tasks repository
h2. Narrative As a developer, I'd like to be able to run a boot jar as a task on CF and obtain the result reliably. h2. Back story Currently Lattice/Diego's tasks implementation provides the ability to run things as short lived tasks. However, obtaining the result of said task can be an issue. There are two ways to do so: # Poll for the result. # Register a callback URL to be called once the task completes. Since a task is only available for a short time after its completion before it is deleted, polling can run the risk of missing the result completely. When you consider the fact that the provided GUIDs that identify tasks can be re-used polling becomes a precarious option. Registering a callback URL would be a better option, however there are no good guarantees that the message will be delivered. The service will try to execute the callback until it's successful or the task is cleaned up. "Successful" is defined in this case as anything other than a 502 or a 503 return code. In order for Spring XD to be able to support Diego tasks, a more durable option for maintaining the result of tasks will need to be developed. *Note:* The outcome of this spike may be feature requests for the CF/Diego team.
8
768
XD-3298
07/24/2015 13:49:33
Create basic TaskLauncher
h2. Narrative As Spring XD, I will be able to launch Spring Boot jar files as Diego Tasks. h2. Back story The {{TaskLauncher}} will be responsible for listening for launch requests, looking up the definition in the {{TaskDescriptorRepository}}, and launching it. The first implementation of this would be a Receptor based implementation. The scope here is to produce a _basic_ version of {{TaskLauncher}} and incrementally evolve into comprehensive launch capabilities. *See:* https://docs.google.com/document/d/1q964adRCA-kJke_i0GBToJHLXJJTV_7TaTpQT0ymsbc/edit
5
769
XD-3300
07/24/2015 14:35:06
Spike: Determine best way to centrally configure the job repository for batch jobs.
h2. Narrative As a developer, I need to be able to run batch jobs that use the centrally configured job repository to store job state. h2. Back story The XD containers each used a {{BatchConfigurer}} implementation ({{RuntimeBatchConfigurer}}) to add a consistent configuration for the job repository. This functionality needs to be replicated in some way in just a regular Spring Boot application.
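A minimal sketch of replicating that behaviour in a plain Spring Boot application, assuming the shared job-repository DataSource is exposed as a bean (Spring Batch's DefaultBatchConfigurer is used here for illustration; the actual approach is what this spike should decide):
{code:java}
import javax.sql.DataSource;

import org.springframework.batch.core.configuration.annotation.DefaultBatchConfigurer;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SharedJobRepositoryConfig extends DefaultBatchConfigurer {

	// Point the job repository at the centrally configured database so every
	// batch job records its state in the same place the XD containers did.
	public SharedJobRepositoryConfig(DataSource jobRepositoryDataSource) {
		super(jobRepositoryDataSource);
	}
}
{code}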
5
770
XD-3306
07/30/2015 07:04:37
[Flo] Some streams can't be created using FLO
Trying to create streams from the flo UI may end up in weird exceptions, whereas doing the same thing (copying/pasting the stream) directly from XD shell works smoothly. This simple stream is an example, but this situation happens in multiple scenarios (for example using the same module several times with labels). {code:java} trigger --cron='0 05 14 ? * MON-FRI' | mail --from='''xd@mycompany.com''' --to='''a-wise-guy@mycompany.com''' --bcc='''me@mycompany.com''' {code}
0
771
XD-3307
07/30/2015 11:29:01
Add support for offline module resolution
h2. Narrative As a developer, I need to be able to test modules without pushing them to a remote maven repository. I should be able to do {{$ mvn install}} in my module project locally (which will install the artifact into my local repository) and have it resolvable by spring-cloud-streams.
5
772
XD-3308
07/30/2015 13:00:44
With Security - Unable to upload module
Once security is enabled, one cannot upload modules using the shell any longer.
2
773
XD-3335
08/04/2015 13:55:47
Kafka Source must set autoStartup=false on KafkaMessageDrivenChannelAdapter
If the value is not set, the source may start before being bound to the bus, throwing a "Dispatcher has no subscribers" error
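A minimal sketch of the fix, assuming the adapter instance is created elsewhere in the module configuration (only the autoStartup flag matters here; the KafkaMessageDrivenChannelAdapter is an AbstractEndpoint):
{code:java}
import org.springframework.integration.endpoint.AbstractEndpoint;

final class KafkaSourceStartupFix {

	// With autoStartup left at true the adapter can start consuming before the
	// module is bound to the bus, producing "Dispatcher has no subscribers".
	static void deferStartup(AbstractEndpoint kafkaAdapter) {
		kafkaAdapter.setAutoStartup(false);
	}
}
{code}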
3
774
XD-3358
08/06/2015 12:27:36
Admin UI deploys job with wrong module count
When deploying a job through admin UI with a count of 0 the module is actually deployed with count 1. More info here: [http://stackoverflow.com/questions/31858631/how-to-define-named-channel-consumer-module-deployment-properties]
2
775
XD-3373
08/07/2015 17:13:19
First deploy/launch of Pig job that includes yarn-site.xml file fails
Deploying and launching a Pig job that contains a yarn-site.xml config file fails on the first deploy after XD starts up. This happens consistently. The error is: Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster which indicates that the yarn-site.xml file never made it to the classpath. Un-deploying and re-deploying the job seems to fix the problem.
5
776
XD-3377
08/07/2015 21:10:47
Refactor Task parsing
Currently the DSL parsing for tasks is a copy and paste of what it is for streams (minus the ability to parse multiple modules). This results in a lot of duplication. This should be refactored to remove duplication and remove explicit references to either streams or tasks in common code.
8
777
XD-3385
08/10/2015 09:44:26
Can't build and run singlenode spring-cloud-data-rest app on Ubuntu
Building and then running spring-cloud-data-rest app on Ubuntu fails when trying to create the first stream. The configuration ends up with a CloudFoundryConfig instead of LocalConfig for the moduleDeployer. Env: Ubuntu 15.04 java version "1.8.0_51" Java(TM) SE Runtime Environment (build 1.8.0_51-b16) Java HotSpot(TM) 64-Bit Server VM (build 25.51-b03, mixed mode) Error: {code} 2015-08-10 11:43:47.199 ERROR 11062 --- [nio-9393-exec-1] o.s.c.d.r.c.RestControllerAdvice : Caught exception while handling a request java.lang.UnsupportedOperationException: null at org.springframework.cloud.data.module.deployer.cloudfoundry.CloudFoundryModuleDeployer.deploy(CloudFoundryModuleDeployer.java:30) ~[spring-cloud-data-module-deployer-cloudfoundry-1.0.0.BUILD-SNAPSHOT.jar!/:1.0.0.BUILD-SNAPSHOT] at org.springframework.cloud.data.rest.controller.StreamController.deployStream(StreamController.java:213) ~[spring-cloud-data-rest-1.0.0.BUILD-SNAPSHOT.jar!/:1.0.0.BUILD-SNAPSHOT] at org.springframework.cloud.data.rest.controller.StreamController.save(StreamController.java:140) ~[spring-cloud-data-rest-1.0.0.BUILD-SNAPSHOT.jar!/:1.0.0.BUILD-SNAPSHOT] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_51] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_51] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_51] at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_51] at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:221) ~[spring-web-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:137) ~[spring-web-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:111) ~[spring-webmvc-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:806) ~[spring-webmvc-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:729) ~[spring-webmvc-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85) ~[spring-webmvc-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:959) ~[spring-webmvc-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:893) ~[spring-webmvc-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970) [spring-webmvc-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:872) [spring-webmvc-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at javax.servlet.http.HttpServlet.service(HttpServlet.java:648) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846) [spring-webmvc-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:291) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) [tomcat-embed-websocket-8.0.23.jar!/:8.0.23] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.springframework.boot.actuate.autoconfigure.EndpointWebMvcAutoConfiguration$ApplicationContextHeaderFilter.doFilterInternal(EndpointWebMvcAutoConfiguration.java:235) [spring-boot-actuator-1.3.0.BUILD-SNAPSHOT.jar!/:1.3.0.BUILD-SNAPSHOT] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.springframework.boot.actuate.trace.WebRequestTraceFilter.doFilterInternal(WebRequestTraceFilter.java:102) [spring-boot-actuator-1.3.0.BUILD-SNAPSHOT.jar!/:1.3.0.BUILD-SNAPSHOT] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:87) [spring-web-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:77) [spring-web-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:85) [spring-web-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.springframework.boot.actuate.autoconfigure.MetricsFilter.doFilterInternal(MetricsFilter.java:69) [spring-boot-actuator-1.3.0.BUILD-SNAPSHOT.jar!/:1.3.0.BUILD-SNAPSHOT] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.2.0.RELEASE.jar!/:4.2.0.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:219) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:142) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:518) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1091) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:668) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1521) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1478) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_51] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_51] at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-8.0.23.jar!/:8.0.23] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51] 2015-08-10 11:43:47.284 WARN 11062 --- [nio-9393-exec-1] .m.m.a.ExceptionHandlerExceptionResolver : Handler execution resulted in exception: null {code}
3
778
XD-3387
08/11/2015 15:20:45
Hide the passwords in custom modules from being displayed.
Hi, passwords are visible when using custom modules. Attached are example custom module code and an xd-shell script to reproduce the problem using a stream module of type processor on Spring XD 1.2.1.RELEASE. Compile with Maven (mvn clean install) and run the xd-shell script (xd-shell --cmdfile ./runme.cmd).
2
779
XD-3423
08/21/2015 09:02:37
Update Shell to support tasks
h2. Narrative As a user, I need to be able to deploy a task (boot jar) via the CLI. h2. Back story Since the concept of jobs as an explicit primitive within Spring XD is going away in spring-cloud-data, the shell needs to be updated to reflect that.
5
780
XD-3456
09/02/2015 17:02:56
Create infrastructure for Spring cloud task modules
Create parent pom file for the build. Create .settings file. Migrate Timestamp task from SCSM to SCTM.
3
781
XD-3469
09/10/2015 21:42:50
The new SCSM twitterstream module should produce same json as old XD source
The new SCSM twitterstream module uses a different format than XD 1.x source module. It should match what Twitter uses so existing processors etc. will continue to work.
3
782
XD-3503
09/24/2015 14:40:46
Document the setting of the CORS allow_origin property
We do set a default value in: xd/lib/spring-xd-dirt-1.2.1.RELEASE.jar/application.yml {code} ... xd: data: home: file:${XD_HOME}/data config: home: file:${XD_HOME}/config module: home: file:${XD_HOME}/modules customModule: home: file:${XD_HOME}/custom-modules ui: home: file:${XD_HOME}/spring-xd-ui/dist/ allow_origin: http://localhost:9889 ... {code} We need to document how this property can be used to allow for external services to use the XD Rest API and how you can customize it using **servers.yml**
2
783
XD-3509
09/28/2015 16:29:34
CORS issue when trying to use HTTP in singlenode
When I'm trying to send a json object to spring-xd I get the following error even though I opened up requests to allow all. XMLHttpRequest cannot load http://localhost:9000/. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access. Config: spring: profiles: singlenode xd: transport: local ui: allow_origin: "*"
3
784
XD-3566
10/01/2015 14:39:32
TwitterStream test must use unique name to prevent test collision
XD Developer does not want the twitter stream acceptance tests to interfere with other tests.
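A minimal sketch of one way to do that, assuming the test generates a unique stream name per run (names here are illustrative):
{code:java}
import java.util.UUID;

final class UniqueStreamNames {

	// Suffix the stream name with a random id so concurrent acceptance test runs
	// never collide on the shared twitterstream source.
	static String twitterStreamName() {
		return "twitterstream-" + UUID.randomUUID();
	}
}
{code}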
3
785
XD-3567
10/01/2015 22:44:39
Fix classpath and servlet container issues
Several issues with the 1.3.0.M1 staged version: - we now use Tomcat instead of Jetty, which prevents xd-admin from starting on YARN - we now have Guava 18.0 on the classpath instead of 16.0.1 - xd-yarn push doesn't work; the hadoop client for 2.7.1 needs the Servlet API - updating Hadoop to 2.7.1 instead of 2.6.0 -- this causes Curator to also update to 2.7.1, which throws an exception on startup
3
786
XD-3568
10/02/2015 04:02:25
AdminServer fails on HDP 2.3
Submitting XD on YARN for HDP 2.3 fails due to some Solr issue in Boot - https://github.com/spring-projects/spring-boot/issues/2795 The xd-admin sysout is: {code} Started : AdminServerApplication Documentation: https://github.com/spring-projects/spring-xd/wiki 02:51:36,624 ERROR main boot.SpringApplication - Application startup failed java.lang.IllegalStateException: Error processing condition on org.springframework.boot.autoconfigure.solr.SolrAutoConfiguration.solrServer at org.springframework.boot.autoconfigure.condition.SpringBootCondition.matches(SpringBootCondition.java:58) at org.springframework.context.annotation.ConditionEvaluator.shouldSkip(ConditionEvaluator.java:102) at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsForBeanMethod(ConfigurationClassBeanDefinitionReader.java:178) at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsForConfigurationClass(ConfigurationClassBeanDefinitionReader.java:140) at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitions(ConfigurationClassBeanDefinitionReader.java:116) at org.springframework.context.annotation.ConfigurationClassPostProcessor.processConfigBeanDefinitions(ConfigurationClassPostProcessor.java:333) at org.springframework.context.annotation.ConfigurationClassPostProcessor.postProcessBeanDefinitionRegistry(ConfigurationClassPostProcessor.java:243) at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanDefinitionRegistryPostProcessors(PostProcessorRegistrationDelegate.java:273) at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:98) at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:673) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:519) at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:686) at org.springframework.boot.SpringApplication.run(SpringApplication.java:320) at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:139) at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:129) at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:129) at org.springframework.xd.dirt.server.admin.AdminServerApplication.run(AdminServerApplication.java:95) at org.springframework.xd.dirt.server.admin.AdminServerApplication.main(AdminServerApplication.java:79) Caused by: java.lang.IllegalArgumentException: @ConditionalOnMissingBean annotations must specify at least one bean (type, name or annotation) at org.springframework.util.Assert.isTrue(Assert.java:68) at org.springframework.boot.autoconfigure.condition.OnBeanCondition$BeanSearchSpec.<init>(OnBeanCondition.java:223) at org.springframework.boot.autoconfigure.condition.OnBeanCondition.getMatchOutcome(OnBeanCondition.java:92) at org.springframework.boot.autoconfigure.condition.SpringBootCondition.matches(SpringBootCondition.java:45) ... 
17 more 02:51:36,628 WARN main annotation.AnnotationConfigApplicationContext - Exception thrown from LifecycleProcessor on context close java.lang.IllegalStateException: LifecycleProcessor not initialized - call 'refresh' before invoking lifecycle methods via the context: org.springframework.context.annotation.AnnotationConfigApplicationContext@1cf1df22: startup date [Fri Oct 02 02:51:31 UTC 2015]; root of context hierarchy at org.springframework.context.support.AbstractApplicationContext.getLifecycleProcessor(AbstractApplicationContext.java:414) at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:966) at org.springframework.context.support.AbstractApplicationContext.close(AbstractApplicationContext.java:925) at org.springframework.boot.SpringApplication.run(SpringApplication.java:342) at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:139) at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:129) at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:129) at org.springframework.xd.dirt.server.admin.AdminServerApplication.run(AdminServerApplication.java:95) at org.springframework.xd.dirt.server.admin.AdminServerApplication.main(AdminServerApplication.java:79) 02:51:36,642 ERROR main admin.AdminServerApplication - Error processing condition on org.springframework.boot.autoconfigure.solr.SolrAutoConfiguration.solrServer {code}
3
787
XD-3569
10/02/2015 16:05:53
ResourceModuleRegistry doesn't support HA namenode for hdfs custom module location
As an XD module developer I would like to use HDFS for my custom module location even when my namenode is configured for HA. We had an issue filed in the `spring-xd-ambari` project: "It seems like custom module doesn't pickup namenode HA? and still use NameNodeProxies.createNonHAProxy?" see: https://github.com/spring-projects/spring-xd-ambari/issues/14
3
788
XD-3589
10/05/2015 18:40:45
Create Composed Job Module
h2. Narrative As an XD developer, I need to be able to create a composed job module as XML from the DSL and store it in the module file repository, while the user uses the composed job as if it were a normal job, including seeing only the DSL. In the background the JobFactory will deploy the composed job module. * When the user destroys the job, the module will be deleted from the file module repository. * When the user creates the job, a module will be created in the file module repository. h2. Back story For the composed job story, we need to create a "real" job module expressed in XML, so that we can take advantage of the job execution tasklet in XD-3556 and each job can be executed as a step in the composed job.
8
789
XD-3610
10/13/2015 02:04:44
Kafka source and sink headers shouldn't interfere with bus functionality
The Kafka sink should not make use of the message headers sent by the Kafka receivers in the Kafka bus. Similarly, the headers received from the Kafka source should not be propagated when sending to the Kafka bus. https://github.com/spring-projects/spring-xd/issues/1804
1
790
XD-3613
10/13/2015 15:56:06
Multiple module instances consuming from taps or topics get duplicate messages on redis Message Bus
If I deploy more than one instance of a module (eg using module.name.count > 1 or module.name.count =0) that consumes from a tap or topic then I get duplicate messages if I’m using Redis as the message bus. It looks like this is the same issue as XD-3100 but the fix for that only fixed Rabbit as the message bus. This is easy to reproduce on a 2 container cluster using a Redis Message Bus: Create and deploy streams as follows: {code} stream create --definition "http | log" --name httpLog stream deploy --name httpLog --properties "module.*.count=0" stream create --definition "tap:stream:httpLog > transform --expression='payload.toString() + \" TAPPED\"' | log" --name httpLogTap stream deploy --name httpLogTap --properties "module.*.count=0" {code} On container 1 send a message: {code} curl --data "test message 001" http://localhost:9000/httpLog {code} Container 1 logs are then: {code} 2015-10-13 14:16:28.853 INFO 22774 --- [ol-28-thread-18] xd.sink.httpLog : test message 001 2015-10-13 14:16:28.855 INFO 22774 --- [enerContainer-4] xd.sink.httpLogTap : test message 001 TAPPED {code} and container 2: {code} 2015-10-13 14:16:28.859 INFO 22719 --- [enerContainer-4] xd.sink.httpLogTap : test message 001 TAPPED {code} Ie the tapped message is duplicated (picked up by both tap module instances) Similarly for topics create and deploy these streams: {code} stream create --definition "http > topic:mytopic" --name httpTopic stream deploy --name httpTopic --properties "module.*.count=0" stream create --definition "topic:mytopic > transform --expression='payload.toString() + \" TOPIC CONSUMER 1\"' | log" --name topicConsumer1 stream deploy --name topicConsumer1 --properties "module.*.count=0" stream create --definition "topic:mytopic > transform --expression='payload.toString() + \" TOPIC CONSUMER 2\"' | log" --name topicConsumer2 stream deploy --name topicConsumer2 --properties "module.*.count=0" {code} On container 1 send a message: {code} curl --data "test message 002" http://localhost:9000/httpLog {code} Container 1 logs are then: {code} 2015-10-13 14:34:23.168 INFO 22774 --- [enerContainer-2] xd.sink.topicConsumer2 : test message 002 TOPIC CONSUMER 2 2015-10-13 14:34:23.172 INFO 22774 --- [enerContainer-2] xd.sink.topicConsumer1 : test message 002 TOPIC CONSUMER 1 {code} and container 2: {code} 2015-10-13 14:34:23.173 INFO 22719 --- [enerContainer-2] xd.sink.topicConsumer2 : test message 002 TOPIC CONSUMER 2 2015-10-13 14:34:23.177 INFO 22719 --- [enerContainer-2] xd.sink.topicConsumer1 : test message 002 TOPIC CONSUMER 1 {code} Ie the topic message is picked up by each instance of the module in each stream. In this case I would expect each stream to pick up the message once ie I would get a single output for each stream test message 002 TOPIC CONSUMER 2 once (on either container) test message 002 TOPIC CONSUMER 1 once (on either container)
5
791
XD-3632
10/19/2015 23:29:25
Reactor message handlers log completions at error level
(copied from https://github.com/spring-projects/spring-xd/issues/1810): While testing a reactive processor that I was building, I saw the following in my test environment's logs: {noformat} 2015-10-19 18:33:22.594 +1100 INFO/MetadataDrivenFlatFileSplitter:114 - Start splitting file=/tmp/junit8525530428026993137/junit6584105040601814728.tmp 2015-10-19 18:33:22.612 +1100 INFO/MetadataDrivenFlatFileSplitter:86 - Done splitting file=/tmp/junit8525530428026993137/junit6584105040601814728.tmp 2015-10-19 18:33:23.833 +1100 ERROR/BroadcasterMessageHandler:153 - Consumer completed for [{push}] 2015-10-19 18:33:23.834 +1100 ERROR/BroadcasterMessageHandler:153 - Consumer completed for [{push}] 2015-10-19 18:33:23.835 +1100 ERROR/BroadcasterMessageHandler:153 - Consumer completed for [{push}] 2015-10-19 18:33:23.839 +1100 ERROR/BroadcasterMessageHandler:153 - Consumer completed for [{push}] 2015-10-19 18:33:23.839 +1100 ERROR/BroadcasterMessageHandler:153 - Consumer completed for [{push}] 2015-10-19 18:33:23.840 +1100 ERROR/BroadcasterMessageHandler:153 - Consumer completed for [{push}] {noformat} Completions don't really seem like error events. Perhaps this could be changed to INFO? (will open a PR shortly)
0
792
XD-3645
10/27/2015 02:57:41
Tuple unable to serialize objects with nested arrays of objects
Serializing a tuple object with that have a nested array which contains objects (as a tuple) fails to serialize. The error is: {noformat} Caused by: com.fasterxml.jackson.databind.JsonMappingException: No serializer found for class org.springframework.xd.tuple.DefaultTupleConversionService and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS) ) (through reference chain: java.util.ArrayList[0]->org.springframework.xd.tuple.DefaultTuple["values"]->java.util.UnmodifiableRandomAccessList[0]->org.springframework.xd.tuple.DefaultTuple["conversionService"]) at com.fasterxml.jackson.databind.ser.impl.UnknownSerializer.failForEmpty(UnknownSerializer.java:59) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.impl.UnknownSerializer.serialize(UnknownSerializer.java:26) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:505) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:639) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:152) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.impl.IndexedListSerializer.serializeContents(IndexedListSerializer.java:100) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.impl.IndexedListSerializer.serializeContents(IndexedListSerializer.java:21) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.std.AsArraySerializerBase.serialize(AsArraySerializerBase.java:183) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:505) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:639) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:152) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.impl.IndexedListSerializer.serializeContents(IndexedListSerializer.java:100) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.impl.IndexedListSerializer.serializeContents(IndexedListSerializer.java:21) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.std.AsArraySerializerBase.serialize(AsArraySerializerBase.java:183) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:128) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:1902) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.core.base.GeneratorBase.writeObject(GeneratorBase.java:280) ~[jackson-core-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.node.POJONode.serialize(POJONode.java:111) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.std.SerializableSerializer.serialize(SerializableSerializer.java:44) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.std.SerializableSerializer.serialize(SerializableSerializer.java:29) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:128) ~[jackson-databind-2.4.5.jar:2.4.5] at 
com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:1902) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.core.base.GeneratorBase.writeObject(GeneratorBase.java:280) ~[jackson-core-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.node.POJONode.serialize(POJONode.java:111) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.node.ObjectNode.serialize(ObjectNode.java:264) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.std.SerializableSerializer.serialize(SerializableSerializer.java:44) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.std.SerializableSerializer.serialize(SerializableSerializer.java:29) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:128) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:1902) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.core.base.GeneratorBase.writeObject(GeneratorBase.java:280) ~[jackson-core-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.node.POJONode.serialize(POJONode.java:111) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.node.ObjectNode.serialize(ObjectNode.java:264) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.std.SerializableSerializer.serialize(SerializableSerializer.java:44) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.std.SerializableSerializer.serialize(SerializableSerializer.java:29) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:128) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:1902) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.core.base.GeneratorBase.writeObject(GeneratorBase.java:280) ~[jackson-core-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.node.POJONode.serialize(POJONode.java:111) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.node.ObjectNode.serialize(ObjectNode.java:264) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.std.SerializableSerializer.serialize(SerializableSerializer.java:44) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.std.SerializableSerializer.serialize(SerializableSerializer.java:29) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:128) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ObjectMapper._configAndWriteValue(ObjectMapper.java:2881) ~[jackson-databind-2.4.5.jar:2.4.5] at com.fasterxml.jackson.databind.ObjectMapper.writeValueAsString(ObjectMapper.java:2338) ~[jackson-databind-2.4.5.jar:2.4.5] at org.springframework.xd.tuple.TupleToJsonStringConverter.convert(TupleToJsonStringConverter.java:37) ~[spring-xd-tuple-1.3.0.M1.jar:1.3.0.M1] {noformat} when the input string (read from a Kafka topic in my case) looks something like: {noformat} { "body": [ { "dataType": "har", "har": { "log": { "browser": { "name": "Google Chrome", "version": "44.0.2403.155" }, "creator": { "name": "My extension", "version": "0.23.6" }, "pages": [ { "_requestTimings": { "blocked": -1, "connect": -1, "dns": -1, "receive": 11, "send": -1, "ssl": -1, "wait": 244 }, "_requestUrl": "https://google.com" }, { 
"_requestTimings": { "blocked": -1, "connect": -1, "dns": -1, "receive": 11, "send": -1, "ssl": -1, "wait": 244 }, "_requestUrl": "https://google.com" } ], "version": "1.2" } }, "testId": 1 } ], "bodyType": "models.MultiMessage", "headers": { "appInstance": "localhost/127.0.0.1:8080", "clientIp": "0:0:0:0:0:0:0:1", "host": "localhost:8080", "requestId": "27acf948-33ff-491c-8be7-1beb4b8c95d9", "requestMethod": "POST", "requestUrl": "http://localhost:8080/har", "timestamp": 1445914510549, "userPrincipal": "235" } } {noformat} If the inner array (the Pages array) is just an object, it works, when it is an array, it fails. The stream used: kafka --topic=agent_mixed --outputType=application/x-xd-tuple | splitter --expression=payload.body | log
2
793
XD-3652
10/28/2015 18:46:20
The shell processor module cannot be stopped while blocked in receive()
Both the lifecycle and send/receive methods are synchronized, so if the shell command processor is blocked reading from the script's input (e.g. when no proper terminator is sent by the script), the stop() method cannot acquire the object lock to stop the instance, and therefore the module cannot be stopped.
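A minimal sketch of the locking problem described above (class and method names are illustrative, not the actual module code):
{code:java}
import java.io.BufferedReader;
import java.io.IOException;

// Illustration only: both methods synchronize on the instance, so while receive()
// is blocked waiting for the script to produce a line, stop() can never acquire
// the lock and the module cannot be shut down.
final class ShellCommandProcessorSketch {

	private volatile boolean running = true;

	synchronized String receive(BufferedReader fromScript) throws IOException {
		return fromScript.readLine(); // blocks until the script writes a terminator
	}

	synchronized void stop() { // waits forever on the instance lock held by receive()
		running = false;
	}
}
{code}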
5
794
XD-3685
11/02/2015 20:54:40
Job Definitions page fails to display definitions if page
In this scenario we created 30 jobs that can be used for a composed job. if the composed job uses jobs in its composition that are not present on the first page of the of the result set the following exception is thrown. {noformat} 2015-11-02T14:47:17-0500 1.3.0.SNAP ERROR qtp1587928736-26 rest.RestControllerAdvice - Caught exception while handling a request java.lang.IllegalStateException: Not all instances were looked at: fff at org.springframework.xd.dirt.rest.XDController.enhanceWithDeployments(XDController.java:244) ~[spring-xd-dirt-1.3.0.BUILD-SNAPSHOT.jar:1.3.0.BUILD-SNAPSHOT] at org.springframework.xd.dirt.rest.XDController.listValues(XDController.java:209) ~[spring-xd-dirt-1.3.0.BUILD-SNAPSHOT.jar:1.3.0.BUILD-SNAPSHOT] at org.springframework.xd.dirt.rest.JobsController.list(JobsController.java:128) ~[spring-xd-dirt-1.3.0.BUILD-SNAPSHOT.jar:1.3.0.BUILD-SNAPSHOT] at sun.reflect.GeneratedMethodAccessor133.invoke(Unknown Source) ~[na:na] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_67] at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_67] at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:221) ~[spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:137) ~[spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:110) ~[spring-webmvc-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:806) ~[spring-webmvc-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:729) ~[spring-webmvc-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85) ~[spring-webmvc-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:959) ~[spring-webmvc-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:893) ~[spring-webmvc-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970) [spring-webmvc-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:861) [spring-webmvc-4.2.2.RELEASE.jar:4.2.2.RELEASE] at javax.servlet.http.HttpServlet.service(HttpServlet.java:735) [javax.servlet-3.0.0.v201112011016.jar:na] at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846) [spring-webmvc-4.2.2.RELEASE.jar:4.2.2.RELEASE] at javax.servlet.http.HttpServlet.service(HttpServlet.java:848) [javax.servlet-3.0.0.v201112011016.jar:na] at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684) [jetty-servlet-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1496) [jetty-servlet-8.1.14.v20131031.jar:8.1.14.v20131031] at org.springframework.boot.actuate.autoconfigure.EndpointWebMvcAutoConfiguration$ApplicationContextHeaderFilter.doFilterInternal(EndpointWebMvcAutoConfiguration.java:291) 
[spring-boot-actuator-1.2.3.RELEASE.jar:1.2.3.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467) [jetty-servlet-8.1.14.v20131031.jar:8.1.14.v20131031] at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:77) [spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467) [jetty-servlet-8.1.14.v20131031.jar:8.1.14.v20131031] at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:87) [spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467) [jetty-servlet-8.1.14.v20131031.jar:8.1.14.v20131031] at org.springframework.boot.actuate.trace.WebRequestTraceFilter.doFilterInternal(WebRequestTraceFilter.java:102) [spring-boot-actuator-1.2.3.RELEASE.jar:1.2.3.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467) [jetty-servlet-8.1.14.v20131031.jar:8.1.14.v20131031] at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:207) [spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:176) [spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467) [jetty-servlet-8.1.14.v20131031.jar:8.1.14.v20131031] at org.springframework.boot.actuate.autoconfigure.MetricFilterAutoConfiguration$MetricsFilter.doFilterInternal(MetricFilterAutoConfiguration.java:90) [spring-boot-actuator-1.2.3.RELEASE.jar:1.2.3.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467) [jetty-servlet-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499) [jetty-servlet-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) [jetty-server-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557) [jetty-security-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231) [jetty-server-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086) [jetty-server-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:428) [jetty-servlet-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) [jetty-server-8.1.14.v20131031.jar:8.1.14.v20131031] at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020) [jetty-server-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) [jetty-server-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116) [jetty-server-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.server.Server.handle(Server.java:370) [jetty-server-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494) [jetty-server-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:971) [jetty-server-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1033) [jetty-server-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644) [jetty-http-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235) [jetty-http-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82) [jetty-server-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:667) [jetty-io-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52) [jetty-io-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608) [jetty-util-8.1.14.v20131031.jar:8.1.14.v20131031] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) [jetty-util-8.1.14.v20131031.jar:8.1.14.v20131031] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67] {noformat}
3
795
XD-3687
11/04/2015 16:37:10
Update Docs to add config changes for Composed Jobs
Need to add the following instructions for configuring the Batch Repository to the Composed Job docs so that parallel jobs are supported:

1) Uncomment and change the following from:
```
spring:
  batch:
    # Configure other Spring Batch repository values. Most are typically not needed
    isolationLevel: ISOLATION_SERIALIZATION
```
to:
```
spring:
  batch:
    # Configure other Spring Batch repository values. Most are typically not needed
    isolationLevel: ISOLATION_READ_COMMITTED
```

2) Update the hsqldb datasource to:
```
spring:
  datasource:
    url: jdbc:hsqldb:hsql://${hsql.server.host:localhost}:${hsql.server.port:9101}/${hsql.server.dbname:xdjob};sql.enforce_strict_size=true;hsqldb.tx=mvcc
```
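For reference, a minimal sketch of the resulting servers.yml section once both changes above are applied (the key nesting is taken from the snippets above; the comments are added here for explanation only):
```
# servers.yml (sketch): combines the two changes described above
spring:
  batch:
    # READ_COMMITTED avoids serializing parallel composed-job steps on the batch repository
    isolationLevel: ISOLATION_READ_COMMITTED
  datasource:
    # hsqldb.tx=mvcc reduces lock contention when steps run in parallel
    url: jdbc:hsqldb:hsql://${hsql.server.host:localhost}:${hsql.server.port:9101}/${hsql.server.dbname:xdjob};sql.enforce_strict_size=true;hsqldb.tx=mvcc
```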
1
796
XD-3690
11/05/2015 21:23:45
Improve "Server Configuration - Database Configuration" section
Make it clearer which drivers need to be copied where. See - https://github.com/spring-projects/spring-xd/issues/1653
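As a purely illustrative sketch (the MySQL host, schema, and credentials below are placeholders, not taken from this issue), the section documents a servers.yml datasource block along these lines; the docs should state just as explicitly that the matching JDBC driver jar has to be made available on the XD server classpath:
```
# Illustrative servers.yml datasource block (placeholder values)
spring:
  datasource:
    url: jdbc:mysql://yourDbHost:3306/xdjob
    username: yourUsername
    password: yourPassword
    driverClassName: com.mysql.jdbc.Driver
```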
1
797
XD-3691
11/06/2015 02:14:48
Ensure Job definitions are escaped in UI
If a composed job uses a definition such as <aaa || bbb>, i.e. one that starts with "<" and ends with ">", the definition does not appear on the definitions page.
2
798
XD-3709
11/21/2015 16:59:25
Duplicate MBean Names With router Sink
For some reason, the Integration {{MBeanExporterHelper}} is not preventing the standard context {{MBeanExporter}} from exporting the {{AbstractMessageRouter}}. This should be suppressed (when an IMBE is present) because the router is annotated with {{@IntegrationManagedResource}}; the duplicate export causes an {{InstanceAlreadyExistsException}}. A workaround is described in the Stack Overflow answer: http://stackoverflow.com/questions/33838502/error-deploying-more-than-one-stream-with-a-router-1-3-0 This could be an SI issue, but investigation is needed. In any case, we should probably include the stream/job name in all MBeans for the stream (as is done for the integration exporter).
1
799
XD-3716
12/02/2015 23:15:01
Support Configuring the RabbitMessageBus MessagePropertiesConverter LongString Limit
http://stackoverflow.com/questions/34053997/passing-headerinformation-as-jsonobject-in-header-in-spring-xd
2
800
XD-3719
12/09/2015 06:21:39
Spring Flo issue with unexpected character
In Flo, when creating a stream, using an asterisk produces an error. See the attached image.
2
801
XD-3721
12/14/2015 18:40:55
XD Admin UI log out does not function properly
I am using XD 1.2.1.RELEASE. I have the following environment variables set: XD_CONFIG_NAME = mycompany and SPRING_PROFILE_ACTIVE = prod, admin. My XD configuration file (mycompany-prod.yml) contains the following security configuration:
```
# Config to enable security on administration endpoints (consider adding ssl)
spring:
  profiles: prod
security:
  basic:
    enabled: true # false to disable security settings (default)
    realm: SpringXD
xd:
  security:
    authentication:
      file:
        enabled: true
        users:
          xdadmin: pwd, ROLE_ADMIN,ROLE_VIEW,ROLE_CREATE
```
I get a login screen and login works fine. When I log out, I still see all the tabs and the contents in all the tabs. See the attached screenshot.
1
802
XD-3725
12/17/2015 18:11:37
EmbeddedHeadersMessageConverter Buffer Overflow
See https://github.com/spring-projects/spring-xd/issues/1871
1
803
XD-3730
01/08/2016 15:53:19
NPE in spring-integration when using Kafka as the message bus with the aggregation module
As stated in https://jira.spring.io/browse/INT-3908, spring-integration in Spring XD cannot use Kafka as the message bus in most cases. Could Spring XD integrate this fix so that we can use it?
3
804
XD-3733
01/14/2016 13:10:10
Document redis pool properties in servers.yml
Add the spring.redis.pool.* properties to servers.yml, commented out to show the default values, e.g. maxIdle: 8, minIdle: 0, maxActive: 8, maxWait: -1.
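A minimal sketch of what the commented-out block in servers.yml could look like, using the default values listed above (the spring.redis.pool nesting is the standard Spring Boot form and is assumed here):
```
# Redis connection pool defaults (uncomment to override)
#spring:
#  redis:
#    pool:
#      maxIdle: 8
#      minIdle: 0
#      maxActive: 8
#      maxWait: -1
```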
1
805
XD-3736
02/01/2016 19:00:42
Rabbit Pub/Sub Consumers Should Support Concurrency
PubSub consumers can support concurrency since the threads are competing consumers on the queue.
2
806
XD-3737
02/02/2016 17:52:11
REST - Do not redirect after logout
In the following PR we removed the *RestLogoutSuccessHandler*: https://github.com/spring-projects/spring-xd/pull/1562 The handler is, however, necessary for REST calls and the Admin UI; otherwise odd UI behavior can occur due to the HTTP redirect after logout.
1
807
XD-3738
02/03/2016 09:40:05
Encrypt secret information in XD configuration files
Spring XD keeps passwords in text files such as servers.yml, properties files, and module configuration files. Some users have requested a way to store encrypted values rather than clear text. XD should provide a "hook" that lets users supply a custom component to detect encrypted property values and decrypt them during container, admin, and module initialization.
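Purely to illustrate the request (the ENC(...) wrapper below is hypothetical, not an existing XD convention), an encrypted entry in servers.yml might look like the sketch below, with the user-supplied component detecting the marker and decrypting the value during startup:
```
# Hypothetical sketch: ENC(...) is an assumed marker, not a real Spring XD feature
spring:
  datasource:
    username: xdjobuser
    password: ENC(c29tZS1lbmNyeXB0ZWQtdmFsdWU=)   # decrypted by the custom hook before use
```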
2
808
XD-3739
02/05/2016 16:01:49
Incorrect refresh period for groovy scripts
All modules that allow groovy implementations (filter, script, transform, router, tcpclient) allow automatic refresh of the script when it changes. The XD documentation states that this refresh occurs every minute, e.g. for filter at http://docs.spring.io/spring-xd/docs/1.3.0.RELEASE/reference/html/#filter: "The script is checked for updates every 60 seconds, so it may be replaced in a running system." This setup can be seen in the Spring XML for the modules, e.g. (again for filter):
{code:xml}
<filter input-channel="to.script" output-channel="output">
	<int-groovy:script location="${script:filter.groovy}"
		script-variable-generator="variableGenerator"
		refresh-check-delay="60"/>
</filter>
{code}
However, the Spring Integration documentation http://docs.spring.io/spring-integration/docs/4.2.4.RELEASE/reference/html/messaging-endpoints-chapter.html#scripting-config specifies that the refresh-check-delay parameter is actually in milliseconds, i.e. the above XD configuration rechecks the script every 60 milliseconds, which may be a performance concern since it checks the last-modified time of the script file on each refresh. Ideally this parameter would be configurable; in our case we would usually eliminate the refresh check altogether (set it to -1), as our scripts will not change (and if they did, a redeploy of the module would pick the change up).
5
809
XD-3743
02/17/2016 14:28:31
Update to Spring Integration 4.2.5 When Available (Fix Metrics)
See INT-3956
1
810
XD-3744
02/22/2016 15:27:31
Suppress DeliveryMode Header in RabbitMQ Source
Related to XD-2567, which fixed this problem, but only in the bus. {quote} 2016-02-19T18:25:24-0500 1.2.1.RELEASE WARN SimpleAsyncTaskExecutor-1 support.DefaultAmqpHeaderMapper - skipping header 'amqp_deliveryMode' since it is not of expected type [class org.springframework.amqp.core.MessageDeliveryMode], it is [class org.springframework.amqp.core.MessageDeliveryMode] {quote}
1