question | answer | tag | question_id | score |
---|---|---|---|---|
I'm going to edit the Opera Mini v6.5 server because it is blocked in our country.
Now I have unpacked the .apk file, extracted classes.dex, converted it via dex2jar.bat, and modified the server.
My problem is that I want to repack the .jar or .class files back into classes.dex. How do I do it?
| Here is a solution that was helpful in my case...
Suppose the .jar file sits in "c:\temp\in.jar". In a command prompt window, cd to ..\android-sdk\platform-tools.
To get .apk execute:
dx --dex --output="c:\temp\app.apk" "c:\temp\in.jar"
To get .dex file execute:
dx --dex --output="c:\temp\dx.dex" "c:\temp\in.jar"
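As a side note, newer SDK build-tools also ship the d8 compiler, which can do the same job; a minimal sketch (d8 writes a classes.dex into the output directory you give it):
d8 --output "c:\temp\out" "c:\temp\in.jar"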
| Dex | 8,348,144 | 19 |
I initially thought it was exclusively for official libraries like the support library. However any time I add a custom JAR using the "Add to build path" menu item, it automatically shows up in both "Referenced Libraries" and "Android Private Libraries". One of these is redundant since there is no point in exporting both at build time. Additionally, "Android Private Libraries" cannot be modified so I can't (easily) add java doc or source paths. Not to mention it just seems wrong that a custom JAR is in "Android Private Libraries".
I cannot delete the entry from "Android Private Libraries" to use only "Referenced Libraries" either.
| In my case I always add libraries to the libs folder.
According to my understanding that's the way it's supposed to be. The Android Private Libraries folder references these jars in the libs folder. Android Private Libraries is used during the creation of the APK (probably during the conversion of your Java code into the dex file).
| Dex | 20,604,193 | 19 |
I have a working Android application for which I don't have the source code.
I would like to debug a piece of functionality of this application. I could successfully reverse engineer this application's APK file using apktool - https://code.google.com/p/android-apktool/
This tool generates class files in smali format.
My requirement is :
To be able to debug a method by adding debug logs.
To be able to debug method calls by printing a stack trace.
To achieve this I need to inject/insert the smali equivalent of a debug log or stack trace. I tried adding some smali instructions at the start of one of the methods, but it crashed with a ClassVerifyError.
Example smali code -
.method public declared-synchronized b()V
.locals 2
.prologue
.line 87
monitor-enter p0
:try_start_0
iget-object v0, p0, Lcom/example/viewerlib/t/d;->a:Ljava/lang/Thread;
invoke-virtual {v0}, Ljava/lang/Thread;->isAlive()Z
:
:
Could someone help me out with adding smali debug logs? Thanks in advance.
| 1. Debug log in smali
Debug log in smali: say, for example, that inside method test() you want to print an "Inside Test()" debug log. At the start of the method in smali, add the following instructions:
sget-object v0, Ljava/lang/System;->out:Ljava/io/PrintStream;
const-string v1, "Inside Test()"
invoke-virtual {v0, v1}, Ljava/io/PrintStream;->println(Ljava/lang/String;)V
Note - you need to be careful when using registers v0 and v1 here. In the code execution flow, you have to check that you are not clobbering a register whose value is used later in the flow, or you may get an exception.
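If you prefer the message to show up in logcat rather than on System.out, an equivalent sketch using android.util.Log (the tag string "MyTag" is just an example) is:
const-string v0, "MyTag"
const-string v1, "Inside Test()"
# Log.d(tag, msg) returns an int; the result can simply be ignored
invoke-static {v0, v1}, Landroid/util/Log;->d(Ljava/lang/String;Ljava/lang/String;)I
The same caution about register usage applies here.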
2. StackTrace
Here is the smali code to print the stack trace of a method.
Java code
public static void printStackTraces() {
StackTraceElement[] stackTraceElements = Thread.currentThread().getStackTrace();
for (StackTraceElement element : stackTraceElements) {
System.out.println("Class name :: " + element.getClassName() + " || method name :: " + element.getMethodName());
}
}
And the equivalent smali code is:
.method public static printStackTraces()V
.locals 7
.prologue
.line 74
invoke-static {}, Ljava/lang/Thread;->currentThread()Ljava/lang/Thread;
move-result-object v2
invoke-virtual {v2}, Ljava/lang/Thread;->getStackTrace()[Ljava/lang/StackTraceElement;
move-result-object v1
.line 75
.local v1, stackTraceElements:[Ljava/lang/StackTraceElement;
array-length v3, v1
const/4 v2, 0x0
:goto_0
if-lt v2, v3, :cond_0
.line 78
return-void
.line 75
:cond_0
aget-object v0, v1, v2
.line 76
.local v0, element:Ljava/lang/StackTraceElement;
sget-object v4, Ljava/lang/System;->out:Ljava/io/PrintStream;
new-instance v5, Ljava/lang/StringBuilder;
const-string v6, "Class name :: "
invoke-direct {v5, v6}, Ljava/lang/StringBuilder;-><init>(Ljava/lang/String;)V
invoke-virtual {v0}, Ljava/lang/StackTraceElement;->getClassName()Ljava/lang/String;
move-result-object v6
invoke-virtual {v5, v6}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v5
const-string v6, " || method name :: "
invoke-virtual {v5, v6}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v5
invoke-virtual {v0}, Ljava/lang/StackTraceElement;->getMethodName()Ljava/lang/String;
move-result-object v6
invoke-virtual {v5, v6}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v5
invoke-virtual {v5}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v5
invoke-virtual {v4, v5}, Ljava/io/PrintStream;->println(Ljava/lang/String;)V
.line 75
add-int/lit8 v2, v2, 0x1
goto :goto_0
.end method
Add this method into any smali file. And call it as follows
(assuming you added the above smali code into com.example.packagename.ClassName):
invoke-static {}, Lcom/example/packagename/ClassName;->printStackTraces()V
Hope this helps .....
| Dex | 20,879,950 | 19 |
So I am wondering why I encounter the 64k dex method limit when trying to run my app on Android versions older than Lollipop, when it runs just fine on more recent versions.
Could it be because the support libraries are actually being referenced when running on the older versions?
This is my gradle:
apply plugin: 'com.android.application'
android {
compileSdkVersion 23
buildToolsVersion '23.0.2'
lintOptions {
checkReleaseBuilds true
// Or, if you prefer, you can continue to check for errors in release builds,
// but continue the build even when errors are found:
abortOnError false
}
defaultConfig {
applicationId "com.domain.myapp"
minSdkVersion 16
targetSdkVersion 23
versionCode 27
versionName "1.2"
// Vector compat
vectorDrawables.useSupportLibrary = true
}
buildTypes {
release {
minifyEnabled true
shrinkResources true
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
compileOptions {
sourceCompatibility JavaVersion.VERSION_1_7
targetCompatibility JavaVersion.VERSION_1_7
}
}
dependencies {
compile files('libs/commons-io-2.4.jar')
compile files('libs/activation.jar')
compile files('libs/additionnal.jar')
compile files('libs/mail.jar')
compile project(':libraries:preferencefragment')
// Gmail API
compile('com.google.api-client:google-api-client-android:1.20.0') {
exclude group: 'org.apache.httpcomponents'
}
compile('com.google.apis:google-api-services-gmail:v1-rev29-1.20.0') {
exclude group: 'org.apache.httpcomponents'
}
// Play Services
compile 'com.google.android.gms:play-services-location:8.4.0'
compile 'com.google.android.gms:play-services-maps:8.4.0'
compile 'com.google.android.gms:play-services-ads:8.4.0'
compile 'com.google.android.gms:play-services-analytics:8.4.0'
compile 'com.google.android.gms:play-services-identity:8.4.0'
// Support libraries
compile 'com.android.support:support-v4:23.3.0'
compile 'com.android.support:appcompat-v7:23.3.0'
compile 'com.android.support:cardview-v7:23.3.0'
compile 'com.android.support:design:23.3.0'
compile 'com.github.bumptech.glide:glide:3.6.1'
compile 'com.anjlab.android.iab.v3:library:1.0.20'
compile 'com.sothree.slidinguppanel:library:2.0.3'
compile 'com.commit451:PhotoView:1.2.5'
compile('com.github.afollestad.material-dialogs:core:0.8.5.7@aar') {
transitive = true
}
}
EDIT:
To clarify: This happens when I try to run the app on an emulator running e.g. KitKat API 19
Crash logs from the gradle console:
...
:App:generateDebugInstantRunAppInfo
:App:transformClassesWithDexForDebug
AGPBI: {"kind":"error","text":"The number of method references in a .dex file cannot exceed 64K.\nLearn how to resolve this issue at https://developer.android.com/tools/building/multidex.html","sources":[{}],"original":"UNEXPECTED TOP-LEVEL EXCEPTION:\ncom.android.dex.DexIndexOverflowException: method ID not in [0, 0xffff]: 65536\n\tat com.android.dx.merge.DexMerger$6.updateIndex(DexMerger.java:484)\n\tat com.android.dx.merge.DexMerger$IdMerger.mergeSorted(DexMerger.java:261)\n\tat com.android.dx.merge.DexMerger.mergeMethodIds(DexMerger.java:473)\n\tat com.android.dx.merge.DexMerger.mergeDexes(DexMerger.java:161)\n\tat com.android.dx.merge.DexMerger.merge(DexMerger.java:188)\n\tat com.android.dx.command.dexer.Main.mergeLibraryDexBuffers(Main.java:504)\n\tat com.android.dx.command.dexer.Main.runMonoDex(Main.java:334)\n\tat com.android.dx.command.dexer.Main.run(Main.java:277)\n\tat com.android.dx.command.dexer.Main.main(Main.java:245)\n\tat com.android.dx.command.Main.main(Main.java:106)\n","tool":"Dex"}
FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':App:transformClassesWithDexForDebug'.
> com.android.build.api.transform.TransformException: com.android.ide.common.process.ProcessException: java.util.concurrent.ExecutionException: com.android.ide.common.process.ProcessException: org.gradle.process.internal.ExecException: Process 'command '/Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home/bin/java'' finished with non-zero exit value 2
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
Crash message from the Message window:
:App:transformClassesWithDexForDebug
Error:The number of method references in a .dex file cannot exceed 64K.
Learn how to resolve this issue at https://developer.android.com/tools/building/multidex.html
Error:Execution failed for task ':App:transformClassesWithDexForDebug'.
> com.android.build.api.transform.TransformException: com.android.ide.common.process.ProcessException: java.util.concurrent.ExecutionException: com.android.ide.common.process.ProcessException: org.gradle.process.internal.ExecException: Process 'command '/Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home/bin/java'' finished with non-zero exit value 2
| To answer your specific question:
This method count limitation is on the DEX (Dalvik Executable) file.
A common workaround for this limitation is to have multiple DEX files.
Older versions of Android do not natively support multiple DEX files.
Starting from Lollipop the system supports it natively.
So that's why it fails on older devices but it works on newer devices. It has nothing to do with support libraries.
To add more useful content to my answer:
What some might want to know is how to work around it on older devices; please note that I used the word natively in my bullet points above. That means there are workarounds to make older platforms support it, but you have to code those workarounds into your app.
The workaround (and even the issue itself) is explained in detail in this Google guide: http://developer.android.com/tools/building/multidex.html
The base of it is:
add multidex to your dependencies: compile 'com.android.support:multidex:+'
enable multidex in your build script inside android -> defaultConfig: multiDexEnabled true
in the manifest, make your application use MultiDexApplication (android:name="android.support.multidex.MultiDexApplication") or, if you need to sub-class Application for your own app, make your Application extend it (see the sketch after this list).
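A minimal sketch of what those three steps look like together (the multidex artifact version is just an example; see the guide above for the authoritative snippets):
dependencies {
    compile 'com.android.support:multidex:1.0.1'
}
android {
    defaultConfig {
        ...
        multiDexEnabled true
    }
}
and in AndroidManifest.xml:
<application
    android:name="android.support.multidex.MultiDexApplication"
    ... >
</application>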
| Dex | 36,559,835 | 19 |
I have 2 app versions - pro and lite. They are both already on the market at v1.01. I am trying to release v1.1 for both. This update includes SwawrmConnect integration in order to use their global leaderboards.
I should start off by saying I know I am not maintaining my code correctly. I have 2 completely separate apps that share probably 90% of their code. I maintain them separately because after a week or 2 or 3 of failing to figure out how to set up a library and share code, I gave up and just went this way with it.
SwarmConnect is the first jar I have used where I had to make it a library for two apps (see the screenshot of the file structure below).
Right now my lite version is working and is ready for release. I am now trying to get my pro version to where it needs to be for release. I am fairly certain all java/xml files are up to date and ready. When I went to run the pro version in the emulator, I get the below error:
[2013-04-18 11:24:41 - Dex Loader] Unable to execute dex: Multiple dex files define Lcom/swarmconnect/loopj/android/http/AsyncHttpResponseHandler;
[2013-04-18 11:24:41 - BibleTriviaPro] Conversion to Dalvik format failed: Unable to execute dex: Multiple dex files define Lcom/swarmconnect/loopj/android/http/AsyncHttpResponseHandler;
Things I've tried:
Clean/rebuild
Update Eclipse
Delete bin and gen folders
Restart Eclipse
Plus some other stuff
My file structure:
Could the problem be is I am trying to use SwarmConnect as a library for 2 projects (lite and pro)?
EDIT:
Below is the file structure for the lite version that is working perfectly. Compiles and runs on emulator.
| Coincidentally I ran into the same issue just the day before yesterday. Here's what I suggest you do.
First and foremost, make sure that you have a backup of all the jars presently residing in the 'Android Dependencies'/'libs' folder.
Now, let's fix the lite version first by following these steps.
Remove all jar files except android-support-v4.jar from the 'Android Dependencies' folder under Project Explorer in Eclipse.
Similarly remove all Jar files except android-support-v4.jar from the libs folder under Project Explorer in Eclipse.
Now Right click on your project-> Select Properties-> Select Java Build Path-> Select Add External JARs. Add all the necessary jar files (just make sure you add a particular jar file only once).
Finally clean the project and build it. Now apply the same sequence of steps to the pro version.
That should do it.
UPDATE:- In case you see Eclipse complaining about some compile-time errors after doing all this, all you might have to do is fix those errors by adding the necessary imports (press Ctrl+Shift+O).
[I assume that there's no linkage between the pro and lite versions of the project in terms of source dependencies etc.; what I mean to say is that they are basically totally independent.]
Hope this helps.
| Dex | 16,087,341 | 18 |
I got this error when we run the APK file of our application. In build.gradle we enabled multidex, and the multidex compile dependency exists in the Gradle file. We changed the Firebase versions up and down, but that did not work for us. This is the full log in the Run console:
D/AndroidRuntime: Shutting down VM
E/AndroidRuntime: FATAL EXCEPTION: main
Process: ir.parsinteam.ojoobe, PID: 5141
java.lang.NoClassDefFoundError: Failed resolution of: Lcom/google/android/gms/common/api/Api$zzf;
at com.google.android.gms.location.LocationServices.<clinit>(Unknown Source)
at ir.adad.client.LocationMethods.callAndroidLocationService(LocationMethods.java:101)
at ir.adad.client.LocationMethods.<init>(LocationMethods.java:40)
at ir.adad.client.LocationMethods.getInstance(LocationMethods.java:45)
at ir.adad.client.AdadScript.urlParameters(AdadScript.java:390)
at ir.adad.client.AdadScript.downloadClient(AdadScript.java:148)
at ir.adad.client.AdadScript.initializeInternal(AdadScript.java:134)
at ir.adad.client.AdadScript.initializeClient(AdadScript.java:110)
at ir.adad.client.Adad.initialize(Adad.java:22)
at ir.parsinteam.ojoobe.activities.MainActivity.onCreate(MainActivity.java:62)
at android.app.Activity.performCreate(Activity.java:6662)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1118)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2599)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2707)
at android.app.ActivityThread.-wrap12(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1460)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6077)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:866)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:756)
Caused by: java.lang.ClassNotFoundException: Didn't find class "com.google.android.gms.common.api.Api$zzf" on path: DexPathList[[zip file "/data/app/ir.parsinteam.ojoobe-2/base.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_dependencies_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_0_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_1_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_2_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_3_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_4_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_5_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_6_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_7_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_8_apk.apk", zip file "/data/app/ir.parsinteam.ojoobe-2/split_lib_slice_9_apk.apk"],nativeLibraryDirectories=[/data/app/ir.parsinteam.ojoobe-2/lib/x86, /data/app/ir.parsinteam.ojoobe-2/base.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_dependencies_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_0_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_1_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_2_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_3_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_4_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_5_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_6_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_7_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_8_apk.apk!/lib/x86, /data/app/ir.parsinteam.ojoobe-2/split_lib_slice_9_apk.apk!/lib/x86, /system/lib, /vendor/lib]]
at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:56)
at java.lang.ClassLoader.loadClass(ClassLoader.java:380)
at java.lang.ClassLoader.loadClass(ClassLoader.java:312)
at com.google.android.gms.location.LocationServices.<clinit>(Unknown Source)
at ir.adad.client.LocationMethods.callAndroidLocationService(LocationMethods.java:101)
at ir.adad.client.LocationMethods.<init>(LocationMethods.java:40)
at ir.adad.client.LocationMethods.getInstance(LocationMethods.java:45)
at ir.adad.client.AdadScript.urlParameters(AdadScript.java:390)
at ir.adad.client.AdadScript.downloadClient(AdadScript.java:148)
at ir.adad.client.AdadScript.initializeInternal(AdadScript.java:134)
at ir.adad.client.AdadScript.initializeClient(AdadScript.java:110)
at ir.adad.client.Adad.initialize(Adad.java:22)
at ir.parsinteam.ojoobe.activities.MainActivity.onCreate(MainActivity.java:62)
at android.app.Activity.performCreate(Activity.java:6662)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1118)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2599)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2707)
at android.app.ActivityThread.-wrap12(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1460)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6077)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:866)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:756)
Application terminated.
| In your build.gradle, upgrade play-services-gcm and play-services-location to 15.0.1:
com.google.android.gms:play-services-gcm:15.0.1
com.google.android.gms:play-services-location:15.0.1
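In build.gradle dependency syntax that amounts to roughly the following (the implementation keyword assumes a recent Gradle plugin; use compile on older ones):
dependencies {
    implementation 'com.google.android.gms:play-services-gcm:15.0.1'
    implementation 'com.google.android.gms:play-services-location:15.0.1'
}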
| Dex | 51,388,073 | 18 |
Why should (or shouldn't) I include a Gradle dependency as @aar?
What are the benefits/drawbacks, if any?
As you can see I added @aar to the libraries below that supported it. But everything seemed to work before doing that as well...
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
compile 'com.android.support:appcompat-v7:22.1.1'
compile 'com.google.android.gms:play-services-maps:7.3.+'
compile 'com.google.guava:guava:18.0'
compile 'com.octo.android.robospice:robospice-spring-android:1.4.14'
compile 'org.codehaus.jackson:jackson-mapper-asl:1.9.13'
compile 'com.mcxiaoke.volley:library-aar:1.0.0@aar'
compile 'de.psdev.licensesdialog:licensesdialog:1.7.0@aar'
}
| Libraries can be uploaded in multiple formats, most of the time you'll be using .jar or .aar.
When you don't specify the @ suffix, you'll be downloading the library in its default format (defined by its author; .jar if not defined) along with all its dependencies.
compile 'com.android.support:appcompat-v7:22.1.1'
When you specify the @ suffix you force downloading the library in the format you specify (which may or may not exist). This is useful e.g. when the author forgot to specify that the library is an .aar and Maven (or Gradle, not sure) treats it as .jar by default. When the @ suffix is specified, the dependencies of this library are no longer downloaded, so you have to ensure that manually.
compile 'com.android.support:appcompat-v7:22.1.1@aar'
compile 'com.android.support:support-v4:22.1.1@jar'
To ensure the full dependency tree of the library is downloaded while the @ suffix is specified you have to write it in the following way:
compile ('com.android.support:appcompat-v7:22.1.1@aar') {
transitive = true
}
| Dex | 30,157,575 | 17 |
Maybe it is too soon to ask, but as Jack and Jill were announced today I got very excited about it. I really want to go for it, but they also state:
Various tools that read .class files (such as JaCoCo, Mockito, and some lint checks) are currently not compatible with the Jack compiler.
Is there already a Mockito alternative for the Jack compiler?
| Mockito doesn't generate any byte code at compile time and hence is not affected by the compiler used. The same holds true for dexmaker (they don't have any hooks into Gradle during the build).
So you can simply continue to use Mockito, even with the Jack compiler.
Note that I have a test project which confirms this.
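For illustration, here is a minimal sketch of the kind of plain-JVM unit test that keeps passing regardless of which compiler builds the app code (the class and method names are made up; assumes junit and mockito-core on the test classpath):
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import java.util.List;
import org.junit.Test;
public class MockitoStillWorksTest {
    @Test
    public void mockedListReturnsStubbedValue() {
        // Mockito builds the mock at runtime, so the compiler used for app code does not matter
        @SuppressWarnings("unchecked")
        List<String> mocked = (List<String>) mock(List.class);
        when(mocked.get(0)).thenReturn("hello");
        assertEquals("hello", mocked.get(0));
    }
}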
| Dex | 35,917,993 | 17 |
After adding Google Guava r09 to our Android project the build time increased significantly, especially the DEX generation phase. I understand that DEX generation takes all our classes + all the jars we depend on and translates them to DEX format. Guava is a pretty big jar, around 1.1 MB.
Can it be the cause for the build slowdown?
Are there anything can be done to speed this up?
P.S. Usually I build from Intellij, but I also tried building with Maven - same results.
Thanks
Alex
| For what it's worth, my gut is that this isn't the cause. It's hard to take a long time doing anything with a mere 1.1MB of bytecode; I've never noticed dex taking any significant time. But let's assume it is the issue for sake of argument.
If it matters enough, you could probably slice up the Guava .jar to remove whole packages you don't use. It is composed of several pieces that aren't necessarily all inter-related.
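For example, one rough way to slice the jar is to delete whole packages you don't use with the zip tool (the jar name and package path below are only an illustration, not a claim about what is safe to remove):
# strip an unused package out of the Guava jar before it ever reaches dex
zip -d guava-r09.jar "com/google/common/collect/*"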
I don't think this is going to speed things up, but maybe worth mentioning: if you run the build through Proguard (the optimizer now bundled with the SDK), it can remove unused classes before you get to DEX (and, do a bunch of other great optimization on the byte code). But of course that process probably takes longer itself than dex-ing.
| Dex | 7,548,038 | 16 |
I'm compiling my (fairly simple, just 5 files with few hundred LOC) app from command line on OSX using:
ant debug
It works. But it works slowly:
BUILD SUCCESSFUL
Total time:
26 seconds
Why is that? It takes this much time even if I change only one line in one Java file. Most of this time is spent in the dex stage (about 20 seconds), which is AFAIK creating Dalvik bytecode. But my friend who also works on the same project on Windows using Eclipse says that compiling takes only a second or two on his machine. Is there anything I can do to speed up this process?
| I finally found a solution for this! It's a bit of a hack, but it works.
First, go to your ANDROID-SDK/platform-tools directory, then rename dx app to something else, like dextool, and finally create new dx file with contents:
#!/bin/sh
shift
dextool --dex --incremental --no-optimize $@
Replace "dextool" with the name you chose before. This will add the (undocumented) --incremental option to every dex invocation, which will massively decrease build times by dexing only the classes that have changed between builds. Now it looks like this:
[dx] Merged dex A (1 defs/11,3KiB) with dex B (359 defs/1253,2KiB). Result is 359 defs/1519,3KiB. Took 0,5s
0.5s instead of 20s is a huge difference!
Edit - a few remarks:
you have to compile your project at least once before using this, because it uses the previous classes.dex file
you can run into problems when using Android toolchains other than ant
UPDATE:
Google released SDK Tools 21.0, which renders the above tweak obsolete, because it supports pre-dexing. Finally!
| Dex | 12,088,375 | 16 |
I have hit the magic dex limit because my application uses a lot of jars (Drive API, greenDAO, text to PDF, support...).
My current solution was that I literally created a second APK just for Google Drive, which I called from the main APK. But now I found out that Android finally supports this with this library. My problem is just that I don't know how to implement it (preferably without Gradle). I can't find any good tutorials for it.
Okay, I am losing my mind trying to implement this... I have found this
And I added:
android:name="android.support.multidex.MultiDexApplication"
To my manifest file and
protected void attachBaseContext(Context base) {
super.attachBaseContext(base);
MultiDex.install(this);
}
To my mainactivity.java
Also installed gradle plugin for eclipse, exported gradle to get build.gradle file which I changed to:
apply plugin: 'android'
dependencies {
compile fileTree(dir: 'libs', include: '*.jar')
compile project(':android-support-v7-appcompat')
compile project(':Sync')
compile project(':gdrive:google-play-services_lib')
}
android {
compileSdkVersion 14
buildToolsVersion "21.1.1"
sourceSets {
main {
manifest.srcFile 'AndroidManifest.xml'
java.srcDirs = ['src-gen','src']
resources.srcDirs = ['src-gen','src']
aidl.srcDirs = ['src-gen','src']
renderscript.srcDirs = ['src-gen','src']
res.srcDirs = ['res']
assets.srcDirs = ['assets']
}
// Move the tests to tests/java, tests/res, etc...
instrumentTest.setRoot('tests')
// Move the build types to build-types/<type>
// For instance, build-types/debug/java, build-types/debug/AndroidManifest.xml, ...
// This moves them out of them default location under src/<type>/... which would
// conflict with src/ being used by the main source set.
// Adding new build types or product flavors should be accompanied
// by a similar customization.
debug.setRoot('build-types/debug')
release.setRoot('build-types/release')
}
dexOptions {
preDexLibraries = false
}
}
afterEvaluate {
tasks.matching {
it.name.startsWith('dex')
}.each { dx ->
if (dx.additionalParameters == null) {
dx.additionalParameters = ['--multi-dex']
} else {
dx.additionalParameters += '--multi-dex'
}
}
}
But the error is still the same :(
| The Blog was the old solution.
With Android Studio 0.9.2 & Gradle Plugin 0.14.1, you only need to:
Add to AndroidManifest.xml:
android:name="android.support.multidex.MultiDexApplication"
or add
MultiDex.install(this);
in your custom Application's attachBaseContext method,
or make your custom Application extend MultiDexApplication.
Then add multiDexEnabled = true in your build.gradle:
android {
    defaultConfig {
        ...
        multiDexEnabled = true
    }
}
Done.
Sorry for my poor English
Related Resources:
http://developer.android.com/tools/building/multidex.html
https://plus.google.com/+XavierDucrohet/posts/1FnzwdcBnyC
| Dex | 26,925,264 | 16 |
With the Android Studio 3.0 Beta release, Android Studio provides the next-generation dex compiler, D8, to compile code and build the Android APK. Currently, D8 is available for preview.
Check more details:
https://android-developers.googleblog.com/2017/08/next-generation-dex-compiler-now-in.html
How to enable build using D8 in android studio?
| To enable D8 for your Android Studio 3.0 Beta, you can add following line in your project's gradle.properties file:
android.enableD8=true
| Dex | 45,648,215 | 16 |
I found that after my app reached a fair size (e.g. by adding multiple libraries), running the app threw java.lang.SecurityException: writable dex file '.../code_cache/.overlay/base.apk/classes2.dex' is not allowed.
If I then remove most of the libraries leaving only those that were added by default, and run again, it could work. But then if I add a tiny bit of code, like a log, it could fail with the same error.
If I want it to run without this error, I have to uninstall the app and then run again from Android Studio. This is very inconvenient, because every time I make some changes, I have to uninstall the app. I wouldn't imagine anyone would like to develop Android apps like this.
Does anyone know a solution to this problem?
| I was also having this problem, so I took a look at the DexClassLoader documentation, and decided to do this.
package com.example
import android.app.Application
class BaseApp : Application() {
override fun onCreate() {
super.onCreate()
val dexOutputDir: File = codeCacheDir
dexOutputDir.setReadOnly()
}
}
Just putting dexOutputDir.setReadOnly() in my Application class solved the problem.
Uninstall and reinstall the app once again.
| Dex | 76,498,531 | 16 |
If you find yourself writing a big Android application that depends on many different libraries (which I would recommend instead of reinventing the wheel), it is very likely that you have already come across the 65k method limit of the Dalvik executable file classes.dex. Furthermore, if you depend on large libraries like the Google Play Services SDK, which itself already contained more than 20k methods in version 5.0, you are forced to use tricks like stripping packages or multidex support to avoid errors while packaging. With Android's new runtime ART, which is publicly available since Android Lollipop, multiple dex files are easier to handle, but currently developers are still forced to do method counting.
What is the simplest way to reduce your application's method count while using Google Play Services?
| The biggest change for developers that came with the 6.5 release of Google Play Services was probably the granular dependency management. Google managed to split up its library to allow developers to depend only on the components they really need for their apps.
Since version 6.5, developers are no longer forced to compile the complete Google Play Services library into their app, but can selectively depend on components like this:
compile 'com.google.android.gms:play-services-fitness:6.5.+'
compile 'com.google.android.gms:play-services-wearable:6.5.+'
compile 'com.google.android.gms:play-services-maps:6.5.+'
...
If you want to compile the complete library into your app, you can still do so:
compile 'com.google.android.gms:play-services:6.5.+'
A complete list of available packages can be found on the Android Developers site.
| Dex | 27,589,560 | 14 |
I'm using Android Studio for the first time and I got the following error after importing the project (previously it was an eclipse project where I had issues too.)
Here is the information given:
Error:Execution failed for task ':app:dexDebug'.
> com.android.ide.common.internal.LoggedErrorException: Failed to run command:
/home/crash-id/Development/SDK/adt-bundle-linux-x86_64-20140702/sdk/build-tools/21.1.2/dx --dex --no-optimize --output /home/crash-id/AndroidstudioProjects/LocalSin/app/build/intermediates/dex/debug --input-list=/home/crash-id/AndroidstudioProjects/LocalSin/app/build/intermediates/tmp/dex/debug/inputList.txt
Error Code:
2
Output:
UNEXPECTED TOP-LEVEL EXCEPTION:
com.android.dex.DexException: Multiple dex files define Lcom/google/ads/AdRequest$ErrorCode;
at com.android.dx.merge.DexMerger.readSortableTypes(DexMerger.java:596)
at com.android.dx.merge.DexMerger.getSortedTypes(DexMerger.java:554)
at com.android.dx.merge.DexMerger.mergeClassDefs(DexMerger.java:535)
at com.android.dx.merge.DexMerger.mergeDexes(DexMerger.java:171)
at com.android.dx.merge.DexMerger.merge(DexMerger.java:189)
at com.android.dx.command.dexer.Main.mergeLibraryDexBuffers(Main.java:454)
at com.android.dx.command.dexer.Main.runMonoDex(Main.java:303)
at com.android.dx.command.dexer.Main.run(Main.java:246)
at com.android.dx.command.dexer.Main.main(Main.java:215)
at com.android.dx.command.Main.main(Main.java:106)
:app:dexDebug
UNEXPECTED TOP-LEVEL EXCEPTION:
com.android.dex.DexException: Multiple dex files define Lcom/google/ads/AdRequest$ErrorCode;
at com.android.dx.merge.DexMerger.readSortableTypes(DexMerger.java:596)
at com.android.dx.merge.DexMerger.getSortedTypes(DexMerger.java:554)
at com.android.dx.merge.DexMerger.mergeClassDefs(DexMerger.java:535)
at com.android.dx.merge.DexMerger.mergeDexes(DexMerger.java:171)
at com.android.dx.merge.DexMerger.merge(DexMerger.java:189)
at com.android.dx.command.dexer.Main.mergeLibraryDexBuffers(Main.java:454)
at com.android.dx.command.dexer.Main.runMonoDex(Main.java:303)
at com.android.dx.command.dexer.Main.run(Main.java:246)
at com.android.dx.command.dexer.Main.main(Main.java:215)
at com.android.dx.command.Main.main(Main.java:106)
FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:dexDebug'.
> com.android.ide.common.internal.LoggedErrorException: Failed to run command:
/home/crash-id/Development/SDK/adt-bundle-linux-x86_64-20140702/sdk/build-tools/21.1.2/dx --dex --no-optimize --output /home/crash-id/AndroidstudioProjects/LocalSin/app/build/intermediates/dex/debug --input-list=/home/crash-id/AndroidstudioProjects/LocalSin/app/build/intermediates/tmp/dex/debug/inputList.txt
Error Code:
2
Output:
UNEXPECTED TOP-LEVEL EXCEPTION:
com.android.dex.DexException: Multiple dex files define Lcom/google/ads/AdRequest$ErrorCode;
at com.android.dx.merge.DexMerger.readSortableTypes(DexMerger.java:596)
at com.android.dx.merge.DexMerger.getSortedTypes(DexMerger.java:554)
at com.android.dx.merge.DexMerger.mergeClassDefs(DexMerger.java:535)
at com.android.dx.merge.DexMerger.mergeDexes(DexMerger.java:171)
at com.android.dx.merge.DexMerger.merge(DexMerger.java:189)
at com.android.dx.command.dexer.Main.mergeLibraryDexBuffers(Main.java:454)
at com.android.dx.command.dexer.Main.runMonoDex(Main.java:303)
at com.android.dx.command.dexer.Main.run(Main.java:246)
at com.android.dx.command.dexer.Main.main(Main.java:215)
at com.android.dx.command.Main.main(Main.java:106)
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
The app build.gradle is this:
apply plugin: 'com.android.application'
android {
compileSdkVersion 21
buildToolsVersion "21.1.2"
defaultConfig {
applicationId "com.myapp.test"
minSdkVersion 11
targetSdkVersion 14
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt'
}
}
}
dependencies {
compile 'com.android.support:support-v4:18.0.0'
compile 'com.android.support:appcompat-v7:21.0.3'
compile 'com.google.android.gms:play-services:+'
compile files('libs/google-play-services.jar')
compile files('libs/httpcore-4.3.2.jar')
compile files('libs/httpmime-4.3.4.jar')
}
And the project built.gradle is this:
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:1.0.0'
}
}
allprojects {
repositories {
jcenter()
}
}
Do you have any idea what the problem could be and how can it be solved? A good explanation for the problem would be useful too as I am new to Android Studio. Thanks in advance :)
| cd android/ && ./gradlew clean && cd .. && react-native run-android
| Dex | 27,787,747 | 14 |
We found an issue on the Amazon market where IAP doesn't work if its receivers are not located in the main DEX file.
The question is how to force Gradle to put specific classes (the receivers) into the main DEX file.
Here are the gradle DEX settings:
afterEvaluate {
tasks.matching {
it.name.startsWith('dex')
}.each { dx ->
if (dx.additionalParameters == null) {
dx.additionalParameters = []
}
dx.additionalParameters += '--multi-dex'
dx.additionalParameters += "--main-dex-list=class_files.txt"
}
}
dexOptions {
javaMaxHeapSize "4g"
preDexLibraries = false
}
compile('com.android.support:multidex:1.0.0')
| With Android Plugin for Gradle, Revision 2.2.0 (released in September 2016) you can use the multiDexKeepFile API:
android {
buildTypes {
debug {
...
multiDexEnabled true
multiDexKeepFile file('multidex_keep_file.txt')
}
}
}
Where multidex_keep_file.txt is a file with a single class per line, each of which needs to be explicitly added to the main dex:
com/example/MyClass.class
com/example/MyClass2.class
You can also use multiDexKeepProguard to keep whole package
-keep class com.example.** { *; }
| Dex | 30,081,386 | 14 |
With the advent of ASMDEX (ASM for dex files) and dexmaker, shouldn't it be possible to port Groovy to Android? Both frameworks allow the generation of dex bytecode at runtime.
As I understand it, it is impossible to modify dex classes from the APK in memory. But wouldn't it be possible to copy those classes to writable memory, modify those copies at runtime and use them?
What else needs to be ported to handle dex class files? CGLIB?
| The original porting project is named discobot; then some guys made a new project called discobot2. AFAIK the first project had no runtime transformation of classes, but was able to run the first Groovy programs on Android, with a very slow startup time. As for the second project, the last state known to me is that they solved most issues and are now translating classes at runtime. But I never tried it out.
Update: since Groovy 2.4 a third version to run Groovy on Android is possible
| Dex | 10,777,560 | 13 |
I use ant release and got this error:
[dx] UNEXPECTED TOP-LEVEL EXCEPTION:
[dx] com.android.dx.util.DexException: Multiple dex files define Lcom/android/vending/billing/IMarketBillingService;
[dx] at com.android.dx.merge.DexMerger.readSortableTypes(DexMerger.java:580)
[dx] at com.android.dx.merge.DexMerger.getSortedTypes(DexMerger.java:538)
[dx] at com.android.dx.merge.DexMerger.mergeClassDefs(DexMerger.java:519)
[dx] at com.android.dx.merge.DexMerger.mergeDexBuffers(DexMerger.java:168)
[dx] at com.android.dx.merge.DexMerger.merge(DexMerger.java:186)
[dx] at com.android.dx.command.dexer.Main.mergeLibraryDexBuffers(Main.java:300)
[dx] at com.android.dx.command.dexer.Main.run(Main.java:232)
[dx] at com.android.dx.command.dexer.Main.main(Main.java:174)
[dx] at com.android.dx.command.Main.main(Main.java:91)
I have the same error, but the answers didn't help me.
I tried:
Reinstall android sdk to other dir
Remove bin/gen dirs and clean project
Remove and add libs in eclipse
Change android API from 17 to 10
Make sure my bin folder is not included in my build path
I have 1 main project and 2 lib projects (Facebook and Expansion files downloader)
Thanks for answers!
| Please check whether the package that includes com/android/vending/billing/IMarketBillingService is referenced twice or more in your project settings.
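One quick way to find which jars actually bundle that class is to grep the jar listings, e.g. with a Unix-like shell and the jars sitting in libs/ (a sketch, adjust paths to your setup):
# print every jar in libs/ that contains the billing AIDL stub
for j in libs/*.jar; do
  unzip -l "$j" | grep -q "com/android/vending/billing/IMarketBillingService" && echo "$j"
done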
| Dex | 15,869,893 | 12 |
I am working on a project that is quickly approaching the 64K method limit for dex files. This Android Developer blog post (from July 2011) explains how to get dynamic class loading working with a command-line build driven by Ant, but does not explore how to get it working from within IDEs (besides saying it won't work within Eclipse).
I looked around and couldn't find anything on getting this system to work from within IntelliJ. Does IntelliJ supports building apps with multiple dex files? If so, how does one set it up?
| Try using ProGuard to strip out unused classes and methods from your project and you should (hopefully) find you don't need multiple dex files.
That said if you do: IntelliJ and Eclipse are just IDEs -- they don't directly build your code -- so you will need to identify how your project is being built -- most likely Ant or Gradle.
If your project is a Gradle project then there will be a build.gradle file in the project root -- if this is the case then you will need to look how to accomplish the same with the Android Gradle plugin, a good place to start would be http://tools.android.com/tech-docs/new-build-system/user-guide#TOC-Manipulating-tasks.
| Dex | 21,146,959 | 12 |
I am currently working on my Android application. After including the Play Services and Firebase libraries in my project, I'm getting this error and am unable to run my code.
:app:prePackageMarkerForDebug
:app:transformClassesWithDexForDebug
To run dex in process, the Gradle daemon needs a larger heap.
It currently has approximately 910 MB.
For faster builds, increase the maximum heap size for the Gradle daemon to more than 2048 MB.
To do this set org.gradle.jvmargs=-Xmx2048M in the project gradle.properties.
For more information see https://docs.gradle.org/current/userguide/build_environment.html
Error:The number of method references in a .dex file cannot exceed 64K.
Learn how to resolve this issue at https://developer.android.com/tools/building/multidex.html
:app:transformClassesWithDexForDebug FAILED
Error:Execution failed for task ':app:transformClassesWithDexForDebug'.
com.android.build.api.transform.TransformException: com.android.ide.common.process.ProcessException: java.util.concurrent.ExecutionException: com.android.ide.common.process.ProcessException: org.gradle.process.internal.ExecException: Process 'command '/Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home/bin/java'' finished with non-zero exit value 2
My build.gradle file is here :
apply plugin: 'com.android.application'
android {
compileSdkVersion 23
buildToolsVersion "23.0.2"
defaultConfig {
applicationId "xyz.in.network"
minSdkVersion 16
targetSdkVersion 23
versionCode 1
versionName "1.0"
}
buildTypes {
release {
shrinkResources true
minifyEnabled true
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
multiDexEnabled true
}
}
}
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
testCompile 'junit:junit:4.12'
compile project(':libs:ViewPagerIndicator')
compile 'com.google.android.gms:play-services:9.0.0'
compile 'com.android.support:appcompat-v7:23.4.0'
compile 'com.android.support:design:23.4.0'
compile 'com.google.android.gms:play-services-maps:9.0.0'
compile 'com.google.android.gms:play-services-location:9.0.0'
compile 'com.android.support:cardview-v7:23.4.0'
compile 'com.getbase:floatingactionbutton:1.10.1'
compile 'com.squareup.picasso:picasso:2.5.2'
compile 'com.android.volley:volley:1.0.0'
compile 'com.google.firebase:firebase-messaging:9.0.0'
compile 'com.android.support:multidex:1.0.1'
}
apply plugin: 'com.google.gms.google-services'
And my manifestfile is here
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:name="android.support.multidex.MultiDexApplication"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".Util.DisconnectedNetwork"
android:screenOrientation="portrait"
android:theme="@style/Theme.Transparent"></activity>
<service android:name=".FCM.FirebaseMessagingHandler">
<intent-filter>
<action android:name="com.google.firebase.MESSAGING_EVENT"/>
</intent-filter>
</service>
<service android:name=".FCM.FirebaseRegistrationTokenHandler">
<intent-filter>
<action android:name="com.google.firebase.INSTANCE_ID_EVENT"/>
</intent-filter>
</service>
<meta-data
android:name="com.google.android.gms.version"
android:value="@integer/google_play_services_version" />
</application>
After increasing the heap size to 2048M, Gradle gives this error:
Error:Execution failed for task ':app:transformClassesWithDexForDebug'.
com.android.build.api.transform.TransformException: com.android.ide.common.process.ProcessException: java.util.concurrent.ExecutionException: com.android.dex.DexIndexOverflowException: method ID not in [0, 0xffff]: 65536
I followed all the instructions given on the Android developer site but still get this problem. How can I solve it?
| You need to enable multidex in the android default config then:
android {
compileSdkVersion 23
buildToolsVersion '23.0.3'
defaultConfig {
applicationId "com.example.case"
minSdkVersion 16
targetSdkVersion 23
versionCode 43
versionName "4.0.13"
// Enabling multidex support.
multiDexEnabled true
}
When you are building your application in your daily routine, you normally use the default debug flavor. So if your application has more than 65k methods, you need to enable it for all flavors.
As a side note, you may want to use Proguard on the debug build so you won't have to enable multiDex on it.
Full app/build.gradle (documentation)
android {
compileSdkVersion 21
buildToolsVersion "21.1.0"
defaultConfig {
...
minSdkVersion 14
targetSdkVersion 21
...
// Enabling multidex support.
multiDexEnabled true
}
...
}
dependencies {
compile 'com.android.support:multidex:1.0.1'
}
Last part: add the MultiDex application in the manifest (or as a parent of your own Application
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.android.multidex.myapplication">
<application
...
android:name="android.support.multidex.MultiDexApplication">
...
</application>
</manifest>
| Dex | 37,430,331 | 12 |
I got an error on the build server when sending an Android build, during the dex phase.
Googling a bit I learned that there is a hard limit of 64K methods (including all libs; the heaviest is Google Play Services), or you can use the multiple-dex mechanism.
How do I activate this for Codename One?
I understand Codename One uses Ant, and as far as I understand this only works for Gradle.
FYI this is the workaround, which splits Google Play Services into sub-libraries with native Android:
http://android-developers.blogspot.com.es/2014/12/google-play-services-and-dex-method.html
| I had a very similar issue and corresponded with Codename One's pro support on this. Gradle support was something they just recently announced, so it's not as well documented, but it should be available in the next update.
You need to add the following build hints to your project:
android.gradle=true
android.multidex=true
I understand that Gradle will be the default build once 3.3 rolls around, so in the future only the multidex option will be needed.
| Dex | 34,260,220 | 11 |
I think there must be a bug with the 27.1.0 v7 support lib, just released. After updating my project to use it (from 26.1.0), I keep getting this compilation error:
Task :app:transformDexArchiveWithDexMergerForRegularDebug FAILED
D8 is used to merge dex.
Program type already present: android.support.v7.recyclerview.extensions.ListAdapter
FAILURE: Build failed with an exception.
What went wrong:
Execution failed for task ':app:transformDexArchiveWithDexMergerForRegularDebug'.
com.android.build.api.transform.TransformException: com.android.tools.r8.errors.CompilationError: Program type already present: android.support.v7.recyclerview.extensions.ListAdapter
Try:
Run with --info or --debug option to get more log output.
Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':app:transformDexArchiveWithDexMergerForRegularDebug'.
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:100)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:70)
at org.gradle.api.internal.tasks.execution.OutputDirectoryCreatingTaskExecuter.execute(OutputDirectoryCreatingTaskExecuter.java:51)
at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:62)
at org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:54)
at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:60)
at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:97)
at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:87)
at org.gradle.api.internal.tasks.execution.ResolveTaskArtifactStateTaskExecuter.execute(ResolveTaskArtifactStateTaskExecuter.java:52)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:54)
at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:34)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker$1.run(DefaultTaskGraphExecuter.java:248)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:241)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:230)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.processTask(DefaultTaskPlanExecutor.java:123)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.access$200(DefaultTaskPlanExecutor.java:79)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:104)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:98)
at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.execute(DefaultTaskExecutionPlan.java:626)
at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.executeWithTask(DefaultTaskExecutionPlan.java:581)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.run(DefaultTaskPlanExecutor.java:98)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
Caused by: java.lang.RuntimeException: com.android.build.api.transform.TransformException: com.android.tools.r8.errors.CompilationError: Program type already present: android.support.v7.recyclerview.extensions.ListAdapter
at com.android.builder.profile.Recorder$Block.handleException(Recorder.java:55)
at com.android.builder.profile.ThreadRecorder.record(ThreadRecorder.java:104)
at com.android.build.gradle.internal.pipeline.TransformTask.transform(TransformTask.java:213)
at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:73)
at org.gradle.api.internal.project.taskfactory.IncrementalTaskAction.doExecute(IncrementalTaskAction.java:46)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:39)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:26)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$1.run(ExecuteActionsTaskExecuter.java:121)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:110)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:92)
... 29 more
Caused by: com.android.build.api.transform.TransformException: com.android.tools.r8.errors.CompilationError: Program type already present: android.support.v7.recyclerview.extensions.ListAdapter
at com.android.build.gradle.internal.transforms.DexMergerTransform.transform(DexMergerTransform.java:230)
at com.android.build.gradle.internal.pipeline.TransformTask$2.call(TransformTask.java:222)
at com.android.build.gradle.internal.pipeline.TransformTask$2.call(TransformTask.java:218)
at com.android.builder.profile.ThreadRecorder.record(ThreadRecorder.java:102)
... 41 more
Caused by: com.android.tools.r8.errors.CompilationError: Program type already present: android.support.v7.recyclerview.extensions.ListAdapter
at com.android.tools.r8.utils.ProgramClassCollection.resolveClassConflictImpl(ProgramClassCollection.java:61)
at com.android.tools.r8.utils.ProgramClassCollection.lambda$create$0(ProgramClassCollection.java:22)
at com.android.tools.r8.utils.ProgramClassCollection.create(ProgramClassCollection.java:22)
at com.android.tools.r8.graph.DexApplication$Builder.build(DexApplication.java:408)
at com.android.tools.r8.dex.ApplicationReader.read(ApplicationReader.java:96)
at com.android.tools.r8.D8.runForTesting(D8.java:179)
at com.android.tools.r8.D8.runForTesting(D8.java:152)
at com.android.tools.r8.D8.run(D8.java:71)
at com.android.builder.dexing.D8DexArchiveMerger.mergeDexArchives(D8DexArchiveMerger.java:73)
at com.android.build.gradle.internal.transforms.DexMergerTransformCallable.call(DexMergerTransformCallable.java:97)
at com.android.build.gradle.internal.transforms.DexMergerTransformCallable.call(DexMergerTransformCallable.java:35)
I have looked at my project's dependencies (with ./gradlew :<module>:dependencies [where <module> is all my modules]) and verified that I have only one dependency that should be pulling in the support ListAdapter class, and it is this one:
implementation "com.android.support:recyclerview-v7:27.1.0"
I also use
configurations.all {
resolutionStrategy {
...
force "com.android.support:recyclerview-v7:27.1.0"
}
}
I have tried cleaning/rebuilding. I have tried invalidating caches and restarting. I have also tried manually deleting all my build folders and .gradle folders. I have also tried disabling D8, but then my build just hangs forever. The problem persists. I'm not even using ListAdapter!
| Figured it out! Turns out the android.arch.paging:runtime-1.0.0-alpha4-1 dependency also had ListAdapter declared. After updating the paging lib to alpha6, the problem was resolved.
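For reference, the bumped dependency line in build.gradle looked roughly like this (the alpha6 version string is the one mentioned above; use whatever the latest release is):
dependencies {
    // newer paging runtime no longer packages its own copy of ListAdapter
    implementation "android.arch.paging:runtime:1.0.0-alpha6"
}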
EDIT For some reason, this question is getting a lot of attention! So, I thought I'd add this comment as a "teach a person to fish" sort of moment. The question: how did I figure out where my ListAdapters were coming from? The answer? If you're using Android Studio / IntelliJ IDEA, hit ctrl+n to begin searching for class names. You'll see this dialog:
Please note the checkbox! If you don't have that checked, you will never find a class included by a library. With it checked, it'll show the provenance of every class in your project.
| Dex | 49,038,630 | 11 |
Since this morning I cannot build my Android app because I get this error
What went wrong: Execution failed for task ':app:transformDexArchiveWithDexMergerForDebug'.
com.android.build.api.transform.TransformException: com.android.dex.DexException: Multiple dex files define
Lcom/google/android/gms/internal/measurement/zzabn;
I have tried bumping the Firebase versions accordingly to 15.0.2 but then I get an other error...
Task :app:processDebugGoogleServices Found com.google.android.gms:play-services-maps:15.0.0, but version 15.0.2
is needed for the google-services plugin.
com.google.android.gms:play-services-maps:15.0.2 is not even released yet?
I have a build to push to production, so what is the best way to build the app?
| Please update the google-service plugin to:
classpath 'com.google.gms:google-services:3.3.0'
to be able to use the latest version of Firebase and to avoid the errors.
Read the following for more information:
https://android-developers.googleblog.com/2018/05/announcing-new-sdk-versioning.html
Compilation failed to complete: Program type already present: com.google.android.gms.internal.measurement.zzabn
| Dex | 50,182,756 | 11 |
So I've just hit the maximum method count limit for my android project, which fails to build with the following error message:
Error: null, Cannot fit requested classes in a single dex file (# methods: 117407 > 65536)
I understand what the message means, and how to resolve it (running proguard, enabling multidex etc). My problem is that I don't understand why I'm suddenly getting this message - I was doing was removing some old bits of code which were redundant, hit build, and now I get this message.
Question 1: How can it be possible that my method count (117407 according to the error message) is suddenly massively over the limit (65536), even though I did not add any library dependencies? I actually removed code, and suddenly I have like 50 thousand methods too many?
Now this is where it gets really weird: I wanted to analyse the APK to figure out what's causing the problem, but of course I can't build it. So instead of enabling multidex I decided to revert my code to yesterday (which definitely absolutely did build fine yesterday - I have the app on my phone to prove it!), but I still get this build error message. I don't understand how this is possible. I tried reverting to several days ago, same thing (cloning a new repo and checking out an earlier commit).
So, question 2: How am I getting this build error for the exact same code which just yesterday built fine without error?
The only thing I can think of is that a library that I am using as a dependency has suddenly increased in size - but I'm declaring specific versions of everything in my gradle build, for example:
// RxJava
implementation 'io.reactivex.rxjava2:rxandroid:2.1.0'
implementation 'io.reactivex.rxjava2:rxjava:2.2.4'
// Retrofit
implementation 'com.squareup.retrofit2:retrofit:2.5.0'
implementation 'com.squareup.retrofit2:converter-gson:2.5.0'
So, surely my dependencies should not have changed?
Any ideas what I can do to figure this out are greatly appreciated. I've tried cleaning my project, and invalidating caches/restart in android studio. I really don't want to enable multidex or have to run proguard on my debug build.
Here's the full build.gradle:
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'
apply plugin: 'kotlin-kapt'
android {
compileSdkVersion 28
defaultConfig {
applicationId "XXXXXXXXX"
minSdkVersion 19
targetSdkVersion 28
versionCode 1
versionName "0.1"
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
vectorDrawables.useSupportLibrary = true // see https://developer.android.com/studio/write/vector-asset-studio#sloption
}
buildTypes {
release {
minifyEnabled false
// Do code shrinking!
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
// Core stuff
implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
implementation 'com.android.support:appcompat-v7:28.0.0'
implementation 'com.android.support:recyclerview-v7:28.0.0'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'com.android.support.test:runner:1.0.2'
androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'
implementation 'android.arch.lifecycle:extensions:1.1.1'
implementation 'com.android.support:design:28.0.0'
implementation 'com.android.support:support-vector-drawable:28.0.0'
implementation 'com.google.android.gms:play-services-wearable:16.0.1'
// Dagger
implementation 'com.google.dagger:dagger:2.21'
kapt 'com.google.dagger:dagger-compiler:2.21'
// Dagger for Android
implementation 'com.google.dagger:dagger-android:2.21'
implementation 'com.google.dagger:dagger-android-support:2.21' // if you use the support libraries
kapt 'com.google.dagger:dagger-android-processor:2.21'
// Constraint layout
implementation 'com.android.support.constraint:constraint-layout:1.1.3'
// Associated WearOS project
wearApp project(':wear')
// Common library project
implementation project(':common')
// These were added to resolve gradle error on the 'com.android.support:appcompat-v7:28.0.0' implementation:
// All com.android.support libraries must use the exact same version specification (mixing versions can lead to
// runtime crashes). Found versions 28.0.0, 26.1.0. Examples include com.android.support:animated-vector-drawable:28.0.0
// and com.android.support:support-media-compat:26.1.0
// This seems to be related to linking the wear project. If the wear project was not linked, the error went away.
implementation 'com.android.support:support-media-compat:28.0.0'
implementation 'com.android.support:support-v4:28.0.0'
// RxJava
implementation 'io.reactivex.rxjava2:rxandroid:2.1.0'
implementation 'io.reactivex.rxjava2:rxjava:2.2.4'
// Retrofit
implementation 'com.squareup.retrofit2:retrofit:2.5.0'
implementation 'com.squareup.retrofit2:converter-gson:2.5.0'
// Retrofit RxJava
implementation 'com.squareup.retrofit2:adapter-rxjava2:2.5.0'
// Retrofit logging:
implementation 'com.squareup.okhttp3:logging-interceptor:3.12.1'
// Room
def room_version = "1.1.1"
implementation "android.arch.persistence.room:runtime:$room_version"
implementation "android.arch.persistence.room:common:$room_version"
implementation "android.arch.persistence.room:rxjava2:$room_version"
kapt "android.arch.persistence.room:compiler:$room_version"
// For modern time handling (java.time requires API 26 or higher)
implementation 'com.jakewharton.threetenabp:threetenabp:1.1.1'
// Graphing
implementation 'com.github.PhilJay:MPAndroidChart:v3.1.0-alpha'
// Dropbox
implementation 'com.dropbox.core:dropbox-core-sdk:3.0.11'
// OpenCSV
implementation 'com.opencsv:opencsv:4.5'
}
EDIT
So after enabling multidex, there are some heavy dependencies showing up under the following TLDs when I analyse the APK using Android Studio (I'm not sure if I should be looking at defined or referenced method numbers?):
com.dropbox: 26000 defined methods, 34000 referenced methods
com.android (mainly support libraries): 18700 defined, 24600 referenced
org.apache (commons, log etc): 15000 defined, 15700 referenced
These alone take me up to the limit. I still don't get why this is suddenly happening though :( Surely if I have not added any libraries, these numbers should not have changed?
| Simply add this to your Gradle file (Module: app) >> multiDexEnabled true
android {
defaultConfig {
...
minSdkVersion 21
targetSdkVersion 28
multiDexEnabled true
}
...
}
Then rebuild the project:
in the menu, click Build > Rebuild Project.
| Dex | 54,911,906 | 11 |
I am having trouble with the IntelliJ IDEA IDE.
It was working fine, but suddenly it started showing this error:
Android Dex: [untitled3] Error: Could not create the Java Virtual Machine.
Android Dex: [untitled3] Error: A fatal exception has occurred. Program will exit.
I have checked my SDK and JDK paths.
I have tried reinstalling it, but the problem is still the same.
Any help would be appreciated.
Thanks.
| The problem was caused by a heap size that was too high for the DX compiler; it can be changed here (File | Settings | Compiler | Android DX Compiler).
Check this document that explains why it happens when 32-bit JDK is used.
| Dex | 18,095,117 | 10 |
What is dex in Gradle or in Android?
In Gradle, what's the meaning of dexOptions?
Sometimes my project does not compile because of some dex errors.
I need to activate ProGuard to compile my Android app.
| In the standard Java world:
When you compile standard Java code, the compiler produces *.class files. A *.class file contains standard Java bytecode that can be executed on a standard JVM.
In the Android world:
It is different. You use the Java language to write your code, but the compiler doesn't produce *.class files; it produces *.dex files. A *.dex file contains bytecode that can be executed on the Android virtual machine (Dalvik), which is not a standard Java Virtual Machine.
To be clear: a dex file in Android is the equivalent of a class file in standard Java.
So dexOptions is a Gradle object where some options to configure this Java-code-to-Android-bytecode transformation are defined. The options configured via this object are:
targetAPILevel
force-jumbo mode (when enabled it allows a larger number of strings in the dex files)
To enable jumboMode:
android {
dexOptions {
jumboMode = true
}
}
| Dex | 24,224,186 | 10 |
I want to use the Android L compat libs. After adding the relevant code to Gradle, I get the error:
Error Code:
2
Output:
objc[36290]: Class JavaLaunchHelper is implemented in both /Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home/bin/java and /Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home/jre/lib/libinstrument.dylib. One of the two will be used. Which one is undefined.
UNEXPECTED TOP-LEVEL EXCEPTION:
java.lang.IllegalArgumentException: method ID not in [0, 0xffff]: 65536
at com.android.dx.merge.DexMerger$6.updateIndex(DexMerger.java:501)
at com.android.dx.merge.DexMerger$IdMerger.mergeSorted(DexMerger.java:276)
at com.android.dx.merge.DexMerger.mergeMethodIds(DexMerger.java:490)
at com.android.dx.merge.DexMerger.mergeDexes(DexMerger.java:167)
at com.android.dx.merge.DexMerger.merge(DexMerger.java:188)
at com.android.dx.command.dexer.Main.mergeLibraryDexBuffers(Main.java:439)
at com.android.dx.command.dexer.Main.runMonoDex(Main.java:287)
at com.android.dx.command.dexer.Main.run(Main.java:230)
at com.android.dx.command.dexer.Main.main(Main.java:199)
at com.android.dx.command.Main.main(Main.java:103)
I saw questions about this here and here, and tried out the solution from this blog post, but I still get an error; in the case of the blog post's solution, I get:
Error Code:
2 Output:
objc[36323]: Class JavaLaunchHelper is implemented in both /Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home/bin/java and /Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home/jre/lib/libinstrument.dylib. One of the two will be used. Which one is undefined.
UNEXPECTED TOP-LEVEL EXCEPTION:
com.android.dex.DexException: Library dex files are not supported in multi-dex mode
at com.android.dx.command.dexer.Main.runMultiDex(Main.java:322)
at com.android.dx.command.dexer.Main.run(Main.java:228)
at com.android.dx.command.dexer.Main.main(Main.java:199)
at com.android.dx.command.Main.main(Main.java:103)
These are my android gradle settings:
android {
compileSdkVersion 21
buildToolsVersion "20.0.0"
defaultConfig {
applicationId "com.my.package"
minSdkVersion 9
targetSdkVersion 21
}
buildTypes {
release {
runProguard false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-project.txt'
}
}
}
These are my dependencies:
dependencies {
compile project(':libraries:ecoGallery')
compile project(':libraries:facebookSDK')
compile 'com.android.support:support-v4:21.0.0'
compile 'com.android.support:appcompat-v7:21.0.0'
compile 'com.google.android.gms:play-services:6.1.71'
compile 'com.j256.ormlite:ormlite-android:4.48'
compile 'com.j256.ormlite:ormlite-core:4.48'
compile 'com.mixpanel.android:mixpanel-android:4.3.1@aar'
compile 'com.nostra13.universalimageloader:universal-image-loader:1.9.3'
compile 'com.nineoldandroids:library:2.4.0'
compile 'oauth.signpost:signpost-commonshttp4:1.2.1.2'
compile 'oauth.signpost:signpost-core:1.2.1.2'
compile 'com.uservoice:uservoice-android-sdk:+@aar'
compile 'com.newrelic.agent.android:android-agent:4.87.0'
compile 'com.google.guava:guava:18.0'
compile files('libs/android-support-multidex.jar')
}
Does anyone have any ideas for what I might be doing wrong?
| Gradle plugin v0.14.0 for Android adds full multidex support.
Remove all the build.gradle changes you made (for multidex), and simply add the following:
android {
defaultConfig {
...
multiDexEnabled = true
}
}
| Dex | 26,633,591 | 10 |
I have several projects which I build to create an .aar. I then import this .aar into Android Studio under /libs. The build.gradle file for this dependency looks as follows:
repositories{
flatDir{
dirs 'libs'
}
}
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
compile 'com.android.support:appcompat-v7:22.0.0'
compile 'com.google.android.gms:play-services:7.0.0'
compile 'com.android.support:multidex:+'
compile(name: 'customApi-debug', ext:'aar')
}
Since the library is quite large I have set multiDexEnabled = true. Android Studio finds the library and autocomplete works. Building works fine too but running the app gives the following error:
java.lang.NoClassDefFoundError: com.companyx.android.api.ui.vision.metaio.MetaIoView
at com.companyx.android.api.ui.vision.metaio.MetaIoView$$InjectAdapter.<init>(MetaIoView$$InjectAdapter.java:29)
I uncompressed and disassembled the .aar and dex files, respectively, and verified that the classes it's complaining about actually exist. I've tried existing approaches for dealing with this problem but none of them worked.
Anyone else experienced this? Thanks in advance.
| I ran into the same issue. The fix is to first deploy the AAR file to a local Maven repository (I utilized the plugin at https://github.com/dcendents/android-maven-gradle-plugin). Then I referenced the local Maven repository as described at https://stackoverflow.com/a/23045791/2563009. And eventually I declared the dependencies with a transitive option, like this:
dependencies {
compile('com.myapp.awesomelib:awesomelib:0.0.1@aar') {
transitive = true
}
}
The error would be gone then.
| Dex | 29,857,141 | 10 |
I have multiple library projects and they all have a dependency on the Support Library. My application has dependencies on these multiple library projects. Every library project contains references to the support library's resources in its R.java file. This inflates the field ID count because of redundancy.
My app gets
DexIndexOverflowException: field ID not in [0, 0xffff]: 65536
because of these redundant R.java references.
Because of this, my app has 47k methods but 65k field IDs.
Edit:
I won't use multi-dex; it is not a solution to my problem. I want to shave off the redundant field IDs.
The question is not about how to work around the problem; it is about how to get rid of the redundant field IDs. Using multi-dex won't remove them.
|
DexIndexOverflowException: field ID not in [0, 0xffff]: 65536
Android has a pre-defined upper limit of 65,536 methods.
When?
The size of the DEX file's method index is 16 bits, which means that
65,536 represents the total number of references that can be invoked by
the code within a single DEX file. If that limit is exceeded, this error arises.
Once you begin to include enough libraries to reach the 64K method limit, you need to remove extraneous dependencies.
How? Without using multiDex
You should add proguard.
ProGuard optimizes the bytecode, removes unused code instructions, and
obfuscates the remaining classes, fields, and methods with short
names. Resource shrinking is available with the Android plugin for
Gradle, which removes unused resources from your packaged app,
including unused resources in code libraries. It works in conjunction
with code shrinking such that once unused code has been removed, any
resources no longer referenced can be safely removed as well.
How to Enable Proguard
add minifyEnabled true to the appropriate build type in your build.gradle file.
android {
buildTypes {
release { //You can add this in debug mode
minifyEnabled true
proguardFiles getDefaultProguardFile('proguard-android.txt'),
'proguard-rules.pro'
}
}
}
NOTE
The getDefaultProguardFile('proguard-android.txt') method gets the
default ProGuard settings from the Android SDK tools/proguard/ folder.
The proguard-rules.pro file is where you can add custom ProGuard rules.
Resource shrinking
Resource shrinking works only in conjunction with code shrinking. After the code shrinker removes all unused code, the resource shrinker can identify which resources the app still uses.
buildTypes {
release {
minifyEnabled true
shrinkResources true //You can add this in debug mode
}
}
| Dex | 46,805,025 | 10 |
I have succeeded in dynamically loading classes from a dex file in the following way
File file = getDir("dex", 0);
DexClassLoader dexClassLoader = new DexClassLoader("/data/data/com.example.callerapp/files/test.dex", file.getAbsolutePath(), null, getClassLoader());
try {
Class<Object> _class = (Class<Object>)
dexClassLoader.loadClass("com.example.calledapp.test");
Object object = _class.newInstance();
Method method = _class.getMethod("function");
method.invoke(object);
} catch (Exception e) {
e.printStackTrace();
}
But what I want to do is load the class dynamically from the aar file, as shown in the Android dev page (DexClassLoader: A class loader that loads classes from .jar and .apk files containing a classes.dex entry. This can be used to execute code not installed as part of an application.)
I created a library module ("testlibrary") in Android Studio, created Test.java (what I want to load dynamically in the caller app) in the library module, and created an aar file through the Gradle Project -> Execute Gradle Task.
How can I dynamically load a class via DexClassLoader from an aar file created in this general way? I have moved the aar file via a provider from CalledApp to CallerApp.
Or is the process of creating an aar file wrong?
During runtime, an error message appears
02-10 09:43:48.744 16487-16487/com.example.callerapp W/System.err: java.lang.ClassNotFoundException: Didn't find class "com.example.calledlibrary.Test" on path: DexPathList[[zip file "/data/data/com.example.callerapp/files/testlibrary.aar"],nativeLibraryDirectories=[/system/lib64, /vendor/lib64]]
02-10 09:43:48.744 16487-16487/com.example.callerapp W/System.err: at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:93)
02-10 09:43:48.744 16487-16487/com.example.callerapp W/System.err: at java.lang.ClassLoader.loadClass(ClassLoader.java:379)
02-10 09:43:48.744 16487-16487/com.example.callerapp W/System.err: at java.lang.ClassLoader.loadClass(ClassLoader.java:312)
02-10 09:43:48.745 16487-16487/com.example.callerapp W/System.err: at com.example.callerapp.CallerActivity.onClick(CallerActivity.java:42)
02-10 09:43:48.745 16487-16487/com.example.callerapp W/System.err: at android.view.View.performClick(View.java:6877)
02-10 09:43:48.745 16487-16487/com.example.callerapp W/System.err: at android.widget.TextView.performClick(TextView.java:12651)
02-10 09:43:48.745 16487-16487/com.example.callerapp W/System.err: at android.view.View$PerformClick.run(View.java:26069)
02-10 09:43:48.745 16487-16487/com.example.callerapp W/System.err: at android.os.Handler.handleCallback(Handler.java:789)
02-10 09:43:48.746 16487-16487/com.example.callerapp W/System.err: at android.os.Handler.dispatchMessage(Handler.java:98)
02-10 09:43:48.746 16487-16487/com.example.callerapp W/System.err: at android.os.Looper.loop(Looper.java:164)
02-10 09:43:48.746 16487-16487/com.example.callerapp W/System.err: at android.app.ActivityThread.main(ActivityThread.java:6938)
02-10 09:43:48.746 16487-16487/com.example.callerapp W/System.err: at java.lang.reflect.Method.invoke(Native Method)
02-10 09:43:48.746 16487-16487/com.example.callerapp W/System.err: at com.android.internal.os.Zygote$MethodAndArgsCaller.run(Zygote.java:327)
02-10 09:43:48.747 16487-16487/com.example.callerapp W/System.err: at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1374)
02-10 09:43:48.747 16487-16487/com.example.callerapp W/System.err: Suppressed: java.io.IOException: No original dex files found for dex location (arm64) /data/data/com.example.caller/files/testlibrary.aar
02-10 09:43:48.747 16487-16487/com.example.callerapp W/System.err: at dalvik.system.DexFile.openDexFileNative(Native Method)
02-10 09:43:48.747 16487-16487/com.example.callerapp W/System.err: at dalvik.system.DexFile.openDexFile(DexFile.java:353)
02-10 09:43:48.747 16487-16487/com.example.callerapp W/System.err: at dalvik.system.DexFile.<init>(DexFile.java:100)
02-10 09:43:48.748 16487-16487/com.example.callerapp W/System.err: at dalvik.system.DexFile.<init>(DexFile.java:74)
02-10 09:43:48.748 16487-16487/com.example.callerapp W/System.err: at dalvik.system.DexPathList.loadDexFile(DexPathList.java:374)
02-10 09:43:48.748 16487-16487/com.example.callerapp W/System.err: at dalvik.system.DexPathList.makeDexElements(DexPathList.java:337)
02-10 09:43:48.748 16487-16487/com.example.callerapp W/System.err: at dalvik.system.DexPathList.<init>(DexPathList.java:157)
02-10 09:43:48.748 16487-16487/com.example.callerapp W/System.err: at dalvik.system.BaseDexClassLoader.<init>(BaseDexClassLoader.java:65)
02-10 09:43:48.748 16487-16487/com.example.callerapp W/System.err: at dalvik.system.DexClassLoader.<init>(DexClassLoader.java:57)
02-10 09:43:48.748 16487-16487/com.example.callerapp W/System.err: at com.example.caller.CallerActivity.onClick(CallerActivity.java:40)
02-10 09:43:48.749 16487-16487/com.example.callerapp W/System.err: ... 10 more
| You cannot load an aar file at runtime, because an aar file contains resources and a classes.jar file and does not contain a dex file.
But
you can use the injector Gradle plugin to get a dex from your aar and merge all your aar resources into your project, and after that you can use the injector-android lib to load those dex files at runtime. Check out the inject-example project.
| Dex | 48,716,303 | 10 |
I am trying to understand the difference between google_service_account_iam_binding and google_service_account_iam_member in the GCP terraform provider at https://www.terraform.io/docs/providers/google/r/google_service_account_iam.html.
I understand that google_service_account_iam_binding is for granting a role to a list of members whereas google_service_account_iam_member is for granting a role to a single member, however I'm not clear on what is meant by "Authoritative" and "Non-Authoritative" in these definitions:
google_service_account_iam_binding: Authoritative for a given role. Updates the IAM policy to grant a role to a list of members. Other roles within the IAM policy for the service account are preserved.
google_service_account_iam_member: Non-authoritative. Updates the IAM policy to grant a role to a new member. Other members for the role for the service account are preserved.
Can anyone elaborate for me please?
| "Authoritative" means to change all related privileges, on the other hand, "non-authoritative" means not to change related privileges, only to change ones you specified.
Otherwise, you can interpret authoritative as the single source of truth, and non-authoritative as a piece of truth.
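As a rough sketch of the difference in HCL (the service account reference, role, and member emails here are placeholder assumptions, not values from the question):

resource "google_service_account_iam_binding" "users" {
  # Authoritative: Terraform owns the complete member list for this role on
  # the service account; members granted this role elsewhere will be removed.
  service_account_id = google_service_account.example.name
  role               = "roles/iam.serviceAccountUser"
  members = [
    "user:alice@example.com",
    "user:bob@example.com",
  ]
}

resource "google_service_account_iam_member" "single_user" {
  # Non-authoritative: Terraform only ensures this one member has the role
  # and leaves any other members of the role alone.
  service_account_id = google_service_account.example.name
  role               = "roles/iam.serviceAccountUser"
  member             = "user:alice@example.com"
}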
| Terraform | 63,915,353 | 28 |
How can this S3 bucket IAM policy, which has multiple conditions, be re-written as aws_iam_policy_document data block, please?
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control",
"aws:SourceAccount": "xxxxxxxxxxxx"
},
"ArnLike": {
"aws:SourceArn": "arn:aws:s3:::my-tf-test-bucket"
}
}
With the aws_iam_policy_document condition block syntax:
condition {
test = "StringEquals"
values = []
variable = ""
}
| The aws_iam_policy_document data source supports multiple condition blocks.
The following Terraform configuration should help:
data "aws_iam_policy_document" "test" {
statement {
effect = "Deny"
actions = ["backup:*"]
resources = ["*"]
condition {
test = "StringEquals"
values = ["bucket-owner-full-control"]
variable = "s3:x-amz-acl"
}
condition {
test = "StringEquals"
values = ["xxxxxxxxxxxx"]
variable = "aws:SourceAccount"
}
condition {
test = "ArnLike"
values = ["arn:aws:s3:::my-tf-test-bucket"]
variable = "aws:SourceArn"
}
}
}
output "policy" {
value = data.aws_iam_policy_document.test.json
}
If we do a terraform plan on that we will get:
terraform plan
data.aws_iam_policy_document.test: Reading...
data.aws_iam_policy_document.test: Read complete after 0s [id=3933526891]
Changes to Outputs:
+ policy = jsonencode(
{
+ Statement = [
+ {
+ Action = "backup:*"
+ Condition = {
+ ArnLike = {
+ "aws:SourceArn" = "arn:aws:s3:::my-tf-test-bucket"
}
+ StringEquals = {
+ "aws:SourceAccount" = "xxxxxxxxxxxx"
+ "s3:x-amz-acl" = "bucket-owner-full-control"
}
}
+ Effect = "Deny"
+ Resource = "*"
},
]
+ Version = "2012-10-17"
}
)
| Terraform | 62,831,874 | 28 |
I am trying to use a certificate issued in eu-central-1 for my apigateway which is regional and works in the same region.
My terraform code is as follows:
//ACM Certificate
provider "aws" {
region = "eu-central-1"
alias = "eu-central-1"
}
resource "aws_acm_certificate" "certificate" {
provider = "aws.eu-central-1"
domain_name = "*.kumite.xyz"
validation_method = "EMAIL"
}
//Apigateway
resource "aws_api_gateway_rest_api" "kumite_writer_api" {
name = "kumite_writer_api"
endpoint_configuration {
types = ["REGIONAL"]
}
}
resource "aws_api_gateway_domain_name" "domain_name" {
certificate_arn = aws_acm_certificate.certificate.arn
domain_name = "recorder.kumite.xyz"
endpoint_configuration {
types = ["REGIONAL"]
}
}
Unfortunately, I constantly get this error:
Error: Error creating API Gateway Domain Name: BadRequestException: Cannot import certificates for EDGE while REGIONAL is active.
What am I missing here? I think my API Gateway is REGIONAL, not EDGE, so the error doesn't make sense to me...
| Change certificate_arn to regional_certificate_arn.
From documentation (emphasis mine):
When referencing an AWS-managed certificate, the following arguments are supported:
certificate_arn - (Optional) The ARN for an AWS-managed certificate. AWS Certificate Manager is the only supported source. Used when an edge-optimized domain name is desired. Conflicts with certificate_name, certificate_body, certificate_chain, certificate_private_key, regional_certificate_arn, and regional_certificate_name.
regional_certificate_arn - (Optional) The ARN for an AWS-managed certificate. AWS Certificate Manager is the only supported source. Used when a regional domain name is desired. Conflicts with certificate_arn, certificate_name, certificate_body, certificate_chain, and certificate_private_key.
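As a minimal sketch of the fix (reusing the certificate and domain name from the question), the domain name resource would look something like:

resource "aws_api_gateway_domain_name" "domain_name" {
  # Regional endpoint, so the regional certificate argument is used
  regional_certificate_arn = aws_acm_certificate.certificate.arn
  domain_name              = "recorder.kumite.xyz"

  endpoint_configuration {
    types = ["REGIONAL"]
  }
}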
| Terraform | 57,231,202 | 28 |
I am sitting with a situation where I need to provision EC2 instances with some packages on startup. There are a couple of (enterprise/corporate) constraints that exist:
I need to provision on top of a specific AMI, which adds enterprisey stuff such as LDAP/AD access and so on
These changes are intended to be used for all internal development machines
Because of mainly the second constraint, I was wondering where is the best place to place the provisioning. This is what I've come up with
Provision in Terraform
As it states, I simply provision in terraform for the necessary instances. If I package these resources into modules, then provisioning won't "leak out". The disadvantages
I won't be able to add a different set of provisioning steps on top of the module?
A change in the provisioning will probably result in instances being destroyed on apply?
Provisioning takes a long time because of the packages it tries to install
Provisioning in Packer
This is based on the assumption that Packer allows you to provision on top of AMIs so that AMIs can be "extended". Also, this will only be used in AWS so it won't use other builders necessarily. Provisioning in Packer makes the Terraform Code much simpler and terraform applies will become faster because it's just an AMI that you fire up.
For me both of these methods have their place. But what I really want to know is when do you choose Packer Provisioning over Terraform Provisioning?
| Using Packer to create finished (or very nearly finished) images drastically shortens the time it takes to deploy new instances and also allows you to use autoscaling groups.
If you have Terraform run a provisioner such as Chef or Ansible on every EC2 instance creation you add a chunk of time for the provisioner to run at the time you need to deploy new instances. In my opinion it's much better to do the configuration up front and ahead of time using Packer to bake as much as possible into the AMI and then use user data scripts/tools like Consul-Template to provide environment specific differences.
Packer certainly can build on top of images and in fact requires a source_ami to be specified. I'd strongly recommend tagging your AMIs in a way that allows you to use source_ami_filter in Packer and Terraform's aws_ami data source so when you make changes to your AMIs Packer and Terraform will automatically pull those in to be built on top of or deployed at the next opportunity.
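As an illustration of the Terraform half of that approach (the tag key and value below are assumptions for the sketch, not an established convention), a data source like this always resolves to the newest matching image:

data "aws_ami" "base" {
  most_recent = true
  owners      = ["self"]

  # Hypothetical tag applied by the Packer build
  filter {
    name   = "tag:Role"
    values = ["base"]
  }
}

# Launch configurations or instances can then reference data.aws_ami.base.id,
# so each deployment automatically picks up the latest baked AMI.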
I personally bake a reasonably lightweight "Base" AMI that does some basic hardening and sets up monitoring and logging that I want for all instances that are deployed and also makes sure that Packer encrypts the root volume of the AMI. All other images are then built off the latest "Base" AMI and don't have to worry about making sure those things are installed/configured or worry about encrypting the root volume.
By baking your configuration into the AMI you are also able to move towards the immutable infrastructure model which has some major benefits in that you know that you can always throw away an instance that is having issues and very quickly replace it with a new one. Depending on your maturity level you could even remove access to the instances so that it's no longer possible to change anything on the instance once it has been deployed which, in my experience, is a major factor in operational issues.
Very occasionally you might come across something that makes it very difficult to bake an AMI for and in those cases you might choose to run your provisioning scripts in a Terraform provisioner when it is being created. Sometimes it's simply easier to move an existing process over to using provisioners with Terraform than baking the AMIs but I would push to move things over to Packer where possible.
| Terraform | 49,314,752 | 28 |
I am getting this error when I try to do any operation:
Error locking state: Error acquiring the state lock: state blob is already locked
How can I list the people who currently have a lock and how long the lock has been held?
| The easiest fix for this issue is to:
(1) navigate to the storage account,
(2) then to the container in the Azure portal that holds the state file.
(3) The blob will show as ‘Leased’ under the leased state column.
(4) Select the state file, and hit the ‘break lease’ button.
*FYI: You need Privileged Identity Management (PIM) to do this.
Quote from Fixing Terraform ‘Error acquiring state lock’ in Azure
| Terraform | 64,690,427 | 27 |
I am writing a small script that takes a small file from my local machine and puts it into an AWS S3 bucket.
My terraform.tf:
provider "aws" {
region = "us-east-1"
version = "~> 1.6"
}
terraform {
backend "s3" {
bucket = "${var.bucket_testing}"
kms_key_id = "arn:aws:kms:us-east-1:12345678900:key/12312313ed-34sd-6sfa-90cvs-1234asdfasd"
key = "testexport/exportFile.tfstate"
region = "us-east-1"
encrypt = true
}
}
data "aws_s3_bucket" "pr-ip" {
bucket = "${var.bucket_testing}"
}
resource "aws_s3_bucket_object" "put_file" {
bucket = "${data.aws_s3_bucket.pr-ip.id}"
key = "${var.file_path}/${var.file_name}"
source = "src/Datafile.txt"
etag = "${md5(file("src/Datafile.txt"))}"
kms_key_id = "arn:aws:kms:us-east-1:12345678900:key/12312313ed-34sd-6sfa-90cvs-1234asdfasd"
server_side_encryption = "aws:kms"
}
However, when I init:
terraform init
#=>
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working with Terraform immediately by creating Terraform configuration files.
and then try to apply:
terraform apply
#=>
Error: No configuration files found!
Apply requires configuration to be present. Applying without a configuration would mark everything for destruction, which is normally not what is desired. If you would like to destroy everything, please run 'terraform destroy' instead which does not require any configuration files.
I get the error above. Also, I have set up my default AWS Access Key ID and value.
What can I do?
| This error means that you have run the command in the wrong place. You have to be in the directory that contains your configuration files, so before running init or apply you have to cd to your Terraform project folder.
| Terraform | 52,351,809 | 27 |
When starting off I was using the default workspace. Due to increased complexity I would like to use multiple workspaces. I want to move what is in the default workspace into its own workspace, or rename the default workspace as another workspace. How can I do this?
| Yes it is possible to migrate state between workspaces.
I'm assuming that you are using S3 remote backend and terraform version >= 0.13
Let's see what this state surgery looks like:
Sample resource config that needs to be migrated between workspaces:
provider "local" {
version = "2.1.0"
}
resource "local_file" "foo" {
content = "foo!"
filename = "foo.bar"
}
terraform {
backend "s3" {
bucket = ""
region = ""
kms_key_id = ""
encrypt = ""
key = ""
dynamodb_table = ""
}
}
Let's initialise the backend for the default workspace and apply:
terraform init
<Initialize the backend>
terraform workspace list
* default
terraform apply
local_file.foo: Refreshing state... [id=<>]
Apply complete! Resources: 0 added, 0 changed, 0 destroyed
So, as you can see a local file was already created and the state is stored in the default workspace. Terraform apply didn't change anything.
Now, we want to migrate to a new workspace:
Pull the state while you are still in the default workspace
terraform state pull > default.tfstate
Create a new workspace; let's call it test
terraform workspace new test
Created and switched to workspace "test"!
If you try to run terraform state list, you should not see any state.
Let's push the state to newly created workspace and see what's in the state; also what happens when we apply.
terraform state push default.tfstate
terraform state list
local_file.foo
terraform apply
local_file.foo: Refreshing state... [id=<>]
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Your local_file.foo has been migrated to the test workspace.
Don't forget to switch back to the default workspace and remove state references for this file.
terraform workspace select default
terraform state rm local_file.foo
Removed local_file.foo
Successfully removed 1 resource instance(s).
PS: I would highly recommend reading more about managing Terraform state.
| Terraform | 66,979,732 | 26 |
AWS supports IAM Roles for Service Accounts (IRSA) that allows cluster operators to map AWS IAM Roles to Kubernetes Service Accounts.
To do so, one has to create an iamserviceaccount in an EKS cluster:
eksctl create iamserviceaccount \
--name <AUTOSCALER_NAME> \
--namespace kube-system \
--cluster <CLUSTER_NAME> \
--attach-policy-arn <POLICY_ARN> \
--approve \
--override-existing-serviceaccounts
The problem is that I don't want to use the above eksctl command because I want to declare my infrastructure using terraform.
Does eksctl command do anything other than creating a service account? If it only creates a service account, what is the YAML representation of it?
| I am adding my answer here because I stumbled upon the same issue, and the accepted answer (and other answers above) do not provide a full resolution to the issue - no code examples. They are just guidelines which I had to use to research much deeper. There are some issues which are really easy to miss - and without code examples it's quite hard to conclude what is happening (especially the part related to Conditions/StringEquals while creating the IAM role).
The whole purpose of creating a service account which is going to be tied to the role is the possibility of creating AWS resources from within the cluster (the most common cases are load balancers, or roles for pushing logs to CloudWatch).
So, the question is how we can do this using Terraform, instead of using eksctl commands.
What we need to do, is:
create eks oidc (which can be done with terraform)
create AWS IAM role (which can be done with terraform), create and use proper policies
Create k8s service account (needs to be done with kubectl commands - or with terraform using kubernetes resources)
Annotate k8s service account with IAM role we created (meaning that we are linking k8s service account with IAM role)
After this setup, our k8s service account will have a k8s cluster role and cluster role binding (which will allow that service account to perform actions within k8s), and it will have an IAM role attached to it, which will allow it to perform actions outside of the cluster (like creating AWS resources).
So let's start with it. The assumption below is that your EKS cluster is already created with Terraform, and we are focusing on creating the resources around that EKS cluster necessary for a working service account.
Create eks_oidc
### First we need to create tls certificate
data "tls_certificate" "eks-cluster-tls-certificate" {
url = aws_eks_cluster.eks-cluster.identity[0].oidc[0].issuer
}
# After that create oidc
resource "aws_iam_openid_connect_provider" "eks-cluster-oidc" {
client_id_list = ["sts.amazonaws.com"]
thumbprint_list = [data.tls_certificate.eks-cluster-tls-certificate.certificates[0].sha1_fingerprint]
url = aws_eks_cluster.eks-cluster.identity[0].oidc[0].issuer
}
Now, let's create an AWS IAM role with all the necessary policies.
The declarative Terraform code below will:
create ALBIngressControllerIAMPolicy policy
create alb-ingress-controller-role role
attach the ALBIngressControllerIAMPolicy policy to the alb-ingress-controller-role role
attach already existing AmazonEKS_CNI_Policy policy to the role
Note that I used "alb ingress controller" suffixes here, because that is the primary use of my role from within the cluster. You can change the name of the policy or the role, or change the policy's permissions as well, depending on what you are planning to do with it.
data "aws_caller_identity" "current" {}
locals {
account_id = data.aws_caller_identity.current.account_id
eks_oidc = replace(replace(aws_eks_cluster.eks-cluster.endpoint, "https://", ""), "/\\..*$/", "")
}
# Policy which will allow us to create application load balancer from inside of cluster
resource "aws_iam_policy" "ALBIngressControllerIAMPolicy" {
name = "ALBIngressControllerIAMPolicy"
description = "Policy which will be used by role for service - for creating alb from within cluster by issuing declarative kube commands"
policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Effect = "Allow",
Action = [
"elasticloadbalancing:ModifyListener",
"wafv2:AssociateWebACL",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:DescribeInstances",
"wafv2:GetWebACLForResource",
"elasticloadbalancing:RegisterTargets",
"iam:ListServerCertificates",
"wafv2:GetWebACL",
"elasticloadbalancing:SetIpAddressType",
"elasticloadbalancing:DeleteLoadBalancer",
"elasticloadbalancing:SetWebAcl",
"ec2:DescribeInternetGateways",
"elasticloadbalancing:DescribeLoadBalancers",
"waf-regional:GetWebACLForResource",
"acm:GetCertificate",
"shield:DescribeSubscription",
"waf-regional:GetWebACL",
"elasticloadbalancing:CreateRule",
"ec2:DescribeAccountAttributes",
"elasticloadbalancing:AddListenerCertificates",
"elasticloadbalancing:ModifyTargetGroupAttributes",
"waf:GetWebACL",
"iam:GetServerCertificate",
"wafv2:DisassociateWebACL",
"shield:GetSubscriptionState",
"ec2:CreateTags",
"elasticloadbalancing:CreateTargetGroup",
"ec2:ModifyNetworkInterfaceAttribute",
"elasticloadbalancing:DeregisterTargets",
"elasticloadbalancing:DescribeLoadBalancerAttributes",
"ec2:RevokeSecurityGroupIngress",
"elasticloadbalancing:DescribeTargetGroupAttributes",
"shield:CreateProtection",
"acm:DescribeCertificate",
"elasticloadbalancing:ModifyRule",
"elasticloadbalancing:AddTags",
"elasticloadbalancing:DescribeRules",
"ec2:DescribeSubnets",
"elasticloadbalancing:ModifyLoadBalancerAttributes",
"waf-regional:AssociateWebACL",
"tag:GetResources",
"ec2:DescribeAddresses",
"ec2:DeleteTags",
"shield:DescribeProtection",
"shield:DeleteProtection",
"elasticloadbalancing:RemoveListenerCertificates",
"tag:TagResources",
"elasticloadbalancing:RemoveTags",
"elasticloadbalancing:CreateListener",
"elasticloadbalancing:DescribeListeners",
"ec2:DescribeNetworkInterfaces",
"ec2:CreateSecurityGroup",
"acm:ListCertificates",
"elasticloadbalancing:DescribeListenerCertificates",
"ec2:ModifyInstanceAttribute",
"elasticloadbalancing:DeleteRule",
"cognito-idp:DescribeUserPoolClient",
"ec2:DescribeInstanceStatus",
"elasticloadbalancing:DescribeSSLPolicies",
"elasticloadbalancing:CreateLoadBalancer",
"waf-regional:DisassociateWebACL",
"elasticloadbalancing:DescribeTags",
"ec2:DescribeTags",
"elasticloadbalancing:*",
"elasticloadbalancing:SetSubnets",
"elasticloadbalancing:DeleteTargetGroup",
"ec2:DescribeSecurityGroups",
"iam:CreateServiceLinkedRole",
"ec2:DescribeVpcs",
"ec2:DeleteSecurityGroup",
"elasticloadbalancing:DescribeTargetHealth",
"elasticloadbalancing:SetSecurityGroups",
"elasticloadbalancing:DescribeTargetGroups",
"shield:ListProtections",
"elasticloadbalancing:ModifyTargetGroup",
"elasticloadbalancing:DeleteListener"
],
Resource = "*"
}
]
})
}
# Create IAM role
resource "aws_iam_role" "alb-ingress-controller-role" {
name = "alb-ingress-controller"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Federated": "${aws_iam_openid_connect_provider.eks-cluster-oidc.arn}"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"${replace(aws_iam_openid_connect_provider.eks-cluster-oidc.url, "https://", "")}:sub": "system:serviceaccount:kube-system:alb-ingress-controller",
"${replace(aws_iam_openid_connect_provider.eks-cluster-oidc.url, "https://", "")}:aud": "sts.amazonaws.com"
}
}
}
]
}
POLICY
depends_on = [aws_iam_openid_connect_provider.eks-cluster-oidc]
tags = {
"ServiceAccountName" = "alb-ingress-controller"
"ServiceAccountNameSpace" = "kube-system"
}
}
# Attach policies to IAM role
resource "aws_iam_role_policy_attachment" "alb-ingress-controller-role-ALBIngressControllerIAMPolicy" {
policy_arn = aws_iam_policy.ALBIngressControllerIAMPolicy.arn
role = aws_iam_role.alb-ingress-controller-role.name
depends_on = [aws_iam_role.alb-ingress-controller-role]
}
resource "aws_iam_role_policy_attachment" "alb-ingress-controller-role-AmazonEKS_CNI_Policy" {
role = aws_iam_role.alb-ingress-controller-role.name
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
depends_on = [aws_iam_role.alb-ingress-controller-role]
}
After executing the Terraform above, you have successfully created the Terraform part of the resources. Now we need to create a k8s service account and bind the IAM role to that service account.
Creating cluster role, cluster role binding and service account
You can use
https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/master/docs/examples/rbac-role.yaml
directly (from the master branch), but keeping in mind that we need to annotate the IAM ARN, I tend to download this file, update it, and store the updated version with my kubectl config files.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/name: alb-ingress-controller
name: alb-ingress-controller
rules:
- apiGroups:
- ""
- extensions
resources:
- configmaps
- endpoints
- events
- ingresses
- ingresses/status
- services
- pods/status
verbs:
- create
- get
- list
- update
- watch
- patch
- apiGroups:
- ""
- extensions
resources:
- nodes
- pods
- secrets
- services
- namespaces
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/name: alb-ingress-controller
name: alb-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: alb-ingress-controller
subjects:
- kind: ServiceAccount
name: alb-ingress-controller
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/name: alb-ingress-controller
name: alb-ingress-controller
namespace: kube-system
annotations:
eks.amazonaws.com/role-arn: <ARN OF YOUR ROLE HERE>
...
At the bottom of this file, you will notice the annotation where you will need to place your role ARN.
Double check
And that would be it. After that you have a k8s service account which is connected to the IAM role.
Check with:
kubectl get sa -n kube-system
kubectl describe sa alb-ingress-controller -n kube-system
And you should get output similar to this (the annotations are the most important part, because they confirm the attachment of the IAM role):
Name: alb-ingress-controller
Namespace: kube-system
Labels: app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=alb-ingress-controller
Annotations: eks.amazonaws.com/role-arn: <YOUR ARN WILL BE HERE>
meta.helm.sh/release-name: testrelease
meta.helm.sh/release-namespace: default
Image pull secrets: <none>
Mountable secrets: alb-ingress-controller-token-l4pd8
Tokens: alb-ingress-controller-token-l4pd8
Events: <none>
From now on, you can use this service account to manage internal k8s resources and external ones allowed by the policies you attached.
In my case, as mentioned before, I used it (besides other things) for the creation of the alb ingress controller and load balancer, hence all of the "alb-ingress" prefixes.
| Terraform | 65,934,606 | 26 |
I am trying to work out how to iterate over nested variables from a complex object given in the following tfvars file using Terraform 0.12.10:
example.tfvars
virtual_network_data = {
1 = {
product_instance_id = 1
location = "somewhere"
address_space = ["192.168.0.0/23"]
dns_servers = []
custom_tags = {"test":"test value"}
subnets = [
{
purpose = "mgmt"
newbits = 4
item = 0
},
{
purpose = "transit"
newbits = 4
item = 1
}
]
}
}
example.tf
variable "virtual_network_data" {} #Data comes from example.tfvars
variable "resource_group_name" {
default = "my_resource_group"
}
variable "virtual_network_name" {
default = "my_virtual_network"
}
####
resource "azurerm_subnet" "pool" {
for_each = var.virtual_network_data
name = format("%s%s%02d", "subnet_", s.purpose, s.item)
resource_group_name = var.resource_group_name
virtual_network_name = var.virtual_network_name
address_prefix = cidrsubnet(each.value["address_space"], s.newbits, s.item)
}
In example.tf I can use each.value["address_space"] to get to the top level variables, but I can't work out how to get to the items in subnets (s.purpose, s.item & s.newbits).
I have used dynamic blocks, as part of a parent resource (below), which works but in this case, I need to move the subnet into its own resource. Simply put, how do I get the first for_each to behave like the second for_each in the dynamic block?
resource "azurerm_virtual_network" "pool" {
for_each = var.virtual_network_data
name = format("%s%02d", local.resource_name, each.key)
resource_group_name = var.resource_group_name
location = each.value["location"]
address_space = each.value["address_space"]
dns_servers = each.value["dns_servers"]
tags = merge(local.tags, each.value["custom_tags"])
dynamic "subnet" {
for_each = [for s in each.value["subnets"]: {
name = format("%s%s%02d", "subnet_", s.purpose, s.item)
prefix = cidrsubnet(element(each.value["address_space"],0), s.newbits, s.item)
}]
content {
name = subnet.value.name
address_prefix = subnet.value.prefix
}
}
}
Cheeky bonus, is there a way to replace s.item with something like each.key or count.index?
TIA
| The technique in this situation is to use other Terraform language features to transform your collection to be a suitable shape for the for_each argument: one element per resource instance.
For nested data structures, you can use flatten in conjunction with two or more for expressions to produce a flat data structure with one element per nested object:
locals {
network_subnets = flatten([
for network_key, network in var.virtual_network_data : [
for subnet in network.subnets : {
network_key = network_key
purpose = subnet.purpose
parent_cidr_block = network.address_space[0]
newbits = subnet.newbits
item = subnet.item
}
]
])
}
Then you can use local.network_subnets as the basis for repetition:
resource "azurerm_subnet" "pool" {
# Each instance must have a unique key, so we'll construct one
# by combining the network key, the subnet "purpose", and the "item".
for_each = {
for ns in local.network_subnets : "${ns.network_key}.${ns.purpose}${ns.item}" => ns
}
name = format("%s%s%02d", "subnet_", each.value.purpose, each.value.item)
resource_group_name = var.resource_group_name
virtual_network_name = var.virtual_network_name
address_prefix = cidrsubnet(each.value.parent_cidr_block, each.value.newbits, each.value.item)
}
There's a similar example in the flatten documentation, as some additional context.
| Terraform | 58,343,258 | 26 |
I'm trying to iterate through a map-type variable and I'm not sure how to.
This is what I have so far.
In my main.tf:
resource "aws_route_53_record" "proxy_dns" {
count = "${length(var.account_name)}"
zone_id = "${infrastructure.zone_id}"
name = "proxy-${element(split(",", var.account_name), count.index)}-dns
type = CNAME
ttl = 60
records = ["{records.dns_name}"]
}
And in my variables.tf
variable "account_name" {
type = "map"
default = {
"account1" = "accountA"
"account2" = "accountB"
}
}
I want to be able to create multiple resources with the different account names
| If you are using Terraform 0.12.6 or later then you can use for_each instead of count to produce one instance for each element in your map:
resource "aws_route53_record" "proxy_dns" {
for_each = var.account_name
zone_id = infrastructure.zone_id
name = "proxy-${each.value}-dns"
# ... etc ...
}
The primary advantage of for_each over count is that Terraform will identify the instances by the key in the map, so you'll get instances like aws_route53_record.proxy_dns["account1"] instead of aws_route53_record.proxy_dns[0], and so you can add and remove elements from your map in future with Terraform knowing which specific instance belongs to each element.
each.key and each.value in the resource type arguments replace count.index when for_each is used. They evaluate to the key and value of the current map element, respectively.
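For example (a small sketch building on the resource above), the per-account records can be looked up by their map keys rather than by position:

output "proxy_dns_fqdns" {
  # Keys are "account1" and "account2" from var.account_name, so adding or
  # removing accounts never renumbers the remaining records.
  value = { for k, r in aws_route53_record.proxy_dns : k => r.fqdn }
}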
| Terraform | 57,503,110 | 26 |
I am currently working through the beta book "Terraform Up & Running, 2nd Edition". In chapter 2, I created an auto scaling group and a load balancer in AWS.
Now I made my backend server HTTP ports configurable. By default they listen on port 8080.
variable "server_port" {
…
default = 8080
}
resource "aws_launch_configuration" "example" {
…
user_data = <<-EOF
#!/bin/bash
echo "Hello, World" > index.html
nohup busybox httpd -f -p ${var.server_port} &
EOF
…
}
resource "aws_security_group" "instance" {
…
ingress {
from_port = var.server_port
to_port = var.server_port
…
}
}
The same port also needs to be configured in the application load balancer's target group.
resource "aws_lb_target_group" "asg" {
…
port = var.server_port
…
}
When my infrastructure is already deployed, for example with the configuration for the port set to 8080, and then I change the variable to 80 by running terraform apply --var server_port=80, the following error is reported:
> Error: Error deleting Target Group: ResourceInUse: Target group
> 'arn:aws:elasticloadbalancing:eu-central-1:…:targetgroup/terraform-asg-example/…'
> is currently in use by a listener or a rule status code: 400,
How can I refine my Terraform infrastructure definition to make this change possible? I suppose it might be related to a lifecycle option somewhere, but I didn't manage to figure it out yet.
For your reference I attach my whole infrastructure definition below:
provider "aws" {
region = "eu-central-1"
}
output "alb_location" {
value = "http://${aws_lb.example.dns_name}"
description = "The location of the load balancer"
}
variable "server_port" {
description = "The port the server will use for HTTP requests"
type = number
default = 8080
}
resource "aws_lb_listener_rule" "asg" {
listener_arn = aws_lb_listener.http.arn
priority = 100
condition {
field = "path-pattern"
values = ["*"]
}
action {
type = "forward"
target_group_arn = aws_lb_target_group.asg.arn
}
}
resource "aws_lb_target_group" "asg" {
name = "terraform-asg-example"
port = var.server_port
protocol = "HTTP"
vpc_id = data.aws_vpc.default.id
health_check {
path = "/"
protocol = "HTTP"
matcher = "200"
interval = 15
timeout = 3
healthy_threshold = 2
unhealthy_threshold = 2
}
}
resource "aws_lb_listener" "http" {
load_balancer_arn = aws_lb.example.arn
port = 80
protocol = "HTTP"
default_action {
type = "fixed-response"
fixed_response {
content_type = "text/plain"
message_body = "404: page not found"
status_code = 404
}
}
}
resource "aws_lb" "example" {
name = "terraform-asg-example"
load_balancer_type = "application"
subnets = data.aws_subnet_ids.default.ids
security_groups = [aws_security_group.alb.id]
}
resource "aws_security_group" "alb" {
name = "terraform-example-alb"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_autoscaling_group" "example" {
launch_configuration = aws_launch_configuration.example.name
vpc_zone_identifier = data.aws_subnet_ids.default.ids
target_group_arns = [aws_lb_target_group.asg.arn]
health_check_type = "ELB"
min_size = 2
max_size = 10
tag {
key = "Name"
value = "terraform-asg-example"
propagate_at_launch = true
}
}
resource "aws_launch_configuration" "example" {
image_id = "ami-0085d4f8878cddc81"
instance_type = "t2.micro"
security_groups = [aws_security_group.instance.id]
user_data = <<-EOF
#!/bin/bash
echo "Hello, World" > index.html
nohup busybox httpd -f -p ${var.server_port} &
EOF
lifecycle {
create_before_destroy = true
}
}
resource "aws_security_group" "instance" {
name = "terraform-example-instance"
ingress {
from_port = var.server_port
to_port = var.server_port
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
data "aws_subnet_ids" "default" {
vpc_id = data.aws_vpc.default.id
}
data "aws_vpc" "default" {
default = true
}
| From the issue link in the comment on Cannot rename ALB Target Group if Listener present:
Add a lifecycle rule to your target group so it becomes:
resource "aws_lb_target_group" "asg" {
name = "terraform-asg-example"
port = var.server_port
protocol = "HTTP"
vpc_id = data.aws_vpc.default.id
health_check {
path = "/"
protocol = "HTTP"
matcher = "200"
interval = 15
timeout = 3
healthy_threshold = 2
unhealthy_threshold = 2
}
lifecycle {
create_before_destroy = true
}
}
However you will need to choose a method for changing the name of your target group as well. There is further discussion and suggestions on how to do this.
But one possible solution is to simply use a guid but ignore changes to the name:
resource "aws_lb_target_group" "asg" {
name = "terraform-asg-example-${substr(uuid(), 0, 3)}"
port = var.server_port
protocol = "HTTP"
vpc_id = data.aws_vpc.default.id
health_check {
path = "/"
protocol = "HTTP"
matcher = "200"
interval = 15
timeout = 3
healthy_threshold = 2
unhealthy_threshold = 2
}
lifecycle {
create_before_destroy = true
ignore_changes = [name]
}
}
| Terraform | 57,183,814 | 26 |
I'm using Terraform to provision some resources in Azure and I can't seem to get Helm to install nginx-ingress because it times out waiting for a condition
helm_release.nginx_ingress: 1 error(s) occurred:
helm_release.nginx_ingress: rpc error: code = Unknown desc = release nginx-ingress failed: timed out waiting for the condition
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with any
resources that successfully completed. Please address the error above
and apply again to incrementally change your infrastructure.
main.tf
data "azurerm_public_ip" "nginx_ingress" {
name = "xxxx-public-ip"
resource_group_name = "xxxx-public-ip"
}
resource "azurerm_resource_group" "xxxx_RG" {
name = "${var.name_prefix}"
location = "${var.location}"
}
resource "azurerm_kubernetes_cluster" "k8s" {
name = "${var.name_prefix}-aks"
kubernetes_version = "${var.kubernetes_version}"
location = "${azurerm_resource_group.xxxx_RG.location}"
resource_group_name = "${azurerm_resource_group.xxxx_RG.name}"
dns_prefix = "AKS-${var.dns_prefix}"
agent_pool_profile {
name = "${var.node_pool_name}"
count = "${var.node_pool_size}"
vm_size = "${var.node_pool_vmsize}"
os_type = "${var.node_pool_os}"
os_disk_size_gb = 30
}
service_principal {
client_id = "${var.client_id}"
client_secret = "${var.client_secret}"
}
tags = {
environment = "${var.env_tag}"
}
}
provider "helm" {
install_tiller = true
kubernetes {
host = "${azurerm_kubernetes_cluster.k8s.kube_config.0.host}"
client_certificate = "${base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_certificate)}"
client_key = "${base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_key)}"
cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.cluster_ca_certificate)}"
}
}
# Add Kubernetes Stable Helm charts repo
resource "helm_repository" "stable" {
name = "stable"
url = "https://kubernetes-charts.storage.googleapis.com"
}
# Install Nginx Ingress using Helm Chart
resource "helm_release" "nginx_ingress" {
name = "nginx-ingress"
repository = "${helm_repository.stable.metadata.0.name}"
chart = "nginx-ingress"
wait = "true"
set {
name = "rbac.create"
value = "false"
}
set {
name = "controller.service.externalTrafficPolicy"
value = "Local"
}
set {
name = "controller.service.loadBalancerIP"
value = "${data.azurerm_public_ip.nginx_ingress.ip_address}"
}
}
Then deploying my application with this
provider "kubernetes" {
host = "${azurerm_kubernetes_cluster.k8s.kube_config.0.host}"
username = "${azurerm_kubernetes_cluster.k8s.kube_config.0.username}"
password = "${azurerm_kubernetes_cluster.k8s.kube_config.0.password}"
client_certificate = "${base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_certificate)}"
client_key = "${base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_key)}"
cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.cluster_ca_certificate)}"
}
resource "kubernetes_deployment" "flask_api_deployment" {
metadata {
name = "flask-api-deployment"
}
spec {
replicas = 1
selector {
match_labels {
component = "api"
}
}
template {
metadata {
labels = {
component = "api"
}
}
spec {
container {
image = "xxxx.azurecr.io/sampleflask:0.1.0"
name = "flask-api"
port {
container_port = 5000
}
}
}
}
}
}
resource "kubernetes_ingress" "flask_api_ingress_service" {
metadata {
name = "flask-api-ingress-service"
}
spec {
backend {
service_name = "flask-api-cluster-ip-service"
service_port = 5000
}
}
}
resource "kubernetes_service" "flask_api_cluster_ip-service" {
metadata {
name = "flask-api-cluster-ip-service"
}
spec {
selector {
component = "api"
}
port {
port = 5000
target_port = 5000
}
}
}
I'm not sure what condition it's waiting for. I can set a larger timeout, but that doesn't seem to help. I can also set wait = false in the helm release, but then no resources seem to get provisioned.
EDIT: From some testing I've done, I see there is an issue when specifying the loadBalancerIP in the helm release. If I comment that out, it completes just fine.
EDIT: From more testing I've found that the load balancer is failing to be created, with the error: controller: user supplied IP Address 52.xxx.x.xx was not found in resource group MC_xxxxxxxx
So I guess the question is how do I allow specifying an IP from a different resource group?
| To install nginx-ingress in an AKS cluster through Helm in Terraform, here is one way, which is also described here. With this approach, you need to install Helm on the machine where you want to run the Terraform script, and then configure Helm against your AKS cluster; the steps are in Configure the helm to AKS. You can check that Helm is configured for AKS by installing something into the cluster.
When everything is ready, you just need to configure the helm provider and use the helm_release resource. The Terraform script to install nginx-ingress looks like this:
provider "helm" {
version = "~> 0.9"
}
resource "helm_release" "ingress" {
name = "application1"
chart = "stable/nginx-ingress"
version = "1.10.2"
namespace = "ingress-basic"
set {
name = "controller.replicaCount"
value = "1"
}
...
}
The process is shown here:
This is just to install nginx-ingress through Helm in Terraform. If you want to create Kubernetes resources, you can use the kubernetes provider in Terraform.
Update:
OK, to use a static public IP in another resource group for your ingress, you need to do two more steps.
The service principal used by the AKS cluster must have delegated permissions to the other resource group in which the public IP resides. The permission should be "Network Contributor" at least.
Set the ingress service annotation with the name of the resource group in which the public IP resides.
The annotation in the yaml file would like this:
annotations:
service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup
For more details, see Use a static IP address outside of the node resource group.
Update1:
The code in the "helm_release":
resource "helm_release" "ingress" {
name = "application1223"
chart = "stable/nginx-ingress"
version = "1.10.2"
namespace = "ingress-basic"
set {
name = "controller.replicaCount"
value = "1"
}
set {
name = "controller.service.annotations.\"service\\.beta\\.kubernetes\\.io/azure-load-balancer-resource-group\""
value = "v-chaxu-xxxx"
}
set {
name = "controller.service.loadBalancerIP"
value = "13.68.175.40"
}
}
When it deploys successfully, the ingress service looks like this:
The info of the public IP, which is in another resource group:
| Terraform | 57,019,284 | 26 |
In the documentation or in their bug database, both authors seem to prefer to write out the expression this way:
var.a != "" ? var.a : "default-a"
The value is explicitly tested to be not equal to empty string, then binary choice is made accordingly.
However, does this work too?
var.a ? var.a : "default-a"
I have not found it explicitly stated either way.
| Handling of type conversions like these is always a tradeoff in language design, and different languages make different compromises here.
For Terraform's language in particular, the philosophy is "explicit is better than implicit": the idea is that ideally someone who is unfamiliar with a configuration and possibly unfamiliar even with Terraform itself can read a Terraform configuration and make a good guess as to what it means, without needing to have memorized a lot of implicit conversion rules.
With that said, Terraform does have an automatic conversion from string to boolean, but not such that an empty string converts to false. Instead, the string values "true" and "false" map to true and false respectively, and any other string will produce a conversion error.
The allowance of converting those particular string values is mainly motivated by backward compatibility: prior to Terraform 0.12, there was no boolean type and thus strings containing those values were the only way to represent booleans.
When testing whether a string is empty, Terraform requires that to be written out explicitly as var.string == "" or var.string != "" so that the intent is explicit and clear to the reader.
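As a small illustration (a sketch reusing the variable from the question), the only form that works for an arbitrary string is the explicit comparison:
variable "a" {
  type    = string
  default = ""
}

output "name" {
  # Explicit empty-string test: always valid.
  # By contrast, `var.a ? var.a : "default-a"` only type-checks when
  # var.a contains exactly "true" or "false"; an empty string fails
  # with a type-conversion error.
  value = var.a != "" ? var.a : "default-a"
}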
(I am one of the authors of the documentation linked from the question and the author of the comment linked from the question. At the time of writing, I work on Terraform at HashiCorp.)
| Terraform | 56,967,975 | 26 |
Is there any way to get local variables within Terraform console?
> local.name
unknown values referenced, can't compute value
Seems like Terraform console allows only to check input variables and module output variables.
> var.in
2
> module.abc.out
3
Configuration file examples:
# main.tf
locals {
name = 1
}
variable "in" {
value = 2
}
module "abc" {
source "path/to/module"
}
# path/to/module/main.tf
output "out" {
value = 3
}
| This should work in recent Terraform releases.
$ terraform version
Terraform v1.0.5
$ terraform console
> local.name
1
> var.in
2
And it can be scripted (non-interactive) using Bash here string, for example.
$ terraform console <<<local.name
1
This is might be really useful for custom tooling, and can even be quite sophisticated.
$ terraform console <<<terraform.workspace
"default"
$ terraform console <<<local.credentials[local.stack].username
"user1234"
The tested 'main.tf'
locals {
name = 1
stack = terraform.workspace
credentials={
default: {username:"user1234",password:"1234",endpoint:"http://localhost"}
}
}
variable in {
default = 2
}
| Terraform | 53,158,080 | 26 |
Is there a way to conditionally add statement blocks in aws_iam_policy_document? I'm looking for something like:
data "aws_iam_policy_document" "policy" {
statement {
sid = "PolicyAlways"
...
}
if (var.enable_optional_policy) {
statement {
sid = "PolicySometimes"
...
}
}
}
| Yes. You can use a dynamic block with a boolean to optionally include the block.
data "aws_iam_policy_document" "policy" {
statement {
sid = "PolicyAlways"
...
}
dynamic "statement" {
# The contents of the list below are arbitrary, but must be of length one.
# It is only used to determine whether or not to include this statement.
for_each = var.enable_optional_policy ? [1] : []
content {
sid = "PolicySometimes"
...
}
}
}
| Terraform | 62,029,196 | 25 |
We store our latest approved AMIs in AWS parameter store. When creating new instances with Terraform I would like to programatically get this AMI ID. I have a command to pull the AMI ID but I'm not sure how to use it with Terraform.
Here is the command I use to pull the AMI ID:
$(aws ssm get-parameter --name /path/to/ami --query 'Parameter.Value' --output text)
And here is my Terraform script:
resource "aws_instance" "nginx" {
ami = "ami-c58c1dd3" # pull value from parameter store
instance_type = "t2.micro"
#key_name = "${var.key_name}"
provisioner "remote-exec" {
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
How can I use the command to pull the AMI ID in the Terraform script?
| You can use the aws_ssm_parameter data source to fetch the value of a parameter at runtime:
data "aws_ssm_parameter" "ami" {
name = "/path/to/ami"
}
resource "aws_instance" "nginx" {
ami = data.aws_ssm_parameter.ami.value # pull value from parameter store
instance_type = "t2.micro"
provisioner "remote-exec" {
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
However, a better approach might be to use the aws_ami data source to filter for the AMI you want more directly instead of pushing the AMI ID to SSM parameter store and then looking it up later. You can filter on a number of criteria including name, account owner and tags. Here's the example from the aws_instance resource documentation that is looking for the latest Ubuntu 20.04 AMI:
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t2.micro"
tags = {
Name = "HelloWorld"
}
}
| Terraform | 57,776,524 | 25 |
I am working with Terraform provisioners, and in one scenario I need to execute a 'local-exec' provisioner and use its output (an array of IP addresses) in a subsequent 'remote-exec' provisioner.
I am not able to store the 'local-exec' provisioner output in a local variable to use later. I can store it in a local file but not in an intermediate variable.
count = "${length(data.local_file.instance_ips.content)}"
this is not working.
resource "null_resource" "get-instance-ip-41" {
provisioner "local-exec" {
command = "${path.module}\\scripts\\findprivateip.bat > ${data.template_file.PrivateIpAddress.rendered}"
}
}
data "template_file" "PrivateIpAddress" {
template = "/output.log"
}
data "local_file" "instance_ips" {
filename = "${data.template_file.PrivateIpAddress.rendered}"
depends_on = ["null_resource.get-instance-ip-41"]
}
output "IP-address" {
value = "${data.local_file.instance_ips.content}"
}
# ---------------------------------------------------------------------------------------------------------------------
# Update the instances by installing the New Relic agent using remote-exec
# ---------------------------------------------------------------------------------------------------------------------
resource "null_resource" "copy_file_newrelic_v_29" {
depends_on = ["null_resource.get-instance-ip-41"]
count = "${length(data.local_file.instance_ips.content)}"
triggers = {
cluster_instance_id = "${element(values(data.local_file.instance_ips.content[count.index]), 0)}"
}
provisioner "remote-exec" {
connection {
agent = "true"
bastion_host = "${aws_instance.bastion.*.public_ip}"
bastion_user = "ec2-user"
bastion_port = "22"
bastion_private_key = "${file("C:/keys/nvirginia-key-pair-ajoy.pem")}"
user = "ec2-user"
private_key = "${file("C:/keys/nvirginia-key-pair-ajoy.pem")}"
host = "${self.triggers.cluster_instance_id}"
}
inline = [
"echo 'license_key: 34adab374af99b1eaa148eb2a2fc2791faf70661' | sudo tee -a /etc/newrelic-infra.yml",
"sudo curl -o /etc/yum.repos.d/newrelic-infra.repo https://download.newrelic.com/infrastructure_agent/linux/yum/el/6/x86_64/newrelic-infra.repo",
"sudo yum -q makecache -y --disablerepo='*' --enablerepo='newrelic-infra'",
"sudo yum install newrelic-infra -y"
]
}
}
| Unfortunately you can't. The solution I have found is to instead use an external data source block. You can run a command from there and retrieve the output(s), the only catch is that the command needs to produce json to standard output (stdout). See documentation here. I hope this is some help to others trying to solve this problem.
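A minimal sketch of that approach (Terraform 0.12+ syntax; the script path and the "ips" key are assumptions for illustration):
data "external" "instance_ips" {
  # The program must print a single JSON object of string keys and
  # string values to stdout, e.g. {"ips": "10.0.0.1,10.0.0.2"}
  program = ["bash", "${path.module}/scripts/find_private_ips.sh"]
}

output "ip_addresses" {
  value = split(",", data.external.instance_ips.result["ips"])
}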
| Terraform | 56,474,709 | 25 |
I need to upload a folder to an S3 bucket. When I apply for the first time, it just uploads. But I have two problems here:
The uploaded version outputs as null. I would expect some version_id like 1, 2, 3.
When running terraform apply again, it says Apply complete! Resources: 0 added, 0 changed, 0 destroyed. I would expect it to upload every time I run terraform apply and create a new version.
What am I doing wrong? Here is my Terraform config:
resource "aws_s3_bucket" "my_bucket" {
bucket = "my_bucket_name"
versioning {
enabled = true
}
}
resource "aws_s3_bucket_object" "file_upload" {
bucket = "my_bucket"
key = "my_bucket_key"
source = "my_files.zip"
}
output "my_bucket_file_version" {
value = "${aws_s3_bucket_object.file_upload.version_id}"
}
| Terraform only makes changes to the remote objects when it detects a difference between the configuration and the remote object attributes. In the configuration as you've written it so far, the configuration includes only the filename. It includes nothing about the content of the file, so Terraform can't react to the file changing.
To make subsequent changes, there are a few options:
You could use a different local filename for each new version.
You could use a different remote object path for each new version.
You can use the object etag to let Terraform recognize when the content has changed, regardless of the local filename or object path.
The final of these seems closest to what you want in this case. To do that, add the etag argument and set it to be an MD5 hash of the file:
resource "aws_s3_bucket_object" "file_upload" {
bucket = "my_bucket"
key = "my_bucket_key"
source = "${path.module}/my_files.zip"
etag = "${filemd5("${path.module}/my_files.zip")}"
}
With that extra argument in place, Terraform will detect when the MD5 hash of the file on disk is different than that stored remotely in S3 and will plan to update the object accordingly.
(I'm not sure what's going on with version_id. It should work as long as versioning is enabled on the bucket.)
| Terraform | 56,107,258 | 25 |
I am trying to deploy a Cloudfront distribution with Terraform and getting an error while specifying the origin_id
Cloudfront is pointing at a load balancer via a Route53 lookup.
resource "aws_cloudfront_distribution" "my-app" {
origin {
custom_origin_config {
http_port = 443
https_port = 443
origin_protocol_policy = "https-only"
origin_ssl_protocols = ["TLSv1.2"]
}
domain_name = "${var.domain_name}"
origin_id = "Custom-${var.domain_name}"
}
...
default_cache_behavior {
allowed_methods = ["GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT", "DELETE"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "${local.origin_id}"
...
where var.domain_name is a route53 record and local.origin_id is a unique id.
When performing the terraform apply I get this error:
aws_cloudfront_distribution.my-app: error creating CloudFront Distribution: NoSuchOrigin: One or more of your origins or origin groups do not exist.
The documentation states: origin_id (Required) - A unique identifier for the origin. which it is.
| The error relates to the cache behaviour.
You need to make sure that the target_origin_id relates to an origin_id within a cache behaviour.
Like so:
resource "aws_cloudfront_distribution" "my-app" {
origin {
custom_origin_config {
http_port = 443
https_port = 443
origin_protocol_policy = "https-only"
origin_ssl_protocols = ["TLSv1.2"]
}
domain_name = "${var.domain_name}"
origin_id = "Custom-${var.domain_name}"
}
...
default_cache_behavior {
allowed_methods = ["GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT", "DELETE"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "Custom-${var.domain_name}"
...
| Terraform | 55,972,204 | 25 |
I have seen many examples on how to use Terraform to launch AWS resources. I have also seen many claims that Terraform is cloud agnostic.
What I have not seen is an example of how I can launch a VPC with some subnets, some instances, some ELB's, and a few databases in either AWS or Azure using a single tf file.
Does any one have an example of that?
| While Terraform as a tool is cloud agnostic (in that it will support anything that exposes its API and has enough developer support to create a "provider" for it), Terraform itself will not natively abstract this, and I'd seriously consider whether it is a good idea at all unless you have a really good use case.
If you did need to do this, you would need to build a set of modules that abstract the cloud layer from the module users and just allow them to specify the cloud provider as a variable (potentially controllable from some outside script).
As a basic example to abstract DNS you might have something like this (untested):
modules/google/dns/record/main.tf
variable "count" = {}
variable "domain_name_record" = {}
variable "domain_name_zone" = {}
variable "domain_name_target" = {}
resource "google_dns_record_set" "frontend" {
count = "${variable.count}"
name = "${var.domain_name_record}.${var.domain_name_zone}"
type = "CNAME"
ttl = 300
managed_zone = "${var.domain_name_zone}"
rrdatas = ["${var.domain_name_target}"]
}
modules/aws/dns/record/main.tf
variable "count" = {}
variable "domain_name_record" = {}
variable "domain_name_zone" = {}
variable "domain_name_target" = {}
data "aws_route53_zone" "selected" {
count = "${variable.count}"
name = "${var.domain_name_zone}"
}
resource "aws_route53_record" "www" {
  count   = "${var.count}"
zone_id = "${data.aws_route53_zone.selected.zone_id}"
name = "${var.domain_name_record}.${data.aws_route53_zone.selected.name}"
type = "CNAME"
ttl = "60"
  records = ["${var.domain_name_target}"]
}
modules/generic/dns/record/main.tf
variable "cloud_provider" = { default = "aws" }
variable "domain_name_record" = {}
variable "domain_name_zone" = {}
variable "domain_name_target" = {}
module "aws_dns_record" {
source = "../../aws/dns/record"
count = "${var.cloud_provider == "aws" ? 1 : 0}"
domain_name_record = "${var.domain_name_record}"
domain_name_zone = "${var.domain_name_zone}"
domain_name_target = "${var.domain_name_target}"
}
module "google_dns_record" {
source = "../../google/dns/record"
count = "${var.cloud_provider == "google" ? 1 : 0}"
domain_name_record = "${var.domain_name_record}"
domain_name_zone = "${var.domain_name_zone}"
domain_name_target = "${var.domain_name_target}"
}
Obviously this will get complicated pretty fast but it does mean that you can expose the "generic" module to others and allow them to use the abstractions you are building on things. How you cope with things where there isn't feature parity between different clouds is a whole separate question and probably not best suited for StackOverflow.
| Terraform | 42,789,247 | 25 |
I'm trying to use the EC2 Container Service, and I'm using Terraform to create it.
I have defined an ECS cluster, autoscaling group, and launch configuration. All seems to work except one thing: the EC2 instances are created, but they are not registering in the cluster; the cluster just says no instances available.
In the ECS agent log on a created instance I found the logs flooded with one error:
Error registering: NoCredentialProviders: no valid providers in chain
The EC2 instances are created with a proper role, ecs_role. This role has two policies; one of them is the following, as the docs require:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:CreateCluster",
"ecs:DeregisterContainerInstance",
"ecs:DiscoverPollEndpoint",
"ecs:Poll",
"ecs:RegisterContainerInstance",
"ecs:StartTelemetrySession",
"ecs:Submit*",
"ecs:StartTask"
],
"Resource": "*"
}
]
}
I'm using AMI ami-6ff4bd05 and the latest Terraform.
| It was a problem with the trust relationship in the role: the role's trust policy should also include ec2. Unfortunately the error message was not all that helpful.
Example of trust relationship:
{
"Version": "2008-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": ["ecs.amazonaws.com", "ec2.amazonaws.com"]
},
"Effect": "Allow"
}
]
}
| Terraform | 34,582,908 | 25 |
In an attempt to create a route key named $disconnect for an API Gateway, I'm running the snippet below, while var.route_name should receive the string "disconnect":
resource "aws_apigatewayv2_route" "route" {
api_id = var.apigw_api.id
route_key = "$${var.route_name}"
# more stuff...
}
But it's not escaping it correctly. I coulnd't find a proper way to emit a $, followed by var.route_name's content.
How to do that?
| In Terraform's template language, the sequence $${ is the escape sequence for literal ${, and so unfortunately in your example Terraform will understand $${var.route_name} as literally ${var.route_name}, and not as a string interpolation at all.
To avoid this, you can use any strategy that causes the initial $ to be separate from the following ${, so that Terraform will understand the first $ as a literal and the remainder as an interpolation sequence.
One way to do that would be to present that initial literal $ via an interpolation sequence itself:
"${"$"}${var.route_name}"
The above uses an interpolation sequence that would typically be redundant -- its value is a literal string itself -- but in this case it's grammatically useful to change Terraform's interpretation of that initial dollar sign.
Some other permutations:
join("", ["$", var.route_name])
format("$%s", var.route_name)
locals {
dollar = "$"
}
resource "aws_apigatewayv2_route" "route" {
route_key = "${local.dollar}${var.route_name}"
# ...
}
Again, all of these are just serving to present the literal $ in various ways that avoid it being followed by either { or ${ and thus avoid Terraform's parser treating it as a template sequence or template escape.
| Terraform | 66,953,938 | 24 |
I am using Terraform to generate certificates, and I'm looking for information on how to dump the PEM and cert values to files on disk using Terraform. Here are the output variables whose values I want to dump. Any reference code snippet?
output "private_key" {
description = "The venafi private key"
value = venafi_certificate.this.private_key_pem
}
output "certificate_body" {
description = "The acm certificate body"
value = venafi_certificate.this.certificate
}
output "certificate_chain" {
description = "The acm certificate chain"
value = venafi_certificate.this.chain
}
| One way would be to use local_file. For example:
resource "local_file" "private_key" {
content = venafi_certificate.this.private_key_pem
filename = "private_key.pem"
}
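The other two outputs from the question could presumably be written out the same way (the filenames are arbitrary, and whether chain is a single string or a list depends on the provider version):
resource "local_file" "certificate_body" {
  content  = venafi_certificate.this.certificate
  filename = "certificate.pem"
}

resource "local_file" "certificate_chain" {
  content  = venafi_certificate.this.chain
  filename = "chain.pem"
}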
| Terraform | 63,845,957 | 24 |
I am using Terraform in Cloud Build, but it fails at this step:
steps:
# Terraform
- id: 'configure_terraform'
name: node:10.16.3
entrypoint: "node"
args: ["./create_terraform_config.js",
"../terraform/override.tf",
"${_TERRAFORM_BUCKET_NAME}",
"${_TERRAFORM_BUCKET_PATH}"]
dir: "app/scripts"
- id: 'init_terraform'
name: hashicorp/terraform:light
args: ["init"]
dir: "app/terraform"
Initializing the backend...
Successfully configured the backend "gcs"! Terraform will automatically
use this backend unless the backend configuration changes.
Error: Failed to get existing workspaces: querying Cloud Storage failed: storage: bucket doesn't exist
| This might fix the issue
terraform init -reconfigure
reference: https://github.com/hashicorp/terraform/issues/23532#issuecomment-560493391
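If the bucket name or prefix changed as well, it may also be necessary to pass the backend settings again when reconfiguring (values here reuse the substitutions from the question):
terraform init -reconfigure \
  -backend-config="bucket=${_TERRAFORM_BUCKET_NAME}" \
  -backend-config="prefix=${_TERRAFORM_BUCKET_PATH}"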
| Terraform | 59,053,993 | 24 |
In the Terraform/CloudFormation documentation there are two different resources to create an ElastiCache Redis instance:
aws_elasticache_cluster (https://www.terraform.io/docs/providers/aws/r/elasticache_cluster.html)
aws_elasticache_replication_group (https://www.terraform.io/docs/providers/aws/r/elasticache_replication_group.html)
What is the difference between these two? Which one should I choose?
| Simply put, the replication group is for Redis clusters and the cache cluster is for Memcached. You cannot use one resource in place of the other, i.e. a cache cluster for a Redis cluster and vice versa.
Redis can also use aws_elasticache_cluster, but only when Redis has a single node, i.e. it is not in cluster mode.
num_cache_nodes – (Required unless replication_group_id is provided) The initial number of cache nodes that the cache cluster will have. For Redis, this value must be 1. For Memcache, this value must be between 1 and 20. If this number is reduced on subsequent runs, the highest numbered nodes will be removed.
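As a rough sketch of how the two resources are used (identifiers and node types are made up, and attribute names follow the older AWS provider versions current when this was asked):
# Memcached, or a single-node non-cluster-mode Redis
resource "aws_elasticache_cluster" "memcached" {
  cluster_id      = "example-memcached"
  engine          = "memcached"
  node_type       = "cache.t3.micro"
  num_cache_nodes = 2
}

# Redis with replication (and optionally cluster mode)
resource "aws_elasticache_replication_group" "redis" {
  replication_group_id          = "example-redis"
  replication_group_description = "Example Redis replication group"
  engine                        = "redis"
  node_type                     = "cache.t3.micro"
  number_cache_clusters         = 2
  automatic_failover_enabled    = true
}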
| Terraform | 58,356,938 | 24 |
What is the best way to make REST API calls from Terraform? I'm currently using a null_resource with the local-exec provisioner to make a cURL call:
resource "null_resource" "cloudability-setup" {
provisioner "local-exec" {
command = <<EOT
curl -s -X POST https://api.cloudability.com/v3/vendors/aws/accounts \
-H 'Content-Type: application/json' \
-u "$${CldAbltyAPIToken:?Missing Cloudability API Token Env Variable}:" \
-d '{"vendorAccountId": "${data.aws_caller_identity.current.account_id}", "type": "aws_role" }'
EOT
}
However, the cURL return code is successful for HTTP 200 and HTTP 400 responses. I'd like the resource to be marked as failed if the new account cannot be registered.
I've tried returning just the HTTP Response Code:
resource "null_resource" "cloudability-setup" {
provisioner "local-exec" {
command = <<EOT
curl -s -o /dev/null -w "%{http_code}" \
-X POST https://api.cloudability.com/v3/vendors/aws/accounts \
-H 'Content-Type: application/json' \
-u "$${CldAbltyAPIToken:?Missing Cloudability API Token Env Variable}:" \
-d '{"vendorAccountId": "${data.aws_caller_identity.current.account_id}", "type": "aws_role" }'
EOT
}
But then I lose the API response body, which contains valuable information. There are also times when a HTTP 400 code indicates the account already exists, which I consider a success from the overall setup standpoint.
| This question has been viewed over 10,000 times and I realized I never posted my solution to the problem. I ended up writing a Python script to handle the various API responses and controlling the return codes to Terraform.
Terraform resource:
resource "null_resource" "cloudability-setup" {
provisioner "local-exec" {
command = "${path.module}/cloudability_setup.py -a ${data.aws_caller_identity.current.account_id} -t aws_role"
}
depends_on = ["aws_iam_role.cloudability-role"]
}
Python script:
import getopt
import json
import os
import requests
import sys
def print_help():
print '''
Usage: cloudability_setup.py [options]
cloudability_setup -- Register new account with Cloudability
Options:
-h, --help Show this help message and exit
-a <acct #>, --acctnum=<acct #>
Required argument: IaaS Account Number
-t <type>, --type=<type>
Required argument: IaaS Account Type
'''
def register_acct(acctnum, type):
url = 'https://api.cloudability.com/v3/vendors/aws/accounts'
token = os.environ['CldAbltyAPIToken']
headers = {'Content-Type': 'application/json'}
data = '{"vendorAccountId": "' + acctnum + '", "type": "'+ type + '" }'
response = requests.post(url, auth=(token,''), headers=headers, data=data)
# If new account was registered successfully, update externalID:
if response.status_code == requests.codes.created:
update_acct(acctnum, type)
# If account already exists, update externalID:
elif str(response.status_code) == '409':
update_acct(acctnum, type)
else:
print "Bad response from Cloudability API while registering new account."
print "HTTP: " + str(response.status_code)
sys.exit(3)
def update_acct(acctnum, type):
url = 'https://api.cloudability.com/v3/vendors/aws/accounts/' + acctnum
token = os.environ['CldAbltyAPIToken']
headers = {'Content-Type': 'application/json'}
data = '{"type": "' + type + '", "externalId": "XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXX" }'
response = requests.put(url, auth=(token,''), headers=headers, data=data)
if response.status_code == requests.codes.ok:
sys.exit()
else:
print "Bad response from Cloudability API while updating account."
print "HTTP: " + str(response.status_code)
sys.exit(3)
def main(argv=None):
'''
Main function: work with command line options and send an HTTPS request to the Cloudability API.
'''
try:
opts, args = getopt.getopt(sys.argv[1:], 'ha:t:',
['help', 'acctnum=', 'type='])
except getopt.GetoptError, err:
# Print help information and exit:
print str(err)
print_help()
sys.exit(2)
# Initialize parameters
acctnum = None
type = None
# Parse command line options
for opt, arg in opts:
if opt in ('-h', '--help'):
print_help()
sys.exit()
elif opt in ('-a', '--acctnum'):
acctnum = arg
elif opt in ('-t', '--type'):
type = arg
# Enforce required arguments
if not acctnum or not type:
print_help()
sys.exit(4)
register_acct(acctnum, type)
if __name__ == '__main__':
sys.exit(main())
| Terraform | 51,197,781 | 24 |
While running terraform init when using Terraform 0.11.3 we are getting the following error:
Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
Error installing provider "template": Get
https://releases.hashicorp.com/terraform-provider-template/: read tcp
172.25.77.25:53742->151.101.13.183:443: read: connection reset by peer.
Terraform analyses the configuration and state and automatically
downloads plugins for the providers used. However, when attempting to
download this plugin an unexpected error occured.
This may be caused if for some reason Terraform is unable to reach the
plugin repository. The repository may be unreachable if access is
blocked by a firewall.
If automatic installation is not possible or desirable in your
environment, you may alternatively manually install plugins by
downloading a suitable distribution package and placing the plugin's
executable file in the following directory:
terraform.d/plugins/linux_amd64
I realized it's because of connectivity issues with the https://releases.hashicorp.com domain. For obvious reasons, we will have to work around this connectivity issue, as there are SSL and firewall issues between the control server and HashiCorp's servers.
Is there any way we could bypass this by downloading the plugins from Hashicorp's servers and copying them onto the control server? Or any other alternative to avoid trying to download things from Hashicorp's servers?
| You can use pre-installed plugins by either putting the plugins in the same directory as the terraform binary or by setting the -plugin-dir flag.
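For example, after manually downloading the provider binaries into the directory mentioned in the error message, something like this should work:
terraform init -plugin-dir=terraform.d/plugins/linux_amd64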
It's also possible to build a bundle of every provider you need automatically using the terraform-bundle tool.
I run Terraform in our CI pipeline in a Docker container so have a Dockerfile that looks something like this:
FROM golang:alpine AS terraform-bundler-build
RUN apk --no-cache add git unzip && \
go get -d -v github.com/hashicorp/terraform && \
go install ./src/github.com/hashicorp/terraform/tools/terraform-bundle
COPY terraform-bundle.hcl .
RUN terraform-bundle package terraform-bundle.hcl && \
mkdir -p terraform-bundle && \
unzip -d terraform-bundle terraform_*.zip
####################
FROM python:alpine
RUN apk add --no-cache git make && \
pip install awscli
COPY --from=terraform-bundler-build /go/terraform-bundle/* /usr/local/bin/
Note that the finished container image also adds git, make and the AWS CLI as I also require those tools in the CI jobs that uses this container.
The terraform-bundle.hcl then looks something like this (taken from the terraform-bundle README):
terraform {
# Version of Terraform to include in the bundle. An exact version number
# is required.
version = "0.10.0"
}
# Define which provider plugins are to be included
providers {
# Include the newest "aws" provider version in the 1.0 series.
aws = ["~> 1.0"]
# Include both the newest 1.0 and 2.0 versions of the "google" provider.
# Each item in these lists allows a distinct version to be added. If the
# two expressions match different versions then _both_ are included in
# the bundle archive.
google = ["~> 1.0", "~> 2.0"]
# Include a custom plugin to the bundle. Will search for the plugin in the
# plugins directory, and package it with the bundle archive. Plugin must have
# a name of the form: terraform-provider-*, and must be build with the operating
# system and architecture that terraform enterprise is running, e.g. linux and amd64
customplugin = ["0.1"]
}
| Terraform | 50,944,395 | 24 |
I am trying to create an sg with Terraform.
I want all instances of a particular SG to have all communication allowed among them, so I am adding the SG itself to the ingress rules as follows:
resource "aws_security_group" "rancher-server-sg" {
vpc_id = "${aws_vpc.rancher-vpc.id}"
name = "rancher-server-sg"
description = "security group for rancher server"
ingress {
from_port = 0
to_port = 0
protocol = -1
security_groups = ["${aws_security_group.rancher-server-sg.id}"]
}
However when running terraform plan, I get:
However, in the AWS console, I am allowed to add an SG name in the inbound rules and I see that I can add the group itself (i.e. self-referenced).
Why is that?
I have also tried this without success:
security_groups = ["${self.id}"]
| Citing the manual:
self - (Optional) If true, the security group itself will be added as
a source to this ingress rule.
ingress {
from_port = 0
to_port = 0
protocol = -1
self = true
}
| Terraform | 49,995,417 | 24 |
I'd like to create and deploy a cluster using terraform ecs_service, but am unable to do so. My terraform applys always fail around IAM roles, which I don't clearly understand. Specifically, the error message is:
InvalidParametersException: Unable to assume role and validate the specified targetGroupArn. Please verify that the ECS service role being passed has the proper permissions.
And I have found that:
When I have iam_role specified in ecs_service, ECS complains that I need to use a service-linked role.
When I have iam_role commented in ecs_service, ECS complains that the assumed role cannot validate the targetGroupArn.
My terraform spans a bunch of files. I pulled what feels like the relevant portions out below. Though I have seen a few similar problems posted, none have provided an actionable solution that solves the dilemma above, for me.
## ALB
resource "aws_alb" "frankly_internal_alb" {
name = "frankly-internal-alb"
internal = false
security_groups = ["${aws_security_group.frankly_internal_alb_sg.id}"]
subnets = ["${aws_subnet.frankly_public_subnet_a.id}", "${aws_subnet.frankly_public_subnet_b.id}"]
}
resource "aws_alb_listener" "frankly_alb_listener" {
load_balancer_arn = "${aws_alb.frankly_internal_alb.arn}"
port = "8080"
protocol = "HTTP"
default_action {
target_group_arn = "${aws_alb_target_group.frankly_internal_target_group.arn}"
type = "forward"
}
}
## Target Group
resource "aws_alb_target_group" "frankly_internal_target_group" {
name = "internal-target-group"
port = 8080
protocol = "HTTP"
vpc_id = "${aws_vpc.frankly_vpc.id}"
health_check {
healthy_threshold = 5
unhealthy_threshold = 2
timeout = 5
}
}
## IAM
resource "aws_iam_role" "frankly_ec2_role" {
name = "franklyec2role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role" "frankly_ecs_role" {
name = "frankly_ecs_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ecs.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
# aggressively add permissions...
resource "aws_iam_policy" "frankly_ecs_policy" {
name = "frankly_ecs_policy"
description = "A test policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:*",
"ecs:*",
"ecr:*",
"autoscaling:*",
"elasticloadbalancing:*",
"application-autoscaling:*",
"logs:*",
"tag:*",
"resource-groups:*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "frankly_ecs_attach" {
role = "${aws_iam_role.frankly_ecs_role.name}"
policy_arn = "${aws_iam_policy.frankly_ecs_policy.arn}"
}
## ECS
resource "aws_ecs_cluster" "frankly_ec2" {
name = "frankly_ec2_cluster"
}
resource "aws_ecs_task_definition" "frankly_ecs_task" {
family = "service"
container_definitions = "${file("terraform/task-definitions/search.json")}"
volume {
name = "service-storage"
docker_volume_configuration {
scope = "shared"
autoprovision = true
}
}
placement_constraints {
type = "memberOf"
expression = "attribute:ecs.availability-zone in [us-east-1]"
}
}
resource "aws_ecs_service" "frankly_ecs_service" {
name = "frankly_ecs_service"
cluster = "${aws_ecs_cluster.frankly_ec2.id}"
task_definition = "${aws_ecs_task_definition.frankly_ecs_task.arn}"
desired_count = 2
iam_role = "${aws_iam_role.frankly_ecs_role.arn}"
depends_on = ["aws_iam_role.frankly_ecs_role", "aws_alb.frankly_internal_alb", "aws_alb_target_group.frankly_internal_target_group"]
# network_configuration = {
# subnets = ["${aws_subnet.frankly_private_subnet_a.id}", "${aws_subnet.frankly_private_subnet_b}"]
# security_groups = ["${aws_security_group.frankly_internal_alb_sg}", "${aws_security_group.frankly_service_sg}"]
# # assign_public_ip = true
# }
ordered_placement_strategy {
type = "binpack"
field = "cpu"
}
load_balancer {
target_group_arn = "${aws_alb_target_group.frankly_internal_target_group.arn}"
container_name = "search-svc"
container_port = 8080
}
placement_constraints {
type = "memberOf"
expression = "attribute:ecs.availability-zone in [us-east-1]"
}
}
| I was seeing an identical error message and I was doing something else wrong:
I had specified the loadbalancer's ARN and not the loadbalancer's target_group ARN.
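In other words, the service's load_balancer block must reference the target group ARN, not the load balancer ARN. Roughly (resource names reused from the question):
load_balancer {
  # Correct: the target group's ARN
  target_group_arn = "${aws_alb_target_group.frankly_internal_target_group.arn}"
  # Wrong (triggers the error above): "${aws_alb.frankly_internal_alb.arn}"
  container_name = "search-svc"
  container_port = 8080
}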
| Terraform | 56,742,157 | 23 |
Question
Is there a way to get the assigned IP address of an aws_lb resource at the time the aws_lb is created by Terraform?
As in the AWS documentation for NLB, to find the private IP addresses to whitelist, we can look up the IP addresses associated with the ELB:
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
In the navigation pane, choose Network Interfaces.
In the search field, type the name of your Network Load Balancer.
There is one network interface per load balancer subnet.
On the Details tab for each network interface, copy the address from
Primary private IPv4 IP.
Background
I need to be able to set up a security group that whitelists the ELB IP addresses, since a Network Load Balancer cannot have a security group, as described in Network Load Balancers don't have Security Groups.
I considered aws_network_interface, but it does not work and fails with the error:
Error: no matching network interface found
Also, I think the data source assumes the resource already exists, so it cannot be used for a resource that Terraform has yet to create.
| A more elegant solution using only HCL in Terraform:
data "aws_network_interface" "lb" {
for_each = var.subnets
filter {
name = "description"
values = ["ELB ${aws_lb.example_lb.arn_suffix}"]
}
filter {
name = "subnet-id"
values = [each.value]
}
}
resource "aws_security_group" "lb_sg" {
vpc_id = var.vpc_id
ingress {
from_port = 0
to_port = 0
protocol = "tcp"
cidr_blocks = formatlist("%s/32", [for eni in data.aws_network_interface.lb : eni.private_ip])
description = "Allow connection from NLB"
}
}
Source : https://github.com/terraform-providers/terraform-provider-aws/issues/3007
Hope this helps.
| Terraform | 56,713,493 | 23 |
I have the following code in my main.tf file:
provider "aws" {
access_key = "${var.aws_access_key}"
secret_key = "${var.aws_secret_key}"
region = "us-east-1"
alias = "us-east-1"
}
provider "aws" {
access_key = "${var.aws_access_key}"
secret_key = "${var.aws_secret_key}"
region = "us-west-1"
alias = "us-west-1"
}
module "us-east_vpc" {
source = "./setup-networking"
providers = {
"aws.region" = "aws.us-east-1"
}
}
module "us-west_vpc" {
source = "./setup-networking"
providers = {
"aws.region" = "aws.us-west-1"
}
}
And then in my modules file I have:
provider "aws" {
alias = "region"
}
resource "aws_vpc" "default" {
provider = "aws.region"
cidr_block = "${lookup(var.vpc_cidr, ${aws.region.region})}"
enable_dns_hostnames = true
tags {
Name = "AWS VPC"
}
}
resource "aws_internet_gateway" "default" {
provider = "aws.region"
vpc_id = "${aws_vpc.default.id}"
}
resource "aws_subnet" "default" {
provider = "aws.region"
vpc_id = "${aws_vpc.default.id}"
cidr_block = "${lookup(var.subnet_cidr, ${aws.region.region})}"
availability_zone = "aws.region"
tags {
Name = "AWS Subnet"
}
}
resource "aws_route_table" "default" {
provider = "aws.region"
vpc_id = "${aws_vpc.default.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.default.id}"
}
tags {
Name = "Main Gateway"
}
}
As you can see in the modules file code I am trying to do a lookup to find the VPC CIDR mask and the subnet CIDR mask from map variables.
The trouble is that I can't seem to sort out how to get the region to be used as a lookup value.
If I hard code these values:
cidr_block = "10.10.0.0/16"
cidr_block = "10.10.10.0/24"
The script works as expected but I don't want to hard code the values.
Can someone with more Terraform experience help me understand how I can properly reference the region to lookup the correct value?
| I was looking for the same answer for a different problem: I wanted to include the region in the name of a role. I was able to get the info by doing this:
1.- Create a file like data.tf and add this info:
data "aws_region" "current" {}
2.- Get the info from the data by calling this variable in any TF file:
name = "${var.vpc-name}-${data.aws_region.current.name}-Bastion-Role"
This way it will get the region where you are executing the code, and you don't have to mess with the provider.tf file.
| Terraform | 51,619,602 | 23 |
I create an AMI in EC2 with terraform with this resource:
resource "aws_instance" "devops-demo" {
ami = "jnkdjsndjsnfsdj"
instance_type = "t2.micro"
key_name = "demo-devops"
user_data = "${file("ops_setup.sh")}"
}
The user data executes a shell script that installs the Java JRE:
sudo yum remove java-1.7.0-openjdk -y
sudo wget -O /opt/server-jre-8u172-linux-x64.tar.gz --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u172-b11/a58eab1ec242421181065cdc37240b08/server-jre-8u172-linux-x64.tar.gz"
sudo tar xzf /opt/server-jre-8u172-linux-x64.tar.gz
export JAVA_HOME=/jdk1.8.0_172
export JRE_HOME=/jdk1.8.0_171/jre
export PATH=$JAVA_HOME/bin:$PATH
But none of the environment variables work. However, if I connect by ssh to the instance and I execute the export command, it works fine.
Is there any way to define the environment variables with terraform?
| Using the export command only sets those variables for the current shell and all processes that start from that shell. It is not a persistent setting. Anything you wish to make permanent should be set in /etc/environment.
For example in userdata:
echo "JAVA_HOME=/jdk1.8.0_172" >> /etc/environment
This would add the JAVA_HOME=/jdk1.8.0_172 line to that file. Note, you should not use export inside that file.
The PATH variable is likely already defined in the /etc/environment file and you'll need to overwrite that appropriately if you are going to append additional paths to it.
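A hedged sketch of what the full user_data additions might look like (JDK paths taken from the question; the PATH value is an assumption about what the file should end up containing):
echo "JAVA_HOME=/jdk1.8.0_172" | sudo tee -a /etc/environment
echo "JRE_HOME=/jdk1.8.0_172/jre" | sudo tee -a /etc/environment
# If PATH is already defined in /etc/environment, edit that line instead of appending
echo "PATH=/jdk1.8.0_172/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" | sudo tee -a /etc/environment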
There are really great details on setting environment variables available in this answer.
| Terraform | 50,668,315 | 23 |
Is there any way to avoid resource deletion when reorganizing/renaming resources? Example: when I first implemented CloudFront in Terraform it was an independent subdirectory in my project; later I switched to using it as a module in my root Terraform config, but this caused Terraform to want to delete the old CloudFront distribution and create a new one:
Terraform will perform the following actions:
- aws_cloudfront_distribution.main_site_distribution
+ module.cloudfront.aws_cloudfront_distribution.main_site_distribution
Is there any way to force Terraform to rename the resource instead?
| Unfortunately Terraform doesn't know that you've renamed/moved the resource around but you could tell it where the resource should be stored in the state by using terraform state mv.
In your case if you ran:
terraform state mv aws_cloudfront_distribution.main_site_distribution module.cloudfront.aws_cloudfront_distribution.main_site_distribution
and then run another plan you should see no changes (or only the changes to the resource you have made as well as the move).
| Terraform | 49,112,142 | 23 |
I am using AWS CodeBuild along with Terraform for automated deployment of a Lambda based service. I have a very simple buildscript.yml that accomplishes the following:
Get dependencies
Run Tests
Get AWS credentials and save to file (detailed below)
Source the creds file
Run Terraform
The step "source the creds file" is where I am having my difficulty. I have a simply bash one-liner that grabs the AWS container creds off of curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI and then saves them to a file in the following format:
export AWS_ACCESS_KEY_ID=SOMEACCESSKEY
export AWS_SECRET_ACCESS_KEY=MYSECRETKEY
export AWS_SESSION_TOKEN=MYSESSIONTOKEN
Of course, the obvious step is to simply source this file so that these variables can be added to my environment for Terraform to use. However, when I do source /path/to/creds_file.txt, CodeBuild returns:
[Container] 2017/06/28 18:28:26 Running command source /path/to/creds_file.txt
/codebuild/output/tmp/script.sh: 4: /codebuild/output/tmp/script.sh: source: not found
I have tried to install source through apt but then I get an error saying that source cannot be found (yes, I've run apt update etc.). I am using a standard Ubuntu image with the Python 2.7 environment for CodeBuild. What can I do to either get Terraform working credentials for source this credentials file in Codebuild.
Thanks!
| Try using . instead of source. source is not POSIX compliant. ss64.com/bash/source.html
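For example, using the path from the question, the build command simply becomes:
. /path/to/creds_file.txt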
| Terraform | 44,810,237 | 23 |
I'm using packer with ansible provisioner to build an ami, and terraform to setup the infrastructure with that ami as a source - somewhat similar to this article: http://www.paulstack.co.uk/blog/2016/01/02/building-an-elasticsearch-cluster-in-aws-with-packer-and-terraform
When command packer build pack.json completes successfully I get the output ami id in this format:
eu-central-1: ami-12345678
In my terraform variables variables.tf I need to specify the source ami id, region etc. The problem here is that I don't want to specify them manually or multiple times. For region (that I know beforehand) it's easy since I can use environment variables in both situations, but what about the output ami? Is there a built-in way to chain these products or some not so hacky approach to do it?
EDIT: Hacky approach for anyone who might be interested. In this solution I'm grepping the AWS region & AMI from the Packer output and using a regular expression in Perl to write the result into a terraform.tfvars file:
vars=$(pwd)"/terraform.tfvars"
packer build pack.json | \
tee /dev/tty | \
grep -E -o '\w{2}-\w+-\w{1}: ami-\w+' | \
perl -ne '@parts = split /[:,\s]+/, $_; print "aws_amis." . $parts[0] ." = \"" . $parts[1] . "\"\n"' > ${vars}
| You should consider using Terraform's Data Source for aws_ami. With this, you can rely on custom tags that you set on the AMI when it is created (for example a version number or timestamp). Then, in the Terraform configuration, you can simply filter the available AMIs for this account and region to get the AMI ID that you need.
https://www.terraform.io/docs/providers/aws/d/ami.html
data "aws_ami" "nat_ami" {
most_recent = true
executable_users = ["self"]
filter {
name = "owner-alias"
values = ["amazon"]
}
filter {
name = "name"
values = ["amzn-ami-vpc-nat*"]
}
name_regex = "^myami-\\d{3}"
owners = ["self"]
}
NOTE: in the example above (from the docs), the combination of filters is probably excessive. You can probably get by just fine with something like:
data "aws_ami" "image" {
most_recent = true
owners = ["self"]
filter {
name = "tag:Application"
values = ["my-app-name"]
}
}
output "ami_id" {
value = "${data.aws_ami.image.id}"
}
An additional benefit of this is that you can deploy to multiple regions with the same configuration and no variable map!
| Terraform | 37,357,618 | 23 |
Any pointers on how to set up Terraform v0.14.0 on an Apple M1, as tfenv doesn't support v0.14.0 on Apple M1?
tfenv install v0.14.0
Installing Terraform v0.14.0
Downloading release tarball from https://releases.hashicorp.com/terraform/0.14.0/terraform_0.14.0_darwin_arm64.zip
curl: (22) The requested URL returned error: 403
Tarball download failed
| You can set the env var TFENV_ARCH and use tfenv
TFENV_ARCH=amd64 tfenv install 0.14.0
| Terraform | 71,606,880 | 22 |
I am working on Terraform tasks and trying to understand how state files work. I have created a main.tf file which has
vpc,firewall,subnet,compute_instance
which have to be created in GCP. So I applied this to the GCP environment, a file named terraform.tfstate was created, and I backed this file up into a folder called 1st-run.
Now I have updated my main.tf with
2vpc,2firewalls,2subnets,compute_instance
as I need to add another NIC for my VM. I ran terraform apply, the environment was created, and a terraform.tfstate file was created again. I backed this file up into a folder called 2nd-run.
I want to roll back the environment to what I executed in the 1st run. I have that state file, which is in the 1st-run folder.
What is the command to roll back by using the state file instead of touching the code, so that my GCP environment will automatically have
vpc,firewall,subnet,compute_instance
which I executed the first time.
| There is no way to roll back to a previous state as described in a state file in Terraform today. Terraform always plans changes with the goal of moving from the prior state (the latest state snapshot) to the goal state represented by the configuration. Terraform also uses the configuration for information that is not tracked in the state, such as the provider configurations.
The usual way to represent "rolling back" in Terraform is to put your configuration in version control and commit before each change, and then you can use your version control system's features to revert to an older configuration if needed.
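A sketch of that workflow (the commit ID is hypothetical):
git log --oneline          # find the commit that introduced the unwanted change
git revert <commit-id>     # create a new commit that restores the old configuration
terraform plan             # review the changes Terraform will make to move back
terraform apply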
Not all changes can be rolled back purely by reverting a VCS change though. For example, if you added a new provider block and resources for that provider all in one commit and then applied the result, in order to roll back you'd need to change the configuration to still include the provider block but not include any of the resource blocks, so you'd need to adjust the configuration during the revert. Terraform will then use the remaining provider block to configure the provider to run the destroy actions, after which you can finally remove the provider block too.
| Terraform | 57,821,319 | 22 |
I have a list of maps like this -
[
{
"outer_key_1" = [
{
"ip_cidr" = "172.16.6.0/24"
"range_name" = "range1"
},
{
"ip_cidr" = "172.16.7.0/24"
"range_name" = "range2"
},
{
"ip_cidr" = "172.17.6.0/24"
"range_name" = "range3"
},
{
"ip_cidr" = "172.17.7.0/24"
"range_name" = "range4"
},
]
},
{
"outer_key_2" = [
{
"ip_cidr" = "172.16.5.0/24"
"range_name" = "range5"
},
{
"ip_cidr" = "172.17.5.0/24"
"range_name" = "range6"
},
]
},
]
I want to merge the maps inside the list. This is in an output variable module.module_name.module_op.
Required Output:
{
"outer_key_1" = [
{
"ip_cidr" = "172.16.6.0/24"
"range_name" = "range1"
},
{
"ip_cidr" = "172.16.7.0/24"
"range_name" = "range2"
},
{
"ip_cidr" = "172.17.6.0/24"
"range_name" = "range3"
},
{
"ip_cidr" = "172.17.7.0/24"
"range_name" = "range4"
},
]
"outer_key_2" = [
{
"ip_cidr" = "172.16.5.0/24"
"range_name" = "range5"
},
{
"ip_cidr" = "172.17.5.0/24"
"range_name" = "range6"
},
]
}
I have done this using
locals {
result = merge(module.module_name.module_op[0], module.module_name.module_op[1])
}
How do I do this in a more iterative way.
I will not always only 2 maps in my list, it can be more.
I tried using the for loop in terraform 12 like this -
output "result" {
value = [ for tuple in module.module_name.module_op : merge(tuple) ]
}
and this -
output "secondary_subnets" {
value = { for tuple in module.module_name.module_op : merge(tuple) }
}
The first one gives me my input back and the second one gives me an error saying I need to add a key since I am returning an object.
Is there a way to get this done?
Thanks in advance.
| You can actually pass a list of maps to the merge() function:
The Terraform language has a general feature for turning lists/tuples into multiple arguments, by using the special symbol ... after the last argument expression
So, in your example above, you could do:
locals {
result = merge(module.module_name.module_op...)
}
This will then merge all of the maps in the list, allowing for dynamically sized lists.
| Terraform | 57,392,101 | 22 |
I have 6 subnets, I want to filter 3 subnets from them matching substring internal and use in rds.
Tag name has internal word and want to filter based on that.
Could anyone please help me?
data "aws_vpc" "vpc_nonprod-sctransportationops-vpc" {
tags {
Name = "vpc_nonprod-sctransportationops-vpc"
}
}
data "aws_subnet_ids" "all" {
vpc_id = "${data.aws_vpc.vpc_nonprod-sctransportationops-vpc.id}"
}
output "aws_subnet_ids" {
value = "${data.aws_subnet_ids.all.ids}"
}
# 6 subnets
# Now look up details for each subnet
data "aws_subnet" "filtered_subnets" {
count = "${length(data.aws_subnet_ids.all.ids)}"
id = "${data.aws_subnet_ids.all.ids[count.index]}"
filter {
name = "tag:Name"
values = ["*internal*"]
}
}
Some of the Name tags contain the internal substring.
I need to grab all subnet IDs whose Name tag contains the internal substring.
values = ["*"] returns 6 IDs; however, values = ["*internal*"] (or any other word) doesn't work.
The errors are as follows:
Error: Error refreshing state: 1 error(s) occurred:
* data.aws_subnet.publicb: 3 error(s) occurred:
* data.aws_subnet.publicb[1]: data.aws_subnet.publicb.1: no matching subnet found
* data.aws_subnet.publicb[4]: data.aws_subnet.publicb.4: no matching subnet found
* data.aws_subnet.publicb[0]: data.aws_subnet.publicb.0: no matching subnet found
There should be 6, but I am getting only 3, which means things are partially working and partially failing.
The 3 subnets in the errors don't have the internal substring in their Name tag.
That means the filter is being parsed. aws_subnet_ids doesn't have a filter option, but it should have one instead. For one match it would be simple; however, I need multiple matches.
My guess now is that the error is because of the loop, which runs 6 times.
Here is the same output without the filter:
"data.aws_subnet.filtered_subnets.2": {
"type": "aws_subnet",
"depends_on": [
"data.aws_subnet_ids.all"
],
"primary": {
"id": "subnet-14058972",
"attributes": {
"assign_ipv6_address_on_creation": "false",
"availability_zone": "us-west-2a",
"cidr_block": "172.18.201.0/29",
"default_for_az": "false",
"id": "subnet-14038772",
"map_public_ip_on_launch": "false",
"state": "available",
"tags.%": "4",
"tags.Designation": "internal",
"tags.Name": "subnet_nonprod-sctransportationops-vpc_internal_az2",
"tags.Permissions": "f00000",
"tags.PhysicalLocation": "us-west-2a",
"vpc_id": "vpc-a47k07c2"
},
"meta": {},
"tainted": false
},
"deposed": [],
"provider": "provider.aws"
}
| aws_subnet_ids has this feature, however, different way. Here, it solved my problem:
data "aws_subnet_ids" "all" {
vpc_id = "${data.aws_vpc.vpc_nonprod-sctransportationops-vpc.id}"
tags = {
Name = "*internal*"
}
}
Thanks for reviewing :D
| Terraform | 48,817,967 | 22 |
I deploy lambda using Terraform as follows but have following questions:
1) I want null_resource.lambda to be called always or when stop_ec2.py is changed so that stop_ec2_upload.zip is not out-of-date. What should I write in triggers{}?
2) how to make aws_lambda_function.stop_ec2 update the new stop_ec2_upload.zip to cloud when stop_ec2_upload.zip is changed?
Right now I have to destroy aws_lambda_function.stop_ec2 and then create it again. Is there anything I can write in the code so that when I run terraform apply, 1) and 2) happen automatically?
resource "null_resource" "lambda" {
triggers {
#what should I write here?
}
provisioner "local-exec" {
command = "mkdir -p lambda_func && cd lambda_py && zip
../lambda_func/stop_ec2_upload.zip stop_ec2.py && cd .."
}
}
resource "aws_lambda_function" "stop_ec2" {
depends_on = ["null_resource.lambda"]
function_name = "stopEC2"
handler = "stop_ec2.handler"
runtime = "python3.6"
filename = "lambda_func/stop_ec2_upload.zip"
source_code_hash = "${base64sha256(file("lambda_func/stop_ec2_upload.zip"))}"
role = "..."
}
| I read the link provided by Chandan and figured it out.
Here is my code and it works perfectly.
In fact, with archive_file and source_code_hash, I do not need a trigger at all: whenever I create or modify stop_ec2.py and then run terraform, the file is re-zipped and uploaded to the cloud.
data "archive_file" "stop_ec2" {
type = "zip"
source_file = "src_dir/stop_ec2.py"
output_path = "dest_dir/stop_ec2_upload.zip"
}
resource "aws_lambda_function" "stop_ec2" {
function_name = "stopEC2"
handler = "stop_ec2.handler"
runtime = "python3.6"
filename = "dest_dir/stop_ec2_upload.zip"
source_code_hash = data.archive_file.stop_ec2.output_base64sha256
role = "..."
}
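As a side note (an assumption beyond the original answer): archive_file can also zip an entire directory via source_dir instead of source_file, which is handy once the Lambda grows past a single file.
data "archive_file" "stop_ec2_dir" {
  type        = "zip"
  source_dir  = "src_dir"                      # zips everything in the directory
  output_path = "dest_dir/stop_ec2_upload.zip"
}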
| Terraform | 48,577,727 | 22 |
I am trying to use Terraform to stand up a simple API proxy in API Gateway on AWS. Basically, I want to wrap the root and proxy the requests back to another endpoint. It's probably the simplest possible setup and I can't seem to get it to work in Terraform.
Below you will find the script. At this point I am able to create the REST API, define a resource, and create a method, but there doesn't seem to be any way to point it at the backend endpoint.
provider "aws" {
region = "us-east-1"
}
resource "aws_api_gateway_rest_api" "TerraTest" {
name = "TerraTest"
description = "This is my API for demonstration purposes"
}
resource "aws_api_gateway_resource" "TerraProxyResource" {
rest_api_id = "${aws_api_gateway_rest_api.TerraTest.id}"
parent_id = "${aws_api_gateway_rest_api.TerraTest.root_resource_id}"
path_part = "{proxy+}"
}
resource "aws_api_gateway_integration" "integration" {
rest_api_id = "${aws_api_gateway_rest_api.TerraTest.id}"
resource_id = "${aws_api_gateway_resource.TerraProxyResource.id}"
http_method = "${aws_api_gateway_method.mymethod.http_method}"
type = "HTTP_PROXY"
uri = "http://api.endpoint.com/{proxy+}"
}
Here I set the type to proxy, but I don't think URI is the right property for setting the endpoint.
resource "aws_api_gateway_method" "mymethod" {
rest_api_id = "${aws_api_gateway_rest_api.TerraTest.id}"
resource_id = "${aws_api_gateway_resource.TerraProxyResource.id}"
http_method = "ANY"
authorization = "NONE"
}
I expected to be able to create that mapping to some other endpoint somewhere here, but there don't appear to be any properties for that. (https://github.com/hashicorp/terraform/blob/master/builtin/providers/aws/resource_aws_api_gateway_method.go)
resource "aws_api_gateway_api_key" "TerraTestKey" {
name = "Terra_Test_Key"
stage_key {
rest_api_id = "${aws_api_gateway_rest_api.TerraTest.id}"
stage_name = "${aws_api_gateway_deployment.TerraTestDeployment.stage_name}"
}
}
resource "aws_api_gateway_deployment" "TerraTestDeployment" {
rest_api_id = "${aws_api_gateway_rest_api.TerraTest.id}"
stage_name = "dev"
}
I scanned the source code and I didn't see any properties that I can set.
Can anyone share any advice/snippets?
Tim
Ps. If you want to try to run the script yourself, I put it here: http://textuploader.com/d14sx
| This is the relevant module which shows a working solution. It doesn't stand alone since it relies on some variables defined elsewhere but it should be enough to help anyone struggling to get a AWS Proxy setup and also shows Lambda authorizer integration as a bonus.
provider "aws" {
region = "${var.region}"
profile = "${var.profile}"
}
data "aws_iam_role" "api_user" {
role_name = "api_user"
}
module "authorizer_lambda" {
source = "../lambda"
name = "${var.api_name}-authorizer_lambda"
filename = "authorizer_lambda"
runtime = "nodejs4.3"
role = "${data.aws_iam_role.api_user.arn}"
}
resource "aws_api_gateway_authorizer" "custom_authorizer" {
name = "${var.api_name}-custom_authorizer"
rest_api_id = "${aws_api_gateway_rest_api.ApiGateway.id}"
authorizer_uri = "${module.authorizer_lambda.uri}"
authorizer_credentials = "${data.aws_iam_role.api_user.arn}"
authorizer_result_ttl_in_seconds = 1
}
resource "aws_api_gateway_rest_api" "ApiGateway" {
name = "${var.api_name}"
description = "${var.api_description}"
}
resource "aws_api_gateway_resource" "ApiProxyResource" {
rest_api_id = "${aws_api_gateway_rest_api.ApiGateway.id}"
parent_id = "${aws_api_gateway_rest_api.ApiGateway.root_resource_id}"
path_part = "{proxy+}"
}
resource "aws_api_gateway_integration" "ApiProxyIntegration" {
rest_api_id = "${aws_api_gateway_rest_api.ApiGateway.id}"
resource_id = "${aws_api_gateway_resource.ApiProxyResource.id}"
http_method = "${aws_api_gateway_method.ApiProxyMethod.http_method}"
type = "HTTP_PROXY"
integration_http_method = "ANY"
uri = "${format("%s/{proxy}", "${var.base_url}")}"
passthrough_behavior = "WHEN_NO_MATCH"
request_parameters = "${var.aws_api_gateway_integration_request_parameters}"
}
resource "aws_api_gateway_method" "ApiProxyMethod" {
rest_api_id = "${aws_api_gateway_rest_api.ApiGateway.id}"
resource_id = "${aws_api_gateway_resource.ApiProxyResource.id}"
http_method = "ANY"
authorization = "CUSTOM"
authorizer_id = "${aws_api_gateway_authorizer.custom_authorizer.id}"
request_parameters = {"method.request.path.proxy" = true}
}
resource "aws_api_gateway_deployment" "ApiDeployment" {
depends_on = ["aws_api_gateway_method.ApiProxyMethod"]
rest_api_id = "${aws_api_gateway_rest_api.ApiGateway.id}"
stage_name = "${var.stage_name}"
}
| Terraform | 42,070,187 | 22 |
Is there a way in Terraform to check if a resource in Google Cloud exists prior to trying to create it?
I want to check if the resources below exist in my CircleCI CI/CD pipeline during a job. I have access to terminal commands, bash, and gcloud commands. If the resources do exist, I want to use them. If they do not exist, I want to create them. I am doing this logic in CircleCI's config.yml as steps, where I have access to terminal commands and bash. My goal is to create my necessary infrastructure (resources) in GCP when it is needed, and otherwise use it if it already exists, without getting Terraform errors in my CI/CD builds.
If I try to create a resource that already exists, Terraform apply will result in an error saying something like, "you already own this resource," and now my CI/CD job fails.
Below is pseudo code describing the resources I am trying to get.
resource "google_artifact_registry_repository" "main" {
# this is the repo for hosting my Docker images
# it does not have a data source afaik because it is beta
}
For my google_artifact_registry_repository resource, one approach is to do a terraform apply using a data source block and see if a value is returned. The problem with this is that google_artifact_registry_repository does not have a data source block. Therefore, I must create this resource once using a resource block, and every CI/CD build thereafter can rely on it being there. Is there a work-around to read that it exists?
resource "google_storage_bucket" "bucket" {
# bucket containing the folder below
}
resource "google_storage_bucket_object" "content_folder" {
# folder containing Terraform default.tfstate for my Cloud Run Service
}
For my google_storage_bucket and google_storage_bucket_object resources: if I do a terraform apply using a data source block to see if these exist, one issue I run into is that when the resources are not found, Terraform takes forever to return that status. It would be great if I could determine whether a resource exists within 10-15 seconds or so, and if not, assume these resources do not exist.
data "google_storage_bucket" "bucket" {
# bucket containing the folder below
}
output bucket {
value = data.google_storage_bucket.bucket
}
When the resource exists, I can use Terraform output bucket to get that value. If it does not exist, Terraform takes too long to return a response. Any ideas on this?
| TF does not have any built-in tools for checking if there are pre-existing resources, as this is not what TF is meant to do. However, you can create your own custom data source.
Using the custom data source you can program any logic you want, including checking for pre-existing resources and return that information to TF for future use.
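One common way to do that is the hashicorp external data source wrapping a small script. The sketch below is hedged: check_bucket.sh is a hypothetical script that must print a JSON object to stdout, for example {"exists": "true"}. The resulting local value can then drive a count or for_each on the bucket resource.
data "external" "bucket_check" {
  # The script receives a JSON query on stdin and must print JSON to stdout.
  program = ["bash", "${path.module}/check_bucket.sh", "my-bucket-name"]
}

locals {
  # All values in the external result map are strings, hence the string comparison.
  bucket_exists = data.external.bucket_check.result.exists == "true"
}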
| Terraform | 70,689,512 | 21 |
So azurerm updated to 2.0 a few hours ago....
My main code is version locked for safety, but
I'm doing some testing to see what's changed from the public beta of 1.44 and now I'm getting this error on any TF command apart from terraform init.
Has anybody else come upon this?
| OK,
running terraform in debug mode showed it was at the provider level that the error was being thrown.
It's not listed in the 2.0 upgrade guide but if you look at the provider docs it now shows a features{} block.
So at a minimum the provider now needs to look like:
provider "azurerm" {
features {}
}
| Terraform | 60,384,689 | 21 |
I have a terraform config which creates an AWS IAM user with an access key, and I assign both id and secret to output variables:
...
resource "aws_iam_access_key" "brand_new_user" {
user = aws_iam_user.brand_new_user.name
}
output "brand_new_user_id" {
value = aws_iam_access_key.brand_new_user.id
}
output "brand_new_user_secret" {
value = aws_iam_access_key.brand_new_user.encrypted_secret
sensitive = true
}
Here brand_new_user_secret is declared as sensitive, so terraform output obviously does not print it.
Is there any way to get its output value without parsing the whole state file?
Trying to access it directly (terraform output brand_new_user_secret) does not work (results in an error "The output variable requested could not be found in the state file...").
Terraform version: 0.12.18
| I had some hopes to avoid it, but so far I have not found a better way than parsing the terraform state:
terraform state pull | jq '.resources[] | select(.type == "aws_iam_access_key") | .instances[0].attributes'
which would result in a structure similar to:
{
"encrypted_secret": null,
"id": "....",
"key_fingerprint": null,
"pgp_key": null,
"secret": "....",
"ses_smtp_password": "....",
"ses_smtp_password_v4": null,
"status": "Active",
"user": "...."
}
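To extract only the secret itself, the same pipeline can be extended with a jq path (assuming a single matching access key in the state):
terraform state pull | jq -r '.resources[] | select(.type == "aws_iam_access_key") | .instances[0].attributes.secret'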
| Terraform | 59,473,690 | 21 |
According to the documentation, to use s3 and not a local terraform.tfstate file for state storage, one should configure a backend more or less as follows:
terraform {
backend "s3" {
bucket = "my-bucket-name"
key = "my-key-name"
region = "my-region"
}
}
I was
using a local (terraform.tfstate) file
added the above snippet in my provided.tf file
ran terraform init (again)
was asked by terraform to migrate my state to the above bucket
...so far so good...
But then comes this confusing part about terraform_remote_state ...
Why do I need this?
Isn't my state now saved remotely (in the aforementioned S3 bucket) already?
| terraform_remote_state isn't for storage of your state; it's for retrieval in another terraform plan if you have outputs. It is a data source. For example, if you output your Elastic IP address in one state:
resource "aws_eip" "default" {
vpc = true
}
output "eip_id" {
value = "${aws_eip.default.id}"
}
Then wanted to retrieve that in another state:
data "terraform_remote_state" "remote" {
backend = "s3"
config {
bucket = "my-bucket-name"
key = "my-key-name"
region = "my-region"
}
}
resource "aws_instance" "foo" {
...
}
resource "aws_eip_association" "eip_assoc" {
instance_id = "${aws_instance.foo.id}"
allocation_id = "${data.terraform_remote_state.remote.eip_id}"
}
Edit: if you are retrieving outputs in Terraform 0.12 and later, you need to include outputs in the reference:
data "terraform_remote_state" "remote" {
backend = "s3"
config {
bucket = "my-bucket-name"
key = "my-key-name"
region = "my-region"
}
}
resource "aws_instance" "foo" {
...
}
resource "aws_eip_association" "eip_assoc" {
instance_id = "${aws_instance.foo.id}"
allocation_id = "${data.terraform_remote_state.remote.outputs.eip_id}"
}
| Terraform | 50,820,850 | 21 |
I have an existing resource group on Azure with a VM running on it and have been playing around with Terraform to try and import the resource to my state file.
I have set up a skeleton file, and my understanding is that once I import, Terraform should populate it with the values of my resource group in Azure.
resource "azurerm" "example" {
# ...instance configuration...
name = "MyResourceGroup"
}
Command I am running from CLI:
terraform import azurerm_resource_group.MyResourceGroup/subscriptions/MySubscriptionNumber/resourceGroups/MyResourceGroup
Message from Terraform:
The import command expects two arguments.
Usage: terraform import [options] ADDR ID
Import existing infrastructure into your Terraform state.
This will find and import the specified resource into your Terraform
state, allowing existing infrastructure to come under Terraform
management without having to be initially created by Terraform.
The ADDR specified is the address to import the resource to. Please
see the documentation online for resource addresses. The ID is a
resource-specific ID to identify that resource being imported. Please
reference the documentation for the resource type you're importing to
determine the ID syntax to use. It typically matches directly to the ID
that the provider uses.
The current implementation of Terraform import can only import resources
into the state. It does not generate configuration. A future version of
Terraform will also generate configuration.
Because of this, prior to running terraform import it is necessary to write
a resource configuration block for the resource manually, to which the
imported object will be attached.
This command will not modify your infrastructure, but it will make
network requests to inspect parts of your infrastructure relevant to
the resource being imported.
Options:
-backup=path Path to backup the existing state file before
modifying. Defaults to the "-state-out" path with
".backup" extension. Set to "-" to disable backup.
-config=path Path to a directory of Terraform configuration files
to use to configure the provider. Defaults to pwd.
If no config files are present, they must be provided
via the input prompts or env vars.
-allow-missing-config Allow import when no resource configuration block exists.
-input=true Ask for input for variables if not directly set.
-lock=true Lock the state file when locking is supported.
-lock-timeout=0s Duration to retry a state lock.
-no-color If specified, output won't contain any color.
-provider=provider Specific provider to use for import. This is used for
specifying aliases, such as "aws.eu". Defaults to the
normal provider prefix of the resource being imported.
-state=PATH Path to the source state file. Defaults to the configured
backend, or "terraform.tfstate"
-state-out=PATH Path to the destination state file to write to. If this
isn't specified, the source state file will be used. This
can be a new or existing path.
-var 'foo=bar' Set a variable in the Terraform configuration. This
flag can be set multiple times. This is only useful
with the "-config" flag.
-var-file=foo Set variables in the Terraform configuration from
a file. If "terraform.tfvars" or any ".auto.tfvars"
files are present, they will be automatically loaded.
Any help much appreciated
| It looks like you need to fix your script file first - azurerm isn't a valid resource name, did you mean:
resource "azurerm_resource_group" "example" {
# ...instance configuration...
name = "MyResourceGroup"
}
As seen in the output, import is expecting two parameters, ADDR and ID - you're only passing (what I assume is) the ID. You also need to tell terraform which resource in your script it maps to:
terraform import azurerm_resource_group.example \
/subscriptions/MySubscriptionNumber/resourceGroups/MyResourceGroup
| Terraform | 47,439,848 | 21 |
Is there any way to get the value of a secret from Azure Key Vault?
Doesn't look like value gets exposed in the key vault secret object here.
| Now you can do it with azurerm_key_vault_secret data source.
I'm able to do this without any scripting.
data "azurerm_key_vault" "example" {
name = "mykeyvault"
resource_group_name = "some-resource-group"
}
data "azurerm_key_vault_secret" "test" {
name = "secret-sauce"
key_vault_id = data.azurerm_key_vault.example.id
# vault_uri is deprecated in latest azurerm, use key_vault_id instead.
# vault_uri = "https://mykeyvault.vault.azure.net/"
}
output "secret_value" {
value = "${data.azurerm_key_vault_secret.test.value}"
}
| Terraform | 46,751,391 | 21 |
I have noticed that terraform will only run "file", "remote-exec" or "local-exec" provisioners on resources once. Once a resource is provisioned, if the commands in a "remote-exec" are changed or a file from the "file" provisioner is changed, then terraform will not make any changes to the instance. So how do I get terraform to run the "file", "remote-exec" or "local-exec" provisioners every time I run a terraform apply?
For more details:
Often I have had a resource provisioned only partially because an error from "remote-exec" caused terraform to stop (mostly due to me entering the wrong commands while writing my script). Running terraform again after this will cause the previously created resource to be destroyed and force terraform to create a new resource from scratch. This is also the only way I can run "remote-exec" twice on a resource... by creating it over from scratch.
This is really a drawback of terraform as opposed to ansible, which can do the exact same job as terraform except that it is totally idempotent. When using Ansible with tasks such as "ec2", "shell" and "copy" I can achieve the same tasks as terraform, only each of those tasks will be idempotent. Ansible will automatically recognise when it does and does not need to make changes, and because of this it can pick up where a failed ansible-playbook run left off without destroying everything and starting from scratch. Terraform lacks this feature.
For reference here is a simple terraform resource block for an ec2 instance that uses both "remote-exec" and "file" provisioners:
resource "aws_instance" "test" {
count = "${var.amt}"
ami = "ami-2d39803a"
instance_type = "t2.micro"
key_name = "ansible_aws"
tags {
name = "test${count.index}"
}
#creates ssh connection to consul servers
connection {
user = "ubuntu"
private_key="${file("/home/ubuntu/.ssh/id_rsa")}"
agent = true
timeout = "3m"
}
provisioner "remote-exec" {
inline = [<<EOF
sudo apt-get update
sudo apt-get install curl unzip
echo hi
EOF
]
}
#copying a file over
provisioner "file" {
source = "scripts/test.txt"
destination = "/path/to/file/test.txt"
}
}
| Came across this thread in my searches and eventually found a solution:
resource "null_resource" "ansible" {
triggers {
key = "${uuid()}"
}
provisioner "local-exec" {
command = "ansible-playbook -i /usr/local/bin/terraform-inventory -u ubuntu playbook.yml --private-key=/home/user/.ssh/aws_user.pem -u ubuntu"
}
}
You can use uuid(), which is unique to every terraform run, to trigger a null resource or provisioner.
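For what it's worth, on Terraform 0.12 and later the trigger map takes an equals sign, and timestamp() works just as well as uuid(). A hedged sketch of the same idea in the newer syntax:
resource "null_resource" "ansible" {
  triggers = {
    # timestamp() changes on every run, so the provisioner always re-fires
    always_run = timestamp()
  }

  provisioner "local-exec" {
    command = "ansible-playbook -i /usr/local/bin/terraform-inventory -u ubuntu playbook.yml --private-key=/home/user/.ssh/aws_user.pem"
  }
}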
| Terraform | 39,069,311 | 21 |
In reading the docs over at Terraform, it says there are 3 options for finding AWS credentials:
Static Credentials( embedded in the source file )
Environment variables.
From the AWS credentials file
I am trying to have my setup just use the credential file. I've checked that the environment variables are cleared and I have left the relevant variables in Terraform blank.
When I do this and run 'Terraform Plan' I get the error:
No Valid credential sources found for AWS Provider.
I've even tried adding the location of my credentials file into my provider block and that didn't help either:
provider "aws" {
region = "${var.region}"
profile = "${var.profile}"
shared_credentials_file = "/Users/david/.aws/credentials"
profile = "testing"
}
Is there something I'm missing to get Terraform to read this file and not require environment variables?
| I tested with Terraform v0.6.15 and it's working fine.
Issue must be with the profile. Check the following.
1. Remove the duplicated profile arguments from your provider and keep a single one:
provider "aws" {
region = "${var.region}"
shared_credentials_file = "/Users/david/.aws/credentials"
profile = "testing"
}
2. Make sure your credentials file /Users/david/.aws/credentials is in the below format, where testing is the profile you are specifying in provider "aws"
[testing]
aws_access_key_id = *****
aws_secret_access_key = *****
| Terraform | 36,990,299 | 21 |
Using Terraform, I am declaring an s3 bucket and associated policy document, along with an iam_role and iam_role_policy.
The S3 bucket is created fine in AWS; however, the bucket is listed as "Access: Objects can be public", and I want the objects to be private. How can I explicitly make the objects private?
resource "aws_s3_bucket" "app" {
bucket = "${data.aws_caller_identity.current.account_id}-app"
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
data "aws_iam_policy_document" "app_s3_policy" {
statement {
effect = "Allow"
actions = [
"s3:PutObject"
]
resources = [
aws_s3_bucket.app.arn,
"${aws_s3_bucket.app.arn}/*"
]
}
}
| The easiest way to block all objects in a bucket from ever being public is to attach an aws_s3_bucket_public_access_block resource to the bucket. It would look like this:
resource "aws_s3_bucket_public_access_block" "app" {
bucket = aws_s3_bucket.app.id
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
| Terraform | 67,389,192 | 20 |
Going through terraform tutorial I stumbled upon this error.
Error: Error launching source instance: InvalidAMIID.NotFound: The image id '[ami-830c94e3]' does not exist
status code: 400, request id: 4c3e0252-c3a5-471e-8b57-3f6e349628af
This is my code. The only change that I made was the region, from us-west-2 to eu-central-1.
provider "aws" {
profile = "default"
region = "eu-central-1"
}
resource "aws_instance" "example" {
ami = "ami-830c94e3"
instance_type = "t2.micro"
}
| It was simple. Apparently, AMI IDs are different in each region. I had to use the AMI ID of the image as it appears in my region. For example, ami-07dfba995513840b5 is the ID for Red Hat Enterprise Linux 8 (HVM), SSD Volume Type in the eu-central-1 region. Go to the AWS console, open EC2 from the services list, click Launch instance, and find the AMI ID of the image you are interested in.
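Rather than hard-coding a per-region AMI ID, a data source can look the image up at plan time. The sketch below is for Ubuntu 20.04; the Canonical owner ID and the name filter are assumptions and should be verified for your account and region:
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical (assumed owner ID)

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}

resource "aws_instance" "example" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"
}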
| Terraform | 63,633,785 | 20 |
Is there a way of implementing the below logic
variable "environment" {
description = "The environment this will be run in can only be set to [preprod|test|prod]"
type = string
default = "test"
validation {
condition = can(regex("^(prod|preprod|test)$", var.environment))
error_message = "The environment variable can only be set to [prod|preprod|test]."
}
}
variable "fet_code" {
description = "Set the feature code"
type = string
default = ""
validation {
condition = var.environment == "test" && length(var.fet_code) != 3
error_message = "The environment has been set to 'test' but the fet_code has not be defined."
}
}
At the moment i get the following error:
Error: Invalid reference in variable validation
on variable.tf line 17, in variable "fet_code":
17: condition = var.environment == "fet" && length(var.fet_code) == 3
The condition for variable "fet_code" can only refer to the variable itself,
using var.fet_code.
I understand what the problem is with the code; I am just wondering if there is a way around the restriction?
| Update for Terraform 1.9.0
Input variable validation rules can refer to other objects
: Previously input variable validation rules could refer only to the variable being validated. Now they are general expressions, similar to those elsewhere in a module, which can refer to other input variables and to other objects such as data resources.
Source: https://github.com/hashicorp/terraform/blob/v1.9/CHANGELOG.md
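With Terraform 1.9 or newer the original two-variable layout therefore works as-is, with only the condition reworded so that it expresses the valid state. A sketch:
variable "fet_code" {
  description = "Set the feature code"
  type        = string
  default     = ""

  validation {
    # Valid when the environment is not "test", or when fet_code is exactly 3 characters.
    condition     = var.environment != "test" || length(var.fet_code) == 3
    error_message = "The environment has been set to 'test' but the fet_code has not been defined."
  }
}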
Pre Terraform 1.9.0 solution
It seems to me that the two variables are close related. I would change them to one object in order to achieve what you want.
variable "environment" {
type = object({
environment = string
fet_code = string
})
default = {
environment = "test"
fet_code = ""
}
validation {
condition = can(regex("^(prod|preprod|test)$", var.environment.environment))
error_message = "The environment variable can only be set to [prod|preprod|test]."
}
validation {
condition = var.environment.environment == "test" && length(var.environment.fet_code) != 3
error_message = "The environment has been set to 'test' but the fet_code has not be defined."
}
}
You can then pass the variables like this:
environment = {
environment = "test"
fet_code = "1234"
}
| Terraform | 63,629,916 | 20 |
Terraform v0.12.12
+ provider.aws v3.0.0
+ provider.template v2.1.2
Before I was doing this:
resource "aws_route53_record" "derps" {
name = aws_acm_certificate.mycert[0].resource_record_name
type = aws_acm_certificate.mycert[0].resource_record_type
zone_id = var.my_zone_id
records = aws_acm_certificate.mycert[0].resource_record_value
ttl = 60
}
And that worked fine for me about a week ago.
I just did a plan and got an error:
records = [aws_acm_certificate.mycert.domain_validation_options[0].resource_record_value]
This value does not have any indices.
Now I don't pin provider versions, so I'm assuming I pulled a newer version and the resource changed.
After fighting with this and realizing it's not a list (even though when doing show state it sure looked like one) I am now doing this to make it a list:
resource "aws_route53_record" "derps" {
name = sort(aws_acm_certificate.mycert.domain_validation_options[*].resource_record_name)[0]
type = sort(aws_acm_certificate.mycert.domain_validation_options[*].resource_record_type)[0]
zone_id = var.my_zone_id
records = [sort(aws_acm_certificate.mycert.domain_validation_options[*].resource_record_value)[0]]
ttl = 60
}
This resulted in no changes which is good. But if I use the example for doing this from the docs they now use for_each: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/acm_certificate_validation
resource "aws_route53_record" "example" {
for_each = {
for dvo in aws_acm_certificate.example.domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
zone_id = dvo.domain_name == "example.org" ? data.aws_route53_zone.example_org.zone_id : data.aws_route53_zone.example_com.zone_id
}
}
allow_overwrite = true
name = each.value.name
records = [each.value.record]
ttl = 60
type = each.value.type
zone_id = each.value.zone_id
}
resource "aws_acm_certificate_validation" "example" {
certificate_arn = aws_acm_certificate.example.arn
validation_record_fqdns = [for record in aws_route53_record.example : record.fqdn]
}
Is the above the correct way to do this now? Am I going to run into issues doing it the way I currently am? Doing it like the above would result in a destroy/recreate (I guess I could import it myself but that's painful).
Is doing it my way not going to result in unexpected diffs?
Edit
So, to be more specific about my issue, this is what I see when I look at the state:
terraform state show aws_acm_certificate.mycert
...
domain_name = "*.mydom.com"
domain_validation_options = [
{
domain_name = "*.mydom.com"
resource_record_name = "_11111111111.mydom.com."
resource_record_type = "CNAME"
resource_record_value = "_1111111111.11111111.acm-validations.aws."
},
{
domain_name = "mydom.com"
resource_record_name = "_11111111111.mydom.com."
resource_record_type = "CNAME"
resource_record_value = "_1111111111.111111111.acm-validations.aws."
},
]
...
By using sort I'm effectively using count, which of course results in a destroy/recreate if the order changes, but in my case I think that's unlikely. I also don't fully understand the difference between just using the values from the wildcard validation config and using both of them.
| The AWS Terraform provider was recently upgraded to version 3.0. This version comes with a list of breaking changes. I recommend consulting the AWS provider 3.0 upgrade guide.
The issue you are encountering is because the domain_validation_options attribute is now a set instead of a list. From that guide:
Since the domain_validation_options attribute changed from a list to a set and sets cannot be indexed in Terraform, the recommendation is to update the configuration to use the more stable resource for_each support instead of count
I recommend using the new for_each syntax, as the upgrade guide recommends, in order to avoid unexpected diffs. The guide states that you will need to use terraform state mv to move the old configuration state to the new configuration, in order to prevent the resources from being recreated.
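For reference, the state move looks roughly like the command below; the addresses are hypothetical and depend on your actual resource names and domain keys:
terraform state mv 'aws_route53_record.derps' 'aws_route53_record.example["mydom.com"]'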
| Terraform | 63,235,321 | 20 |
I'm unsure what I'm doing wrong. I have terraform as such:
resource "aws_apigatewayv2_domain_name" "web" {
domain_name = var.web_url
count = var.web_url != "" ? 1 : 0
domain_name_configuration {
certificate_arn = var.web_acm_arn
endpoint_type = "REGIONAL"
security_policy = "TLS_1_2"
}
}
resource "aws_apigatewayv2_api_mapping" "web" {
api_id = aws_apigatewayv2_api.web.id
domain_name = aws_apigatewayv2_domain_name.web.id
stage = aws_apigatewayv2_stage.web_stage.id
count = var.web_url != "" ? 1 : 0
}
My terraform plan returns this. It complains about count, but I'm unsure what to do with it.
Terraform v0.12.24
Configuring remote state backend...
Initializing Terraform configuration...
2020/07/29 06:20:46 [DEBUG] Using modified User-Agent: Terraform/0.12.24 TFC/29e17ad841
Error: Missing resource instance key
on ../modules/web/api.tf line 37, in resource "aws_apigatewayv2_api_mapping" "web":
37: domain_name = aws_apigatewayv2_domain_name.web.id
Because aws_apigatewayv2_domain_name.web has "count" set, its
attributes must be accessed on specific instances.
For example, to correlate with indices of a referring resource, use:
aws_apigatewayv2_domain_name.web[count.index]
Help is appreciated.
| As the error message suggests, since you've used count in your aws_apigatewayv2_domain_name, you now need to use an index when you refer to it.
For example:
domain_name = aws_apigatewayv2_domain_name.web[0].id
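Since aws_apigatewayv2_api_mapping.web uses the same count expression, referencing the matching instance by index also works:
domain_name = aws_apigatewayv2_domain_name.web[count.index].id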
| Terraform | 63,147,590 | 20 |
I have some Terraform code with an aws_instance and a null_resource:
resource "aws_instance" "example" {
ami = data.aws_ami.server.id
instance_type = "t2.medium"
key_name = aws_key_pair.deployer.key_name
tags = {
name = "example"
}
vpc_security_group_ids = [aws_security_group.main.id]
}
resource "null_resource" "example" {
provisioner "local-exec" {
command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -T 300 -i ${aws_instance.example.public_dns}, --user centos --private-key files/id_rsa playbook.yml"
}
}
It kinda works, but sometimes there is a bug (probably when the instance is in a pending state). When I rerun Terraform, it works as expected.
Question: How can I run local-exec only when the instance is running and accepting an SSH connection?
| The null_resource is currently only going to wait until the aws_instance resource has completed which in turn only waits until the AWS API returns that it is in the Running state. There's a long gap from there to the instance starting the OS and then being able to accept SSH connections before your local-exec provisioner can connect.
One way to handle this is to use the remote-exec provisioner on the instance first as that has the ability to wait for the instance to be ready. Changing your existing code to handle this would look like this:
resource "aws_instance" "example" {
ami = data.aws_ami.server.id
instance_type = "t2.medium"
key_name = aws_key_pair.deployer.key_name
tags = {
name = "example"
}
vpc_security_group_ids = [aws_security_group.main.id]
}
resource "null_resource" "example" {
provisioner "remote-exec" {
connection {
host = aws_instance.example.public_dns
user = "centos"
private_key = file("files/id_rsa")
}
inline = ["echo 'connected!'"]
}
provisioner "local-exec" {
command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -T 300 -i ${aws_instance.example.public_dns}, --user centos --private-key files/id_rsa playbook.yml"
}
}
This will first attempt to connect to the instance's public DNS address as the centos user with the files/id_rsa private key. Once it is connected it will then run echo 'connected!' as a simple command before moving on to your existing local-exec provisioner that runs Ansible against the instance.
Note that just being able to connect over SSH may not actually be enough for you to then provision the instance. If your Ansible script tries to interact with your package manager then you may find that it is locked from the instance's user data script running. If this is the case you will need to remotely execute a script that waits for cloud-init to be complete first. An example script looks like this:
#!/bin/bash
while [ ! -f /var/lib/cloud/instance/boot-finished ]; do
echo -e "\033[1;36mWaiting for cloud-init..."
sleep 1
done
| Terraform | 62,403,030 | 20 |
I have a lot of Terraform modules written in Terraform 0.11 using the Terraform GCP provider, and I want to upgrade them to Terraform 0.12.
For this purpose, I need to keep both the versions installed on my system and use the version according to the version the module is written in.
I will go one by one in every module and upgrade the module using terraform 0.12upgrade to be compatible with Terraform 0.12 as per this documentation.
How can I safely keep two versions of Terraform on one system?
| I use Ubuntu 18.04 and I achieved this safely following the below steps. Similar steps can be followed to do the same on any Linux distro (making sure you are downloading the compatible binary. Confirm here)
NOTE Running the following commands as root or sudo user
Create directories to keep the Terraform binaries
$ mkdir -p /usr/local/tf
$ mkdir -p /usr/local/tf/11
$ mkdir -p /usr/local/tf/12
Download the binaries for both the versions
Download and extract the binary for Terraform 0.11 in a separate directory:
$ cd /usr/local/tf/11
$ wget https://releases.hashicorp.com/terraform/0.11.14/terraform_0.11.14_linux_amd64.zip
$ unzip terraform_0.11.14_linux_amd64.zip
$ rm terraform_0.11.14_linux_amd64.zip
Download and extract the binary for Terraform 0.12 in a separate directory:
$ cd /usr/local/tf/12
$ wget https://releases.hashicorp.com/terraform/0.12.20/terraform_0.12.20_linux_amd64.zip
$ unzip terraform_0.12.20_linux_amd64.zip
$ rm terraform_0.12.20_linux_amd64.zip
Create symlinks for both the Terraform versions in /usr/bin/ directory:
ln -s /usr/local/tf/11/terraform /usr/bin/terraform11
ln -s /usr/local/tf/12/terraform /usr/bin/terraform12
# Make both the symlinks executable
chmod ugo+x /usr/bin/terraform*
Calling different versions
Now, command terraform11 invokes version 0.11 and terraform12 invokes version 0.12
Example:
$ terraform11
$ terraform12
NOTE
Keeping the binaries in separate directories helps to separate their plugins as well without disturbing each other.
| Terraform | 60,113,774 | 20 |
I have created an EC2 instance using terraform (I do not have the .pem keys). Can I establish an SSH connection between my local system and the EC2 instance?
| Assuming you provisioned an instance using Terraform v0.12+ with this structure:
resource "aws_instance" "instance" {
ami = "${var.ami}"
instance_type = "t2.micro"
count = 1
associate_public_ip_address = true
}
You can make some additional settings:
Configure the public ip output:
output "instance_ip" {
description = "The public ip for ssh access"
value = aws_instance.instance.public_ip
}
Create an aws_key_pair with an existing ssh public key or create a new one
Ex:
resource "aws_key_pair" "ssh-key" {
key_name = "ssh-key"
public_key = "ssh-rsa AAAAB3Nza............"
}
Add the key_name in instance resource just like this:
resource "aws_instance" "instance" {
ami = var.ami
instance_type = "t2.micro"
count = 1
associate_public_ip_address = true
key_name = "ssh-key"
}
Now you need to run terraform apply and then terraform output to get the public IP
Get your public IP and run:
ssh <PUBLIC IP>
OR with the path to the private key that matches the registered public key
ssh -i "~/.ssh/id_rsa" <PUBLIC IP>
Sources:
https://www.terraform.io/docs/providers/aws/r/instance.html
https://www.terraform.io/docs/providers/aws/r/key_pair.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html
| Terraform | 59,708,577 | 20 |
I am using terraform v0.12.6 and I run into many errors like:
Error: Error creating Security Group: InvalidGroup.Duplicate: The security group 'security-search-populate' already exists for VPC 'vpc-003e06e33a87c22f5'
status code: 400, request id: 82acdc81-c324-4672-b9fe-531eb8283ed3
Error: Error creating IAM Role PopulateTaskRole: EntityAlreadyExists: Role with name PopulateTaskRole already exists.
status code: 409, request id: 49aac94c-d52b-11e9-a535-c19e5ed20660
I know I can resolve them by deleting these resources from AWS but I wonder whether there is a better solution?
| If the existing resources are already in terraform in another module or workspace, then I would not import any of those resources since resources should be managed by a single state, not multiple.
If the existing resources are not managed anywhere else in terraform, then they should be imported into terraform.
You'll need to find the security group id of security-search-populate security group.
aws ec2 describe-security-groups \
--group-names security-search-populate \
--query 'SecurityGroups[].GroupId' \
--output text
Let's say the sg id is sg-903004f8. Import security group to terraform resource aws_security_group.elb_sg using your dev profile.
AWS_PROFILE=dev terraform import aws_security_group.elb_sg sg-903004f8
To import IAM role PopulateTaskRole to terraform resource aws_iam_role.developer using your dev profile.
AWS_PROFILE=dev terraform import aws_iam_role.developer PopulateTaskRole
After these are imported, you can do a targeted terraform plan to see the differences between what's in source controlled terraform and what's upstream in AWS
AWS_PROFILE=dev terraform plan \
-target aws_security_group.elb_sg \
-target aws_iam_role.developer
| Terraform | 57,903,408 | 20 |
I'm looking to set up some alerts from gcloud -> slack, and so far have a test up and running having followed these instructions:
https://cloud.google.com/monitoring/support/notification-options?_ga=2.190773474.-879257953.1550134526#slack
However, ideally I'd store the config for these notifications in a terraform script so that I don't have manual steps to follow if things need setting up again. It looks like this should be possible: https://www.terraform.io/docs/providers/google/r/monitoring_notification_channel.html
I've run gcloud alpha monitoring channel-descriptors describe projects/<My Project>/notificationChannelDescriptors/slack, which produces the following output for the labels+type:
labels:
- description: A permanent authentication token provided by Slack. This field is obfuscated
by returning only a few characters of the key when fetched.
key: auth_token
- description: The Slack channel to which to post notifications.
key: channel_name
type: slack
So, I think my terraform config for the notification channel wants to be:
resource "google_monitoring_notification_channel" "basic" {
display_name = "My slack notifications"
type = "slack"
labels = {
auth_token = "????????"
channel_name = "#notification-channel"
}
}
However, I can't figure out how to obtain the auth token for this script? I can't seem to extract the one I've already set up from Slack or gcloud, and can't find any instructions for creating one from scratch...
N.B. This is not a Terraform-specific issue, because the script is just hooking into the google REST API. So, anyone using the API directly would also have to obtain this auth_token from somewhere. There must be an intended way to obtain it or why is it in the API at all...?
|
Visit https://app.google.stackdriver.com/settings/accounts/notifications/slack?project=YOUR_PROJECT_NAME
Select "Add Slack Channel"
Select "Authorize Stackdriver"
Select "Install"
You will be redirected back to a URL of the form: https://app.google.stackdriver.com/settings/accounts/notifications/slack/add?project=YOUR_PROJECT_NAME&auth_token=AUTH_TOKEN_HERE
Save the notification channel (this seems to be necessary to finish the oauth flow)
Copy/paste the auth token from the &auth_token= parameter in the query string
You will end up with an extra notification channel, i.e. the one you created in the console, but after that you will be able to reuse the auth token in terraform-managed notification channels.
| Terraform | 54,884,815 | 20 |
I'm having a terrible time getting Terraform to assume an IAM role with another account with MFA required. Here's my setup
AWS Config
[default]
region = us-west-2
output = json
[profile GEHC-000]
region = us-west-2
output = json
....
[profile GEHC-056]
source_profile = GEHC-000
role_arn = arn:aws:iam::~069:role/hc/hc-master
mfa_serial = arn:aws:iam::~183:mfa/username
external_id = ~069
AWS Credentials
[default]
aws_access_key_id = xxx
aws_secret_access_key = xxx
[GEHC-000]
aws_access_key_id = same as above
aws_secret_access_key = same as above
Policies assigned to IAM user
STS Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AssumeRole",
"Effect": "Allow",
"Action": [
"sts:AssumeRole"
],
"Resource": [
"arn:aws:iam::*:role/hc/hc-master"
]
}
]
}
User Policy
{
"Statement": [
{
"Action": [
"iam:*AccessKey*",
"iam:*MFA*",
"iam:*SigningCertificate*",
"iam:UpdateLoginProfile*",
"iam:RemoveUserFromGroup*"
],
"Effect": "Allow",
"Resource": [
"arn:aws:iam::~183:mfa/${aws:username}",
"arn:aws:iam::~183:mfa/*/${aws:username}",
"arn:aws:iam::~183:mfa/*/*/${aws:username}",
"arn:aws:iam::~183:mfa/*/*/*${aws:username}",
"arn:aws:iam::~183:user/${aws:username}",
"arn:aws:iam::~183:user/*/${aws:username}",
"arn:aws:iam::~183:user/*/*/${aws:username}",
"arn:aws:iam::~183:user/*/*/*${aws:username}"
],
"Sid": "Write"
},
{
"Action": [
"iam:*Get*",
"iam:*List*"
],
"Effect": "Allow",
"Resource": [
"*"
],
"Sid": "Read"
},
{
"Action": [
"iam:CreateUser*",
"iam:UpdateUser*",
"iam:AddUserToGroup"
],
"Effect": "Allow",
"Resource": [
"*"
],
"Sid": "CreateUser"
}
],
"Version": "2012-10-17"
}
Force MFA Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "BlockAnyAccessOtherThanAboveUnlessSignedInWithMFA",
"Effect": "Deny",
"NotAction": "iam:*",
"Resource": "*",
"Condition": {
"BoolIfExists": {
"aws:MultiFactorAuthPresent": "false"
}
}
}
]
}
main.tf
provider "aws" {
profile = "GEHC-056"
shared_credentials_file = "${pathexpand("~/.aws/config")}"
region = "${var.region}"
}
data "aws_iam_policy_document" "test" {
statement {
sid = "TestAssumeRole"
effect = "Allow"
actions = [
"sts:AssumeRole",
]
principals = {
type = "AWS"
identifiers = [
"arn:aws:iam::~183:role/hc-devops",
]
}
sid = "BuUserTrustDocument"
effect = "Allow"
principals = {
type = "Federated"
identifiers = [
"arn:aws:iam::~875:saml-provider/ge-saml-for-aws",
]
}
condition {
test = "StringEquals"
variable = "SAML:aud"
values = ["https://signin.aws.amazon.com/saml"]
}
}
}
resource "aws_iam_role" "test_role" {
name = "test_role"
path = "/"
assume_role_policy = "${data.aws_iam_policy_document.test.json}"
}
Get Caller Identity
bash-4.4$ aws --profile GEHC-056 sts get-caller-identity
Enter MFA code for arn:aws:iam::772660252183:mfa/503072343:
{
"UserId": "AROAIWCCLC2BGRPQMJC7U:botocore-session-1537474244",
"Account": "730993910069",
"Arn": "arn:aws:sts::730993910069:assumed-role/hc-master/botocore-session-1537474244"
}
And the error:
bash-4.4$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
Error: Error refreshing state: 1 error(s) occurred:
* provider.aws: Error creating AWS session: AssumeRoleTokenProviderNotSetError: assume role with MFA enabled, but AssumeRoleTokenProvider session option not set.
| Terraform doesn't currently support prompting for the MFA token when being run, as it is intended to be run in as non-interactive a fashion as possible, and it would apparently require significant rework of the provider structure to support this kind of interactive provider configuration. There's more discussion about this in this issue.
As also mentioned in that issue the best bet is to use some form of script/tool that already assumes the role prior to running Terraform.
I personally use AWS-Vault and have written a small shim shell script that I symlink to from terraform (and other things such as aws that I want to use AWS-Vault to grab credentials for) that detects what it's being called as, finds the "real" binary using which -a, and then uses AWS-Vault's exec to run the target command with the specified credentials.
My script looks like this:
#!/bin/bash
set -eo pipefail
# Provides a shim to override target executables so that it is executed through aws-vault
# See https://github.com/99designs/aws-vault/blob/ae56f73f630601fc36f0d68c9df19ac53e987369/USAGE.md#overriding-the-aws-cli-to-use-aws-vault for more information about using it for the AWS CLI.
# Work out what we're shimming and then find the non shim version so we can execute that.
# which -a returns a sorted list of the order of binaries that are on the PATH so we want the second one.
INVOKED=$(basename $0)
TARGET=$(which -a ${INVOKED} | tail -n +2 | head -n 1)
if [ -z ${AWS_VAULT} ]; then
AWS_PROFILE="${AWS_DEFAULT_PROFILE:-read-only}"
(>&2 echo "Using temporary credentials from ${AWS_PROFILE} profile...")
exec aws-vault exec "${AWS_PROFILE}" --assume-role-ttl=60m -- "${TARGET}" "$@"
else
# If AWS_VAULT is already set then we want to just use the existing session instead of nesting them
exec "${TARGET}" "$@"
fi
It will use a profile in your ~/.aws/config file that matches the AWS_DEFAULT_PROFILE environment variable you have set, defaulting to a read-only profile which may or may not be a useful default for you. This makes sure that AWS-Vault assumes the IAM role, grabs the credentials and sets them as environment variables for the target process.
This means that as far as Terraform is concerned it is being given credentials via environment variables and this just works.
| Terraform | 52,432,717 | 20 |
My idea is to have the elements of a list modified by appending a string to each of them. How can this be achieved? I haven't found any function that allows me to do that.
| Have you tried formatlist()?
For example:
my_list_var = ["a", "b", "c"]
my_new_list = formatlist("%s-foo", var.my_list_var)
my_new_list will be:
["a-foo", "b-foo", "c-foo"]
You can also pass another list of the same length as a parameter to append different strings to each element.
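For example, a small hypothetical illustration of that second form:
formatlist("%s-%s", ["a", "b", "c"], ["1", "2", "3"])
# => ["a-1", "b-2", "c-3"]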
| Terraform | 51,821,961 | 20 |
I am getting the following error when running terraform:
* aws_iam_role_policy.rds_policy: Error putting IAM role policy my-rds-policy: MalformedPolicyDocument: The policy failed legacy parsing
Here is my definition of the resource:
resource "aws_iam_role_policy" "rds_policy" {
name = "my-rds-policy"
role = "${aws_iam_role.rds_role.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::my-bucket"
]
},
{
"Effect": "Allow",
"Action": [
"s3:GetObjectMetaData",
"s3:GetObject",
"s3:PutObject",
"s3:ListMultipartUploadParts",
"s3:AbortMultipartUpload"
],
"Resource": [
"arn:aws:s3:::my-bucket/backups/*"
]
}
]
}
EOF
}
The JSON policy doc is well formed, and I can't see anything obvious.
| You need to make sure that you don't have any indentation at the start of your EOF heredoc because your JSON policy should not start with an indented brace.
So you should be fine with this small change:
resource "aws_iam_role_policy" "rds_policy" {
name = "my-rds-policy"
role = "${aws_iam_role.rds_role.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::my-bucket"
]
},
{
"Effect": "Allow",
"Action": [
"s3:GetObjectMetaData",
"s3:GetObject",
"s3:PutObject",
"s3:ListMultipartUploadParts",
"s3:AbortMultipartUpload"
],
"Resource": [
"arn:aws:s3:::my-bucket/backups/*"
]
}
]
}
EOF
}
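As an aside for newer Terraform versions (0.12 and later, not necessarily the version used in the question): jsonencode() sidesteps heredoc indentation problems entirely. A sketch with only the first statement shown:
resource "aws_iam_role_policy" "rds_policy" {
  name = "my-rds-policy"
  role = aws_iam_role.rds_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["s3:ListBucket", "s3:GetBucketLocation"]
        Resource = ["arn:aws:s3:::my-bucket"]
      }
    ]
  })
}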
| Terraform | 42,652,528 | 20 |
I am trying to create a Linux VM, with Terraform, in the West Europe Azure region, with a Ubuntu Server 20.04 LTS image. I can do this just fine from within the Azure Portal, but Terraform complains that the image doesn't exist:
The platform image 'Canonical:UbuntuServer:20.04-LTS:latest' is not available.
Indeed, az vm image list --location westeurope confirms this; 18.04 LTS exists, but no 20.04 LTS.
I am using the azurerm_linux_virtual_machine resource, with the following source_image_reference:
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "20.04-LTS" # FIXME SKU doesn't exist in westeurope
version = "latest"
}
I'm utterly confused by this! How does one access the images in the Azure Marketplace in Terraform? I've seen suggestions that the plan block is needed, but have no idea (nor have I found any documentation) on how to configure this.
| I too was confused at first when I found out that it is available but under a different name; it is indeed kind of hidden.
offer = "0001-com-ubuntu-server-focal"
publisher = "Canonical"
sku = "20_04-lts-gen2"
I used this inside Packer, so I am guessing it is the same in Terraform, but you can let me know.
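Adapted to the question's Terraform block, that would look like the sketch below; it may be worth double-checking the exact SKU for your region with az vm image list-skus:
source_image_reference {
  publisher = "Canonical"
  offer     = "0001-com-ubuntu-server-focal"
  sku       = "20_04-lts-gen2"
  version   = "latest"
}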
| Terraform | 71,253,468 | 19 |
I am using Terraform to create a web ACL in AWS and want to associate that web ACL with a CloudFront distribution.
So, here's how my code looks like:
provider "aws" {
alias = "east1"
region = "us-east-1"
}
# -------------------------------------------
# -------------------------------------------
# Cloud Front
module "front_end_cloudfront" {
source = "./modules/front-end/CF"
# CF_ALIASES = ["terraformer-frontend.dev.effi.com.au"]
CF_LAMBDA_ARN = module.frontend_lambda.cf_lambda_qualified_arn
CF_BUCKET_DOMAIN_NAME = module.front_end_bucket.website_endpoint
CF_BUCKET_ORIGIN_ID = module.front_end_bucket.website_domain
CF_TAGS_LIST = { "Name" : "terraformer-front-end-cloudfrontv2" }
CF_CERTFICATE_ARN = var.CLOUDFRONT_US_EAST_1_ACM_ARN
# WEB_ACL = module.waf.web_acl_id
WEB_ACL = module.waf_cf.web_acl_id
depends_on = [module.waf_cf]
}
# -------------------------------------------
# -------------------------------------------
# WAF for CF
module "waf_cf" {
source = "./modules/waf"
providers = {
aws = aws.east1
}
WAF_NAME = "terraform-web-acl-cf"
WAF_DESCRIPTION = "terraform web acl-cf"
WAF_SCOPE = "CLOUDFRONT"
WAF_RULE_NAME_1 = "AWSManagedRulesCommonRuleSet"
WAF_RULE_NAME_2 = "AWSManagedRulesAmazonIpReputationList"
WAF_RULE_NAME_3 = "AWSManagedRulesLinuxRuleSet"
WAF_RULE_NAME_4 = "AWSManagedRulesKnownBadInputsRuleSet"
WAF_VENDOR = "AWS"
WAF_METRIC_1 = "aws-waf-logs-terraformer-metric"
WAF_METRIC_2 = "aws-waf-logs-terraformer-metric"
WAF_METRIC_3 = "aws-waf-logs-terraformer-metric"
WAF_METRIC_4 = "aws-waf-logs-terraformer-metric"
WAF_TAG_LIST = {
"Tag1" : "Name"
"Tag2" : "terraformer-rule-cf"
}
WAF_METRIC = "aws-waf-logs-friendly-metric-name"
CLOUDWATCH_METRICS_ENABLED = false
SAMPLE_REQUESTS_ENABLED = false
}
These are Terraform modules I have written; the specific resource files for the modules above are shown below, respectively.
# CF
resource "aws_cloudfront_distribution" "aws_cloudfront_distribution" {
# aliases = var.CF_ALIASES
default_cache_behavior {
allowed_methods = ["GET", "HEAD"]
cached_methods = ["GET", "HEAD"]
compress = "true"
default_ttl = "0"
forwarded_values {
cookies {
forward = "none"
}
query_string = "false"
}
lambda_function_association {
event_type = "origin-response"
include_body = "false"
lambda_arn = var.CF_LAMBDA_ARN
}
max_ttl = "0"
min_ttl = "0"
smooth_streaming = "false"
target_origin_id = var.CF_BUCKET_ORIGIN_ID
viewer_protocol_policy = "redirect-to-https"
}
enabled = "true"
http_version = "http2"
is_ipv6_enabled = "true"
origin {
custom_origin_config {
http_port = "80"
https_port = "443"
origin_keepalive_timeout = "5"
origin_protocol_policy = "http-only"
origin_read_timeout = "30"
origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"]
}
domain_name = var.CF_BUCKET_DOMAIN_NAME
origin_id = var.CF_BUCKET_ORIGIN_ID
}
price_class = "PriceClass_All"
restrictions {
geo_restriction {
restriction_type = "none"
}
}
retain_on_delete = "false"
tags = var.CF_TAGS_LIST
viewer_certificate {
acm_certificate_arn = var.CF_CERTFICATE_ARN
cloudfront_default_certificate = "false"
minimum_protocol_version = "TLSv1.2_2018"
ssl_support_method = "sni-only"
}
web_acl_id = var.WEB_ACL
}
# WAF
resource "aws_wafv2_web_acl" "aws_wafv2_web_acl" {
name = var.WAF_NAME
description = var.WAF_DESCRIPTION
scope = var.WAF_SCOPE
default_action {
allow {}
}
rule {
name = var.WAF_RULE_NAME_1
priority = 1
override_action {
count {}
}
statement {
managed_rule_group_statement {
name = var.WAF_RULE_NAME_1
vendor_name = var.WAF_VENDOR
# excluded_rule {
# name = "SizeRestrictions_QUERYSTRING"
# }
# excluded_rule {
# name = "NoUserAgent_HEADER"
# }
}
}
visibility_config {
cloudwatch_metrics_enabled = var.CLOUDWATCH_METRICS_ENABLED
metric_name = var.WAF_METRIC_1
sampled_requests_enabled = var.SAMPLE_REQUESTS_ENABLED
}
}
rule {
name = var.WAF_RULE_NAME_2
priority = 2
override_action {
count {}
}
statement {
managed_rule_group_statement {
name = var.WAF_RULE_NAME_2
vendor_name = var.WAF_VENDOR
}
}
visibility_config {
cloudwatch_metrics_enabled = var.CLOUDWATCH_METRICS_ENABLED
metric_name = var.WAF_METRIC_2
sampled_requests_enabled = var.SAMPLE_REQUESTS_ENABLED
}
}
rule {
name = var.WAF_RULE_NAME_3
priority = 3
override_action {
count {}
}
statement {
managed_rule_group_statement {
name = var.WAF_RULE_NAME_3
vendor_name = var.WAF_VENDOR
}
}
visibility_config {
cloudwatch_metrics_enabled = var.CLOUDWATCH_METRICS_ENABLED
metric_name = var.WAF_METRIC_3
sampled_requests_enabled = var.SAMPLE_REQUESTS_ENABLED
}
}
rule {
name = var.WAF_RULE_NAME_4
priority = 4
override_action {
count {}
}
statement {
managed_rule_group_statement {
name = var.WAF_RULE_NAME_4
vendor_name = var.WAF_VENDOR
}
}
visibility_config {
cloudwatch_metrics_enabled = var.CLOUDWATCH_METRICS_ENABLED
metric_name = var.WAF_METRIC_4
sampled_requests_enabled = var.SAMPLE_REQUESTS_ENABLED
}
}
tags = var.WAF_TAG_LIST
visibility_config {
cloudwatch_metrics_enabled = var.CLOUDWATCH_METRICS_ENABLED
metric_name = var.WAF_METRIC
sampled_requests_enabled = var.SAMPLE_REQUESTS_ENABLED
}
}
But I am getting the below error
error updating CloudFront Distribution (E32RNPFGEUHQ6J): InvalidWebACLId: Web ACL is not accessible by the requester.
Here the CloudFront distribution is created in the ap-southeast-2 region and the WAF is created in the us-east-1 region.
Can someone please help me on this?
| When using WAFv2, you need to specify the ARN, not the ID, for web_acl_id in aws_cloudfront_distribution.
See the note here https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudfront_distribution#web_acl_id
or this GitHub issue https://github.com/hashicorp/terraform-provider-aws/issues/13902
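Applied to the configuration above, that means exposing the ACL's arn attribute from the WAF module and passing it to the CloudFront module instead of the ID. A sketch (the output name web_acl_arn is an assumption):
# in the WAF module's outputs
output "web_acl_arn" {
  value = aws_wafv2_web_acl.aws_wafv2_web_acl.arn
}

# in the root module, when calling the CloudFront module
WEB_ACL = module.waf_cf.web_acl_arn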
| Terraform | 66,476,009 | 19 |
I am going through a terraform guide, where the author is spinning up a docker setup using the docker_image and docker_container resources.
In the sample code the main.tf file includes both the required_providers and the provider blocks, as follows:
terraform {
required_providers {
docker = {
source = "kreuzwerker/docker"
}
}
}
provider "docker" {}
Why are they both needed?
Shouldn't terraform be able to understand the need for a docker provider, only by this line?
provider "docker" {}
| When considering Terraform providers there are two related notions to think about: the provider itself, and a configuration for the provider.
As an analogy, the provider kreuzwerker/docker here is a bit like a class you're importing from another library, giving it the local name docker. I'll use a pseudo-JavaScript syntax just to make this a bit more concrete:
var docker = require("kreuzwerker/docker");
However, all we have here so far is the class itself. In order to use it we need to create an instance of it, which in Terraform's vernacular is called a "configuration". Again, using pseudo-JavaScript syntax:
var dockerInstance = new docker({});
Terraform's syntax here is decidedly less explicit than this pseudo-JavaScript form, but we can make the distinction more visible by adding a second instance of the provider to the configuration, which in Terraform we do by assigning it a configuration "alias":
provider "docker" {
alias = "example"
host = "ssh://user@remote-host:22"
}
This is like creating a second instance of the provider "class" in our pseudo-JavaScript example:
var dockerInstance2 = new docker({
host: 'ssh://user@remote-host:22'
});
Another variant that shows the distinction is when a module inherits a provider configuration from its calling module. In that case, it's as if the calling module were implicitly passing the provider configuration (instance) into the module, but the child module still needs to import the provider "class" so Terraform can see that we're talking about kreuzwerker/docker as opposed to any other provider that might have the name "docker".
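As a rough sketch of that variant (the module name and path here are made up), the calling module passes the aliased configuration in explicitly, while the child module only declares which provider it needs:
# in the calling module
module "remote_docker_host" {
  source = "./modules/containers"

  providers = {
    docker = docker.example
  }
}

# in ./modules/containers
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}
The child module never writes its own provider "docker" block: it receives the configuration (the "instance") from its caller, but it still has to name the provider (the "class") in required_providers.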
Terraform has some automatic "magic" behaviors that try to make simpler cases implicit, but unfortunately that comes at the cost of making it harder to understand what's going on when things get more complicated. Providers and provider configurations are a particularly hard example of this, because providers have been in the Terraform language for a long time and the current incarnation of the language is trying to stay broadly backward-compatible with the simple uses while still allowing for the newer features like having third-party providers installable from multiple namespaces.
The particularly confusing assumption here is that if you don't declare a particular provider Terraform will create an implicit required_providers declaration assuming that you mean a provider in the hashicorp/ namespace, which makes it seem as though required_providers is only for third-party providers. In fact though, that is largely a backward-compatibility mechanism and so I'd suggest always writing out the required_providers entries, even for the providers in the hashicorp/ namespace, so that less-experienced readers don't need to know about this special backward-compatibility behavior. In your case though, the provider you're using is in a third-party namespace anyway and so the required_providers entry is mandatory.
| Terraform | 66,080,706 | 19 |
I tried terraform versions v0.12.26 and v0.13.3. Both failed.
terraform plan
Acquiring state lock. This may take a few moments...
Error: Error locking state: Error acquiring the state lock: 2 errors occurred:
* ResourceNotFoundException: Requested resource not found
* ResourceNotFoundException: Requested resource not found
Terraform acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.
Related source code:
terraform {
backend "s3" {
encrypt = false
bucket = "dev-terraform-state"
key = "dev/Oregon/eks/terraform.tfstate"
region = "us-west-2"
dynamodb_table = "dev-lock-table"
}
required_version = ">= 0.12.0"
}
| The error is ResourceNotFoundException, which suggests that your dev-lock-table does not exist.
Terraform does not create this table for you; it must already exist before you can use it. From the docs: set the dynamodb_table field to the name of an existing DynamoDB table.
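If you want to manage that table with Terraform as well, do it in a separate bootstrap configuration, since the backend that uses the table cannot also create it. A minimal sketch:
resource "aws_dynamodb_table" "terraform_lock" {
  name         = "dev-lock-table" # must match dynamodb_table in the backend block
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"         # the backend requires a string partition key named LockID

  attribute {
    name = "LockID"
    type = "S"
  }
}
Alternatively, create the same table once through the console or the AWS CLI and simply reference it from the backend block.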
| Terraform | 64,149,876 | 19 |
I have a terraform configuration which needs to:
Create a lambda
Invoke the lambda
Iterate on the lambda's json result which returns an array and create a CloudWatch event rule per entry in the array
The relevant code looks like:
Create lambda code...
data "aws_lambda_invocation" "run_lambda" {
function_name = "${aws_lambda_function.deployed_lambda.function_name}"
input = <<JSON
{}
JSON
depends_on = [aws_lambda_function.deployed_lambda]
}
resource "aws_cloudwatch_event_rule" "aws_my_cloudwatch_rule" {
for_each = {for record in jsondecode(data.aws_lambda_invocation.run_lambda.result).entities : record.entityName => record}
name = "${each.value.entityName}-event"
description = "Cloudwatch rule for ${each.value.entityName}"
schedule_expression = "cron(${each.value.cronExpression})"
}
The problem is that when I run it, I get:
Error: Invalid for_each argument
on lambda.tf line 131, in resource "aws_cloudwatch_event_rule" "aws_my_cloudwatch_rule":
131: for_each = {for record in jsondecode(data.aws_lambda_invocation.aws_lambda_invocation.result).entities : record.entityName => record}
The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.
I've read a bunch of posts on the problem but couldn't find a workaround.
The problem is that Terraform needs to know the size of the array returned by the lambda during the planning phase, before the lambda has even been created.
What is the best approach to solving such a task?
Since it is run as part of a CI/CD pipeline I prefer a solution that doesn't include the "-target" flag.
| One possibility is to reconsider for_each and use count instead, where appropriate. for_each has some significant limitations, and I ran into something similar (it feels like a major bug to me, but it is documented as intended behavior).
Suppose I am deploying three VMs and want to attach them to a load balancer:
resource "aws_instance" "xxx-IIS-004" {
ami = var.ami["Windows Server 2019"]
instance_type = var.depoy_lowcost ? var.default_instance_type : "m5.2xlarge"
count = "3"
...
When I try to use for_each, I get The “for_each” value depends on resource attributes that cannot be determined... or Tuple error.
Fails:
resource "aws_elb_attachment" "attachments_004" {
depends_on = [ aws_instance.xxx-IIS-004 ]
elb = data.aws_elb.loadBalancer.id
for_each = aws_instance.xxx-IIS-004[*]
instance = each.value.id
}
Works (with a join/split workaround):
locals {
att_004 = join("_", aws_instance.xxx-IIS-004[*].id )
}
resource "aws_elb_attachment" "attachments_004" {
depends_on = [ aws_instance.xxx-IIS-004 ]
elb = data.aws_elb.loadBalancer.id
count = length( aws_instance.xxx-IIS-004 )
instance = split("_", local.att_004)[count.index]
}
| Terraform | 63,768,921 | 19 |
I am having a hard time figuring out how to make an output for each target group resource that this code creates.
I'd like to be able to reference each one individually in other modules. It sounds like for_each stores it as a map, so my question is how would I get the arn for targetgroup1 and targetgroup2?
Terraform normally refers to outputs by resource name, so I am struggling with that in this scenario and also how to refer to these individual arns.
Would I also need to work the outputs into the for_each or could I drop it into the output.tf file?
locals {
target_groups_beta = {
targetgroup1 = {
name = "example",
path = "/",
environment = "Beta"
}
targetgroup2 = {
name = "example2",
path = "/",
environment = "Beta"
}
}
}
resource "aws_lb_target_group" "target-group" {
for_each = local.target_groups_beta
name = "example-${each.value.name}-"
port = 80
protocol = "HTTP"
vpc_id = var.vpc-id
deregistration_delay = 5
tags = {
Environment = "${each.value.environment}"
}
health_check{
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 10
interval = 15
path = each.value.path
}
}
I receive the following error when trying to do it in the output.tf file without a key value, but when I input one such as value = "${aws_lb_target_group.target-group[0].arn}" it says it's invalid. Error without key value below:
Error: Missing resource instance key
on modules\targetgroups\output.tf line 2, in output "tg_example_beta":
2: value = "${aws_lb_target_group.target-group.arn}"
Because aws_lb_target_group.target-group has "for_each" set, its attributes
must be accessed on specific instances.
For example, to correlate with indices of a referring resource, use:
aws_lb_target_group.target-group[each.key]
| The aws_lb_target_group.target-group generated this way will be a map whose keys are targetgroup1 and targetgroup2.
Therefore, to get the individual target group details you can do:
output "target-group1-arn" {
value = aws_lb_target_group.target-group["targetgroup1"].arn
}
To return both as a map:
output "target-groups-arn-alternatice" {
value = {for k, v in aws_lb_target_group.target-group: k => v.arn}
}
target-groups-arn-alternative = {
"targetgroup1" = "arn:aws:elasticloadbalancing:us-east-1:xxxx:targetgroup/example-example/285b26e15221b113"
"targetgroup2" = "arn:aws:elasticloadbalancing:us-east-1:xxxx:targetgroup/example-example2/075bd58359e4c4b2"
}
To return both as a list (order will be same as for keys function):
output "target-groups-arn" {
value = values(aws_lb_target_group.target-group)[*].arn
}
target-groups-arn = [
"arn:aws:elasticloadbalancing:us-east-1:xxxx:targetgroup/example-example/285b26e15221b113",
"arn:aws:elasticloadbalancing:us-east-1:xxxx:targetgroup/example-example2/075bd58359e4c4b2",
]
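From another configuration that calls this module you can then reference whichever output you exposed (the module block name and inputs below are illustrative; use your own):
module "targetgroups" {
  source = "./modules/targetgroups" # hypothetical path
  vpc-id = var.vpc-id
}

# single ARN:        module.targetgroups.target-group1-arn
# ARN from the map:  module.targetgroups.target-groups-arn-alternative["targetgroup1"]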
| Terraform | 63,627,282 | 19 |
| I'm writing a sort of wrapper module around azurerm_storage_account.
azurerm_storage_account has optional block
static_website {
index_document = string
error_404_document = string
}
I want to set it based on a variable and I'm not really sure how I can do that. Conditional operators don't really work for blocks (e.g. static_website = var.disable ? null : { .. }).
Or do blocks work in such a way that setting index_document and error_404_document to null would be the same as not setting the static_website block at all?
azurerm@2.x
TF@0.12.x
| I think you can use a dynamic block for that. Basically, when disable is true, no static_website block is created; otherwise exactly one static_website block is constructed.
For example, the modified code could be:
dynamic "static_website" {
for_each = var.disable == true ? toset([]) : toset([1])
content {
index_document = string
error_404_document = string
}
}
You could also use a splat expression to check whether disable has a value or is null; var.disable[*] evaluates to an empty tuple when the variable is null and to a one-element tuple otherwise, so the block is rendered only when a value is set:
dynamic "static_website" {
for_each = var.disable[*]
content {
index_document = string
error_404_document = string
}
}
In the above examples, you may need to adjust conditions based on what values var.disable can actually have.
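Another common wrapper-module pattern is to expose the whole block as a single optional object variable instead of a boolean flag. A sketch, with made-up variable and attribute names:
# variable name is illustrative; match it to your module's interface
variable "static_website" {
  type = object({
    index_document     = string
    error_404_document = string
  })
  default = null
}

dynamic "static_website" {
  for_each = var.static_website[*]
  content {
    index_document     = static_website.value.index_document
    error_404_document = static_website.value.error_404_document
  }
}
Callers then either omit the variable entirely (no block is generated) or pass both documents in one object.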
| Terraform | 63,592,602 | 19 |
| I want to assign multiple IAM roles to a single service account through Terraform. I prepared a TF file to do that, but it has an error. With a single role the assignment succeeds, but with multiple IAM roles it fails.
data "google_iam_policy" "auth1" {
binding {
role = "roles/cloudsql.admin"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
role = "roles/secretmanager.secretAccessor"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
role = "roles/datastore.owner"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
role = "roles/storage.admin"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
}
}
How can I assign multiple roles to a single service account?
| I did something like this
resource "google_project_iam_member" "member-role" {
for_each = toset([
"roles/cloudsql.admin",
"roles/secretmanager.secretAccessor",
"roles/datastore.owner",
"roles/storage.admin",
])
role = each.key
member = "serviceAccount:${google_service_account.service_account_1.email}"
  project = var.my_project_id # assumes a project ID variable; use your own project reference
}
Authoritative vs non-Authoritative
Pay attention to which of the resources you are using.
google_project_iam_policy - Authoritative. It sets the project's entire IAM policy, replacing any bindings that are not declared in it. Use at most once per project.
google_project_iam_binding - Authoritative for a given role. It replaces all other members of that role, including ones granted outside Terraform. Use at most once per role.
google_project_iam_member - Non-authoritative. It only adds the specified member to the role, so you can use it many times in the same workspace if that better organizes your code.
Read here: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google_project_iam
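For contrast, the same grant expressed with the authoritative per-role resource would look roughly like this (a sketch only; Terraform would then own the full membership list for that role in the project):
resource "google_project_iam_binding" "cloudsql_admin" {
  project = var.my_project_id
  role    = "roles/cloudsql.admin"

  members = [
    "serviceAccount:${google_service_account.service_account_1.email}",
  ]
}
Stick with google_project_iam_member unless you really want Terraform to manage every member of those roles.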
| Terraform | 61,661,116 | 19 |