Dataset columns: question (string, 11 to 28.2k chars), answer (string, 26 to 27.7k chars), tag (string, 130 classes), question_id (int64, 935 to 78.4M), score (int64, 10 to 5.49k)
I am developing several apps, some of which depend on one of the others. I see that pip can create bundles. I have all of my projects' source in one directory. How do I create a bundle for these projects and then install it on another Ubuntu system? I should probably use virtualenv. Is it possible to have one virtualenv for all of them?
To create a bundle, use something like:

pip bundle <name of bundle>.pybundle -r requirements.txt

where requirements.txt contains your list of apps to install. Or you can create a bundle with a single app (why?):

pip bundle <name of bundle>.pybundle <my app>

Then, on the other system, to install your bundle, you use:

pip install <name of bundle>.pybundle

Note the .pybundle extension for your bundle - you need it for pip to recognise the file; installing the bundle without an extension didn't work for me. However, pip bundle was deprecated in the 1.4 release and will be removed in 1.5, see the changelog. Cheers, Victor
Distribution
9,093,124
18
I understand that three-phase commit was designed to solve a problem with two-phase commit: if the coordinator and a cohort fail at the same time during the second phase, it is impossible to know whether the coordinator had decided on a commit. Apparently three-phase commit aims to solve this problem by adding an extra phase. But don't you face the exact same problem during the third phase, if the coordinator and a cohort fail?
In 3PC it is possible to figure out the failed coordinator's decision by querying the remaining active cohorts. If any active cohort is in the pre-commit state, that means all of them agreed to commit (otherwise the coordinator would not have sent pre-commit), and we need to commit the rest of the cohorts, because the failed ones might have committed already. If none of the cohorts is in the pre-commit state, we can assume the coordinator has not sent 'commit' to any cohort, so no side effects have taken place and we can abort. Here's a good explanation: http://the-paper-trail.org/blog/consensus-protocols-three-phase-commit/
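A minimal sketch of that recovery rule (the state names are hypothetical, not taken from the linked article): given the states reported by the surviving cohorts, commit if anyone reached pre-commit or beyond, otherwise abort.

def recover_decision(active_cohort_states):
    # If any surviving cohort reached pre-commit, every cohort must have voted
    # yes, and a failed cohort may already have committed, so we must commit.
    if any(s in ("PRE_COMMIT", "COMMITTED") for s in active_cohort_states):
        return "COMMIT"
    # Otherwise the coordinator cannot have sent pre-commit to anyone,
    # so no cohort has committed yet and it is safe to abort.
    return "ABORT"

# Example: one survivor already in pre-commit forces a commit.
print(recover_decision(["PRE_COMMIT", "PREPARED"]))  # COMMIT
print(recover_decision(["PREPARED", "PREPARED"]))    # ABORT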
Distribution
11,209,564
18
I'm currently using 1 + (int)(rand() * 999.0 / RAND_MAX) to generate a random number between 1 and 999 inclusive but the two and one digit numbers don't occur as often as the three digit numbers. How can I fix this? Note that although the original code gives a range of 0 to 999 inclusive, I actually want a range of 1 to 999 inclusive.
Your observation that one-digit numbers don't occur as often as two- and three-digit numbers is not surprising. There are only 9 one-digit numbers (not including zero), but there are 90 two-digit ones and 900 three-digit ones, so a uniform random number generator will draw numbers with those frequencies. To generate random numbers in the range [1, 999] such that the probability of their having 1, 2, or 3 digits is equal, use your favourite generator to produce a random number p, say, in the range [0, 1) (see the new random library facilities in C++ to do that), and transform it using std::pow(1000, p). You should note that the resulting distribution will not be piecewise uniform: that is to say, the probability of drawing a number with a certain number of digits is not the same as the probability of drawing any other number with that number of digits. But it does have a continuous and differentiable cumulative distribution function, which can be important mathematically. (For the mathematically inclined, the transformation I'm applying is the quantile function of the distribution that the OP needs.)
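A quick numerical check of that idea (shown in Python for brevity, even though the question is about C++):

import random
from collections import Counter

def random_1_to_999():
    # p is uniform on [0, 1); 1000**p is then log-uniform on [1, 1000),
    # so each digit count (1, 2 or 3 digits) is hit with probability ~1/3.
    p = random.random()
    return int(1000 ** p)

counts = Counter(len(str(random_1_to_999())) for _ in range(30000))
print(counts)  # roughly 10000 draws each for 1-, 2- and 3-digit numbers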
Distribution
37,073,369
18
A distribution is beta-binomial if p, the probability of success, in a binomial distribution has a beta distribution with shape parameters α > 0 and β > 0. The shape parameters define the probability of success. I want to find the values for α and β that best describe my data from the perspective of a beta-binomial distribution. My dataset players consists of data about the number of hits (H), the number of at-bats (AB) and the conversion (H / AB) of a lot of baseball players. I estimate the PDF with the help of the answer of JulienD in Beta Binomial Function in Python:

from scipy.special import beta
from scipy.misc import comb

pdf = comb(n, k) * beta(k + a, n - k + b) / beta(a, b)

Next, I write a log-likelihood function that we will minimize.

def loglike_betabinom(params, *args):
    """
    Negative log likelihood function for betabinomial distribution
    :param params: list for parameters to be fitted.
    :param args: 2-element array containing the sample data.
    :return: negative log-likelihood to be minimized.
    """
    a, b = params[0], params[1]
    k = args[0]  # the conversion rate
    n = args[1]  # the number of at-bats (AB)
    pdf = comb(n, k) * beta(k + a, n - k + b) / beta(a, b)
    return -1 * np.log(pdf).sum()

Now, I want to write a function that minimizes loglike_betabinom:

from scipy.optimize import minimize

init_params = [1, 10]
res = minimize(loglike_betabinom, x0=init_params,
               args=(players['H'] / players['AB'], players['AB']),
               bounds=bounds, method='L-BFGS-B',
               options={'disp': True, 'maxiter': 250})
print(res.x)

The result is [-6.04544138 2.03984464], which implies that α is negative, which is not possible. I based my script on the following R snippet. They get [101.359, 287.318].

ll <- function(alpha, beta) {
  x <- career_filtered$H
  total <- career_filtered$AB
  -sum(VGAM::dbetabinom.ab(x, total, alpha, beta, log=TRUE))
}

m <- mle(ll, start = list(alpha = 1, beta = 10), method = "L-BFGS-B",
         lower = c(0.0001, 0.1))
ab <- coef(m)

Can someone tell me what I am doing wrong? Help is much appreciated!
One thing to pay attention to is that comb(n, k) in your log-likelihood might not be well-behaved numerically for the values of n and k in your dataset. You can verify this by applying comb to your data and seeing whether infs appear. One way to amend things could be to rewrite the negative log-likelihood as suggested in https://stackoverflow.com/a/32355701/4240413, i.e. as a function of logarithms of Gamma functions, as in:

from scipy.special import gammaln
import numpy as np

def loglike_betabinom(params, *args):
    a, b = params[0], params[1]
    k = args[0]  # the OVERALL conversions
    n = args[1]  # the number of at-bats (AB)
    logpdf = gammaln(n+1) + gammaln(k+a) + gammaln(n-k+b) + gammaln(a+b) - \
             (gammaln(k+1) + gammaln(n-k+1) + gammaln(a) + gammaln(b) + gammaln(n+a+b))
    return -np.sum(logpdf)

You can then minimize the log-likelihood with:

from scipy.optimize import minimize

init_params = [1, 10]
# note that I am putting 'H' in the args
res = minimize(loglike_betabinom, x0=init_params,
               args=(players['H'], players['AB']),
               method='L-BFGS-B', options={'disp': True, 'maxiter': 250})
print(res)

and that should give reasonable results. You could check How to properly fit a beta distribution in python? for inspiration if you want to rework your code further.
Distribution
54,505,173
18
In the iPhone Developer Program Portal, there's a video demonstrating how to create a development certificate and assign a private key. Now I have finished the development process and am starting to distribute. I have created a "Distribution Certificate", but how do I assign a private key to this certificate? I get the "CodeSign error: code signing identity '...' does not match any code-signing certificate in your keychain" error when compiling. I would be glad to know whether I need to assign a private key to the distribution certificate, and how. Please note that I am NOT talking about the development certificate here, thanks.
As no one has updated or answered this question since Nathan pointed out that the original answer link is unavailable (point number one on posting on Stack Overflow: "Please be sure to answer the question"), here is an answer from my experience; I hope it helps someone. For this specific question (dev/distribution cert installed but no private key) the answer is a choice of two alternatives:

1. Create a new certificate identity via the Apple developer portal. In this case you'll need to log into the Apple developer portal and use the assistant; as part of this you will need to create and upload a CSR (Certificate Signing Request), which will also create the public and private keys on the machine. Apple signs the request and voila, a signed public/private certificate pair. Reference

2. Export the private key from the machine originally used to create the certificate. This case is when you're signing with an identity previously created, probably on an old machine, or if (as in my case) you're developing for a client that has their IDs already created. If you can't access the machine originally used to create the certificate identity, you'll have to create a new identity as per the guide/assistant. To do this, you will need to export your private key (a .p12 file), or public/private key pair, via Keychain Access, and install the private key or key pair on the machine you're developing on. Here's a guide I found on Stack Overflow. Ref: http://iphonedevsdk.com/forum/iphone-sdk-development/11498-problems-exporting-importing-p12-private-key-for-developing-on-another-mac.html

PS: Be warned, I am performing the latter option right now, and while evidence suggests this will work, it is untested by me as of yet. If it doesn't work I will re-reply!
Distribution
688,809
17
I have some data points and their mean point. I need to find whether those data points (with that mean) follows a Gaussian distribution. Is there a function in MATLAB which can do that kind of a test? Or do I need to write a test of my own? I tried looking at different statistical functions provided by MATLAB. I am very new to MATLAB so I might have overlooked the right function. cheers
Check this documentation page on all available hypothesis tests. From those, for your purpose you can use:

- Chi-square goodness-of-fit test
- Lilliefors test
- z-test
- t-test
- Kolmogorov-Smirnov test
- ... among others

You can also use some visual tests like:

- hist
- normplot
- cdfplot
Distribution
1,882,944
17
To plot a normal distribution curve in R we can use:

(x = seq(-4, 4, length=100))
y = dnorm(x)
plot(x, y)

If dnorm calculates y as a function of x, does R have a function that calculates x as a function of y? If not, what is the best way to approach this?
What dnorm() is doing is giving you a probability density function. If you integrate over that, you would have a cumulative distribution function (which is given by pnorm() in R). The inverse of the CDF is given by qnorm(); that is the standard way these things are conceptualized in statistics.
Distribution
19,589,191
17
Google uses bsdiff and Courgette for patching binary files like the Chrome distribution. Do any similar tools exist for patching jar files? I am updating jar files remotely over a bandwidth-limited connection and would like to minimize the amount of data sent. I do have some control over the client machine to some extent (i.e. I can run scripts locally) and I am guaranteed that the target application will not be running at the time. I know that I can patch java applications by putting updated class files in the classpath, but I would prefer a cleaner method for doing updates. It would be good if I could start with the target jar file, apply a binary patch, and then wind up with an updated jar file that is identical (bitwise) to the new jar (from which the patch was created).
Try the javaxdelta project on SourceForge. It should allow you to create patches and to apply them. [EDIT] This tool doesn't exist yet. Open the JAR file with the usual tools and then use javaxdelta to create one patch per entry in the JAR. ZIP them up and copy them onto the server. On the other side, you need to install a small executable JAR which takes the patch and the JAR file as arguments and applies the patch. You will have to write this one too, but that shouldn't take much more than a few hours.
Distribution
2,546,581
16
Possible Duplicate: Making a standard normal distribution in R Using R, draw a standard normal distribution. Label the mean and 3 standard deviations above and below the (10) mean. Include an informative title and labels on the x and y axes. This is a homework problem. I'm not sure how to get going with the code. How should I get started?
I am pretty sure this is a duplicate. Anyway, have a look at the following piece of code:

x <- seq(5, 15, length=1000)
y <- dnorm(x, mean=10, sd=3)
plot(x, y, type="l", lwd=1)

I'm sure you can work the rest out yourself; for the title you might want to look for something called main=, and the y-axis labels are also up to you. If you want to see more of the tails of the distribution, why don't you try playing with the seq(5, 15, ) section? Finally, if you want to know more about what dnorm is doing, I suggest you look here.
Distribution
10,543,443
16
I'm writing a function which I want to accept a distribution as a parameter. Let's say the following:

#include <random>
#include <iostream>
using namespace std;

random_device rd;
mt19937 gen(rd());

void print_random(uniform_real_distribution<>& d) {
    cout << d(gen);
}

Now is there a way to generalise this code, in a short way, in C++ such that it would accept all distributions and distributions only (otherwise the compiler should complain)? Edit: To clarify, the solution should also be able to accept only a subset of all distributions (which would have to be pre-specified). I would for example accept the ability to define a type as a collection of allowed types, but it would be even better if there is already a type which has this property for distributions.
There is no such trait in the standard library. You can just write something like

template<typename T>
struct is_distribution : public std::false_type {};

and specialize it for each type that is a distribution:

template<typename T>
struct is_distribution<std::uniform_int_distribution<T> > : public std::true_type {};

Then just

template<typename Distr>
typename std::enable_if<is_distribution<Distr>::value>::type
print_random(Distr& d)
{
    cout << d(gen);
}

Also, you can use something like concepts-lite (but with decltypes, since that feature is not available now); it may not work in some cases. The standard has rules that any distribution should follow (n3376 26.5.1.6/Table 118).

template<typename D>
constexpr auto is_distribution(D& d) ->
    decltype(std::declval<typename D::result_type>(),
             std::declval<typename D::param_type>(),
             d.reset(),
             d.param(),
             d.param(std::declval<typename D::param_type>()),
             true);

template<typename D>
auto print_random(D& d) -> decltype(is_distribution(d), void())
{
}

If you just want to check that the type is callable with some generator and that this call returns result_type, you can simplify the function:

template<typename D>
auto is_distribution(D& d) ->
    decltype(std::is_same<typename D::result_type,
                          decltype(d(*static_cast<std::mt19937*>(0)))>::value);

All these things will become much simpler when concepts-lite is available in the standard.
Distribution
27,482,972
16
I'm developing an iOS app for iPhone and iPad. It runs great on the simulators and actual devices. It installs without error using both iTunes and the iPhone Configuration Utility. I cannot, however, seem to get wireless distribution to work properly. Sanity checks: I have an Apple developer license. I have a valid developer certificate from the Provisioning Portal. I have added my device's UDID in the Provisioning Portal. I have created a valid AppID in the Provisioning Portal. I have created a distribution provisioning profile. Developer profiles don't seem to work for this. I have clicked the proper aforementioned device to be active for the given provisioning profile. I have downloaded and installed the certificate and provisioning profile. A release build installs perfectly with both iTunes and the iPhone Configuration Utility. For wireless distribution, I have followed Apple's instructions: I have proper .ipa, .mobileprovision, and .plist files setup and hosted on a LAMP web server (with the proper MIME types added per Apple's instructions). The .plist file is properly formatted. The URLs to the .mobileprovision and .plist files are correct. The .mobileprovision file downloads and installs properly via an iOS device's Safari browser. The iOS device's Safari browser properly processes the .plist file, finds the .ipa file, and prompts for install with a message "[my domain name] would like to install '[my app name]'". I click the "Install" soft button. Installation commences with the typical grayed version of the application icon and blue progress bar that proceeds from left to right. The icon's text is at first "Loading", and then changes to "Installing". After several seconds of "Installing", an alert is displayed: "Unable to download '[my app name]'". I am prompted with "Done" and "Retry" soft buttons. "Retry" of course just repeats the process and fails again. "Done" exits installation, and after a moment, the app icon disappears. Just to be clear, this installs PERFECTLY via iTunes and the iPhone Configuration Utility. I have read countless blogs and articles on how to get this working, but no one seems to have definitive answers. Is there ANYONE that can think of what is going wrong here??? Thanks in advance. Pulling my hair out.
NovaJoe -- I was pretty discouraged to review your link, as it does appear to read that you need an Enterprise Developer license... I think I figured it out. Read the first paragraph and first bullet point: http://developer.apple.com/library/ios/#featuredarticles/FA_Wireless_Enterprise_App_Distribution/Introduction/Introduction.html This would explain why all wired deployment methods work but wireless distribution always fails: an Enterprise developer account is required, it seems. >.< But, this is not the case! I was able to successfully deploy by completing the following:

1. Remove the app archive from the Xcode Organizer.
2. In the Xcode project: Clean / Clean All Targets, then Build and Archive with the developer development certificate.
3. In Organizer, select the re-added application archive, and select Share...
4. For Identity, pick the same development cert used to build the app and choose 'Save to Disk'.

You now will have an .ipa file that will work, but in order for the remote install to register, you will still need the plist file that is generated (and pointed to the new .ipa) for the process to initiate. So to summarize -- follow the enterprise process, then replace the generated enterprise .ipa with the non-enterprise .ipa.
Distribution
4,742,959
15
I have a vector of count data that is strongly overdispersed and zero-inflated. The vector looks like this:

i.vec=c(0,63,1,4,1,44,2,2,1,0,1,0,0,0,0,1,0,0,3,0,0,2,0,0,0,0,0,2,0,0,0,0,
        0,0,0,0,0,0,0,0,6,1,11,1,1,0,0,0,2)
m=mean(i.vec)   # 3.040816
sig=sd(i.vec)   # 10.86078

I would like to fit a distribution to this, which I strongly suspect will be a zero-inflated Poisson (ZIP). But I need to perform a significance test to demonstrate that a ZIP distribution fits the data. If I had a normal distribution, I could do a chi-square goodness-of-fit test using the function goodfit() in the package vcd, but I don't know of any tests that I can perform for zero-inflated data.
Here is one approach:

# LOAD LIBRARIES
library(fitdistrplus)  # fits distributions using maximum likelihood
library(gamlss)        # defines pdf, cdf of ZIP

# FIT DISTRIBUTION (mu = mean of poisson, sigma = P(X = 0))
fit_zip = fitdist(i.vec, 'ZIP', start = list(mu = 2, sigma = 0.5))

# VISUALIZE TEST AND COMPUTE GOODNESS OF FIT
plot(fit_zip)
gofstat(fit_zip, print.test = T)

Based on this, it does not look like ZIP is a good fit.
Distribution
7,157,158
15
I am facing some issues related to the iOS Developer Program and the iOS Enterprise Program. One of my clients asked me to suggest one of them. Please answer my questions related to the iOS Enterprise Program:

1. If I purchase an iOS Enterprise account, when does it become available for in-house application distribution?
2. How many devices do I have on which I can install my app?
3. Do I need the UDID of all devices? What if I want to add some new devices?
4. If it is the same as Ad Hoc distribution, then what is the expiry date of the Ad Hoc certificate?

Thanks
1. As soon as you sign the contracts, make the purchase, and Apple verifies all the information, you can create your distribution certificate and provisioning profile, sign your app with the certificate, and distribute your app with the profile.
2. There is no limit to the number of devices.
3. No, you do not need to get the UDIDs of the devices for Enterprise distribution. Just send the app and the distribution provisioning profile to the user who has the new device.
4. The certificate is used to sign the app - to my knowledge there is not an "ad-hoc" vs. "distribution" certificate.
Distribution
7,306,441
15
I am working on a simple desktop java application. I would like to make it as seamless to install for end users as possible. E.g. similar to how Minecraft is distributed - a simple executable for OS X and an EXE file for Windows. What tool should I use?
Users of your Java app must have the JRE installed in order to run it. You can either tell them to install Java first, or distribute the JRE with your app, as Processing does. Note, however, that your packaged program will be heavy if you include the JRE with it, and if you do that, users will need to download the appropriate package for their platform.

Executable Java wrappers: They take your Java app as input and wrap it in an executable (for a specified platform). You can customize them as you like, and if the user doesn't have Java installed, the download page will open. Some examples are Launch4J, JSmooth and Jar2EXE.

Installers: They are independent applications configured to copy your app files to the user's computer and (optionally) create a shortcut. Some installers are written in Java, so they're multiplatform; in this case, the installer is a .jar. Some others are platform-dependent, but you have the advantage that you don't need to wrap them. Java installers: IzPack, Packlet, PackJacket, Antigen, ...

Java Web Start: It's a Java feature that allows your users to easily run your apps. You give them a .jnlp file, they open it, and Java downloads the latest version of your app and runs it. No packaging troubles! See the complete list of resources here.
Distribution
7,915,315
15
I want to distribute two versions of my app, the stable branch as well as the current development trunk, using TestFlight. And, if possible, I want to invite the testers only once. Can I have two versions of one app in one TestFlight team? Or maybe two apps with different names? Or can I create a second team and link it to the first one, or copy the testers over?
Unfortunately I think there is no nice way to do that. Your options are:

1. Two different TestFlight teams. You'll have to invite people to both teams. But TestFlight is clever: if it already knows about a user in another team who is included in the provisioning profile of the IPA you upload, then when you select that they can access the build, it will auto-invite the user.
2. Use different app IDs for your stable and development branches.

I would personally go for the first option.
Distribution
9,711,061
15
I would like to compute the Earth Mover Distance between two 2D arrays (these are not images). Right now I go through two libraries: scipy (https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.wasserstein_distance.html) and pyemd (https://pypi.org/project/pyemd/).

#define a sampeling method
def sampeling2D(n, mu1, std1, mu2, std2):
    #sample from N(0, 1) in the 2D hyperspace
    x = np.random.randn(n, 2)
    #scale N(0, 1) -> N(mu, std)
    x[:,0] = (x[:,0]*std1) + mu1
    x[:,1] = (x[:,1]*std2) + mu2
    return x

#generate two sets
Y1 = sampeling2D(1000, 0, 1, 0, 1)
Y2 = sampeling2D(1000, -1, 1, -1, 1)

#compute the distance
distance = pyemd.emd_samples(Y1, Y2)

While the scipy version doesn't accept 2D arrays and returns an error, the pyemd method returns a value. But the documentation says that it accepts only 1D arrays, so I think that the output is wrong. How can I calculate this distance in this case?
So if I understand you correctly, you're trying to transport the sampling distribution, i.e. calculate the distance for a setup where all clusters have weight 1. In general, you can treat the calculation of the EMD as an instance of minimum cost flow, and in your case, this boils down to the linear assignment problem: Your two arrays are the partitions in a bipartite graph, and the weights between two vertices are your distance of choice. Assuming that you want to use the Euclidean norm as your metric, the weights of the edges, i.e. the ground distances, may be obtained using scipy.spatial.distance.cdist, and in fact SciPy provides a solver for the linear sum assignment problem as well in scipy.optimize.linear_sum_assignment (which recently saw huge performance improvements which are available in SciPy 1.4. This could be of interest to you, should you run into performance problems; the 1.3 implementation is a bit slow for 1000x1000 inputs). In other words, what you want to do boils down to

from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

d = cdist(Y1, Y2)
assignment = linear_sum_assignment(d)
print(d[assignment].sum() / n)

It is also possible to use scipy.sparse.csgraph.min_weight_bipartite_full_matching as a drop-in replacement for linear_sum_assignment; while made for sparse inputs (which yours certainly isn't), it might provide performance improvements in some situations. It might be instructive to verify that the result of this calculation matches what you would get from a minimum cost flow solver; one such solver is available in NetworkX, where we can construct the graph by hand:

import networkx as nx

G = nx.DiGraph()

# Represent elements in Y1 by 0, ..., 999, and elements in
# Y2 by 1000, ..., 1999.
for i in range(n):
    G.add_node(i, demand=-1)
    G.add_node(n + i, demand=1)

for i in range(n):
    for j in range(n):
        G.add_edge(i, n + j, capacity=1, weight=d[i, j])

At this point, we can verify that the approach above agrees with the minimum cost flow:

In [16]: d[assignment].sum() == nx.algorithms.min_cost_flow_cost(G)
Out[16]: True

Similarly, it's instructive to see that the result agrees with scipy.stats.wasserstein_distance for 1-dimensional inputs:

from scipy.stats import wasserstein_distance

np.random.seed(0)
n = 100

Y1 = np.random.randn(n)
Y2 = np.random.randn(n) - 2
d = np.abs(Y1 - Y2.reshape((n, 1)))

assignment = linear_sum_assignment(d)
print(d[assignment].sum() / n)       # 1.9777950447866477
print(wasserstein_distance(Y1, Y2))  # 1.977795044786648
Distribution
57,562,613
15
Summary: I recently had a conversation with the creator of a framework that one of my applications depends on. During that conversation he mentioned, as a sort of aside, that it would make my life simpler if I just bundled his framework with my application and delivered to the end user a version that I knew was consistent with my code. Intuitively I have always tried to avoid doing this and, in fact, I have taken pains to segment my own code so that portions of it could be redistributed without taking the entire project (even when there was precious little chance anyone would ever reuse any of it). However, after mulling it over for some time I have not been able to come up with a particularly good reason why I do this. In fact, now that I have thought about it, I'm seeing a pretty compelling case to bundle all my smaller dependencies. I have come up with a list of pros and cons and I'm hoping someone can point out anything that I'm missing.

Pros:
- Consistency of versions means easier testing and troubleshooting.
- Application may reach a wider audience since there appear to be fewer components to install.
- Small tweaks to the dependency can more easily be made downstream and delivered with the application, rather than waiting for them to percolate into the upstream code base.

Cons:
- More complex packaging process to include dependencies.
- User may end up with multiple copies of a dependency on their machine.
- Per bortzmeyer's response, there are potential security concerns with not being able to upgrade individual components.

Notes: For reference, my application is written in Python and the dependencies I'm referencing are "light", by which I mean small and not in very common use. (So they do not exist on all machines or even in all repositories.) And when I say "package with" my application, I mean distribute under my own source tree, not install with a script that resides inside my package, so there would be no chance of conflicting versions. I am also developing solely on Linux, so there are no Windows installation issues to worry about. All that being said, I am interested in hearing any thoughts on the broader (language-independent) issue of packaging dependencies as well. Is there something I am missing, or is this an easy decision that I am just over-thinking?

Addendum 1: It is worth mentioning that I am also quite sensitive to the needs of downstream packagers. I would like it to be as straightforward as possible to wrap the application up in a distribution-specific Deb or RPM.
I favor bundling dependencies, if it's not feasible to use a system for automatic dependency resolution (i.e. setuptools), and if you can do it without introducing version conflicts. You still have to consider your application and your audience; serious developers or enthusiasts are more likely to want to work with a specific (latest) version of the dependency. Bundling stuff in may be annoying for them, since it's not what they expect. But, especially for end-users of an application, I seriously doubt most people enjoy having to search for dependencies. As far as having duplicate copies goes, I would much rather spend an extra 10 milliseconds downloading some additional kilobytes, or spend whatever fraction of a cent on the extra meg of disk space, than spend 10+ minutes searching through websites (which may be down), downloading, installing (which may fail if versions are incompatible), etc. I don't care how many copies of a library I have on my disk, as long as they don't get in each others' way. Disk space is really, really cheap.
Distribution
598,299
14
I'm working on packaging a small Python project as a zip or egg file so that it can be distributed. I've come across 2 ways to include the project's config files, both of which seem to produce identical results.

Method 1: Include this code in setup.py:

from distutils.core import setup

setup(name='ProjectName',
      version='1.0',
      packages=['somePackage'],
      data_files = [('config', ['config\propFiles1.ini',
                                'config\propFiles2.ini',
                                'config\propFiles3.ini'])]
      )

Method 2: Include this code in setup.py:

from distutils.core import setup

setup(name='ProjectName',
      version='1.0',
      packages=['somePackage']
      )

Then, create a MANIFEST.in file with this line in it:

include config\*

Is there any difference between the methods? Which one is preferred? I tend to lean towards the first because then no MANIFEST.in file is necessary at all. However, in the first method you have to specify each file individually while in the second you can just include the whole folder. Is there anything else I should be taking into consideration? What's the standard practice?
MANIFEST.in controls what files are put into the distribution zip file when you call python setup.py sdist. It does not control what is installed. data_files (or better package_data) controls what files are installed (and I think also makes sure files are included in the zip file). Use MANIFEST.in for files you won't install, like documentation, and package_data for files you use that aren't Python code (like an image or template).
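A small sketch of the package_data approach mentioned above (the package and file names are placeholders; this assumes the .ini files live inside the somePackage directory so they can be installed with it):

from distutils.core import setup

# Hypothetical layout: somePackage/config/*.ini shipped alongside the code.
setup(
    name='ProjectName',
    version='1.0',
    packages=['somePackage'],
    # Installed together with the package, unlike files listed only in MANIFEST.in.
    package_data={'somePackage': ['config/*.ini']},
)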
Distribution
2,968,701
14
Reading the docs from Apple, I need to create an ad-hoc distribution app, and to do so I need the Entitlements.plist. When I create a new entitlements file, the value "get-task-allow" is not present, so I added it by hand. Is that right? In the end the Entitlements.plist is this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>application-identifier</key>
    <string>$(AppIdentifierPrefix)$(CFBundleIdentifier)</string>
    <key>keychain-access-groups</key>
    <array>
        <string>$(AppIdentifierPrefix)$(CFBundleIdentifier)</string>
    </array>
    <key>get-task-allow</key>
    <true/>
</dict>
</plist>
Try, in Xcode, going to "New File" and in the wizard selecting "Code Signing". Select the "Entitlements" file type. Just name it "Entitlements.plist". Make sure get-task-allow is not checked for ad-hoc distribution. This is all you need to do, no need to "roll your own" :) (Xcode 3.2.2) UPDATE: Since Xcode 4.2 the Entitlements.plist was removed from the "New File" menu. You now find it under: Project's Targets > Summary, at the bottom of the page.
Distribution
3,021,569
14
Is it possible to (legally) sell WP7 apps outside of Microsoft's App Hub? Are there license restrictions to this? Is it ideal to even approach app distribution in such a manner (i.e., are there other sites/services that are easily accessible by WP7 users to find new apps).
Strictly speaking, I'm not aware of any restrictions in place to prevent exchange (commercial or otherwise) of xap files outside of the marketplace. However, to deploy the xap files the would-be users would have to have an active marketplace account. Even then, you're limited to 10 apps deployed outside the marketplace. These points make it impractical to reach a broader audience than developers outside of the marketplace. There are no sites specialising in this, perhaps as a result of it not being an ideal/worthwhile scenario.
Distribution
4,139,063
14
I need to generate 3 random numbers whose sum is equal to 1. My implementation does not give a uniform distribution. :(
Just get 3 random numbers and then calculate a factor which is 1 / [sum of your numbers]. Finally multiply each of the random numbers with that factor. The sum will be 1.
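A minimal sketch of that normalization step (in Python for illustration; note that this guarantees the sum is 1 but does not make the resulting triple uniformly distributed over all possible triples):

import random

# Draw three independent uniform numbers, then rescale so they sum to 1.
nums = [random.random() for _ in range(3)]
factor = 1.0 / sum(nums)
nums = [n * factor for n in nums]

print(nums, sum(nums))  # the three values now sum to 1 (up to float rounding)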
Distribution
5,563,808
14
I am trying to write a program to select a random name from the US Census last name list. The list format is:

Name      Weight  Cumulative  line
-----     -----   -----       -
SMITH     1.006   1.006       1
JOHNSON   0.810   1.816       2
WILLIAMS  0.699   2.515       3
JONES     0.621   3.136       4
BROWN     0.621   3.757       5
DAVIS     0.480   4.237       6

Assuming I load the data into a structure like:

class Name
{
    public string Name {get; set;}
    public decimal Weight {get; set;}
    public decimal Cumulative {get; set;}
}

What data structure would be best to hold the list of names, and what would be the best way to select a random name from the list while keeping the distribution of names the same as in the real world? I will only be working with the first 10,000 rows if it makes a difference to the data structure. I have tried looking at some of the other questions about weighted randomness, but I am having a bit of trouble turning theory into code. I do not know much about math theory, so I do not know if this is a "with or without replacement" random selection; I want the same name to be able to show up more than once, whichever that one means.
The "easiest" way to handle this would be to keep this in a list. You could then just use: Name GetRandomName(Random random, List<Name> names) { double value = random.NextDouble() * names[names.Count-1].Culmitive; return names.Last(name => name.Culmitive <= value); } If speed is a concern, you could store a separate array of just the Culmitive values. With this, you could use Array.BinarySearch to quickly find the appropriate index: Name GetRandomName(Random random, List<Name> names, double[] culmitiveValues) { double value = random.NextDouble() * names[names.Count-1].Culmitive; int index = Array.BinarySearch(culmitiveValues, value); if (index >= 0) index = ~index; return names[index]; } Another option, which is probably the most efficient, would be to use something like one of the C5 Generic Collection Library's tree classes. You could then use RangeFrom to find the appropriate name. This has the advantage of not requiring a separate collection
Distribution
7,366,838
14
This might have been asked lots of times, but still I couldn't find info on why are they needed. I use DEVELOPER prov profiles to test apps on my device, that makes sense. The Provisioning Portal explains prov profiles like this: A Provisioning Profile is a collection of digital assets that uniquely ties developers and devices to an authorized iOS Development Team and enables a device to be used for testing. By this logic they are only needed for testing, eg not for distribution. Do we need one to deploy the app on the AppStore?
Absolutely yes. The distribution profile is used for submission to the App Store. It does not have the 100 device limit that the development profiles have. From the Tools Workflow Guide: When you’re ready to share your app for user testing or for general distribution through the App Store, you need to create an archive of the app using a distribution provisioning profile and send it to app testers or submit it to iTunes Connect. This chapter shows how to perform these tasks.
Distribution
12,689,073
14
Can I use Inno Setup to import a .cer file (a certificate)? How can I do it? I need to create a certificate installer for Windows XP, Windows Vista and Windows 7.
Actually, CertMgr.exe is not available on all PCs and furthermore it does not appear to be redistributable (as hinted by @TLama); and besides, you don't even need it. CertUtil is available on every Windows machine (that I have tested) and works perfectly:

[Run]
Filename: "certutil.exe"; Parameters: "-addstore ""TrustedPublisher"" {app}\MyCert.cer"; \
  StatusMsg: "Adding trusted publisher..."
Distribution
12,754,314
14
I am looking to find the peaks in some gaussian smoothed data that I have. I have looked at some of the peak detection methods available but they require an input range over which to search and I want this to be more automated than that. These methods are also designed for non-smoothed data. As my data is already smoothed I require a much more simple way of retrieving the peaks. My raw and smoothed data is in the graph below. Essentially, is there a pythonic way of retrieving the max values from the array of smoothed data such that an array like a = [1,2,3,4,5,4,3,2,1,2,3,2,1,2,3,4,5,6,5,4,3,2,1] would return: r = [5,3,6]
There exists a built-in function argrelextrema that gets this task done:

import numpy as np
from scipy.signal import argrelextrema

a = np.array([1,2,3,4,5,4,3,2,1,2,3,2,1,2,3,4,5,6,5,4,3,2,1])

# determine the indices of the local maxima
max_ind = argrelextrema(a, np.greater)

# get the actual values using these indices
r = a[max_ind]  # array([5, 3, 6])

That gives you the desired output for r. As of SciPy version 1.1, you can also use find_peaks. Below are two examples taken from the documentation itself. Using the height argument, one can select all maxima above a certain threshold (in this example, all non-negative maxima; this can be very useful if one has to deal with a noisy baseline; if you want to find minima, just multiply your input by -1):

import matplotlib.pyplot as plt
from scipy.misc import electrocardiogram
from scipy.signal import find_peaks
import numpy as np

x = electrocardiogram()[2000:4000]
peaks, _ = find_peaks(x, height=0)
plt.plot(x)
plt.plot(peaks, x[peaks], "x")
plt.plot(np.zeros_like(x), "--", color="gray")
plt.show()

Another extremely helpful argument is distance, which defines the minimum distance between two peaks:

peaks, _ = find_peaks(x, distance=150)
# difference between peaks is >= 150
print(np.diff(peaks))
# prints [186 180 177 171 177 169 167 164 158 162 172]
plt.plot(x)
plt.plot(peaks, x[peaks], "x")
plt.show()
Distribution
35,282,456
14
Let's say I have this Hello.scala:

object HelloWorld {
  def main(args: Array[String]) {
    println("Hello, world!")
  }
}

I could run scalac to get HelloWorld.class and HelloWorld$.class. I can run it using 'scala -classpath . Hello', but I can't run 'java -cp . Hello'. Why is this? Isn't Scala interoperable with Java? Is there any way to run Scala's classes with Java? Why does scalac produce two class files? Does Scala have something like 'lein uber'? I mean, does Scala have a tool that generates one jar file for distribution purposes?

IntelliJ + Scala plugin: I could get a jar that contains everything needed to run the Scala file with IntelliJ + the Scala plugin; I think this is a better option than ProGuard.

ProGuard setup: Thanks to Moritz, I could make one jar file that can be run with java. This is the overall structure:

|-- classes
|   |-- HelloWorld$.class
|   |-- HelloWorld.class
|   `-- META-INF
|       `-- MANIFEST.MF
`-- your.pro

your.pro has the following contents:

-injar classes
-injar /Users/smcho/bin/scala/lib/scala-library.jar(!META-INF/MANIFEST.MF)
-outjar main.jar
-libraryjar /System/Library/Frameworks/JavaVM.framework/Versions/1.6/Classes/classes.jar
-dontwarn
-dontnote
-ignorewarnings
-optimizationpasses 2
-optimizations !code/allocation/variable
-keep,allowoptimization,allowshrinking class * { *; }
-keepattributes SourceFile,LineNumberTable
-keepclasseswithmembers public class HelloWorld {
    public static void main(java.lang.String[]);
}

MANIFEST.MF has the following setup (don't forget to include the trailing CR or blank line):

Main-Class: HelloWorld

I downloaded ProGuard 4.5.1 and put it in ~/bin/proguard4.5.1. Running ProGuard, I could make the jar (main.jar), and it works fine.

prosseek:classes smcho$ java -Xmx512m -jar /Users/smcho/bin/proguard4.5.1/lib/proguard.jar @your.pro
ProGuard, version 4.5.1
Reading program directory [/Users/smcho/Desktop/scala/proguard/classes/classes]
Reading program jar [/Users/smcho/bin/scala/lib/scala-library.jar] (filtered)
Reading library jar [/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Classes/classes.jar]
Preparing output jar [/Users/smcho/Desktop/scala/proguard/classes/main.jar]
Copying resources from program directory [/Users/smcho/Desktop/scala/proguard/classes/classes]
Copying resources from program jar [/Users/smcho/bin/scala/lib/scala-library.jar] (filtered)
prosseek:classes smcho$ java -jar main.jar
hello

or

java -cp main.jar HelloWorld

I uploaded the example zip file here, and the Scala source code here.
You need to add Scala to the classpath, e.g. -classpath scala-library.jar:. or by adding -Xbootclasspath/a:scala-library.jar to the VM arguments.

Addition: Sorry, did not see that last question. If you want to distribute single JAR files, many people use ProGuard to ship the classes needed from the scala-library.jar along with your classes in one jar.

Second edit: Assuming you have your .class files and your META-INF folder containing the MANIFEST.MF in a folder called classes, you can use the following ProGuard configuration (after adjusting the paths, e.g. you need rt.jar on Linux/Windows instead of classes.jar on Mac OS), saved e.g. as your.pro:

-injar classes
-injar /opt/local/share/scala-2.8/lib/scala-library.jar(!META-INF/MANIFEST.MF)
-outjar main.jar
-libraryjar /System/Library/Frameworks/JavaVM.framework/Versions/1.6/Classes/classes.jar
-dontwarn
-dontnote
-ignorewarnings
-optimizationpasses 2
-optimizations !code/allocation/variable
-keep,allowoptimization,allowshrinking class * { *; }
-keepattributes SourceFile,LineNumberTable
-keepclasseswithmembers public class your.Main {
    public static void main(java.lang.String[]);
}

Now you can create main.jar with:

java -Xmx512m -jar proguard.jar @your.pro

You may have to set -Xmx a bit higher. Some more info can be found on the SBT wiki.
Distribution
3,366,545
13
My client asked me to send him the app I am working on for review. So, I want to get the IPA file and mobile provisioning file from Xcode 4.2 to share my app so it can run on a real device. I have a paid Apple developer account. Please tell me the procedure to get it. Thanks in advance.
STEP 1: You need to follow the steps for Ad Hoc distribution. I think you need to log in with your credentials at the Apple Developer site. Once you are logged in, go through this link and read it step by step; I think this is the best solution you can get, as this documentation guide is provided by Apple: https://developer.apple.com/ios/manage/certificates/team/howto.action

This has multiple steps:

1. Generating a Certificate Signing Request
2. Submitting a Certificate Signing Request for Approval
3. Approving Certificate Signing Requests
4. Downloading and Installing Development Certificates
5. Saving your Private Key and Transferring to other Systems

I think if you read all these steps in the Apple documentation at the given link, you won't need to refer to any other guide.

STEP 2: Download your certificates and provisioning profile.
STEP 3: Set the profile in your Project and Target settings, and then set the proper entitlements using "Entitlements.plist".
STEP 4: Once you have done that, set up your project in the Ad Hoc scheme.
STEP 5: Clean your project.
STEP 6: Go to Product -> Build For -> "Build For Archiving".
STEP 7: Product -> Archive.

Now your archive can be found in your Organizer, where you can save it to disk with an IPA extension and send it to your client.

EDIT: Here are some useful links you can refer to for creating a provisioning profile and IPA file:

Create IPA file in Xcode 4.2, iOS 5.0 Beta
http://www.makebetterthings.com/iphone/how-to-create-ipa-file-for-your-iphone-app-xcode-build-and-archive/
http://www.wikihow.com/Create-a-Provisioning-Profile-for-iPhone
Create provisioning profile in iphone application

Hope this helps you.
Distribution
9,595,925
13
This is getting frustrating. I have two identities, one old, one new, and the latter should be used to deploy iOS apps to the App Store. I've created the new user, granted him admin access, then I created the app name and provisioning profiles. However, in the Organizer I see that the Dev provision works flawlessly, while the Deploy profile shows me the dreaded error: Valid signing identity not found. How can it be? Well, I see that in the Certificates section in the iOS Provisioning Portal, there is only one distribution certificate, the one belonging to my company. Is there a way to enable the new user to create apps without accessing the uberadmin's Xcode? Thanks & Cheers!
You need the key that was used to create the Distribution Certificate for your company. Remember when you created your developer certificate? Then you went to keychain -> certificate assistant -> Request a certificate from ... When you did this, your Mac paired your certificate request to a key in your keychain. Once your developer certificate was processed and you downloaded it to your computer, it could be accessed by your computer through that key. But if you did not create the Distribution Certificate that your company has, you don't have the key on your computer. Take a look at your certificates in keychain: Go to 'Certificates' and expand your developer certificate - it will have a little key with your name. Now try to expand your distribution certificate - it will not have a key, right? If this is the case, you have two options: Ask the person who created the Distribution Certificate to export it from his keychain. This will create a file that includes both certificate and key. Delete the current Distribution Certificate, and create a new Certificate Signing Request from your computer, which will connect it to a key that you have. First method require access to "Uberadmins" computer. The second require admin access to your teams Apple account. There is usually no downside in using method 2, because creating a new certificate is necessary from time to time anyway. It will not affect already published apps, just coming releases and updates need to use a the latest certificate. Once all this is done, you need to create a distribution provisioning profile for App Store and connect to the Distribution Certificate that you are going to use. (if you went with option 1, you might already have done this). Download the profile to your computer, install it, and then in your app, select to build with this profile for distribution builds.
Distribution
11,359,001
13
I'm using the following code to fit the normal distribution. The link for the dataset for "b" (too large to post directly) is: link for b

setwd("xxxxxx")
library(fitdistrplus)
require(MASS)

tazur <- read.csv("b", header=TRUE, sep=",")
claims <- tazur$b
a <- log(claims)
plot(hist(a))

After plotting the histogram, it seems a normal distribution should fit well.

f1n <- fitdistr(claims, "normal")
summary(f1n)

#         Length Class  Mode
#estimate 2      -none- numeric
#sd       2      -none- numeric
#vcov     4      -none- numeric
#n        1      -none- numeric
#loglik   1      -none- numeric

plot(f1n)

But when I try to plot the fitted distribution, I get

Error in xy.coords(x, y, xlabel, ylabel, log) :
  'x' is a list, but does not have components 'x' and 'y'

and even the summary statistics are off for f1n. Any ideas?
Looks like you are confusing MASS::fitdistr with fitdistrplus::fitdist.

MASS::fitdistr returns an object of class "fitdistr", and there is no plot method for this, so you need to extract the estimated parameters and plot the estimated density curve yourself. I don't know why you load package fitdistrplus, because your function call clearly shows you are using MASS. Anyway, fitdistrplus has the function fitdist, which returns an object of class "fitdist". There is a plot method for this class, but it won't work for the "fitdistr" object returned by MASS. I will show you how to work with both packages.

## reproducible example
set.seed(0); x <- rnorm(500)

Using MASS::fitdistr

No plot method is available, so do it ourselves.

library(MASS)
fit <- fitdistr(x, "normal")
class(fit)
# [1] "fitdistr"

para <- fit$estimate
#          mean            sd
# -0.0002000485  0.9886248515

hist(x, prob = TRUE)
curve(dnorm(x, para[1], para[2]), col = 2, add = TRUE)

Using fitdistrplus::fitdist

library(fitdistrplus)
FIT <- fitdist(x, "norm")  ## note: it is "norm", not "normal"
class(FIT)
# [1] "fitdist"

plot(FIT)  ## use method `plot.fitdist`
Distribution
39,961,964
13
I have cells for whom the numeric value can be anything between 0 and Integer.MAX_VALUE. I would like to color code these cells correspondingly. If the value = 0, then r = 0. If the value is Integer.MAX_VALUE, then r = 255. But what about the values in between? I'm thinking I need a function whose limit as x => Integer.MAX_VALUE is 255. What is this function? Or is there a better way to do this? I could just do (value / (Integer.MAX_VALUE / 255)) but that will cause many low values to be zero. So perhaps I should do it with a log function. Most of my values will be in the range [0, 10,000]. So I want to highlight the differences there.
The "fairest" linear scaling is actually done like this: floor(256 * value / (Integer.MAX_VALUE + 1)) Note that this is just pseudocode and assumes floating-point calculations. If we assume that Integer.MAX_VALUE + 1 is 2^31, and that / will give us integer division, then it simplifies to value / 8388608 Why other answers are wrong Some answers (as well as the question itself) suggsted a variation of (255 * value / Integer.MAX_VALUE). Presumably this has to be converted to an integer, either using round() or floor(). If using floor(), the only value that produces 255 is Integer.MAX_VALUE itself. This distribution is uneven. If using round(), 0 and 255 will each get hit half as many times as 1-254. Also uneven. Using the scaling method I mention above, no such problem occurs. Non-linear methods If you want to use logs, try this: 255 * log(value + 1) / log(Integer.MAX_VALUE + 1) You could also just take the square root of the value (this wouldn't go all the way to 255, but you could scale it up if you wanted to).
Distribution
1,549,717
12
I'm looking at moving a program that currently embeds a Python interpreter to use Lua. With Python it's fairly easy to use modulefinder, compileall, and zipfile to make a nice tidy zip containing all the external libraries used. Does Lua have the ability to bundle up its libraries like that, or is there some better best practice for distributing programs that embed Lua?
As is typical with Lua, there's no one standard and a lot of people roll their own. There's an effort to standardize on a package-management system called Lua Rocks, but I'm not sure how much momentum is behind it or how mature it is. (In 2008 it was not quite ready for prime time, but things may have changed.) I myself am very low tech: if I want to distribute something, I just turn my Lua sources into C files and link them in with the binary. Finding and converting all the modules could be a bit tedious for me, but quite the easiest thing for my users—they don't even need to know that Lua is involved. I've posted a copy of my lua2c script at Pastebin. I have the option of compiling but I generally don't compile because the results are not portable and because the Lua compiler is so fast anyway. It would be nice to have something more automated. I think it would probably take several days to write and debug a good tool. People on the Lua mailing list will surely know more.
Distribution
3,065,783
12
I'm interested in how to distribute a Java application that has a lot of dependencies (specified in a pom.xml in Maven). Obviously it would be possible to just package everything in one big .jar file. However that seems wasteful, since an update of the application would require sending a new copy of all the dependencies as well. So I'm looking for a way of distributing the app that does the following: Only includes the core application in the main .jar file Downloads dependencies as needed when the .jar file is run Keeps copies of the dependencies locally, so that if an application update is distributed the dependencies don't need to be downloaded again What's the best way of achieving this?
Can you just use Maven, with something like described at Maven Run Project ? This is how I have some of my own applications setup within my own network. I've never needed to worry about messing with the classpaths or downloading / providing dependencies for programs setup like this for a long time. This approach also meets all of your criteria.
Distribution
8,547,718
12
Supposing the following scenario: Company A asks company B to produce an IPad App for them. Company A only wants to use it for themselves on a very limited amount of IPads (less than 100). Company A is not necessarily interested in offering it on the app store. How can company B distribute the app (sell it) to company A? It could install the App on the iPads via ad hoc provisioning profile, but this is only meant for testing and the app can't be used once the profile expires. How can B legally install this app on the iPads without the app expiring after one year? When using the app store, is there an option to sell the app only to this company, or to specific users via the app store, when the companies are not located in the U.S. (I heard about B2B distribution which can only be used in the U.S.) Would the enterprise distribution be the option to choose? But then company A must have an IOS developer enterprise program ticket, so that it can install the app on its iPads, and not company B, right? Yet company B is the developer here... Or would it be legal, if company B had the IOS developer enterprise program ticket, installed the app on some iPads and sold the iPads to company A?
Enterprise distribution is exactly what you want in this situation.
Distribution
10,885,112
12
I am trying to model some data that follows a sigmoid curve relationship. In my field of work (psychophysics), a Weibull function is usually used to model such relationships, rather than probit. I am trying to create a model using R and am struggling with syntax. I know that I need to use the vglm() function from the VGAM package, but I am unable to get a sensible model out. Here's my data:

# Data frame example data
dframe1 <- structure(list(independent_variable = c(0.3, 0.24, 0.23, 0.16, 0.14, 0.05, 0.01, -0.1, -0.2),
                          dependent_variable = c(1, 1, 1, 0.95, 0.93, 0.65, 0.55, 0.5, 0.5)),
                     .Names = c("independent_variable", "dependent_variable"),
                     class = "data.frame", row.names = c(NA, -9L))

Here is a plot of the data in dframe1:

library(ggplot2)

# Plot my original data
ggplot(dframe1, aes(independent_variable, dependent_variable)) + geom_point()

This should be able to be modelled by a Weibull function, since the data fit a sigmoid curve relationship. Here is my attempt to model the data and generate a representative plot:

library(VGAM)

# Generate model
my_model <- vglm(formula = dependent_variable ~ independent_variable, family = weibull, data = dframe1)

# Create a new dataframe based on the model, so that it can be plotted
model_dframe <- data.frame(dframe1$independent_variable, fitted(my_model))

# Plot my model fitted data
ggplot(model_dframe, aes(dframe1.independent_variable, fitted.my_model.)) + geom_point()

As you can see, this doesn't represent my original data at all. I'm either generating my model incorrectly, or I'm generating my plot of the model incorrectly. What am I doing wrong? Note: I have edited this question to make it more understandable; previously I had been using the wrong function entirely (weibreg()). Hence, some of the comments below may not make sense.
Here's my solution, with bbmle. Data: dframe1 <- structure(list(independent_variable = c(0.3, 0.24, 0.23, 0.16, 0.14, 0.05, 0.01, -0.1, -0.2), dependent_variable = c(1, 1, 1, 0.95, 0.93, 0.65, 0.55, 0.5, 0.5)), .Names = c("independent_variable", "dependent_variable"), class = "data.frame", row.names = c(NA, -9L)) Construct a cumulative Weibull that goes from 0.5 to 1.0 by definition: wfun <- function(x,shape,scale) { (1+pweibull(x,shape,scale))/2.0 } dframe2 <- transform(dframe1,y=round(40*dependent_variable),x=independent_variable) Fit a Weibull (log-scale relevant parameters), with binomial variation: library(bbmle) m1 <- mle2(y~dbinom(prob=wfun(exp(a+b*x),shape=exp(logshape),scale=1),size=40), data=dframe2,start=list(a=0,b=0,logshape=0)) Generate predictions: pframe <- data.frame(x=seq(-0.2,0.3,length=101)) pframe$y <- predict(m1,pframe) png("wplot.png") with(dframe2,plot(y/40~x)) with(pframe,lines(y/40~x,col=2)) dev.off()
Distribution
14,777,393
12
I'm trying to calculate p-values of an F-statistic with R. The formula R uses in the lm() function is equal to (e.g. assume x=100, df1=2, df2=40): pf(100, 2, 40, lower.tail=F) [1] 2.735111e-16 which should be equal to 1-pf(100, 2, 40) [1] 2.220446e-16 It is not the same! There's no big difference, but where does it come from? If I calculate (x=5, df1=2, df2=40): pf(5, 2, 40, lower.tail=F) [1] 0.01152922 1-pf(5, 2, 40) [1] 0.01152922 it is exactly the same. Question is... what is happening here? Have I missed something?
> all.equal(pf(100, 2, 40, lower.tail=F),1-pf(100, 2, 40)) [1] TRUE The two results agree up to floating-point precision. Computing 1-pf(100, 2, 40) subtracts a number extremely close to 1 from 1, so nearly all significant digits cancel and what remains is on the order of the machine epsilon (about 2.2e-16); pf(..., lower.tail=FALSE) computes the upper tail directly and is therefore the accurate form for very small tail probabilities.
Distribution
21,433,528
12
I am looking to count the number of times the values in an array change in polarity (EDIT: Number of times the values in an array cross zero). Suppose I have an array: [80.6 120.8 -115.6 -76.1 131.3 105.1 138.4 -81.3 -95.3 89.2 -154.1 121.4 -85.1 96.8 68.2] I want the count to be 8. One solution is to run a loop and check for greater than or less than 0, and keep a history of the previous polarity. Can we do this faster? EDIT: My purpose is really to find something faster, because I have these arrays of length around 68554308, and I have to do these calculations on 100+ such arrays.
This produces the same result: import numpy as np my_array = np.array([80.6, 120.8, -115.6, -76.1, 131.3, 105.1, 138.4, -81.3, -95.3, 89.2, -154.1, 121.4, -85.1, 96.8, 68.2]) ((my_array[:-1] * my_array[1:]) < 0).sum() gives: 8 and seems to be the fastest solution: %timeit ((my_array[:-1] * my_array[1:]) < 0).sum() 100000 loops, best of 3: 11.6 µs per loop Compared to the fastest so far: %timeit (np.diff(np.sign(my_array)) != 0).sum() 10000 loops, best of 3: 22.2 µs per loop Also for larger arrays: big = np.random.randint(-10, 10, size=10000000) this: %timeit ((big[:-1] * big[1:]) < 0).sum() 10 loops, best of 3: 62.1 ms per loop vs: %timeit (np.diff(np.sign(big)) != 0).sum() 1 loops, best of 3: 97.6 ms per loop
Distribution
30,272,538
12
I have an array of colors that will populate a pie chart to act as a game spinner. I don't want the same colors to appear next to each other, making one huge chunk in the circle. My array looks something like this: var colors = ["blue", "red", "green", "red", "blue", "blue", "blue", "green"] The problem is of course that there are three blues together. Is there anything built into Swift that will allow me to equally (or as close to equally as possible) spread values within the total distribution and avoid them being adjacent? I can test for a match with the following code, but rearranging them proves to be a bit more difficult. var lastColor = "white" for color in colors { if color == lastColor { print("match") } lastColor = color } UPDATE: To make my colors array, I start out with the number of spaces for each color. It looks something like this: let numberOfReds = 2 let numberOfGreens = 2 let numberOfBlues = 4 let spaces = numberOfReds + numberOfGreens + numberOfBlues for _ in 0..< spaces { if numberOfReds > 0 { numberOfReds -= 1 colors.append("red") } if numberOfGreens > 0 { numberOfGreens -= 1 colors.append("green") } if numberOfBlues > 0 { numberOfBlues -= 1 colors.append("blue") } } Which ends up spitting out: colors = ["red", "green", "blue", "red", "green", "blue", "blue", "blue" ]
Disclaimer: In order to generate a "random" solution I am going to use backtracking. This approach is NOT fast and is NOT cheap by a space point of view. Infact both Time And Space Complexity are O(n!)... and this is HUGE! However it gives you a valid solution as random as possible. Backtracking So you want a random combination of a list of values with the condition that the solution is valid if there are not be 2 consecutive equals elements. In order to have a real random solution I suggest the following approach. I generate every possible valid combination. For this I'm using a backtracking approach func combinations<Element:Equatable>(unusedElms: [Element], sequence:[Element] = []) -> [[Element]] { // continue if the current sequence doesn't contain adjacent equal elms guard !Array(zip(sequence.dropFirst(), sequence)).contains(==) else { return [] } // continue if there are more elms to add guard !unusedElms.isEmpty else { return [sequence] } // try every possible way of completing this sequence var results = [[Element]]() for i in 0..<unusedElms.count { var unusedElms = unusedElms let newElm = unusedElms.removeAtIndex(i) let newSequence = sequence + [newElm] results += combinations(unusedElms, sequence: newSequence) } return results } Now given a list of colors let colors = ["blue", "red", "green", "red", "blue", "blue", "blue", "green"] I can generate every valid possible combination let combs = combinations(colors) [["blue", "red", "green", "blue", "red", "blue", "green", "blue"], ["blue", "red", "green", "blue", "red", "blue", "green", "blue"], ["blue", "red", "green", "blue", "green", "blue", "red", "blue"], ["blue", "red", "green", "blue", "green", "blue", "red", "blue"], ["blue", "red", "green", "blue", "red", "blue", "green", "blue"], ["blue", "red", "green", "blue", "red", "blue", "green", "blue"], ["blue", "red", "green", "blue", "green", "blue", "red", "blue"], ["blue", "red", "green", "blue", "green", "blue", "red", "blue"], ["blue", "red", "green", "blue", "red", "blue", "green", "blue"], ["blue", "red", "green", "blue", "red", "blue", "green", "blue"], ["blue", "red", "green", "blue", "green", "blue", "red", "blue"], ["blue", "red", "green", "blue", "green", "blue", "red", "blue"], ["blue", "red", "blue", "green", "red", "blue", "green", "blue"], ["blue", "red", "blue", "green", "red", "blue", "green", "blue"], ["blue", "red", "blue", "green", "blue", "red", "blue", "green"], ["blue", "red", "blue", "green", "blue", "red", "green", "blue"], ["blue", "red", "blue", "green", "blue", "green", "red", "blue"], ["blue", "red", "blue", "green", "blue", "green", "blue", "red"], ["blue", "red", "blue", "green", "blue", "red", "blue", "green"], ["blue", "red", "blue", "green", "blue", "red", "green", "blue"], ["blue", "red", "blue", "green", "blue", "green", "red", "blue"], ["blue", "red", "blue", "green", "blue", "green", "blue", "red"], ["blue", "red", "blue", "red", "green", "blue", "green", "blue"], ["blue", "red", "blue", "red", "green", "blue", "green", "blue"], ["blue", "red", "blue", "red", "blue", "green", "blue", "green"], ["blue", "red", "blue", "red", "blue", "green", "blue", "green"], ["blue", "red", "blue", "red", "blue", "green", "blue", "green"], ["blue", "red", "blue", "red", "blue", "green", "blue", "green"], ["blue", "red", "blue", "red", "green", "blue", "green", "blue"], ["blue", "red", "blue", "red", "green", "blue", "green", "blue"], ["blue", "red", "blue", "green", "red", "blue", "green", "blue"], ["blue", "red", "blue", "green", "red", "blue", "green", 
"blue"], ["blue", "red", "blue", "green", "blue", "green", "red", "blue"], ["blue", "red", "blue", "green", "blue", "green", "blue", "red"], ["blue", "red", "blue", "green", "blue", "red", "green", "blue"], ["blue", "red", "blue", "green", "blue", "red", "blue", "green"], ["blue", "red", "blue", "green", "blue", "green", "red", "blue"], ["blue", "red", "blue", "green", "blue", "green", "blue", "red"], ["blue", "red", "blue", "green", "blue", "red", "green", "blue"], ["blue", "red", "blue", "green", "blue", "red", "blue", "green"], ["blue", "red", "blue", "green", "red", "blue", "green", "blue"], ["blue", "red", "blue", "green", "red", "blue", "green", "blue"], ["blue", "red", "blue", "green", "blue", "red", "blue", "green"], ["blue", "red", "blue", "green", "blue", "red", "green", "blue"], ["blue", "red", "blue", "green", "blue", "green", "red", "blue"], ["blue", "red", "blue", "green", "blue", "green", "blue", "red"], ["blue", "red", "blue", "green", "blue", "red", "blue", "green"], ["blue", "red", "blue", "green", "blue", "red", "green", "blue"], ["blue", "red", "blue", "green", "blue", "green", "red", "blue"], ["blue", "red", "blue", "green", "blue", "green", "blue", "red"], ["blue", "red", "blue", "red", "green", "blue", "green", "blue"], ["blue", "red", "blue", "red", "green", "blue", "green", "blue"], ["blue", "red", "blue", "red", "blue", "green", "blue", "green"], ["blue", "red", "blue", "red", "blue", "green", "blue", "green"], ["blue", "red", "blue", "red", "blue", "green", "blue", "green"], ["blue", "red", "blue", "red", "blue", "green", "blue", "green"], ["blue", "red", "blue", "red", "green", "blue", "green", "blue"], ["blue", "red", "blue", "red", "green", "blue", "green", "blue"], ["blue", "red", "blue", "green", "red", "blue", "green", "blue"], ["blue", "red", "blue", "green", "red", "blue", "green", "blue"], ["blue", "red", "blue", "green", "blue", "green", "red", "blue"], ["blue", "red", "blue", "green", "blue", "green", "blue", "red"], ["blue", "red", "blue", "green", "blue", "red", "green", "blue"], ["blue", "red", "blue", "green", "blue", "red", "blue", "green"], ["blue", "red", "blue", "green", "blue", "green", "red", "blue"], ["blue", "red", "blue", "green", "blue", "green", "blue", "red"], ["blue", "red", "blue", "green", "blue", "red", "green", "blue"], ["blue", "red", "blue", "green", "blue", "red", "blue", "green"], ["blue", "red", "blue", "green", "red", "blue", "green", "blue"], ["blue", "red", "blue", "green", "red", "blue", "green", "blue"], ["blue", "red", "blue", "green", "blue", "red", "blue", "green"], ["blue", "red", "blue", "green", "blue", "red", "green", "blue"], ["blue", "red", "blue", "green", "blue", "green", "red", "blue"], ["blue", "red", "blue", "green", "blue", "green", "blue", "red"], ["blue", "red", "blue", "green", "blue", "red", "blue", "green"], ["blue", "red", "blue", "green", "blue", "red", "green", "blue"], ["blue", "red", "blue", "green", "blue", "green", "red", "blue"], ["blue", "red", "blue", "green", "blue", "green", "blue", "red"], ["blue", "red", "blue", "red", "green", "blue", "green", "blue"], ["blue", "red", "blue", "red", "green", "blue", "green", "blue"], …, ["green", "blue", "green", "blue", "red", "blue", "red", "blue"], ["green", "blue", "green", "blue", "red", "blue", "red", "blue"], ["green", "blue", "green", "blue", "red", "blue", "red", "blue"], ["green", "blue", "green", "blue", "red", "blue", "red", "blue"], ["green", "blue", "green", "blue", "red", "blue", "red", "blue"], ["green", "blue", "green", "blue", "red", 
"blue", "red", "blue"], ["green", "blue", "green", "blue", "red", "blue", "red", "blue"], ["green", "blue", "green", "blue", "red", "blue", "red", "blue"], ["green", "blue", "red", "blue", "red", "blue", "green", "blue"], ["green", "blue", "red", "blue", "red", "blue", "green", "blue"], ["green", "blue", "red", "blue", "green", "blue", "red", "blue"], ["green", "blue", "red", "blue", "green", "blue", "red", "blue"], ["green", "blue", "red", "blue", "red", "blue", "green", "blue"], ["green", "blue", "red", "blue", "red", "blue", "green", "blue"], ["green", "blue", "red", "blue", "green", "blue", "red", "blue"], ["green", "blue", "red", "blue", "green", "blue", "red", "blue"], ["green", "blue", "red", "blue", "red", "blue", "green", "blue"], ["green", "blue", "red", "blue", "red", "blue", "green", "blue"], ["green", "blue", "red", "blue", "green", "blue", "red", "blue"], ["green", "blue", "red", "blue", "green", "blue", "red", "blue"]] Finally I just need to pick one of these combinations let comb = combs[Int(arc4random_uniform(UInt32(combs.count)))] // ["red", "blue", "green", "blue", "green", "blue", "red", "blue"] Improvements If you don't need a true random solution, but simply a permutation that doesn't have 2 consecutive equal elements we can change the previous function in order to return the first valid solution. func combination<Element:Equatable>(unusedElms: [Element], sequence:[Element] = []) -> [Element]? { guard !Array(zip(sequence.dropFirst(), sequence)).contains(==) else { return nil } guard !unusedElms.isEmpty else { return sequence } for i in 0..<unusedElms.count { var unusedElms = unusedElms let newElm = unusedElms.removeAtIndex(i) let newSequence = sequence + [newElm] if let solution = combination(unusedElms, sequence: newSequence) { return solution } } return nil } Now you can simply write combination(["blue", "red", "green", "red", "blue", "blue", "blue", "green"]) to get a valid solution (if it does exists) ["blue", "red", "green", "blue", "red", "blue", "green", "blue"] This approach can be much faster (when the solution does exist) however the worst case scenario is still O(n!) for both space and time complexity.
Distribution
39,170,398
12
I need to develop a small-medium sized desktop GUI application, preferably with Python as a language of choice because of time constraints. What GUI library choices do I have which allow me to redistribute my application standalone, assuming that the users don't have a working Python installation and obviously don't have the GUI libraries I'm using either? Also, how would I go about packaging everything up in binaries of reasonable size for each target OS? (my main targets are Windows and Mac OS X) Addition: I've been looking at WxPython, but I've found plenty of horror stories of packaging it with cx_freeze and getting 30mb+ binaries, and no real advice on how to actually do the packaging and how trust-worthy it is.
This may help: How can I make an EXE file from a Python program?
Distribution
153,956
11
In the context of creating a custom Eclipse distribution for a development team. How would I go about building a custom Eclipse distribution containing a specific set of plugins? Would it be difficult to also add a kind of update site to put specific versions of the plug-ins from which the customized eclipse would update?
I realize this is an old post, but it keeps coming up on searches I do and I’d like to put in some more details given all the changes and maturity that has occurred when it comes to delivering Eclipse plug-ins... So, for those who end up on this page, hopefully the following will help you out! To summarize my personal findings: There have been many improvements in this space both in open source and commercially The complexities of distribution are often greater than expected Build on the backs of others, it is worth it! And while I work for a company offering a commercial solution (http://genuitec.com/sdc), I’ve tried to answer below with the practicalities of Eclipse delivery using free solutions. So, without further adieu... The absolute minimal solution is to download an Eclipse package from Eclipse.org, add the plugins you want, set the -clean parameter in the eclipse.ini, zip up the directory and hand it around your team. As long as you added the features from your internal update site (and the URL remains constant), Eclipse will be able to update from it. This will work the first time, and since it's easy, it's what most people start out doing. But it ignores the lifecycle of your tool stack. Here are some pain points I've encountered while helping customers with their Eclipse tooling: Eclipse Packages: You have to be an Eclipse/p2 guru to set up and maintain Eclipse packages. The EPP tools allow you to build your own packages, but you need a lot of domain knowledge around Eclipse packages, p2, and the EPP tooling. A place to start is http://wiki.eclipse.org/EPP/How_to_build_a_package_locally Plugins: Finding plugins involves lots of hunting for update sites and then you can never be sure you got the exact right binaries. Sometimes update sites go down, or you lose support for your Eclipse version when the plugin developers release a new update site. One suggestion is to make local copies of update sites to mitigate your exposure to such problems. Eclipse Updates: If you want your team to switch Eclipse versions, you'll end up just rebuilding your tool stack on the next version and having everybody reinstall. There's no way around this when just shipping a zip. Plugin Updates: Eclipse is designed to keep installing the new version of plugins, but in large production teams that can be counterproductive. Local mirrors of update sites can help with this as long as your team doesn't go out and add their own update sites. Security: Do you need to prevent your team from installing some software? What about requiring signed tools? You'll have to write plugins to limit the features of your package and you may have to sign plugins yourself. The PDE build has some support for signing. Long Term Maintenance: Rebuilding a tool stack in a few years (or sometimes a few months) can be close to impossible as support for different versions of Eclipse and different plugin versions comes and goes dynamically in the Eclipse ecosystem. Save off copies of your Eclipse packages. Buy big hard drives. Mirror the update sites you use. Workspace Setup: You can deploy an Eclipse to your team, but that's just the first step in the process. Automation for workspace setup, e.g. preferences, projects, Checkstyle or PMD configuration, goes a long way in reducing the amount of time your team spends getting ready to work. Additionally, these settings change often as you add projects creating continuous management hassles. 
When passing around a zip, I've seen teams also pass around a corresponding WIKI page or something similar. It's usually up to each developer to make sure they follow the steps. Managing Multiple Packages: Maybe you have one package for your dev team and another for your QA team. And then your dev team grows and splits into two groups with slightly different tooling needs and now your QA team needs multiple packages too. And then you start shipping your own plugin on top of Eclipse so that's another package that you are managing. After a few years of this, you spend all your time building Eclipse packages and you became a Eclipse/P2/Update Site guru without even trying. Clearly, the solution here is to hire somebody to do this for you. :) SMS Distribution: This works reasonably well with a zip file, but pushing out updates gets messy. Usually, people use SMS to drop down the first install, and then it's the developer's job to keep it up to date.
Distribution
351,373
11
Hello fellow software developers. I want to distribute a C program which is scriptable by embedding the Python interpreter. The C program uses Py_Initialize, PyImport_Import and so on to accomplish Python embedding. I'm looking for a solution where I distribute only the following components: my program executable and its libraries the Python library (dll/so) a ZIP-file containing all necessary Python modules and libraries. How can I accomplish this? Is there a step-by-step recipe for that? The solution should be suitable for both Windows and Linux. Thanks in advance.
Have you looked at Python's official documentation : Embedding Python into another application? There's also this really nice PDF by IBM : Embed Python scripting in C application. You should be able to do what you want using those two resources.
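For reference, a minimal hedged sketch of the embedding loop those documents describe, assuming the bundled Python modules are shipped in a zip file named python-lib.zip next to the executable (the file name and the module name myscript are placeholders, not from the original answer):

#include <Python.h>

int main(void)
{
    Py_Initialize();
    /* Make the bundled zip of modules importable (zipimport handles .zip archives on sys.path) */
    PyRun_SimpleString("import sys; sys.path.insert(0, 'python-lib.zip')");

    PyObject *module = PyImport_ImportModule("myscript");  /* placeholder module name */
    if (module == NULL) {
        PyErr_Print();
    }
    Py_XDECREF(module);

    Py_Finalize();
    return 0;
}

At runtime the program then only needs the executable, the Python shared library, and the zip of modules, which matches the three-component layout the question asks for.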
Distribution
2,494,468
11
I'm working on an application that I need to be cross-platform. I'd like to use Python for it, and am looking for GUI toolkits that make interface programming simple and easy. After a slight hunt, I found PythonCard. This looks like it fits the bill perfectly, but I'm not sure if it will be possible to compile this down to an appropriate executable for each operating system. I found these instructions, but they're 6 years old. Whatever solution I choose must support the following: Write one GUI to work on both Windows and Mac OSX Must 'compile' into an easily distributable file for both windows/mac Compiled file must not require Python to be installed on the users computer Can anyone recommend a library/solution before I have to wade into the desolate world of Java?
I think the answer here is less about the particular GUI toolkit and more about distributing stand-alone python applications. Personally, I've found the tools for this a little less perfect than I'd like but, after some finagling, they get the job done. The most likely candidate that'd fit your needs is cx_Freeze. Though there's a Windows specific py2exe and Mac specific py2app that might fill the bill if cx_Freeze is insufficient.
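As a rough illustration of the cx_Freeze route (the script and application names below are made up, and the available options vary between cx_Freeze versions), the packaging script is typically just a small setup.py:

from cx_Freeze import setup, Executable

setup(
    name="MyApp",                          # placeholder application name
    version="0.1",
    description="Cross-platform GUI app",
    executables=[Executable("main.py")],   # placeholder entry script
)

Running python setup.py build then produces a build directory containing the executable together with the Python runtime, so end users do not need Python installed.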
Distribution
3,604,113
11
According to the question " How to get Linux distribution name and version? ", to get the Linux distro name and version, this works: lsb_release -a On my system, it shows the needed output: No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 9.10 Release: 9.10 Codename: karmic Now, to get this info in C++, Qt4's QProcess would be a great option, but since I am developing without Qt, using standard C++, I need to know how to get this info in standard C++, i.e. the stdout of the process, and also a way to parse the info. Up until now I have been trying to use code from here but am stuck on the function read().
You can simply use the function: int uname(struct utsname *buf); by including the header #include <sys/utsname.h> It already returns the name & version as a part of the structure: struct utsname { char sysname[]; /* Operating system name (e.g., "Linux") */ char nodename[]; /* Name within "some implementation-defined network" */ char release[]; /* OS release (e.g., "2.6.28") */ char version[]; /* OS version */ char machine[]; /* Hardware identifier */ #ifdef _GNU_SOURCE char domainname[]; /* NIS or YP domain name */ #endif }; Am I missing something?
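A small usage sketch of that call (standard POSIX, so it should compile as shown):

#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname info;
    if (uname(&info) != 0) {
        perror("uname");
        return 1;
    }
    /* e.g. "Linux 2.6.31-20-generic (x86_64)" */
    printf("%s %s (%s)\n", info.sysname, info.release, info.machine);
    return 0;
}

Note that uname reports kernel information rather than the distribution string printed by lsb_release, so it only partially covers what the original question asked for.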
Distribution
6,315,666
11
I have a C# library that is called by various clients (both 32-bit and 64-bit). Up to now it was compiled as AnyCPU, so there was no issues. Recently I added a dependency to SQLite .NET library which come in both 32 and 64-bit flavors (but not AnyCPU). So, now, I have to have 2 builds - for both bitnesses. In the past, I've seen other libraries (MS SQL Compact comes to mind) that had a scheme where a single .NET assembly would have Private\amd64 and Private\x86 folders in the folders with the appropriate native libraries in them and it would call each one as necessary. Is this approach viable for my situation? Is there documentation on how to implement it? Are there code changes required or is this a distribution technique?
There are several ways you can handle this. Code changes (small) are required for the first three approaches: A. You can modify the PATH to point to the platform specific folder during application start up. Then .NET will automatically load local DLLs from that folder. B. You can subscribe to the AssemblyResolve event and then choose the assembly based on the platform. Check out Scott Bilias's blog post on this http://scottbilas.com/blog/automatically-choose-32-or-64-bit-mixed-mode-dlls/. Note that he ends up preferring approach A. "In a nutshell, the solution is to trick the loader! Reference a p4dn.dll that does not exist, and use the AssemblyResolve event to intercept the load and reroute it to the correct bit size assembly." C. Use a platform-specific set of exe.configs and the codebase element to determine assembly locations. Your setup would install the correct one based on platform. http://msdn.microsoft.com/en-us/library/4191fzwb.aspx D. Write two setups one for 32-bit and one for 64-bit, then only install the appropriate files for the platform.
Distribution
9,469,467
11
I am new to iOS development, and I want to distribute my iPad app (.ipa file) over my website so that others can download it from there. Is it possible for others to download the iPad app (.ipa file) over a website?
You need to check the box "Distribute to Enterprise" when you archive your application. When you do so, a plist file is generated. (Be careful with the information you provide; the URL has to be right.) Place the ipa and plist on your server. Then you can link to the plist from an HTML file: itms-services://?action=download-manifest&url=http://YOURSERVER/YOURAPP.plist That's how you do OTA (over-the-air distribution). This is only possible with an Enterprise profile or an Ad Hoc profile for dedicated devices, thus for testing purposes. You can also use TestFlight.
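A minimal HTML page for that link might look like this (the host name and file names are placeholders, not values from the original answer):

<html>
  <body>
    <a href="itms-services://?action=download-manifest&url=https://yourserver.example.com/YourApp.plist">
      Install YourApp
    </a>
  </body>
</html>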
Distribution
11,538,177
11
After making games with XNA, I wanted to broaden my horizons by working with Python. I know XNA is supposedly easy to distribute; however, I'm not sure if a game made with pygame and compiled with py2exe could be submitted to Steam. My overall question is: how would I submit a game made with pygame to Steam?
That it is Python, assembly or XNA doesn't really matter for Steam, AFAIK. There are two general ways to distribute games through Steam, contacting the Steam team (I just love saying that) themselves, or getting accepted through the Green Light program. Seeing that you aren't a AAA game dev team, the latter would probably be the best option.
Distribution
11,586,579
11
I have been doing some data analysis in R and I am trying to figure out how to fit my data to a 3 parameter Weibull distribution. I found how to do it with a 2 parameter Weibull but have come up short in finding how to do it with a 3 parameter. Here is how I fit the data using the fitdistr function from the MASS package: y <- fitdistr(x[[6]], 'weibull') x[[6]] is a subset of my data and y is where I am storing the result of the fitting.
First, you might want to look at FAdist package. However, that is not so hard to go from rweibull3 to rweibull: > rweibull3 function (n, shape, scale = 1, thres = 0) thres + rweibull(n, shape, scale) <environment: namespace:FAdist> and similarly from dweibull3 to dweibull > dweibull3 function (x, shape, scale = 1, thres = 0, log = FALSE) dweibull(x - thres, shape, scale, log) <environment: namespace:FAdist> so we have this > x <- rweibull3(200, shape = 3, scale = 1, thres = 100) > fitdistr(x, function(x, shape, scale, thres) dweibull(x-thres, shape, scale), list(shape = 0.1, scale = 1, thres = 0)) shape scale thres 2.42498383 0.85074556 100.12372297 ( 0.26380861) ( 0.07235804) ( 0.06020083) Edit: As mentioned in the comment, there appears various warnings when trying to fit the distribution in this way Error in optim(x = c(60.7075705026659, 60.6300379017397, 60.7669410153573, : non-finite finite-difference value [3] There were 20 warnings (use warnings() to see them) Error in optim(x = c(60.7075705026659, 60.6300379017397, 60.7669410153573, : L-BFGS-B needs finite values of 'fn' In dweibull(x, shape, scale, log) : NaNs produced For me at first it was only NaNs produced, and that is not the first time when I see it so I thought that it isn't so meaningful since estimates were good. After some searching it seemed to be quite popular problem and I couldn't find neither cause nor solution. One alternative could be using stats4 package and mle() function, but it seemed to have some problems too. But I can offer you to use a modified version of code by danielmedic which I have checked a few times: thres <- 60 x <- rweibull(200, 3, 1) + thres EPS = sqrt(.Machine$double.eps) # "epsilon" for very small numbers llik.weibull <- function(shape, scale, thres, x) { sum(dweibull(x - thres, shape, scale, log=T)) } thetahat.weibull <- function(x) { if(any(x <= 0)) stop("x values must be positive") toptim <- function(theta) -llik.weibull(theta[1], theta[2], theta[3], x) mu = mean(log(x)) sigma2 = var(log(x)) shape.guess = 1.2 / sqrt(sigma2) scale.guess = exp(mu + (0.572 / shape.guess)) thres.guess = 1 res = nlminb(c(shape.guess, scale.guess, thres.guess), toptim, lower=EPS) c(shape=res$par[1], scale=res$par[2], thres=res$par[3]) } thetahat.weibull(x) shape scale thres 3.325556 1.021171 59.975470
Distribution
11,817,883
11
How do you define your own distributions in R? If I have a distribution that looks something like this: P(D=0)=2/4, P(D=1)=1/4, P(D=2)=1/4 How do I turn that into a distribution I can work with? In the end, I want to be able to use these and do things involving cdfs, icdfs and pmfs. Like find the probability of 1 through a cdf type thing. And I also need to find out how to graph things. But I was going to ask in smaller steps and try to figure things out in between.
If you just need to generate random variates from the distribution, this should suffice: rMydist <- function(n) { sample(x = c(0,1,2), size = n, prob = c(.5, .25, .25), replace=T) } rMydist(20) # [1] 1 0 2 0 2 1 1 0 2 2 0 0 2 1 0 0 0 0 0 1 prop.table(table(rMydist(1e6))) # 0 1 2 # 0.500555 0.250044 0.249401 For something more fancy, try out the distr package. In addition to random number generation, it'll get you the density, distribution, and quantile functions associated with your distribution: library(distr) ## For more info, type: vignette("newDistributions") # Define full suite of functions (d*, p*, q*, r*) for your distribution D <- DiscreteDistribution (supp = c(0, 1, 2) , prob = c(0.5, .25, .25)) dD <- d(D) ## Density function pD <- p(D) ## Distribution function qD <- q(D) ## Quantile function rD <- r(D) ## Random number generation # Take them for a spin dD(-1:3) # [1] 0.00 0.50 0.25 0.25 0.00 pD(-1:3) # [1] 0.00 0.50 0.75 1.00 1.00 qD(seq(0,1,by=0.1)) # [1] 0 0 0 0 0 0 1 1 2 2 2 rD(20) # [1] 0 0 2 2 1 0 0 1 0 1 0 2 0 0 0 0 1 2 1 0
Distribution
12,848,736
11
I want to extract file "default.jasperreports.properties" from depended jasperreports.jar and put it in zip distribution with new name "jasperreports.properties" Sample gradle build: apply plugin: 'java' task zip(type: Zip) { from 'src/dist' // from configurations.runtime from extractFileFromJar("default.jasperreports.properties"); rename 'default.jasperreports.properties', 'jasperreports.properties' } def extractFileFromJar(String fileName) { // configurations.runtime.files.each { file -> println file} //it's not work // not finished part of build file FileTree tree = zipTree('someFile.zip') FileTree filtered = tree.matching { include fileName } } repositories { mavenCentral() } dependencies { runtime 'jasperreports:jasperreports:2.0.5' } How to get FileTree in extractFileFromJar() from dependency jasperreports-2.0.5.jar? In script above I use FileTree tree = zipTree('someFile.zip') but want to use somethink like (wrong, but human readable) FileTree tree = configurations.runtime.filter("jasperreports").singleFile.zipTree PS: Try to call def extractFileFromJar(String fileName) { configurations.runtime.files.each { file -> println file} //it's not work ... but it doesn't work with exception You can't change a configuration which is not in unresolved state!
Here is a possible solution (sometimes code says more than a thousand words): apply plugin: "java" repositories { mavenCentral() } configurations { jasper } dependencies { jasper('jasperreports:jasperreports:2.0.5') { transitive = false } } task zip(type: Zip) { from 'src/dist' // note that zipTree call is wrapped in closure so that configuration // is only resolved at execution time from({ zipTree(configurations.jasper.singleFile) }) { include 'default.jasperreports.properties' rename 'default.jasperreports.properties', 'jasperreports.properties' } }
Distribution
13,339,237
11
I am trying to write a Winbugs/Jags model for modeling multi grain topic models (exactly this paper -> http://www.ryanmcd.com/papers/mg_lda.pdf) Here I would like to choose a different distribution based on a particular value. For Eg: I would like to do something like `if ( X[i] > 0.5 ) { Z[i] ~ dcat(theta-gl[D[i], 1:K-gl]) W[i] ~ dcat(phi-gl[z[i], 1:V]) } else { Z[i] ~ dcat(theta-loc[D[i], 1:K-loc]) W[i] ~ dcat(phi-loc[z[i], 1:V]) } ` Is this possible to be done in Winbugs/JAGS?
Winbugs/JAGS is not a procedural language, so you cannot use the construct like that. Use step function. Quote from the manual: step(e) ...... 1 if e >= 0; 0 otherwise So you need a trick to change the condition: X[i] > 0.5 <=> X[i] - 0.5 > 0 <=> !(X[i] - 0.5 <= 0) <=> !(-(X[i] - 0.5) >= 0) <=> !(step(-(X[i] - 0.5)) == 1) <=> step(-(X[i] - 0.5)) == 0 and then use this for indexing trick: # then branch Z_branch[i, 1] ~ dcat(theta-gl[D[i], 1:K-gl]) W_branch[i, 1] ~ dcat(phi-gl[z[i], 1:V]) # else branch Z_branch[i, 2] ~ dcat(theta-loc[D[i], 1:K-loc]) W_branch[i, 2] ~ dcat(phi-loc[z[i], 1:V]) # decision here if_branch[i] <- 1 + step(-(X[i] - 0.5)) # 1 for "then" branch, 2 for "else" branch Z[i] ~ Z_branch[i, if_branch[i]] W[i] ~ W_branch[i, if_branch[i]]
Distribution
15,414,303
11
JavaScript's Math.random() returns a pseudo-random number with "uniform" distribution. I need to generate a random number in the range [0,1] that is skewed to either side. (Meaning, a higher chance of getting numbers next to 0 or next to 1.) Ideally I would like to have a parameter to set this curve. I suppose I can do Math.random()^2 to get such a result, but what more sophisticated ways are there to achieve this?
I think you want beta distribution with alpha=beta=0.5 It is possible to transform uniform random number to beta distribution using inverse cumulative distribution. unif = Math.random() I am not familiar with javascript, but this should be clear: beta = sin(unif*pi/2)^2 PS: you can generate many such numbers and plot histogram Edit: For skewing towards 0, transform the beta values as - beta_left = (beta < 0.5) ? 2*beta : 2*(1-beta); For skewing towards 1, transform as - beta_right = (beta > 0.5) ? 2*beta-1 : 2*(1-beta)-1;
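In JavaScript that transform is a one-liner; here is a hedged sketch of the U-shaped case plus the two skewed variants from the edit (nothing here beyond Math.random, Math.sin and Math.pow):

function betaArcsine() {                 // Beta(0.5, 0.5): skewed toward both 0 and 1
  const u = Math.random();
  return Math.pow(Math.sin(u * Math.PI / 2), 2);
}

function skewTowardZero() {              // beta_left from the edit
  const b = betaArcsine();
  return (b < 0.5) ? 2 * b : 2 * (1 - b);
}

function skewTowardOne() {               // beta_right from the edit
  const b = betaArcsine();
  return (b > 0.5) ? 2 * b - 1 : 2 * (1 - b) - 1;
}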
Distribution
16,110,758
11
We want to distribute our application in China, but we currently have a BIG issue. The application requires Google Play Services to be installed. It normally works well: the user is shown a dialog and then brought to the Google Play application, where he can install the Google Play Services application. And in China? When an Android phone is bought in mainland China, it usually does not have Google Play installed. And it stays that way; users usually never download the Google Play application (see here and there). So what we are trying to do is: when we detect that the user does not have Google Play installed, we send him to a URL where he can download the Google Play Services APK directly. But - as expected - we could not find a stable URL where the APK is available for download. Did any of you encounter the same kind of problem? How did you resolve it? If not, do you have ideas or suggestions? Any help would be much appreciated :) Thank you!
The Google Play Services APK is only available from the Google Play Store and doesn't support installation on devices without the store app, see http://developer.android.com/google/play-services/index.html Depending on what kind of functionality you use from the Google Play Service APK you would need to use a 3rd party API or implement it yourself.
Distribution
16,233,531
11
Frozen Distribution In scipy.stats you can create a frozen distribution that allows the parameterization (shape, location & scale) of the distribution to be permanently set for that instance. For example, you can create an gamma distribution (scipy.stats.gamma) with a,loc and scale parameters and freeze them so they do not have to be passed around every time that distribution is needed. import scipy.stats as stats # Parameters for this particular gamma distribution a, loc, scale = 3.14, 5.0, 2.0 # Do something with the general distribution parameterized print 'gamma stats:', stats.gamma(a, loc=loc, scale=scale).stats() # Create frozen distribution rv = stats.gamma(a, loc=loc, scale=scale) # Do something with the specific, already parameterized, distribution print 'rv stats :', rv.stats() gamma stats: (array(11.280000000000001), array(12.56)) rv stats : (array(11.280000000000001), array(12.56)) Accessible rv parameters? Since the parameters will most likely not be passed around as a result of this feature, is there a way to get those values back from only the frozen distribution, rv, later on?
Accessing rv frozen parameters Yes, the parameters used to create a frozen distribution are available within the instance of the distribution. They are stored within the args & kwds attribute. This will be dependent on if the distribution's instance was created with positional arguments or keyword arguments. import scipy.stats as stats # Parameters for this particular alpha distribution a, loc, scale = 3.14, 5.0, 2.0 # Create frozen distribution rv1 = stats.gamma(a, loc, scale) rv2 = stats.gamma(a, loc=loc, scale=scale) # Do something with frozen parameters print 'positional and keyword' print 'frozen args : {}'.format(rv1.args) print 'frozen kwds : {}'.format(rv1.kwds) print print 'positional only' print 'frozen args : {}'.format(rv2.args) print 'frozen kwds : {}'.format(rv2.kwds) positional and keyword frozen args : (3.14, 5.0, 2.0) frozen kwds : {} positional only frozen args : (3.14,) frozen kwds : {'loc': 5.0, 'scale': 2.0} Bonus: Private method that handles both args and kwds There is an private method, .dist._parse_args(), which handles both cases of parameter passing and will return a consistent result. # Get the original parameters regardless of argument type shape1, loc1, scale1 = rv1.dist._parse_args(*rv1.args, **rv1.kwds) shape2, loc2, scale2 = rv2.dist._parse_args(*rv2.args, **rv2.kwds) print 'positional and keyword' print 'frozen parameters: shape={}, loc={}, scale={}'.format(shape1, loc1, scale1) print print 'positional only' print 'frozen parameters: shape={}, loc={}, scale={}'.format(shape2, loc2, scale2) positional and keyword frozen parameters: shape=(3.14,), loc=5.0, scale=2.0 positional only frozen parameters: shape=(3.14,), loc=5.0, scale=2.0 Caveat Granted, using private methods is typically bad practice because technically internal APIs can always change, however, sometimes they provide nice features, would be easy to re-implement should things change and nothing is really private in Python :).
Distribution
37,501,075
11
I've checked the examples on the Boost website, but they are not what I'm looking for. To put it simply, I want to see if a number on a die is favored, using 600 rolls, so the average appearances of every number (1 through 6) should be 100. And I want to use the chi square distribution to check if the die is fair. Help! How would I do this, please?
Suppose e[i] and o[i] are arrays holding the expected and observed count of rolls for each of the 6 possibilities. In your case, e[i] is 100 for each bin, and o[i] is the number of times i was rolled in your 600 trials. You then calculate the chi-squared statistic by summing (e[i]-o[i])^2/e[i] over the 6 bins. Let's say your o[i] array came out with 105, 95, 102, 98, 98, and 102 counts after doing your 600 trials. chi2 = 5^2/100 + 5^2/100 + 2^2/100 + 2^2/100 + 2^2/100 + 2^2/100 = 0.660 You have five degrees of freedom (number of bins minus 1). So you're going to have a declaration like boost::math::chi_squared mydist(5); to create the Boost object representing your chi-square distribution. At this point you would use the cdf accessor function (cumulative distribution function) from the Boost library to look up the p-value corresponding to a chi-squared score of 0.660 with five degrees of freedom. p = boost::math::cdf(mydist,.660); You should get something close to 0.015, which would be interpreted as a (1 - .015) = 98.5% probability of observing a chi-squared score at least as extreme as 0.660, if one assumes the null hypothesis (that the die is fair) holds. So for this set of data, the null hypothesis cannot be rejected with any reasonable confidence level. (Disclaimer: untested code! But if I understand the Boost documentation correctly, this is how it should work.)
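Putting those pieces together, a compact sketch (untested, in the spirit of the answer's own disclaimer) could look like:

#include <boost/math/distributions/chi_squared.hpp>
#include <iostream>
#include <vector>

int main()
{
    std::vector<double> observed = {105, 95, 102, 98, 98, 102};
    const double expected = 100.0;               // 600 rolls spread over 6 faces

    double chi2 = 0.0;
    for (double o : observed)
        chi2 += (o - expected) * (o - expected) / expected;

    boost::math::chi_squared dist(observed.size() - 1);   // 5 degrees of freedom
    double p = 1.0 - boost::math::cdf(dist, chi2);        // P(score at least this extreme)

    std::cout << "chi2 = " << chi2 << ", p = " << p << "\n";
    return 0;
}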
Distribution
2,079,937
10
Ok, so here's my problem. We are looking at purchasing a data set from a company to augment our existing data set. For the purposes of this question, let's say that this data set ranks places with an organic number (meaning that the number assigned to one place has no bearing on the number assigned to another). The technical range is 0 to infinity, but from sample sets that I've seen, it's 0 to 70. Based on the sample, it's most definitely not a uniform distribution (out of 10,000 there are maybe 5 places with a score over 40, 50 with a score over 10, and 1000 with a score over 1). Before we decide to purchase this set, we would like to simulate it so that we can see how useful it may be. So, to simulate it, I've been thinking about generating a random number for each place (about 150,000 random numbers). But, I also want to keep to the spirit of the data, and keep the distribution relatively the same (or at least reasonably close). I've been racking my brain all day trying to think of a way to do it, and have come up empty. One thought I had was to square the random number (between 0 and sqrt(70)). But that would favor both less than 1 and larger numbers. I'm thinking that he real distribution should be hyperbolic in the first quadrant... I'm just blanking on how to turn a linear, even distribution of random numbers into a hyperbolic distribution (If hyperbolic is even what I want in the first place). Any thoughts? So, to sum, here's the distribution I would like (approximately): 40 - 70: 0.02% - 0.05% 10 - 40: 0.5% - 1% 1 - 10: 10% - 20% 0 - 1 : Remainder (78.95% - 89.48%)
Look at distributions used in reliability analysis - they tend to have these long tails. A relatively simple possibility is the Weibull distribution with P(X>x)=exp[-(x/b)^a]. Fitting your values as P(X>1)=0.1 and P(X>10)=0.005, I get a=0.36 and b=0.1. This would imply that P(X>40)*10000=1.6, which is a bit too low, but P(X>70)*10000=0.2, which is reasonable. EDIT Oh, and to generate a Weibull-distributed random variable from a uniform(0,1) value U, just calculate b*[-log(1-u)]^(1/a). This is the inverse function of 1-P(X>x) in case I miscalculated something.
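For completeness, the inverse-CDF recipe from the edit translates directly into code; a small illustrative sketch in Python (the question names no language, and the default parameter values are simply the fit quoted in the answer):

import math
import random

def weibull_sample(a=0.36, b=0.1):
    """Draw one value from the long-tailed Weibull with P(X > x) = exp(-(x/b)**a)."""
    u = random.random()                              # uniform(0, 1)
    return b * (-math.log(1.0 - u)) ** (1.0 / a)

samples = [weibull_sample() for _ in range(150000)]
print(sum(s > 1 for s in samples) / len(samples))    # should come out roughly 0.1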
Distribution
3,109,670
10
We are an iPhone Developer Program member. We've got a DUNS number but not the 500 employees necessary to join the iPhone Developer Enterprise Program. Therefore I can can't see how things exactly operate for the Enterprise level. But we have customers that are big enough to be Enterprise developers and we could distribute our applications to them, and let them build and distribute them on their own. Ideally they could build our app, and distribute it and the associated enterprise distribution provisioning profile via their web site, and users could install both via iTunes. But... do they need to put every potential user's iPhone UUID in the enterprise distribution provisioning profile as we have to do as individual developers when we do ad-hoc distribution? I am thinking that they don't (have to include all UUIDs) but can't really find anything that specifically says this. Does anyone have experience with this and could shed some light on it, even better with pointers to where this is detailed or explained?
Using an iOS Enterprise Program distribution deployment method does NOT require you to enter every device id. All you need is a distribution certificate for signing and a provisioning profile built for it. Note that ANYONE that has the profile can run the app on their device, although you can revoke the profile if necessary. You are also given the standard test and Ad Hoc deployment mechanisms as with the standard Development Program. The Ad Hoc is limited to 100 devices, which I don't understand, but anyway, there it is.
Distribution
3,251,291
10
I am about to upload an app to iTunes Connect. I am not Team Agent, nor does it seem the Team Agent can make me a Team Agent. So he logged onto Member Center and downloaded a Distribution Certificate, which is in my Keychain along with the WWDR Certificate. The bundle identifier is set to se."companyname"."appname". When I set the Code signing identity to Distribution, it says no profiles match. Can only the Team Agent build the final apps for upload? How do I make XCode "use the right set of profiles"? Any idea on how to get past this last hurdle? :) Edit: can the Team Agent log onto Member Center and create a provisioning profile for the app, will that solve everything? Answer: See Paul Peeleen's answer, I decided to add this additional information (too long for comment). Paul, I'm going to mark yours as the correct answer, because it set me on the correct track... certificates are for the keychain (which is usually linked to a computer, or rather, a computer user's login, I guess). A quite separate distribution profile must be created for the app - modifying an existing Development certificate to include the Team Agent only lets him develop. The little 'a-ha' or perhaps 'd'oh' moment was that it has to be created in the Provision section with Distribution tab selected (in the provisioning portal). After that, in the Target Info/Build tab you just use the default automatic profile selector (dev/distro) and it's found automatically. I also temporarily tried adding the 'gibberish' (f.ex. JX567ERNB.) before the se.companyname.appname for the Bundle Identifier, but the automatic profile selector told me that it shouldn't be there, I removed it and it worked! The profiles are what enable the projects to use certificates in the Keychain, I guess.
"iPhone distribution no profiles match" is one of the most annoying issue that I have ever had with app development. This is how I sorted it out: In Developer under iOS Provisioning Portal I needed to generate 4 certificates and download the WWDR intermediate certificate to be able to submit my app to the App Store: Under Developer Certificate section (link) generate a Developer Certificate. Also Make sure that you have the WWDR intermediate certificate installed, if in doubt download it from there. Under Developer Certificate section (link) generate a Distribution Certificate (This is not that will show up in Xcode!) Under Provisioning section (link) generate a Development Provisioning profile certificate Under Provisioning section (link) generate a Distribution Provisioning profile. THIS WILL SHOW UP IN XCODE AS A DISTRIBUTION CERTIFICATE! After that I was able to select the iPhone distribution profile generated at 4. Also make sure that your target settings are correct as they overwrite the project settings. Your active provisioning profiles are listed under "Xcode/Organizer/Library/Provisioning Profiles" I hope it helps UPDATE: Some distribution provisioning profiles often just "disappear" from my list. So I have to download and install (just double click) them again from https://developer.apple.com/ios/manage/provisioningprofiles/viewDistributionProfiles.action not a big deal, but annoying.
Distribution
3,608,851
10
I configured a distribution in SQL Server 2008 using both the wizard and T-SQL, but when I later want to remove it using the wizard (right-clicking on Replication and choosing 'Disable Publishing and Distribution...') or by executing the following command, with and without its parameters: exec sp_dropdistributor @no_checks = 1 -- no new results with @ignore_distributor = 1 this error is presented: Msq 21122, Level 16, State 1, Procedure sp_dropdistributiondb Line 124 Cannot drop the distribution database 'lobloblob' because it is currently in use. I didn't publish anything and didn't configure any subscription, but I still get this error. What should I do?
Try this: SELECT spid FROM sys.sysprocesses WHERE dbid = db_id('distribution') Kill the spid and try again. Now it should work.
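The kill step the answer refers to is just the T-SQL KILL command, run with the spid returned by that query (the 53 below is a made-up example value):

KILL 53;  -- replace 53 with the spid returned by the query above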
Distribution
10,193,404
10
First, is this the correct C++ representation of the pdf gaussian function ? float pdf_gaussian = ( 1 / ( s * sqrt(2*M_PI) ) ) * exp( -0.5 * pow( (x-m)/s, 2.0 ) ); Second, does it make sense of we do something like this ? if(pdf_gaussian < uniform_random()) do something else do other thing EDIT: An example of what exactly are you trying to achieve: Say I have a data called Y1. Then a new data called Xi arrive. I want to see if I should associated Xi to Y1 or if I should keep Xi as a new data data that will be called Y2. This is based on the distance between the new data Xi and the existing data Y1. If Xi is "far" from Y1 then Xi will not be associated to Y1, otherwise if it is "not far", it will be associated to Y1. Now I want to model this "far" or "not far" using a gaussian probability based on the mean and stdeviation of distances between Y and the data that where already associated to Y in the past.
Technically, float pdf_gaussian = ( 1 / ( s * sqrt(2*M_PI) ) ) * exp( -0.5 * pow( (x-m)/s, 2.0 ) ); is not incorrect, but can be improved. First, 1 / sqrt(2 Pi) can be precomputed, and using pow with integers is not a good idea: it may use exp(2 * log x) or a routine specialized for floating point exponents instead of simply x * x. Example better code: float normal_pdf(float x, float m, float s) { static const float inv_sqrt_2pi = 0.3989422804014327; float a = (x - m) / s; return inv_sqrt_2pi / s * std::exp(-0.5f * a * a); } You may want to make this a template instead of using float: template <typename T> T normal_pdf(T x, T m, T s) { static const T inv_sqrt_2pi = 0.3989422804014327; T a = (x - m) / s; return inv_sqrt_2pi / s * std::exp(-T(0.5) * a * a); } this allows you to use normal_pdf on double arguments also (it is not that much more generic though). There are caveats with the last code, namely that you have to beware not using it with integers (there are workarounds, but this makes the routine more verbose).
Distribution
10,847,007
10
I have a simple table BIRDCOUNT below, showing how many birds were counted on any given day: +----------+ | NUMBIRDS | +----------+ | 123 | | 573 | | 3 | | 234 | +----------+ I would like to create a frequency distribution graph, showing how many times a number of birds were counted. So I need MySQL to create something like: +------------+-------------+ | BIRD_COUNT | TIMES_SEEN | +------------+-------------+ | 0-99 | 17 | | 100-299 | 23 | | 200-399 | 12 | | 300-499 | 122 | | 400-599 | 3 | +------------+-------------+ If the bird count ranges were fixed this would be easy. However, I never know the min/max of how many birds were seen. So I need a select statement that: Creates an output similar to above, always creating 10 ranges of counts. (more advanced) Creates output similar to above, always creating N ranges of counts. I don't know if #2 is possible in a single select but can anyone solve #1?
SELECT FLOOR( birds.bird_count / stat.diff ) * stat.diff as range_start, (FLOOR( birds.bird_count / stat.diff ) +1) * stat.diff -1 as range_end, count( birds.bird_count ) as times_seen FROM birds_table birds, (SELECT ROUND((MAX( bird_count ) - MIN( bird_count ))/10) AS diff FROM birds_table ) AS stat GROUP BY FLOOR( birds.bird_count / stat.diff ) Here You have answer for both of Your questions ;] with difference that start and end of range are in separate columns instead of concatenated but if You need it in one column I guess You can do it from here. To change number of ranges just edit number 10 You can find in sub-query.
Distribution
15,055,540
10
I am interested in using python to compute a confidence interval from a student t. I am using the StudentTCI() function in Mathematica and now need to code the same function in python http://reference.wolfram.com/mathematica/HypothesisTesting/ref/StudentTCI.html I am not quite sure how to build this function myself, but before I embark on that, is this function in python somewhere? Like numpy? (I haven't used numpy and my advisor advised not using numpy if possible). What would be the easiest way to solve this problem? Can I copy the source code from the StudentTCI() in numpy (if it exists) into my code as a function definition? edit: I'm going to need to build the Student TCI using python code (if possible). Installing scipy has turned into a dead end. I am having the same problem everyone else is having, and there is no way I can require Scipy for the code I distribute if it takes this long to set up. Anyone know how to look at the source code for the algorithm in the scipy version? I'm thinking I'll refactor it into a python definition.
I guess you could use scipy.stats.t and its interval method: In [1]: from scipy.stats import t In [2]: t.interval(0.95, 10, loc=1, scale=2) # 95% confidence interval Out[2]: (-3.4562777039298762, 5.4562777039298762) In [3]: t.interval(0.99, 10, loc=1, scale=2) # 99% confidence interval Out[3]: (-5.338545334351676, 7.338545334351676) Sure, you can make your own function if you like. Let's make it look like in Mathematica: from scipy.stats import t def StudentTCI(loc, scale, df, alpha=0.95): return t.interval(alpha, df, loc, scale) print StudentTCI(1, 2, 10) print StudentTCI(1, 2, 10, 0.99) Result: (-3.4562777039298762, 5.4562777039298762) (-5.338545334351676, 7.338545334351676)
Distribution
17,203,403
10
I have frequency values changing with the time (x axis units), as presented on the picture below. After some normalization these values may be seen as data points of a density function for some distribution. Q: Assuming that these frequency points are from Weibull distribution T, how can I fit best Weibull density function to the points so as to infer the distribution T parameters from it? sample <- c(7787,3056,2359,1759,1819,1189,1077,1080,985,622,648,518, 611,1037,727,489,432,371,1125,69,595,624) plot(1:length(sample), sample, type = "l") points(1:length(sample), sample) Update. To prevent from being misunderstood, I would like to add little more explanation. By saying I have frequency values changing with the time (x axis units) I mean I have data which says that I have: 7787 realizations of value 1 3056 realizations of value 2 2359 realizations of value 3 ... etc. Some way towards my goal (incorrect one, as I think) would be to create a set of these realizations: # Loop to simulate values set.values <- c() for(i in 1:length(sample)){ set.values <<- c(set.values, rep(i, times = sample[i])) } hist(set.values) lines(1:length(sample), sample) points(1:length(sample), sample) and use fitdistr on the set.values: f2 <- fitdistr(set.values, 'weibull') f2 Why I think it is incorrect way and why I am looking for a better solution in R? in the distribution fitting approach presented above it is assumed that set.values is a complete set of my realisations from the distribution T in my original question I know the points from the first part of the density curve - I do not know its tail and I want to estimate the tail (and the whole density function)
Here is a better attempt, like before it uses optim to find the best value constrained to a set of values in a box (defined by the lower and upper vectors in the optim call). Notice it scales x and y as part of the optimization in addition to the Weibull distribution shape parameter, so we have 3 parameters to optimize over. Unfortunately when using all the points it pretty much always finds something on the edges of the constraining box which indicates to me that maybe Weibull is maybe not a good fit for all of the data. The problem is the two points - they ares just too large. You see the attempted fit to all data in the first plot. If I drop those first two points and just fit the rest, we get a much better fit. You see this in the second plot. I think this is a good fit, it is in any case a local minimum in the interior of the constraining box. library(optimx) sample <- c(60953,7787,3056,2359,1759,1819,1189,1077,1080,985,622,648,518, 611,1037,727,489,432,371,1125,69,595,624) t.sample <- 0:22 s.fit <- sample[3:23] t.fit <- t.sample[3:23] wx <- function(param) { res <- param[2]*dweibull(t.fit*param[3],shape=param[1]) return(res) } minwx <- function(param){ v <- s.fit-wx(param) sqrt(sum(v*v)) } p0 <- c(1,200,1/20) paramopt <- optim(p0,minwx,gr=NULL,lower=c(0.1,100,0.01),upper=c(1.1,5000,1)) popt <- paramopt$par popt rms <- paramopt$value tit <- sprintf("Weibull - Shape:%.3f xscale:%.1f yscale:%.5f rms:%.1f",popt[1],popt[2],popt[3],rms) plot(t.sample[2:23], sample[2:23], type = "p",col="darkred") lines(t.fit, wx(popt),col="blue") title(main=tit)
Distribution
29,054,270
10
I would like to make a word frequency distribution, with the words on the x-axis and the frequency count on the y-axis. I have the following list: example_list = [('dhr', 17838), ('mw', 13675), ('wel', 5499), ('goed', 5080), ('contact', 4506), ('medicatie', 3797), ('uur', 3792), ('gaan', 3473), ('kwam', 3463), ('kamer', 3447), ('mee', 3278), ('gesprek', 2978)] I tried to first convert it into a pandas DataFrame and then use the pd.hist() as in the example below, but I just can't figure it out and think it is actually straight forward but probably I'm missing something. import numpy as np import matplotlib.pyplot as plt word = [] frequency = [] for i in range(len(example_list)): word.append(example_list[i][0]) frequency.append(example_list[i][1]) plt.bar(word, frequency, color='r') plt.show()
Using pandas:
import pandas as pd
import matplotlib.pyplot as plt

example_list = [('dhr', 17838), ('mw', 13675), ('wel', 5499), ('goed', 5080),
                ('contact', 4506), ('medicatie', 3797), ('uur', 3792), ('gaan', 3473),
                ('kwam', 3463), ('kamer', 3447), ('mee', 3278), ('gesprek', 2978)]

df = pd.DataFrame(example_list, columns=['word', 'frequency'])
df.plot(kind='bar', x='word')
plt.show()  # needed outside of Jupyter so the bar chart actually renders
Distribution
45,080,698
10
Suppose I have the variable x that was generated using the following approach: x <- rgamma(100,2,11) + rnorm(100,0,.01) #gamma distr + some gaussian noise head(x,20) [1] 0.35135058 0.12784251 0.23770365 0.13095612 0.18796901 0.18251968 [7] 0.20506117 0.25298286 0.11888596 0.07953969 0.09763770 0.28698417 [13] 0.07647302 0.17489578 0.02594517 0.14016041 0.04102864 0.13677059 [19] 0.18963015 0.23626828 How could I fit a gamma distribution to it?
A good alternative is the fitdistrplus package by ML Delignette-Muller et al. For instance, generating data using your approach: set.seed(2017) x <- rgamma(100,2,11) + rnorm(100,0,.01) library(fitdistrplus) fit.gamma <- fitdist(x, distr = "gamma", method = "mle") summary(fit.gamma) Fitting of the distribution ' gamma ' by maximum likelihood Parameters : estimate Std. Error shape 2.185415 0.2885935 rate 12.850432 1.9066390 Loglikelihood: 91.41958 AIC: -178.8392 BIC: -173.6288 Correlation matrix: shape rate shape 1.0000000 0.8900242 rate 0.8900242 1.0000000 plot(fit.gamma)
Distribution
45,536,234
10
Below is the describe output for both my clusterissuer and certificate reource. I am brand new to cert-manager so not 100% sure this is set up properly - we need to use http01 validation however we are not using an nginx controller. Right now we only have 2 microservices so the public-facing IP address simply belongs to a k8s service (type loadbalancer) which routes traffic to a pod where an Extensible Service Proxy container sits in front of the container running the application code. Using this set up I haven't been able to get anything beyond the errors below, however as I mentioned I'm brand new to cert-manager & ESP so this could be configured incorrectly... Name: clusterissuer-dev Namespace: Labels: <none> Annotations: kubectl.kubernetes.io/last-applied-configuration: API Version: cert-manager.io/v1beta1 Kind: ClusterIssuer Metadata: Creation Timestamp: 2020-08-07T18:46:29Z Generation: 1 Resource Version: 4550439 Self Link: /apis/cert-manager.io/v1beta1/clusterissuers/clusterissuer-dev UID: 65933d87-1893-49af-b90e-172919a18534 Spec: Acme: Email: email@test.com Private Key Secret Ref: Name: letsencrypt-dev Server: https://acme-staging-v02.api.letsencrypt.org/directory Solvers: http01: Ingress: Class: nginx Status: Acme: Last Registered Email: email@test.com Uri: https://acme-staging-v02.api.letsencrypt.org/acme/acct/15057658 Conditions: Last Transition Time: 2020-08-07T18:46:30Z Message: The ACME account was registered with the ACME server Reason: ACMEAccountRegistered Status: True Type: Ready Events: <none> Name: test-cert-default-ns Namespace: default Labels: <none> Annotations: kubectl.kubernetes.io/last-applied-configuration: API Version: cert-manager.io/v1beta1 Kind: Certificate Metadata: Creation Timestamp: 2020-08-10T15:05:31Z Generation: 2 Resource Version: 5961064 Self Link: /apis/cert-manager.io/v1beta1/namespaces/default/certificates/test-cert-default-ns UID: 259f62e0-b272-47d6-b70e-dbcb7b4ed21b Spec: Dns Names: dev.test.com Issuer Ref: Name: clusterissuer-dev Secret Name: clusterissuer-dev-tls Status: Conditions: Last Transition Time: 2020-08-10T15:05:31Z Message: Issuing certificate as Secret does not exist Reason: DoesNotExist Status: False Type: Ready Last Transition Time: 2020-08-10T15:05:31Z Message: Issuing certificate as Secret does not exist Reason: DoesNotExist Status: True Type: Issuing Next Private Key Secret Name: test-cert-default-ns-rrl7j Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Requested 2m51s cert-manager Created new CertificateRequest resource "test-cert-default-ns-c4wxd" One last item - if I run the command kubectl get certificate -o wide I get the following output. NAME READY SECRET ISSUER STATUS AGE test-cert-default-ns False clusterissuer-dev-tls clusterissuer-dev Issuing certificate as Secret does not exist 2d23h
I had the same issue and I followed the advice given in the comments by @Popopame, suggesting to check out cert-manager's troubleshooting guide to find out how to troubleshoot cert-manager, or its troubleshooting guide for ACME issues to find out which part of the ACME process breaks the setup.
It seems that often it is the acme-challenge step, where Let's Encrypt verifies domain ownership by requesting that a certain code be served on port 80 at a certain path. For example: http://example.com/.well-known/acme-challenge/M8iYs4tG6gM-B8NHuraXRL31oRtcE4MtUxRFuH8qJmY. Notice the http://, which shows that Let's Encrypt will try to validate domain ownership on port 80 of your desired domain.
So one of the common errors is that cert-manager could not put the correct challenge at the correct path behind port 80 - for example due to a firewall blocking port 80 on a bare metal server, or a load balancer that only forwards port 443 to the Kubernetes cluster and redirects port 80 straight to 443. Also be aware that cert-manager tries to validate the ACME challenge itself as well, so you should configure the firewalls to allow requests coming from your own servers too.
If you have trouble getting your certificate into a different namespace, this would be a good point to start with.
In your specific case I would guess at a problem with the ACME challenge, as the CSR (Certificate Signing Request) was created - as indicated in the bottom-most describe line - but nothing else happened.
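As a concrete starting point, a hedged sketch of the commands I would run to see where the ACME flow stops (the CertificateRequest name comes from your events output; the challenge token in the curl URL is only a placeholder, and the Order/Challenge resources only exist once cert-manager has created them):
kubectl describe certificaterequest test-cert-default-ns-c4wxd -n default
kubectl get orders,challenges -n default
kubectl describe challenge -n default
curl -v http://dev.test.com/.well-known/acme-challenge/placeholder-token
If the curl call times out rather than returning an HTTP response, Let's Encrypt cannot reach port 80 on that domain either, which by itself would keep the order pending.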
cert-manager
63,346,728
54
I'm running into an issue handling tls certificates with cert-manager, I'm following the documentation and added some extras to work with Traefik as an ingress. Currently, I have this YAML files: cluster-issuer.yaml apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: letsencrypt-staging namespace: secure-alexguedescom spec: acme: email: user@gmail.com server: https://acme-staging-v02.api.letsencrypt.org/directory privateKeySecretRef: # Secret resource used to store the account's private key. name: letsencrypt-staging # Add a single challenge solver, HTTP01 using nginx solvers: - selector: {} http01: ingress: class: traefik-cert-manager traefik-ingress.yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: # add an annotation indicating the issuer to use. cert-manager.io/cluster-issuer: letsencrypt-staging name: secure-alexguedescom-ingress-http namespace: secure-alexguedescom spec: rules: - host: secure.alexguedes.com http: paths: - backend: serviceName: secure-alexguedescom-nginx servicePort: 80 path: / tls: - hosts: - secure.alexguedes.com secretName: secure-alexguedescom-cert cert-staging.yaml apiVersion: cert-manager.io/v1alpha2 kind: Certificate metadata: name: secure-alexguedescom-cert namespace: secure-alexguedescom spec: commonName: secure.alexguedes.com secretName: letsencrypt-staging dnsNames: - secure.alexguedes.com issuerRef: name: letsencrypt-staging kind: ClusterIssuer Inspecting the certs I have this error message: Message: Issuing certificate as Secret does not contain a certificate Reason: MissingData Also inspecting the certificaterequest I have this log messages: Status: Conditions: Last Transition Time: 2020-08-16T00:32:01Z Message: Waiting on certificate issuance from order secure-alexguedescom/secure-alexguedescom-cert-q8w5p-1982372682: "pending" Reason: Pending Status: False Type: Ready Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal OrderCreated 11m cert-manager Created Order resource secure-alexguedescom/secure-alexguedescom-cert-q8w5p-1982372682 Normal OrderPending 11m cert-manager Waiting on certificate issuance from order secure-alexguedescom/secure-alexguedescom-cert-q8w5p-1982372682: "" I'm not sure which piece is wrong, using Helm v2 with Tiller and k8s v1.7 Any ideas? Thanks in advance
The typical problem with Let's Encrypt certs is Let's Encrypt itself not being able to validate who you are and that you own the domain - in this case, alexguedes.com. With cert-manager you can do DNS validation or HTTP validation; based on the posted ClusterIssuer you are doing HTTP validation. So you need to make sure that secure.alexguedes.com resolves to a globally reachable IP address and that Traefik is listening on that IP - in particular on port 80, which is where Let's Encrypt performs the HTTP-01 challenge (port 443 only matters afterwards, for serving the issued certificate).
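A quick way to check both assumptions from outside the cluster - a hedged sketch, where the challenge path is a dummy value rather than a real token:
dig +short secure.alexguedes.com
curl -v http://secure.alexguedes.com/.well-known/acme-challenge/dummy-token
kubectl get challenges -n secure-alexguedescom
The dig output should be the public IP in front of Traefik, the curl should at least receive an HTTP response instead of timing out, and the Challenge resource (if one exists) usually carries a reason explaining why the order is still pending.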
cert-manager
63,432,101
18
I can't seem to get cert-manager working: $ kubectl get certificates -o wide NAME READY SECRET ISSUER STATUS AGE example-ingress False example-ingress letsencrypt-prod Waiting for CertificateRequest "example-ingress-2556707613" to complete 6m23s $ kubectl get CertificateRequest -o wide NAME READY ISSUER STATUS AGE example-ingress-2556707613 False letsencrypt-prod Referenced "Issuer" not found: issuer.cert-manager.io "letsencrypt-prod" not found 7m7s and in the logs i see: I1025 06:22:00.117292 1 sync.go:163] cert-manager/controller/ingress-shim "level"=0 "msg"="certificate already exists for ingress resource, ensuring it is up to date" "related_resource_kind"="Certificate" "related_resource_name"="example-ingress" "related_resource_namespace"="default" "resource_kind"="Ingress" "resource_name"="example-ingress" "resource_namespace"="default" I1025 06:22:00.117341 1 sync.go:176] cert-manager/controller/ingress-shim "level"=0 "msg"="certificate resource is already up to date for ingress" "related_resource_kind"="Certificate" "related_resource_name"="example-ingress" "related_resource_namespace"="default" "resource_kind"="Ingress" "resource_name"="example-ingress" "resource_namespace"="default" I1025 06:22:00.117382 1 controller.go:135] cert-manager/controller/ingress-shim "level"=0 "msg"="finished processing work item" "key"="default/example-ingress" I1025 06:22:00.118026 1 sync.go:361] cert-manager/controller/certificates "level"=0 "msg"="no existing CertificateRequest resource exists, creating new request..." "related_resource_kind"="Secret" "related_resource_name"="example-ingress" "related_resource_namespace"="default" "resource_kind"="Certificate" "resource_name"="example-ingress" "resource_namespace"="default" I1025 06:22:00.147147 1 controller.go:129] cert-manager/controller/certificaterequests-issuer-venafi "level"=0 "msg"="syncing item" "key"="default/example-ingress-2556707613" I1025 06:22:00.147267 1 sync.go:373] cert-manager/controller/certificates "level"=0 "msg"="created certificate request" "related_resource_kind"="Secret" "related_resource_name"="example-ingress" "related_resource_namespace"="default" "resource_kind"="Certificate" "resource_name"="example-ingress" "resource_namespace"="default" "request_name"="example-ingress-2556707613" I1025 06:22:00.147284 1 controller.go:129] cert-manager/controller/certificaterequests-issuer-acme "level"=0 "msg"="syncing item" "key"="default/example-ingress-2556707613" I1025 06:22:00.147273 1 conditions.go:200] Setting lastTransitionTime for CertificateRequest "example-ingress-2556707613" condition "Ready" to 2019-10-25 06:22:00.147254385 +0000 UTC m=+603.871617341 I1025 06:22:00.147392 1 conditions.go:200] Setting lastTransitionTime for CertificateRequest "example-ingress-2556707613" condition "Ready" to 2019-10-25 06:22:00.147380513 +0000 UTC m=+603.871743521 E1025 06:22:00.147560 1 pki.go:128] cert-manager/controller/certificates "msg"="error decoding x509 certificate" "error"="error decoding cert PEM block" "related_resource_kind"="Secret" "related_resource_name"="example-ingress" "related_resource_namespace"="default" "resource_kind"="Certificate" "resource_name"="example-ingress" "resource_namespace"="default" "secret_key"="tls.crt" I1025 06:22:00.147620 1 conditions.go:155] Setting lastTransitionTime for Certificate "example-ingress" condition "Ready" to 2019-10-25 06:22:00.147613112 +0000 UTC m=+603.871976083 I1025 06:22:00.147731 1 controller.go:129] cert-manager/controller/certificaterequests-issuer-ca "level"=0 "msg"="syncing 
item" "key"="default/example-ingress-2556707613" I1025 06:22:00.147765 1 conditions.go:200] Setting lastTransitionTime for CertificateRequest "example-ingress-2556707613" condition "Ready" to 2019-10-25 06:22:00.14776244 +0000 UTC m=+603.872125380 I1025 06:22:00.147912 1 controller.go:129] cert-manager/controller/certificaterequests-issuer-selfsigned "level"=0 "msg"="syncing item" "key"="default/example-ingress-2556707613" I1025 06:22:00.147942 1 conditions.go:200] Setting lastTransitionTime for CertificateRequest "example-ingress-2556707613" condition "Ready" to 2019-10-25 06:22:00.147938966 +0000 UTC m=+603.872301909 I1025 06:22:00.147968 1 controller.go:129] cert-manager/controller/certificaterequests-issuer-vault "level"=0 "msg"="syncing item" "key"="default/example-ingress-2556707613" I1025 06:22:00.148023 1 conditions.go:200] Setting lastTransitionTime for CertificateRequest "example-ingress-2556707613" condition "Ready" to 2019-10-25 06:22:00.148017945 +0000 UTC m=+603.872380906 i deployed cert-manager via the manifest: https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml $ kubectl get clusterissuer letsencrypt-prod -o yaml apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"cert-manager.io/v1alpha2","kind":"ClusterIssuer","metadata":{"annotations":{},"name":"letsencrypt-prod"},"spec":{"acme":{"email":"me@me.com","privateKeySecretRef":{"name":"letsencrypt-prod"},"server":"https://acme-staging-v02.api.letsencrypt.org/directory","solvers":[{"http01":{"ingress":{"class":"nginx"}},"selector":{}}]}}} creationTimestamp: "2019-10-25T06:27:06Z" generation: 1 name: letsencrypt-prod resourceVersion: "1759784" selfLink: /apis/cert-manager.io/v1alpha2/clusterissuers/letsencrypt-prod uid: 05831417-b359-42de-8298-60da553575f2 spec: acme: email: me@me.com privateKeySecretRef: name: letsencrypt-prod server: https://acme-staging-v02.api.letsencrypt.org/directory solvers: - http01: ingress: class: nginx selector: {} status: acme: lastRegisteredEmail: me@me.com uri: https://acme-staging-v02.api.letsencrypt.org/acme/acct/11410425 conditions: - lastTransitionTime: "2019-10-25T06:27:07Z" message: The ACME account was registered with the ACME server reason: ACMEAccountRegistered status: "True" type: Ready and my ingress is: $ kubectl get ingress example-ingress -o yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: cert-manager.io/issuer: letsencrypt-prod kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"cert-manager.io/issuer":"letsencrypt-prod","kubernetes.io/ingress.class":"nginx","kubernetes.io/tls-acme":"true"},"name":"example-ingress","namespace":"default"},"spec":{"rules":[{"host":"example-ingress.example.com","http":{"paths":[{"backend":{"serviceName":"apple-service","servicePort":5678},"path":"/apple"},{"backend":{"serviceName":"banana-service","servicePort":5678},"path":"/banana"}]}}],"tls":[{"hosts":["example-ingress.example.com"],"secretName":"example-ingress"}]}} kubernetes.io/ingress.class: nginx kubernetes.io/tls-acme: "true" creationTimestamp: "2019-10-25T06:22:00Z" generation: 1 name: example-ingress namespace: default resourceVersion: "1758822" selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/example-ingress uid: 921b2e91-9101-4c3c-a0d8-3f871dafdd30 spec: rules: - host: example-ingress.example.com http: paths: - backend: serviceName: apple-service 
servicePort: 5678 path: /apple - backend: serviceName: banana-service servicePort: 5678 path: /banana tls: - hosts: - example-ingress.example.com secretName: example-ingress status: loadBalancer: ingress: - ip: x.y.z.a any idea whats wrong? cheers,
Your ingress is referring to an issuer, but the issuer is a ClusterIssuer. Could that be the reason? I have a similar setup with Issuer instead of a ClusterIssuer and it is working.
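If you want to keep the ClusterIssuer, the fix should be as small as making the annotation match the kind - a sketch using the names from your manifests, where only the annotation changes:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # was cert-manager.io/issuer
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
Alternatively, create an Issuer named letsencrypt-prod in the default namespace and keep the existing cert-manager.io/issuer annotation.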
cert-manager
58,553,510
14
I am using cert-manager 0.5.2 to manage Let's Encrypt certificates on our Kubernetes cluster. I was using the Let's Encrypt staging environment, but have now moved to use their production certificates. The problem is that my applications aren't updating to the new, valid certificates. I must have screwed something up while updating the issuer, certificate, and ingress resources, but I can't see what. I have also reinstalled the NGINX ingress controller and cert-manager, and recreated my applications, but I am still getting old certificates. What can I do next? Describing the letsencrypt cluster issuer: Name: letsencrypt Namespace: Labels: <none> Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"ClusterIssuer","metadata":{"annotations":{},"name":"letsencrypt","namespace":""},"spec":{"acme":{"e... API Version: certmanager.k8s.io/v1alpha1 Kind: ClusterIssuer Metadata: Cluster Name: Creation Timestamp: 2019-01-04T09:27:49Z Generation: 0 Resource Version: 130088 Self Link: /apis/certmanager.k8s.io/v1alpha1/letsencrypt UID: 00f0ea0f-1003-11e9-997f-ssh3b4bcc625 Spec: Acme: Email: administrator@domain.com Http 01: Private Key Secret Ref: Key: Name: letsencrypt Server: https://acme-v02.api.letsencrypt.org/directory Status: Acme: Uri: https://acme-v02.api.letsencrypt.org/acme/acct/48899673 Conditions: Last Transition Time: 2019-01-04T09:28:33Z Message: The ACME account was registered with the ACME server Reason: ACMEAccountRegistered Status: True Type: Ready Events: <none> Describing the tls-secret certificate: Name: tls-secret Namespace: default Labels: <none> Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"Certificate","metadata":{"annotations":{},"name":"tls-secret","namespace":"default"},"spec":{"acme"... API Version: certmanager.k8s.io/v1alpha1 Kind: Certificate Metadata: Cluster Name: Creation Timestamp: 2019-01-04T09:28:13Z Resource Version: 130060 Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/tls-secret UID: 0f38w7y4-1003-11e9-997f-e6e9b4bcc625 Spec: Acme: Config: Domains: mydomain.com Http 01: Ingress Class: nginx Dns Names: mydomain.com Issuer Ref: Kind: ClusterIssuer Name: letsencrypt Secret Name: tls-secret Events: <none> Describing the aks-ingress ingress controller: Name: aks-ingress Namespace: default Address: Default backend: default-http-backend:80 (<none>) TLS: tls-secret terminates mydomain.com Rules: Host Path Backends ---- ---- -------- mydomain.com / myapplication:80 (<none>) Annotations: kubectl.kubernetes.io/last-applied-configuration: ... kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: / certmanager.k8s.io/cluster-issuer: letsencrypt Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 21m nginx-ingress-controller Ingress default/aks-ingress Normal CREATE 21m nginx-ingress-controller Ingress default/aks-ingress Logs for cert-manager after restarting the server: I0104 09:28:38.378953 1 setup.go:144] Skipping re-verifying ACME account as cached registration details look sufficient. I0104 09:28:38.379058 1 controller.go:154] clusterissuers controller: Finished processing work item "letsencrypt" I0104 09:28:38.378953 1 setup.go:144] Skipping re-verifying ACME account as cached registration details look sufficient. 
I0104 09:28:38.379058 1 controller.go:154] clusterissuers controller: Finished processing work item "letsencrypt" I0104 09:28:38.378455 1 controller.go:140] clusterissuers controller: syncing item 'letsencrypt' I0104 09:28:38.378455 1 controller.go:140] clusterissuers controller: syncing item 'letsencrypt' I0104 09:28:33.440466 1 controller.go:185] certificates controller: Finished processing work item "default/tls-secret" I0104 09:28:33.440417 1 sync.go:206] Certificate default/tls-secret scheduled for renewal in 1423 hours I0104 09:28:33.440466 1 controller.go:185] certificates controller: Finished processing work item "default/tls-secret" I0104 09:28:33.440417 1 sync.go:206] Certificate default/tls-secret scheduled for renewal in 1423 hours I0104 09:28:33.439824 1 controller.go:171] certificates controller: syncing item 'default/tls-secret' I0104 09:28:33.439824 1 controller.go:171] certificates controller: syncing item 'default/tls-secret' I0104 09:28:33.377556 1 controller.go:154] clusterissuers controller: Finished processing work item "letsencrypt" I0104 09:28:33.377556 1 controller.go:154] clusterissuers controller: Finished processing work item "letsencrypt" I0104 09:28:33.359246 1 helpers.go:147] Setting lastTransitionTime for ClusterIssuer "letsencrypt" condition "Ready" to 2019-01-04 09:28:33.359214315 +0000 UTC m=+79.014291591 I0104 09:28:33.359178 1 setup.go:181] letsencrypt: verified existing registration with ACME server I0104 09:28:33.359178 1 setup.go:181] letsencrypt: verified existing registration with ACME server I0104 09:28:33.359246 1 helpers.go:147] Setting lastTransitionTime for ClusterIssuer "letsencrypt" condition "Ready" to 2019-01-04 09:28:33.359214315 +0000 UTC m=+79.014291591 I0104 09:28:32.427832 1 controller.go:140] clusterissuers controller: syncing item 'letsencrypt' I0104 09:28:32.427978 1 controller.go:182] ingress-shim controller: Finished processing work item "default/aks-ingress" I0104 09:28:32.427832 1 controller.go:140] clusterissuers controller: syncing item 'letsencrypt' I0104 09:28:32.427832 1 controller.go:168] ingress-shim controller: syncing item 'default/aks-ingress' I0104 09:28:32.428133 1 logger.go:88] Calling GetAccount I0104 09:28:32.427936 1 sync.go:140] Certificate "tls-secret" for ingress "aks-ingress" already exists I0104 09:28:32.427965 1 sync.go:143] Certificate "tls-secret" for ingress "aks-ingress" is up to date I0104 09:28:32.427978 1 controller.go:182] ingress-shim controller: Finished processing work item "default/aks-ingress" I0104 09:28:32.428133 1 logger.go:88] Calling GetAccount I0104 09:28:32.427936 1 sync.go:140] Certificate "tls-secret" for ingress "aks-ingress" already exists I0104 09:28:32.427832 1 controller.go:168] ingress-shim controller: syncing item 'default/aks-ingress' I0104 09:28:32.427965 1 sync.go:143] Certificate "tls-secret" for ingress "aks-ingress" is up to date I0104 09:28:29.439299 1 controller.go:171] certificates controller: syncing item 'default/tls-secret' E0104 09:28:29.439586 1 controller.go:180] certificates controller: Re-queuing item "default/tls-secret" due to error processing: Issuer letsencrypt not ready I0104 09:28:29.439404 1 sync.go:120] Issuer letsencrypt not ready E0104 09:28:29.439586 1 controller.go:180] certificates controller: Re-queuing item "default/tls-secret" due to error processing: Issuer letsencrypt not ready I0104 09:28:29.439299 1 controller.go:171] certificates controller: syncing item 'default/tls-secret' I0104 09:28:29.439404 1 sync.go:120] Issuer letsencrypt not ready 
I0104 09:28:27.404656 1 controller.go:68] Starting certificates controller I0104 09:28:27.404606 1 controller.go:68] Starting issuers controller I0104 09:28:27.404325 1 controller.go:68] Starting ingress-shim controller I0104 09:28:27.404606 1 controller.go:68] Starting issuers controller I0104 09:28:27.404325 1 controller.go:68] Starting ingress-shim controller I0104 09:28:27.404269 1 controller.go:68] Starting clusterissuers controller I0104 09:28:27.404656 1 controller.go:68] Starting certificates controller I0104 09:28:27.404269 1 controller.go:68] Starting clusterissuers controller I0104 09:28:27.402806 1 leaderelection.go:184] successfully acquired lease kube-system/cert-manager-controller I0104 09:28:27.402806 1 leaderelection.go:184] successfully acquired lease kube-system/cert-manager-controller I0104 09:27:14.359634 1 server.go:84] Listening on http://0.0.0.0:9402 I0104 09:27:14.357610 1 controller.go:126] Using the following nameservers for DNS01 checks: [10.0.0.10:53] I0104 09:27:14.357610 1 controller.go:126] Using the following nameservers for DNS01 checks: [10.0.0.10:53] I0104 09:27:14.358408 1 leaderelection.go:175] attempting to acquire leader lease kube-system/cert-manager-controller... I0104 09:27:14.359634 1 server.go:84] Listening on http://0.0.0.0:9402 I0104 09:27:14.356692 1 start.go:79] starting cert-manager v0.5.2 (revision 9e8c3ad899c5aafaa360ca947eac7f5ba6301035) I0104 09:27:14.358408 1 leaderelection.go:175] attempting to acquire leader lease kube-system/cert-manager-controller... I0104 09:27:14.356692 1 start.go:79] starting cert-manager v0.5.2 (revision 9e8c3ad899c5aafaa360ca947eac7f5ba6301035) Certificate resource: apiVersion: certmanager.k8s.io/v1alpha1 kind: Certificate metadata: name: tls-secret spec: secretName: tls-secret dnsNames: - mydomain.com acme: config: - http01: ingressClass: nginx domains: - mydomain.com issuerRef: name: letsencrypt kind: ClusterIssuer
In this case the problem went away after recreating the secret and the cert-manager Certificate resource. Generally, what you want to check is: the annotations on your ingress resource (certmanager.k8s.io/cluster-issuer: letsencrypt), the cert-manager Certificate resource, and the TLS certificate secret both in Kubernetes and as referenced by the ingress resource.
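For reference, a minimal sketch of the recreate step, assuming the Certificate manifest shown in the question is stored in a file called certificate.yaml (the file name is just an assumption); cert-manager requests a fresh certificate once the old Secret and Certificate are gone:
kubectl delete certificate tls-secret -n default
kubectl delete secret tls-secret -n default
kubectl apply -f certificate.yaml
kubectl describe certificate tls-secret -n default    # watch the events for the new order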
cert-manager
54,038,028
12
In tensorflow the training from the scratch produced following 6 files: events.out.tfevents.1503494436.06L7-BRM738 model.ckpt-22480.meta checkpoint model.ckpt-22480.data-00000-of-00001 model.ckpt-22480.index graph.pbtxt I would like to convert them (or only the needed ones) into one file graph.pb to be able to transfer it to my Android application. I tried the script freeze_graph.py but it requires as an input already the input.pb file which I do not have. (I have only these 6 files mentioned before). How to proceed to get this one freezed_graph.pb file? I saw several threads but none was working for me.
You can use this simple script to do that. But you must specify the names of the output nodes.
import tensorflow as tf

meta_path = 'model.ckpt-22480.meta'  # Your .meta file
output_node_names = ['output']       # Output node names (without the ':0' tensor suffix)

with tf.Session() as sess:
    # Restore the graph
    saver = tf.train.import_meta_graph(meta_path)

    # Load weights from the checkpoint directory
    saver.restore(sess, tf.train.latest_checkpoint('path/of/your/checkpoint/dir'))

    # Freeze the graph
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names)

    # Save the frozen graph
    with open('output_graph.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())

If you don't know the name of the output node or nodes, there are two ways:
You can explore the graph and find the name with Netron or with the console summarize_graph utility.
You can use all the nodes as output ones as shown below.
output_node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]
(Note that you have to put this line just before the convert_variables_to_constants call.)
But I think it's an unusual situation, because if you don't know the output node, you cannot actually use the graph.
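To sanity-check the result, a small sketch (assuming TensorFlow 1.x) that loads the frozen graph back and prints its node names, so you can confirm the variables really became constants and your output node is present:
import tensorflow as tf

with tf.gfile.GFile('output_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')   # raises if the graph is malformed

for node in graph_def.node:
    print(node.name, node.op)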
Check Point
45,864,363
38
Using CheckPoint I'm trying to use a VPN access from work to my clients site, which worked fine in Windows 7 and 8. But in Windows 10 I'm getting the error "ssl network extender service is down..." I get the error message just at the beginning of the request, when CheckPoint is trying to connect. Trying to run Internet Explorer emulated as an another browser has no effect.
I resolved this issue by running IE 11 as an Administrator.
Check Point
32,646,572
23
When I run code such as the following: val newRDD = prevRDD.map(a => (a._1, 1L)).distinct.persist(StorageLevel.MEMORY_AND_DISK_SER) newRDD.checkpoint print(newRDD.count()) and watch the stages in Yarn, I notice that Spark is doing the DAG calculation TWICE -- once for the distinct+count that materializes the RDD and caches it, and then a completely SECOND time to created the checkpointed copy. Since the RDD is already materialized and cached, why doesn't the checkpointing simply take advantage of this, and save the cached partitions to disk? Is there an existing way (some kind of configuration setting or code change) to force Spark to take advantage of this and only run the operation ONCE, and checkpointing will just copy things? Do I need to "materialize" twice, instead? val newRDD = prevRDD.map(a => (a._1, 1L)).distinct.persist(StorageLevel.MEMORY_AND_DISK_SER) print(newRDD.count()) newRDD.checkpoint print(newRDD.count()) I've created an Apache Spark Jira ticket to make this a feature request: https://issues.apache.org/jira/browse/SPARK-8666
Looks like this may be a known issue. See an older JIRA ticket, https://issues.apache.org/jira/browse/SPARK-8582
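One possible workaround in later releases: Spark 1.5 added RDD.localCheckpoint(), which truncates the lineage using the cached partitions instead of recomputing and writing to reliable storage, at the cost of not being fault tolerant if an executor is lost. A sketch based on the code from the question:
val newRDD = prevRDD.map(a => (a._1, 1L)).distinct
  .persist(StorageLevel.MEMORY_AND_DISK_SER)
newRDD.localCheckpoint()   // marks the RDD; reuses the cached blocks instead of running a second job
print(newRDD.count())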
Check Point
31,078,350
10
I would like to provision with my three nodes from the last one by using Ansible. My host machine is Windows 10. My Vagrantfile looks like: Vagrant.configure("2") do |config| (1..3).each do |index| config.vm.define "node#{index}" do |node| node.vm.box = "ubuntu" node.vm.box = "../boxes/ubuntu_base.box" node.vm.network :private_network, ip: "192.168.10.#{10 + index}" if index == 3 node.vm.provision :setup, type: :ansible_local do |ansible| ansible.playbook = "playbook.yml" ansible.provisioning_path = "/vagrant/ansible" ansible.inventory_path = "/vagrant/ansible/hosts" ansible.limit = :all ansible.install_mode = :pip ansible.version = "2.0" end end end end end My playbook looks like: --- # my little playbook - name: My little playbook hosts: webservers gather_facts: false roles: - create_user My hosts file looks like: [webservers] 192.168.10.11 192.168.10.12 [dbservers] 192.168.10.11 192.168.10.13 [all:vars] ansible_connection=ssh ansible_ssh_user=vagrant ansible_ssh_pass=vagrant After executing vagrant up --provision I got the following error: Bringing machine 'node1' up with 'virtualbox' provider... Bringing machine 'node2' up with 'virtualbox' provider... Bringing machine 'node3' up with 'virtualbox' provider... ==> node3: Running provisioner: setup (ansible_local)... node3: Running ansible-playbook... PLAY [My little playbook] ****************************************************** TASK [create_user : Create group] ********************************************** fatal: [192.168.10.11]: FAILED! => {"failed": true, "msg": "ERROR! Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."} fatal: [192.168.10.12]: FAILED! => {"failed": true, "msg": "ERROR! Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."} PLAY RECAP ********************************************************************* 192.168.10.11 : ok=0 changed=0 unreachable=0 failed=1 192.168.10.12 : ok=0 changed=0 unreachable=0 failed=1 Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. I extended my Vagrantfile with ansible.limit = :all and added [all:vars] to the hostfile, but still cannot get through the error. Has anyone encountered the same issue?
Create a file ansible/ansible.cfg in your project directory (i.e. ansible.cfg in the provisioning_path on the target) with the following contents: [defaults] host_key_checking = false provided that your Vagrant box has sshpass already installed - it's unclear, because the error message in your question suggests it was installed (otherwise it would be "ERROR! to use the 'ssh' connection type with passwords, you must install the sshpass program"), but in your answer you add it explicitly (sudo apt-get install sshpass), like it was not
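If you would rather not ship an ansible.cfg with the project, an alternative I believe works with Ansible 2.0+ is to disable host key checking per connection straight in the inventory you already have (sshpass is still required on node3, the box that runs ansible-playbook):
[all:vars]
ansible_connection=ssh
ansible_ssh_user=vagrant
ansible_ssh_pass=vagrant
ansible_ssh_common_args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'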
Ansible
42,462,435
57
In my system provisioning with Ansible, I don't want to specify become=yes in every task, so I created the following ansible.cfg in the project main directory, and Ansible automatically runs everything as root: [privilege_escalation] become = True But as the project kept growing, some new roles should not be run as root. I would like to know if it is possible to have some instruction inside the role that all tasks whithin that role should be run as root (eg. something in vars/), instead of the global ansible.cfg solution above!
I have found a solution, although I think a better solution should be implemented by the Ansible team. Rename main.yml to tasks.yml, and then write the following to main.yml: --- - { include: tasks.yml, become: yes } Another solution is to pass the parameter directly in site.yml, but the main idea of the question was reusing the role in other projects without forgetting it needs root: --- - hosts: localhost roles: - { role: name, become: yes }
Ansible
39,183,100
57
I have a large Ansible playbook where Docker images are built when running it. I am using an increasing number as the tag to version them. Currently, I have to specify this in every hosts: section. I know there are global variables but from what I found by searching for "ansible" "global variables", they have to defined outside of the playbook. Is it possible to define global variables which are global for the playbook?
Ansible has a default all group that, funnily enough, contains all the hosts in the inventory file. As such you can do like with any host groups and provide group_vars for the host group. As shown in the previous link these can be defined directly in the inventory file or they can be contained in a separate file named after the group in a group_vars directory at the same directory level as the inventory file. An example directory structure might then look something like: -ansible |--inventory | |--group_vars | | |--all | | |--dev | | |--test | | |--prod | | |--webservers | | |--databases | |--dev | |--test | |--prod |--roles ... Your dev inventory file might then look something like: [dev:children] webservers databases [webservers] web1.dev web2.dev [databases] database-master.dev database-slave.dev All of these hosts will now pick up any host specific config (that could be defined either in line or, just like with group_vars can be put into a host_vars folder) and also config for the specific groups they are in such as webservers and then the groups they also inherit from such as dev but also, by default, all. This can then be used to configure things in a coarser way than per host. Things such as NTP servers may want to be defined in all, while DNS servers may want to be defined at the environment level (if your network is segmented into dev, test and production they may need different DNS servers setting in /etc/resolv.conf) while different types of servers may have different configurations around things such as lists of packages to be installed. Finally, some things may need to be host specific such as setting the MySQL server id in a replication group. If, instead, you only want to define playbook global settings rather than across the inventory (and so could be accessed by other playbooks) then you simply need a vars block in your play definition like so: - hosts: webservers vars: http_port: 80 tasks: - name: Task1 to be ran against all the webservers ... As mentioned before, you can always use the all group here too: - hosts: all vars: ntp_pool: - ntp1.domain - ntp2.domain tasks: - name: Task1 to be ran against all the servers ... In general though, I would strongly recommend using roles to structure what things are ran against certain hosts and then using the inventory to explain what servers are what type and then use a group_vars dir at the inventory level to contain all the variables for those groups of hosts. Doing things this way will help you keep things in sensible places and allow you to easily reuse your code base.
Ansible
33,126,156
57
Does anyone know how to do something (like wait for port / boot of the managed node) BEFORE gathering facts? I know I can turn gathering facts off gather_facts: no and THEN wait for port but what if I need the facts while also still need to wait until the node boots up?
Gathering facts is equivalent to running the setup module. You can manually gather facts by running it. It's not documented, but simply add a task like this: - name: Gathering facts setup: In combination with gather_facts: no on playbook level the facts will only be fetched when above task is executed. Both in an example playbook: - hosts: all gather_facts: no tasks: - name: Some task executed before gathering facts # whatever task you want to run - name: Gathering facts setup:
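To cover the wait-for-port/boot part of the question, a sketch that waits for SSH from the control machine before gathering the facts manually (port 22 and the 300 second timeout are just example values):
- hosts: all
  gather_facts: no
  tasks:
    - name: Wait for SSH to become available
      wait_for:
        host: "{{ ansible_host | default(inventory_hostname) }}"
        port: 22
        timeout: 300
      delegate_to: localhost

    - name: Gathering facts
      setup:

    - name: Task that needs the facts
      debug:
        var: ansible_distribution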
Ansible
31,054,453
57
I want to abort execution of remaining task if certain condition is failed. and display proper error message. So instead of skipping remaining task I want to show error message and stop execution of ansible playbook. Lets say I am running below command $ ansible-playbook playbook.yml -e "param1=value1 param2=value" My playbook look like this:- playbook.yml:- --- - hosts: local user: roop gather_facts: no vars: {param1: "", param2: ""} tasks: #check whether param1 defined - name: 'Check for valid param1' shell: echo {{ param1 }} register: isValidParam1 when: param1 !="" #check if param1 is null or invalid - name: 'check if param1 is null or invalid' debug: msg="Please enter correct param1" when: param1 == "" #check whether param2 defined - name: 'Check for valid param2' shell: echo {{ param2 }} register: isValidParam2 when: param2 != "" #check if param2 is null or invalid - name: 'check if param2 is null or invalid' debug: msg="Please enter correct param2" when: param2 == "" #check params is valid and defined - name: 'Check for valid params' shell: echo "param1={{ param1 }} param2={{ param2 }} register: validParams when: isValidParam1 is defined and isValidParam2 is defined #check if params are invalid then abort below all tasks. - name: 'check if validParams is null or invalid' debug: msg="Please enter correct Params" when: validParams is not defined # based on validParams, and different value of param1 more than 10 task executing. As I have mentioned in my last task comment. I am executing more than 10 task based on validParams and param1 different value. What I need here if validParams is undefined then abort all execution and show error messages. Is there any efficient way to do this . Please suggest me.
You can use assert https://docs.ansible.com/ansible/latest/collections/ansible/builtin/assert_module.html or fail https://docs.ansible.com/ansible/latest/collections/ansible/builtin/fail_module.html
It will go along with something like this:
#check if params are invalid then abort all tasks below.
- name: 'check if param is null or invalid'
  fail: msg="Please enter correct Params"
  when: "param1 is not defined or param2 is not defined" ## whatever condition you want
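The assert variant reads slightly more naturally to me; something along these lines (the message text is just an example):
- name: 'Check that valid params were provided'
  assert:
    that:
      - param1 is defined and param1 != ""
      - param2 is defined and param2 != ""
    msg: "Please enter correct Params"
If any condition in the list fails, the play stops for that host with the given message, so none of the later tasks are executed.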
Ansible
22,758,925
57
I'm using Ansible to setup EC2 instances and deploy an application. There's a hosts script which gathers tags related servers and groups info. I'd like to run these actions as a single playbook, so New instances are created if needed Hosts script loads inventory (including servers' facts) Deployment playbook works However, the inventory is loaded in advance, so, there is no servers/groups data if servers are created/updated during the play. I can separate provision and deployment playbooks use add_host trick to emulate dynamic inventory when servers are updated But, there are drawbacks in those approaches. Can I force Ansible to reload inventory? My test files are: hosts script: #!/bin/sh echo `date` >> log.log echo "{\"standalone\":[\"localhost\"]}" Sample playbook.yml: --- - hosts: all tasks: - name: show inventory_hostname command: echo {{ inventory_hostname }} I run it with the command ansible-playbook -i hosts playbook.yml -v and see two runs: $> cat log.log Thu Mar 12 09:43:16 SAMT 2015 Thu Mar 12 09:43:16 SAMT 2015 but I haven't found a command to double it.
With Ansible 2.0+, you can refresh your inventory mid-play by running the task: - meta: refresh_inventory
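In the flow from the question that would look roughly like this (a sketch - the first task is only a stand-in for whatever provisions the EC2 instances, and standalone is the group your hosts script prints):
- hosts: localhost
  connection: local
  tasks:
    - name: create or update instances here
      debug:
        msg: "provisioning step goes here"

    # re-read the dynamic inventory so new hosts become visible
    - meta: refresh_inventory

- hosts: standalone
  tasks:
    - name: runs against the hosts discovered after the refresh
      ping: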
Ansible
29,003,420
56
I'm running into the silliest issue. I cannot figure out how to test for boolean in an Ansible 2.2 task file. In vars/main.yml, I have: destroy: false In the playbook, I have: roles: - {'role': 'vmdeploy','destroy': true} In the task file, I have the following: - include: "create.yml" when: "{{ destroy|bool }} == 'false'" I've tried various combinations below: when: "{{ destroy|bool }} == false" when: "{{ destroy|bool }} == 'false'" when: "{{ destroy|bool == false}}" when: "{{ destroy == false}}" when: "{{ destroy == 'false'}}" when: destroy|bool == false when: destroy|bool == 'false' when: not destroy|bool In all the above cases, I still get: statically included: .../vmdeploy/tasks/create.yml Debug output: - debug: msg: "{{ destroy }}" --- ok: [atlcicd009] => { "msg": true } The desired result, is that it would skip the include.
To run a task when destroy is true: --- - hosts: localhost connection: local vars: destroy: true tasks: - debug: when: destroy and when destroy is false: --- - hosts: localhost connection: local vars: destroy: false tasks: - debug: when: not destroy
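One caveat worth adding: when destroy is passed on the command line with --extra-vars it arrives as a string, so a slightly more defensive variant casts it first (sketch):
- debug:
  when: destroy | bool

- debug:
  when: not (destroy | bool)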
Ansible
39,640,654
54
The question is simple: what is the difference between ansible_user (former ansible_ssh_user) and remote_user in Ansible, besides that the first one is set if configuration file and the latter one is set in plays / roles? How do they relate to -u / --user command line options?
They both seem to be the same. Take a look here:
# the magic variable mapping dictionary below is used to translate
# host/inventory variables to fields in the PlayContext
# object. The dictionary values are tuples, to account for aliases
# in variable names.

MAGIC_VARIABLE_MAPPING = dict(
   connection = ('ansible_connection',),
   remote_addr = ('ansible_ssh_host', 'ansible_host'),
   remote_user = ('ansible_ssh_user', 'ansible_user'),
   port = ('ansible_ssh_port', 'ansible_port'),
Source: https://github.com/ansible/ansible/blob/c600ab81ee/lib/ansible/playbook/play_context.py#L46-L55
Besides, ansible_user is used when we want to specify the default SSH user in the Ansible hosts (inventory) file, whereas remote_user is used in the playbook context.
From https://github.com/ansible/ansible/blob/c600ab81ee/docsite/rst/intro_inventory.rst
ansible_user The default ssh user name to use.
and here is an example of using ansible_user in the Ansible hosts file:
[targets]
localhost ansible_connection=local
other1.example.com ansible_connection=ssh ansible_user=mpdehaan
other2.example.com ansible_connection=ssh ansible_user=mdehaan
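To tie in the -u / --user part of the question: as far as I can tell the command line flag is simply a third way of setting the same connection user, and an ansible_user defined in the inventory takes precedence over it. So
ansible-playbook -i hosts site.yml -u mpdehaan
behaves like setting remote_user: mpdehaan at the play level, except for hosts that already carry their own ansible_user in the inventory.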
Ansible
36,668,756
54
In Ansible 1.7, I can use --tags from the command-line to only run a subset of that playbooks tasks. But I'm wanting to bake into my playbook to run a set of roles with only tasks that match tags. That is, I don't want to have to pass this in via the command-line since it will be the same every time. At first I thought it was this command, but this does the opposite: tagging tasks with these tags instead of filtering them out based on this. roles: - { role: webserver, port: 5000, tags: [ 'web', 'foo' ] } I can imagine implementing this using conditionals but tags would be a much more elegant way of achieving this.
You only have the following options with the current version of Ansible: Specify the tags on the command line Use a variable instead of a tag to conditionally run tasks Split your webserver role into multiple roles and use role dependencies for the common tasks This feature request has come up on the mailing list a few times and I haven't seen any indication from the dev team that it will be added as a new feature.
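A sketch of option 2, using a hypothetical run_web variable instead of tags (a when: on a role entry applies to every task in that role):
roles:
  - role: webserver
    port: 5000
    when: run_web | default(true) | bool
You can then flip it per environment via group_vars or -e "run_web=false", which gives you the baked-in filtering without relying on --tags.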
Ansible
25,674,649
54
I have a register task to test for the installation of a package: tasks: - name: test for nginx command: dpkg -s nginx-common register: nginx_installed Every run it gets reported as a "change": TASK: [test for nginx] ******************************************************** changed: [vm1] I don't regard this as a change... it was installed last run and is still installed this run. Yeah, not a biggy, just one of those untidy OCD type issues. So am I doing it wrong? Is there some way to use register without it always being regarded as a change? The [verbose] output is untidy, but the only way I've found to get the correct return code. TASK: [test for nginx] ******************************************************** changed: [vm1] => {"changed": true, "cmd": ["dpkg", "-s", "nginx-common"], "delta": "0:00:00.010231", "end": "2014-05-30 12:16:40.604405", "rc": 0, "start": "2014-05-30 12:16:40.594174", "stderr": "", "stdout": "Package: nginx-common\nStatus: install ok ... \nHomepage: http://nginx.net"}
It’s described in official documentation here. tasks: - name: test for nginx command: dpkg -s nginx-common register: nginx_installed changed_when: false
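While tidying this up it may also be worth handling the case where the package is absent, since dpkg -s then exits non-zero and the task would fail the play; a hedged variant:
tasks:
  - name: test for nginx
    command: dpkg -s nginx-common
    register: nginx_installed
    changed_when: false
    failed_when: nginx_installed.rc not in [0, 1]
Exit code 1 is what dpkg reports for a package it does not know about, so the task neither shows a change nor aborts, and later tasks can branch on nginx_installed.rc.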
Ansible
23,946,112
54
While doing clone, push or pull of a private git repository hosted internally (e.g. on a GitLab instance) with Ansible's Git module, how do I specify username and password to authenticate with the Git server? I don't see any way to do this in the documentation.
You can use something like this: --- - hosts: all gather_facts: no become: yes tasks: - name: install git package apt: name: git - name: Get updated files from git repository git: repo: "https://{{ githubuser | urlencode }}:{{ githubpassword | urlencode }}@github.com/privrepo.git" dest: /tmp Note: {{ githubpassword | urlencode }} is used here, if your password also contains special characters @,#,$ etc Then execute the following playbook: ansible-playbook -i hosts github.yml -e "githubuser=arbabname" -e "githubpassword=xxxxxxx" Note: Make sure you put the credentials in ansible vaults or pass it secure way
Ansible
37,841,914
53
I am having a hard time understanding the logic of ansible with_subelements syntax, what exactly does with_subelements do? i took a look at ansible documentation on with_subelements here https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html#with-subelements and was not very helpful. I also saw a playbook with with_subelements example on a blog --- - hosts: cent vars: users: - name: jagadish comments: - 'Jagadish is Good' - name: srini comments: - 'Srini is Bad' tasks: - name: User Creation shell: useradd -c "{{ item.1 }}" "{{ item.0.name }}" with_subelements: - users - comments what do item.1 and item.0 refer to?
This is really bad example of how subelements lookup works. (And has old, unsupported, syntax as well). Look at this one: --- - hosts: localhost gather_facts: no vars: families: - surname: Smith children: - name: Mike age: 4 - name: Kate age: 7 - surname: Sanders children: - name: Pete age: 12 - name: Sara age: 17 tasks: - name: List children debug: msg: "Family={{ item.0.surname }} Child={{ item.1.name }} Age={{ item.1.age }}" with_subelements: - "{{ families }}" - children Task List children is like a nested loop over families list (outer loop) and over children subelement in each family (inner loop). So you should provide a list of dicts as first argument to subelements and name of subelement you want to iterate inside each outer item. This way item.0 (family in my example) is an outer item and item.1 (child in my example) is an inner item. In Ansible docs example subelements is used to loop over users (outer) and add several public keys (inner).
Ansible
41,908,715
52
I'm trying to organize my playbooks according to the Directory Layout structure. The documentation doesn't seem to have a recommendation for host-specific files/templates. I have 2 plays for a single site example.com-provision.yml example.com-deploy.yml These files are located in the root of my structure. The provisioning playbook simply includes other roles --- - hosts: example.com roles: - common - application - database become: true become_method: su become_user: root The deployment playbook doesn't include roles, but has it's own vars and tasks sections. I have a couple template and copy tasks, and am wondering what the 'best practice' is for where to put these host-specific templates/files within this directory structure. Right now I have them at ./roles/example.com/templates/ and ./roles/example.com/files/, but need to reference the files with their full path from my deployment playbook, like - name: deployment | copy httpd config template: src: ./roles/example.com/templates/{{ host }}.conf.j2 # ... instead of - name: deployment | copy httpd config template: src: {{ host }}.conf.j2 # ...
Facing the same problem the cleanest way seems for me the following structure: In the top-level directory (same level as playbooks) I have a files folder (and if I needed also a templates folder). In the files folder there is a folder for every host with it's own files where the folder's name is the same as the host name in inventory. (see the structure below: myhost1 myhost2) . ├── files │   ├── common │   ├── myhost1 │ ├── myhost2 | ├── inventory │   ├── group_vars │   └── host_vars ├── roles │   ├── first_role │   └── second_role └── my_playbook.yml Now in any role you can access the files with files modules relatively: # ./roles/first_role/main.yml - name: Copy any host based file copy: src: "{{ inventory_hostname }}/file1" dest: /tmp Explanation: The magic variable inventory_hostname is to get the host, see here The any file module (as for example copy) looks up the files directory in the respective role directory and the files directory in the same level as the calling playbook. Of course same applies to templates (but if you have different templates for the same role you should reconsider your design) Semantically a host specific file does not belong into a role, but somewhere outside (like host_vars).
Ansible
32,830,428
52
I'm trying to follow this Ansible tutorial while adjusting it for Ubuntu 16.04 with php7. Below this message you'll find my Ansible file. After running it and trying to visit the page in the browser I get a 404, and the following in the nginx error logs: 2016/10/15 13:13:20 [crit] 28771#28771: *7 connect() to unix:/var/run/php7.0-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 93.xxx.xxx.xx, server: 95.xx.xx.xx, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php7.0-fpm.sock:", host: "95.xx.xx.xx" So I checked if the socket file exists, and it seems to exist, but ls behaves weird: $ sudo ls -l /var/run/php total 4 -rw-r--r-- 1 root root 5 Oct 15 13:00 php7.0-fpm.pid srw-rw---- 1 www-data www-data 0 Oct 15 13:00 php7.0-fpm.sock $ sudo ls -l /var/run/php7 ls: cannot access '/var/run/php7': No such file or directory $ sudo ls -l /var/run/php7.0-fpm.sock ls: cannot access '/var/run/php7.0-fpm.sock': No such file or directory Why can ls find the socket file if I search it by part of the name php while it cannot find the socket file when I list more than that php7 or even the full name php7.0-fpm.sock? And most importantly, how can I make this work with nginx? All tips are welcome! below I pasted my Ansible file --- - hosts: php become: true tasks: - name: install packages apt: name={{ item }} update_cache=yes state=latest with_items: - git - mcrypt - nginx - php-cli - php-curl - php-fpm - php-intl - php-json - php-mcrypt - php-mbstring - php-sqlite3 - php-xml - sqlite3 - name: enable mbstring shell: phpenmod mbstring notify: - restart php7.0-fpm - restart nginx - name: create /var/www/ directory file: dest=/var/www/ state=directory owner=www-data group=www-data mode=0700 - name: Clone git repository git: > dest=/var/www/laravel repo=https://github.com/laravel/laravel.git update=no become: true become_user: www-data register: cloned - name: install composer shell: curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer args: creates: /usr/local/bin/composer - name: composer create-project composer: command=create-project working_dir=/var/www/laravel optimize_autoloader=no become: true become_user: www-data when: cloned|changed - name: set APP_DEBUG=false lineinfile: dest=/var/www/laravel/.env regexp='^APP_DEBUG=' line=APP_DEBUG=false - name: set APP_ENV=production lineinfile: dest=/var/www/laravel/.env regexp='^APP_ENV=' line=APP_ENV=production - name: Configure nginx template: src=nginx.conf dest=/etc/nginx/sites-available/default notify: - restart php5-fpm - restart nginx handlers: - name: restart php7.0-fpm service: name=php7.0-fpm state=restarted - name: restart nginx service: name=nginx state=restarted - name: reload nginx service: name=nginx state=reloaded
Had the same problem. Solution is very easy. In nginx conf file you are trying upstreaming to unix:/var/run/php7.0-fpm.sock Correct path is unix:/var/run/php/php7.0-fpm.sock There is a mention about this in the documentation Nginx communicates with PHP-FPM using a Unix domain socket. Sockets map to a path on the filesystem, and our PHP 7 installation uses a new path by default: PHP 5 /var/run/php5-fpm.sock PHP 7 /var/run/php/php7.0-fpm.sock
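For completeness, the relevant part of the nginx server block then looks roughly like this (a sketch assuming the stock Ubuntu 16.04 nginx package with its snippets/fastcgi-php.conf include):
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
}
After fixing the path in the nginx.conf template used by the playbook, re-run the play (or nginx -t && systemctl reload nginx) and the connect() errors in the nginx log should stop.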
Ansible
40,059,745
51
Here is the inventory file --- [de-servers] 192.26.32.32 [uk-servers] 172.21.1.23 172.32.2.11 and my playbook is look like this: - name: Install de-servers configurations hosts: de-servers roles: - de-server-setup - name: Install uk-servers configurations hosts: uk-servers roles: - uk-server-setup - name: Do some other job on de-servers (cannot be done until uk-servers is installed) hosts: de-servers roles: - de-servers-rest-of-jobs In role de-servers-setup role the ssh port is changed from 22 to 8888, so when the last task is called it fails because it cannot connect to host through 22 port. How to overcome this ssh port change?
In the role de-server-setup add a task to change the ansible_port host variable. - name: Change ssh port to 8888 set_fact: ansible_port: 8888
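Inside the de-server-setup role the ordering matters, because set_fact takes effect immediately while handlers normally only run at the end of the play. A hedged sketch of the tail of that role, assuming the sshd port change is applied via a handler:
- name: Restart sshd now so it already listens on 8888
  meta: flush_handlers

- name: Use the new port for all later plays against this host
  set_fact:
    ansible_port: 8888
(On Ansible versions before 2.0 the variable is called ansible_ssh_port instead of ansible_port.)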
Ansible
34,333,058
51
In the documentation, there is an example of using the lineinfile module to edit /etc/sudoers. - lineinfile: "dest=/etc/sudoers state=present regexp='^%wheel' line='%wheel ALL=(ALL) NOPASSWD: ALL'" Feels a bit hackish. I assumed there would be something in the user module to handle this but there doesn't appear to be any options. What are the best practices for adding and removing users to /etc/sudoers?
That line isn't actually adding an users to sudoers, merely making sure that the wheel group can have passwordless sudo for all command. As for adding users to /etc/sudoers this is best done by adding users to necessary groups and then giving these groups the relevant access to sudo. This holds true when you aren't using Ansible too. The user module allows you to specify an exclusive list of group or to simply append the specified groups to the current ones that the user already has. This is naturally idempotent as a user cannot be defined to be in a group multiple times. An example play might look something like this: - hosts: all vars: sudoers: - user1 - user2 - user3 tasks: - name: Make sure we have a 'wheel' group group: name: wheel state: present - name: Allow 'wheel' group to have passwordless sudo lineinfile: dest: /etc/sudoers state: present regexp: '^%wheel' line: '%wheel ALL=(ALL) NOPASSWD: ALL' validate: visudo -cf %s - name: Add sudoers users to wheel group user: name: "{{ item }}" groups: wheel append: yes with_items: "{{ sudoers }}"
Ansible
33,359,404
51
what I'm trying to accomplish is to run commands inside of a Docker container that has already been created on a Digital Ocean Ubuntu/Docker Droplet using Ansible. Can't seem to find anything on this, or I'm majorly missing something. This is my Ansible task in my play book. I'm very new to Ansible so any advice or wisdom would be greatly appreciated. - name: Test Deploy hosts: [my-cluster-of-servers] tasks: - name: Go Into Docker Container And Run Multiple Commands docker: name: [container-name] image: [image-ive-created-container-with-on-server] state: present command: docker exec -it [container-name] bash
After discussion with some very helpful developers on the ansible github project, a better way to do this is like so: - name: add container to inventory add_host: name: [container-name] ansible_connection: docker changed_when: false - name: run command in container delegate_to: [container-name] raw: bash If you have python installed in your image, you can use the command module or any other module instead of raw. If you want to do this on a remote docker host, add: ansible_docker_extra_args: "-H=tcp://[docker-host]:[api port]" to the add_host block. See the Ansible documentation for a more complete example.
Ansible
32,878,795
51
I am using [file lookup] which reads the whole file and stores the content in a variable. My play looks something like this: - name: Store foo.xml contents in a variable set_fact: foo_content: "{{ lookup('file', 'foo.xml' ) | replace('\n', '')}}" So the above code reads the foo.xml file and stores it in the variable, but the problem is when the foo.xml has line breaks in it, it also includes the line break in the variable. My foo.xml is this file: <?xml version="1.0" encoding="utf-8"?> <initialize_param> <secrets> <my_secret id="99">3VMjII6Hw+pd1zHV5THSI712y421USUS8124487128745812sajfhsakjfasbfvcasvnjasjkvbhasdfasgfsfaj5G8A9+n8CkLxk7Dqu0G8Jclg0eb1A5xeFzR3rrJHrb2GBBa7PJNVx8tFJP3AtF6ek/F/WvlBIs2leX2fq+/bGryKlySuFmbcwBsThmPJC5Z5AwPJgGZx</my_secret> </secrets> </initialize_param> The output removes line break \n but also incudes the tabs \r & \t I need to got rid of the \n , need to get rid of extra formatting too (\r & \t), Moreover after the replace filter I get the error while firing a DB Update query as stderr: /bin/sh: 1: cannot open ?xml: No such file
Use the Jinja trim filter: "{{ lookup('file', 'foo.xml' ) | trim }}"
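If internal whitespace (the tabs and carriage returns mentioned in the question) also has to go, a regex_replace before the trim is one option (a sketch, not tested against the exact file above):

- name: Store foo.xml contents in a variable, stripped of newlines and tabs
  set_fact:
    foo_content: "{{ lookup('file', 'foo.xml') | regex_replace('[\\r\\n\\t]+', '') | trim }}"

As for the /bin/sh: 1: cannot open ?xml error: that usually means the XML ended up unquoted on a shell command line, so the shell treated the leading < as a redirect and tried to open a file named ?xml. Quoting the variable in the shell/command task typically avoids it.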
Ansible
32,016,123
51
I have created an autoscaling group for Amazon EC2, and I added my public key when I created the AMI with Packer; I can run ansible-playbook and ssh to the hosts. But there is a problem when I run the playbook like this:

ansible-playbook load.yml

I get a prompt asking for my password:

Enter passphrase for key '/Users/XXX/.ssh/id_rsa':
Enter passphrase for key '/Users/XXX/.ssh/id_rsa':
Enter passphrase for key '/Users/XXX/.ssh/id_rsa':

The problem is it doesn't accept my password (I am sure I am typing it correctly). I found that I can send my password with the --ask-pass flag, so I changed my command to:

ansible-playbook load.yml --ask-pass

That got me some progress, but then some other task asks for the password again and doesn't accept it:

[WARNING]: Unable to parse /etc/ansible/hosts as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] *************************************************************************************************************

TASK [ec2_instance_facts] ****************************************************************************************************
ok: [localhost]

TASK [add_host] **************************************************************************************************************
changed: [localhost] => (item=xx.xxx.xx.xxx)
changed: [localhost] => (item=yy.yyy.yyy.yyy)

PLAY [instances] *************************************************************************************************************

TASK [Copy gatling.conf] *****************************************************************************************************
ok: [xx.xxx.xx.xxx]
ok: [yy.yyy.yyy.yyy]

Enter passphrase for key '/Users/ccc/.ssh/id_rsa':
Enter passphrase for key '/Users/ccc/.ssh/id_rsa':
Enter passphrase for key '/Users/ccc/.ssh/id_rsa':
Enter passphrase for key '/Users/ccc/.ssh/id_rsa':
Enter passphrase for key '/Users/ccc/.ssh/id_rsa':

If I don't use the --ask-pass flag, even the task [Copy gatling.conf] doesn't complete, complaining that it could not access the hosts. Adding the flag gets that part to pass, but the next task asks for the passphrase again. How should I solve this issue? What am I doing wrong here?
In Ansible there is no option to store a passphrase-protected private key. For that we need to add the passphrase-protected private key to the ssh-agent. Start the ssh-agent in the background: # eval "$(ssh-agent -s)" Add the SSH private key to the ssh-agent: # ssh-add ~/.ssh/id_rsa Now try running ansible-playbook and ssh to the hosts.
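If running an agent everywhere is inconvenient (for example on a CI worker), another common approach, sketched here with a hypothetical key path, is to use a dedicated key without a passphrase just for Ansible and point the play or inventory at it:

- hosts: instances
  vars:
    ansible_ssh_private_key_file: ~/.ssh/ansible_deploy_key   # hypothetical passphrase-less key

Also note that the ssh-agent environment set up by eval "$(ssh-agent -s)" only applies to the shell session where it was run, so ansible-playbook has to be started from that same session.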
Ansible
50,277,495
50
I am using Ansible to deploy my app. I am cloning the app from GitHub using the following:

- name: Deploy site files from Github repository
  sudo: yes
  git: repo=git@github.com:xyz/abc.git dest=/home/{{deploy_user}}/{{app_name}} key_file=/home/ubuntu/.ssh/id_rsa accept_hostkey=yes force=yes

I want to clone a specific branch from the repository. I read the Ansible documentation but couldn't find any option to clone a specific branch. It has an option to clone a version, but not a branch.
From the documentation: version What version of the repository to check out. This can be the full 40-character SHA-1 hash, the literal string HEAD, a branch name, or a tag name. (emphasis mine)
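Applied to the task from the question, that would look roughly like this (the branch name here is a made-up example):

- name: Deploy site files from Github repository
  sudo: yes
  git:
    repo: git@github.com:xyz/abc.git
    dest: /home/{{ deploy_user }}/{{ app_name }}
    version: release-2.1          # hypothetical branch name; a tag or SHA-1 also works
    key_file: /home/ubuntu/.ssh/id_rsa
    accept_hostkey: yes
    force: yes

On current Ansible releases you would use become: yes instead of the deprecated sudo: yes.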
Ansible
33,450,240
50
I was wondering what the correct syntax for when statements is. I have this playbook:

- set_fact:
    sh_vlan_id: "{{ output.response|map(attribute='vlan_id')|list|join(',') }}"

- name: create vlans
  ios_config:
    provider: "{{ provider }}"
    parents: vlan {{ item.id }}
    lines: name {{ item.name }}
  with_items: "{{ vlans }}"
  register: result
  when: '"{{ item.id }}" not in sh_vlan_id'

Running it gives me a warning, but it still runs through. I am not sure if this is correct or not.

TASK [set_fact] ************************************************
ok: [acc_sw_01]

TASK [create vlans] ***********************************************
[WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: "{{ item.id }}" not in sh_vlan_id
skipping: [acc_sw_01] => (item={u'id': 10, u'name': u'voice-1'})
skipping: [acc_sw_01] => (item={u'id': 101, u'name': u'data-2'})
skipping: [acc_sw_01] => (item={u'id': 100, u'name': u'data-1'})
changed: [acc_sw_01] => (item={u'id': 11, u'name': u'voice-2'})

If I remove the curly braces around item.id in the when statement:

when: item.id not in sh_vlan_id

it gives me an error:

TASK [set_fact] ***************************************************
ok: [acc_sw_01]

TASK [create vlans] ***********************************************
fatal: [acc_sw_01]: FAILED! => {"failed": true, "msg": "The conditional check 'item.id not in sh_vlan_id' failed. The error was: Unexpected templating type error occurred on ({% if item.id not in sh_vlan_id %} True {% else %} False {% endif %}): coercing to Unicode: need string or buffer, int found\n\nThe error appears to have been in '/ansible/cisco-ansible/config_tasks/vlan.yml': line 16, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: create vlans\n ^ here\n"}

I'm using ansible 2.3.0 (devel cbedc4a12a).
The correct syntax is to not include Jinja delimiters ({{ ... }}), as the warning indicates. Without the delimiters your condition then fails because the types are not compatible: item.id is an integer, while sh_vlan_id is a string. You can fix that with type coercion:

when: 'item.id | string not in sh_vlan_id'

See: https://jinja.palletsprojects.com/en/3.1.x/templates/#builtin-filters
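Put back into the task from the question, the condition would look like this sketch:

- name: create vlans
  ios_config:
    provider: "{{ provider }}"
    parents: vlan {{ item.id }}
    lines: name {{ item.name }}
  with_items: "{{ vlans }}"
  register: result
  when: item.id | string not in sh_vlan_id

One caveat worth hedging: because sh_vlan_id is a comma-joined string, the substring test could also match partial IDs (e.g. 10 matching 100), so splitting it into a list before the comparison may be safer in practice.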
Ansible
42,673,045
49
When deploying with Ansible, there's one specific case where I need to strip a trailing -p substring from a string. The string somemachine-prod-p should become somemachine-prod, but only if the -p is at the end. The substring function I saw I can use with Jinja does not fulfill my needs, as I need to strip the end of the string, not the start. Ideas?
Found it. If anyone wants to know: {% if name.endswith('-p') %} {{ name[:-2] }} {% else %} {{ name }} {% endif %}
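An equivalent one-liner, if you'd rather avoid the if/else block (and the surrounding whitespace it can introduce), is the regex_replace filter anchored at the end of the string:

{{ name | regex_replace('-p$', '') }}

The $ anchor means the -p is only removed when it is the final two characters, matching the behaviour of the endswith() check above.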
Ansible
41,791,055
49
I have this error when I launch my playbook against the localhost host:

TASK [setup] *******************************************************************
fatal: [127.0.0.1]: UNREACHABLE! => {"changed": false, "msg": "SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue", "unreachable": true}
	to retry, use: --limit @deploy-test-env.retry

PLAY RECAP *********************************************************************
127.0.0.1                  : ok=0    changed=0    unreachable=1    failed=0

And my hosts file has this config:

[local]
127.0.0.1

What is the problem? Thanks!
Ansible by default tries to connect through SSH. For localhost you should set the connection to local. You can define this when calling the playbook:

ansible-playbook playbook.yml --connection=local

Define it in your playbook:

- hosts: local
  connection: local

Or, preferably, define it as a host var just for localhost/127.0.0.1. Create a file host_vars/127.0.0.1 relative to your playbook with this content:

ansible_connection: local

You could also add it as a group var in your inventory:

[local]
127.0.0.1

[local:vars]
ansible_connection=local

or as a host var:

[local]
127.0.0.1 ansible_connection=local

See Behavioral Parameters in the docs.
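As a side note (behaviour may vary by version): recent Ansible releases ship an implicit localhost that already uses the local connection, so if the host is not listed in the inventory at all, a play like the following sketch works without any of the settings above:

- hosts: localhost
  connection: local        # redundant for the implicit localhost, kept for clarity
  tasks:
    - name: confirm we are running locally
      debug:
        msg: "running on the control machine"

The explicit host_vars/group_vars approach is still needed when 127.0.0.1 appears in the inventory, as in the question.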
Ansible
37,184,699
49
Forgive my newbie question, but I would like to execute three tasks and use two roles in a playbook, in the order: task role task role task This is what I have so far (task, role, task): --- - name: Task Role Task hosts: 127.0.0.1 connection: local gather_facts: false pre_tasks: - name: Do this task first foo: roles: - role: this role second foo: post_tasks: - name: Do this task third foo: Is this possible or should I be changing my tasks into roles?
---
- name: Task Role Task
  hosts: 127.0.0.1
  connection: local
  gather_facts: false
  tasks:
    - name: task1
      foo:

    - name: include role1
      include_role:
        name: myrole1

    - name: task2
      foo:

    - name: include role2
      include_role:
        name: myrole2

see official docs
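A brief note on the design choice: include_role is evaluated dynamically at run time, which is what makes it usable as an ordinary task between other tasks. If you are on Ansible 2.4 or later and prefer the roles to be resolved when the playbook is parsed, import_role is the static counterpart; a sketch:

    - name: import role1 (resolved at parse time)
      import_role:
        name: myrole1

Either form preserves the task, role, task, role, task ordering from the question.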
Ansible
30,763,709
49
I have 2 app servers with a load balancer in front of them and 1 database server in my system. I'm provisioning them using Ansible. The app servers run Nginx + Passenger for a Rails app. I will use Capistrano for deployment, but I have an issue with SSH keys. My Git repo is on another server, and I have to generate SSH public keys on the app servers and add them to the Git server (to the authorized_keys file). How can I do this in an Ansible playbook? PS: I may have more than 2 app servers.
This does the trick for me: it collects the public SSH keys on the nodes and distributes them across all the nodes. This way they can communicate with each other.

- hosts: controllers
  gather_facts: false
  remote_user: root
  tasks:
    - name: fetch all public ssh keys
      shell: cat ~/.ssh/id_rsa.pub
      register: ssh_keys
      tags:
        - ssh

    - name: check keys
      debug: msg="{{ ssh_keys.stdout }}"
      tags:
        - ssh

    - name: deploy keys on all servers
      authorized_key: user=root key="{{ item[0] }}"
      delegate_to: "{{ item[1] }}"
      with_nested:
        - "{{ ssh_keys.stdout }}"
        - "{{groups['controllers']}}"
      tags:
        - ssh

Info: This is for the user root.
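A variant that avoids the nested loop, relying on each host's registered fact being reachable through hostvars, might look roughly like this (a sketch, assuming the same ssh_keys register task has already run on every controller):

    - name: deploy every controller's key to every controller
      authorized_key:
        user: root
        key: "{{ hostvars[item]['ssh_keys'].stdout }}"
      with_items: "{{ groups['controllers'] }}"

Because the task runs on each controller and iterates over the whole group, every node ends up with every other node's public key, which also covers the "more than 2 app servers" case from the question.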
Ansible
25,629,933
49
Recently I've been looking at Ansible and want to use it in projects. There's also another tool, Rundeck, that can be used to do all kinds of operations work. I have experience with neither tool, and this is my current understanding of them:

Similar points
- Both tools are agent-less and use SSH to execute commands on remote servers
- Rundeck's main concept is the Node, much like Ansible's inventory; the key idea is to define/manage/group the target servers
- Rundeck can execute ad-hoc commands on selected nodes; Ansible can also do this very conveniently
- Rundeck can define a workflow and execute it on selected nodes; this can be done in Ansible by writing a playbook
- Rundeck can be integrated with a CI tool like Jenkins to do deploy work; we can also define a Jenkins job that runs ansible-playbook to do the deploy work

Different points
- Rundeck has the concept of a Job, which Ansible does not
- Rundeck has a job scheduler; Ansible can only achieve this with other tools like Jenkins or cron
- Rundeck has a web UI by default for free, but you have to pay for Ansible Tower

It seems both Ansible and Rundeck can be used to do configuration/management/deployment work, maybe in different ways. So my questions are:

- Are these two complementary tools, or are they designed for different purposes?
- If they're complementary tools, why is Ansible only compared to tools like Chef/Puppet/Salt but not to Rundeck? If they're not, why do they have so many similar functionalities?
- We're already using Jenkins for CI to build a continuous-delivery pipeline; which tool (Ansible/Rundeck) is the better choice for the deployment? If they can be used together, what's the best practice?

Any suggestions and experience sharing are greatly appreciated.
TL;DR - given your environment of Jenkins for CI/CD I'd recommend using just Ansible.

You've spotted that there is sizeable cross-over between Ansible & Rundeck, so it's probably best to concentrate on where each product focuses, its style and its use.

Focus
I believe Rundeck's focus is on enabling sysadmins to build a (web-based) self-service portal that's accessible to both other sysadmins and, potentially, less "technical"/sysadmin people. Rundeck's website says "Turn your operations procedures into self-service jobs. Safely give others the control and visibility they need.". Rundeck also feels like it has a more 'centralised' view of the world: you load the jobs into a database and that's where they live.
To me, Ansible is for devops - building out and automating deployments of (self-built) applications in a way such that they are highly repeatable. I'd argue that Ansible is more focussed on software development houses that build their own products: Ansible 'playbooks' are text files, so they are normally stored in source control, and normally alongside the app that the playbooks will deploy.

Job creation focus
With Rundeck you typically create jobs via the web UI. With Ansible you create tasks/playbooks in files via a text editor.

Operation/Task/Job Style
Rundeck by default is imperative - you write scripts that are executed (via SSH). Ansible is imperative (i.e. it can execute bash statements) but also declarative, so in some cases, say starting Apache, you can use the service task to make sure that it's running. This is closer to other configuration management tools like Puppet and Chef.

Complex jobs / scripts
Rundeck has the ability to run another job by defining a step in the Job's workflow, but from experience this feels more like a tacked-on addition than a serious top-level feature. Ansible is designed to create complex operations; running/including/etc. are top-level features.

How it runs
Rundeck is a server app. If you want to run jobs from somewhere else (like CI) you'll either need to call out to the CLI or make an API call. Straight Ansible is command-line.

Proviso
Due to the cross-over and overall flexibility of Rundeck and Ansible you could achieve all of the above in each. You can achieve version control of your Rundeck jobs by exporting them to YAML or XML and checking them into source control. You can get a web UI in Ansible using Tower. Etc., etc.

Your questions:

Complementary tools? I could envision a SaaS shop using both: one might use Ansible to perform all deployment actions and then use Rundeck to perform one-off, ad-hoc jobs. However, while I could envision it, I wouldn't recommend that as a starting point. Me, I'd start with just Ansible and see how far I get. I'd only layer in Rundeck later on if I discovered that I really, really need to run one-offs.

CI/CD Ansible: your environment sounds more like a software house where you're deploying your own app. It should probably be repeatable (especially as you're going Continuous Delivery), so you'll want your deploy scripts in source control. You'll want simplicity, and Ansible is "just text files". I hope you will also want your devs to be able to run things on their machines (right?); Ansible is decentralised.

Used together (for CI/CD) Calling Rundeck from Ansible, no. Sure, it would be possible, but I'm struggling to come up with good reasons. At least, not very specialised specific-to-a-particular-app-or-framework reasons. Calling Ansible from Rundeck, yes.
I could envision someone first building out some repeatable ad-hoc commands in Ansible. Then I could see there being a little demand for being able to call those without a command line (say, for non-technical users). But, again, this is getting specific to your environment.
Ansible
31,152,102
48
I would like to set an Ansible variable to some default value, but only if the variable is undefined. Otherwise I would like to keep it unchanged. I tried these two approaches and both of them produce a recursive loop:

namespace: "{{namespace|default(default_namespace)}}"
namespace: "{% if namespace is defined %}{{namespace}}{% else %}{{default_namespace}}{% endif %}"
It seems like you are taking a wrong approach. Take a look at the Ansible documentation concerning variable precedence. It is a built-in feature of Ansible to use the default variable if the variable is not defined. In Ansible 2.x the variable precedence starts like this: role defaults inventory vars So if you want to define a default value for a variable you should set it in role/defaults/main.yml. Ansible will use that value only if the variable is not defined somewhere else. Another option is to use a Jinja2 filter. With a Jinja filter you can set a default value for a variable like this: {{ some_variable | default(5) }}
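For illustration, the two options side by side (role and variable names here are hypothetical). Putting the fallback in the role's defaults file means any inventory or play value simply overrides it:

# roles/myrole/defaults/main.yml
namespace: "{{ default_namespace }}"

And when the default filter is used instead, the recursion in the question goes away as soon as the target variable has a different name from the one being tested:

effective_namespace: "{{ namespace | default(default_namespace) }}"

Assigning namespace to an expression that references namespace itself is what produced the recursive loop.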
Ansible
35,083,756
48
Background

My question seems simple, but it gets more complex really fast. Basically, I got really tired of maintaining my servers manually (screams in background) and I decided it was time to find a way to make being a server admin much more liveable. That's when I found Ansible. Great, huh? Sure beats making bash scripts (louder scream) for everything I wanted to automate.

What's the problem?

I'm having a lot of trouble figuring out what user my Ansible playbook will run certain things as. I also need the ability to specify what user certain tasks will run as. Here are some specific use cases:

Cloning a repo as another user: My purpose with this is to run my node.js webapp as another user, who we'll call bill (who can only use sudo to run a script that I made that starts the node server, as opposed to root or my user that can use sudo for all commands). To do this, I need the ability to have Ansible's git module clone my git repo as bill. How would I do that?

Knowing how Ansible will gain root: As far as I understand, you can set what user Ansible will connect to the server as by defining 'user' at the beginning of the playbook file. Here's what I don't understand: if I tell it to connect via my username, joe, and ask it to update a package via the apt module, how will it gain root? Sudo usually prompts me for my password, and I'd prefer to keep it that way (for security).

Final request

I've scoured the Ansible docs, done some (what I thought was thorough) Googling, and generally just tried to figure it out on my own, but this information continues to elude me. I am very new to Ansible, and while it's mostly straightforward, I would benefit greatly if I could understand exactly how Ansible runs, as which users it runs, and how/where I can specify what user to use at different times. Thank you tons in advance.
You may find it useful to read the Hosts and Users section on Ansible's documentation site: http://docs.ansible.com/playbooks_intro.html#hosts-and-users In summary, ansible will run all commands in a playbook as the user specified in the remote_user variable (assuming you're using ansible >= 1.4, user before that). You can specify this variable on a per-task basis as well, in case a task needs to run as a certain user. Use sudo: true in any playbook/task to use sudo to run it. Use the sudo_user variable to specify a user to sudo to if you don't want to use root. In practice, I've found it easiest to run my playbook as a deploy user that has sudo privileges. I set up my SSH keys so I can SSH into any host as deploy without using a password. This means that I can run my playbook without using a password and even use sudo if I need to. I use this same user to do things like cloning git repos and starting/stopping services. If a service needs to run as a lower-privileged user, I let the init script take care of that. A quick Google search for a node.js init.d script revealed this one for CentOS: https://gist.github.com/nariyu/1211413 Doing things this way helps to keep it simple, which I like. Hope that helps.
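As a concrete illustration of the repo-clone use case from the question (note that newer Ansible releases renamed sudo/sudo_user to become/become_user; the connection user and repo URL below are made-up examples):

- hosts: webservers
  remote_user: deploy              # the user Ansible connects (SSHes) as
  tasks:
    - name: clone the app repository as bill
      become: yes
      become_user: bill            # run this one task as the low-privilege user
      git:
        repo: git@github.com:example/webapp.git   # hypothetical repo
        dest: /home/bill/webapp

Run with --ask-become-pass (or -K) if sudo on the hosts should keep prompting for a password, which addresses the security preference mentioned in the question.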
Ansible
21,670,747
48
In my playbook, I have this: #More things - include: deploy_new.yml vars: service_type: "{{ expose_service == 'true' | ternary('NodePort', 'ClusterIP') }}" when: service_up|failed When expose_service is true, I want service_type to be set to NodePort, and ClusterIP otherwise. However, service_type is set to False in all cases. What am I doing wrong?
Solved! service_type: "{{ 'NodePort' if expose_service == 'true' else 'ClusterIP' }}"
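For completeness, the ternary form from the question can also be made to work; the original failed because a Jinja filter binds more tightly than the comparison, so ternary was applied to the literal 'true' rather than to the whole test. Wrapping the comparison in parentheses, shown here as a sketch, fixes that:

service_type: "{{ (expose_service == 'true') | ternary('NodePort', 'ClusterIP') }}"

And if expose_service is a real boolean rather than the string 'true', the comparison can be dropped entirely: "{{ expose_service | ternary('NodePort', 'ClusterIP') }}".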
Ansible
37,160,668
47
I am running an Ansible play and would like to list all the hosts targeted by it. The Ansible docs mention that this is possible, but their method doesn't seem to work with a complex targeted group (targeting like hosts: web_servers:&data_center_primary). I'm sure this is doable, but I can't seem to find any further documentation on it. Is there a var with all the currently targeted hosts?
You are looking for the 'play_hosts' variable:

---
- hosts: all
  tasks:
    - name: Create a group of all hosts by app_type
      group_by: key={{app_type}}

    - debug: msg="groups={{groups}}"
      run_once: true

- hosts: web:&some_other_group
  tasks:
    - debug: msg="play_hosts={{play_hosts}}"
      run_once: true

would result in:

TASK: [Create a group of all hosts by app_type] *******************************
changed: [web1] => {"changed": true, "groups": {"web": ["web1", "web2"], "load_balancer": ["web3"]}}

TASK: [debug msg="play_hosts={{play_hosts}}"] *********************************
ok: [web1] => {
    "msg": "play_hosts=['web1']"
}

inventory:

[proxy]
web1 app_type=web
web2 app_type=web
web3 app_type=load_balancer

[some_other_group]
web1
web3
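A version note worth adding: in later Ansible releases play_hosts has been deprecated in favour of the magic variables ansible_play_hosts (all hosts still active in the play) and ansible_play_batch (the current serial batch). A minimal sketch:

- debug:
    msg: "targeted hosts: {{ ansible_play_hosts }}"
  run_once: true

It behaves the same way with complex group expressions such as web_servers:&data_center_primary, since the variable reflects whatever the play's hosts: pattern resolved to.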
Ansible
28,709,501
47