qid | question | date | metadata | response_j | response_k | __index_level_0__
---|---|---|---|---|---|---|
65,814,036 | When I want to import a package in Python, I can alias it:
```
import package_with_a_very_long_nameeeeee as pl
```
After that statement, I can refer to the package by its alias:
```
pl.foo()
```
Julia allows me to do:
```
using PackageWithAVeryLongName
pl = PackageWithAVeryLongName
pl.foo()
```
But it feels like an ugly hack, possibly with implications I do not understand.
Is there an idiomatic way to alias imported packages in Julia? | 2021/01/20 | [
"https://Stackoverflow.com/questions/65814036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11926170/"
] | This is now possible on the upcoming Julia 1.6 using the exact same syntax as Python uses:
```
julia> import LinearAlgebra as LA
julia> typeof(LA)
Module
help?> LA.svd
svd(A; full::Bool = false, alg::Algorithm = default_svd_alg(A)) -> SVD
```
On prior versions, you can do what [@Bill suggests](https://stackoverflow.com/a/65814933/176071) — but I'd *strongly* recommend doing so as a `const` assignment alongside an `import`:
```
julia> import SparseArrays
julia> const SA = SparseArrays
SparseArrays
``` | python:
```
>>> import matplotlib as plt
>>> type(plt)
<class 'module'>
>>>
```
julia:
```
julia> using Plots
[ Info: Precompiling Plots [91a5bcdd-55d7-5caf-9e0b-520d859cae80]
julia> const plt = Plots
Plots
julia> typeof(plt)
Module
```
So the effect is pretty much identical between the two languages. What may make this usage less than ideal, and thus seem ugly in Julia, is that multiple dispatch usually lets multiple modules export the same function names without conflict, since the argument types usually differ. Having to prefix a function name with a module alias might therefore suggest that something should have been exported from the module but was not. Such exceptions ought to be rare. | 17,794 |
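The Python side of this comparison can be checked in a few lines: an alias is just another name bound to the same module object (using `json` here as a stand-in for a long package name):

```python
import json
import json as j   # same module object, bound to a second name

print(j is json)          # True
print(type(j).__name__)   # module

j2 = json                 # plain assignment makes an equivalent alias
print(j2.dumps({"a": 1})) # {"a": 1}
```

Because both forms are just name bindings, the Julia `pl = PackageWithAVeryLongName` trick and Python's `import ... as` end up semantically very close.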
245,465 | How do you connect to a remote server via IP address, in the manner that TOAD and SqlDeveloper are able to connect to databases with just the IP address, username, SID, and password?
Whenever I try to specify an IP address, it seems to be treated as local.
In other words, how should the string for cx\_Oracle.connect() be formatted for a non-local database?
There was a previous post which listed as an answer connecting to Oracle via cx\_Oracle module with the following code:
```
#!/usr/bin/python
import cx_Oracle
connstr='scott/tiger'
conn = cx_Oracle.connect(connstr)
curs = conn.cursor()
curs.execute('select * from emp')
print curs.description
for row in curs:
print row
conn.close()
``` | 2008/10/29 | [
"https://Stackoverflow.com/questions/245465",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | ```py
import cx_Oracle
dsn = cx_Oracle.makedsn(host='127.0.0.1', port=1521, sid='your_sid')
conn = cx_Oracle.connect(user='your_username', password='your_password', dsn=dsn)
conn.close()
``` | ```
import cx_Oracle
ip = '172.30.1.234'
port = 1524
SID = 'dev3'
dsn_tns = cx_Oracle.makedsn(ip, port, SID)
conn = cx_Oracle.connect('dbmylike', 'pass', dsn_tns)
print conn.version
conn.close()
``` | 17,795 |
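A DSN string can also be written in the EZConnect style (`host:port/service`); note that EZConnect technically takes a service name, so with a bare SID `makedsn` is the safer route. The host, port, and service below are placeholders, and this sketch only builds the string (no database is contacted):

```python
# Build an EZConnect-style address without cx_Oracle installed.
def ezconnect_dsn(host, port, service):
    """Return a 'host:port/service' string usable as the dsn argument."""
    return f"{host}:{port}/{service}"

dsn = ezconnect_dsn("172.30.1.234", 1524, "dev3")
print(dsn)  # 172.30.1.234:1524/dev3
# With the driver installed this would then be:
#   conn = cx_Oracle.connect("dbmylike", "pass", dsn)
```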
69,111,185 | I'm trying to convert the command-line output of `docker container ls -q` into a Python list using the `os.system` function from the `os` library.
Is there a way to do this using the os library?
Essentially I'm trying to check if there are any running containers before I proceed through my code.
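Note that `os.system` only returns the exit status, so it cannot capture the container IDs; a minimal sketch using `subprocess` instead, assuming `docker` is on PATH (the demonstration substitutes `echo` so it runs anywhere):

```python
import subprocess

def command_lines(cmd):
    """Run cmd and return its stdout as a list of lines."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.splitlines()

# With Docker installed: ids = command_lines(["docker", "container", "ls", "-q"])
# and `if ids:` then tells you whether any containers are running.
# Demonstrated here with a portable stand-in command:
print(command_lines(["echo", "abc123"]))  # ['abc123']
```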
Thanks in advance! | 2021/09/09 | [
"https://Stackoverflow.com/questions/69111185",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12706523/"
] | Use the `src` attribute of the `img` tag to render the image:
```
<img className="icon" key={item.weather[0].icon} src={`https://openweathermap.org/img/w/${item.weather[0].icon}.png`} />
```
And put the key on the `div` instead of on the children. | Not sure why you are doing it, but storing JSX in your state seems strange.
Try something like this, assuming `res2.data.list` has the list of values.
```
<div className="five_day_forecast">
<div className="temp_body">
// Display here with Date/Time & Main Temp.
<p>
{res2.data.list && res2.data.list.length >0 && res2.data.list.map(item => [
<div className="each_data_point">
<li className="date_time" key={item.dt_txt}>{`Date & Time: ${
item.dt_txt
}`}</li>
,
<img
className="icon"
key={item.weather[0].icon}
>{`https://openweathermap.org/img/w/${
item.weather[0].icon
}.png`}</img>
,
<li className="main_temp" key={item.main.temp}>{`Temp: ${
item.main.temp
}`}</li>
</div>
])}
</p>
</div>
</div>
``` | 17,805 |
56,272,161 | I'm trying to set up a Travis build with JDK 13, in two ways:
- by setting the `jdk` parameter to `openjdk-ea`
- by downloading the JDK and setting the env variables (installation tested locally).
Still, `java -version` reports 1.8 and that is what is used at runtime, even though Maven uses JDK 13 for the build.
Here is my `travis.yml`:
```
language: "perl"
perl:
- "5.14"
- "5.26"
git:
depth: false
sudo: false
jdk:
- openjdk-ea
addons:
apt:
packages:
- python3
- maven
- graphviz
before_install:
- wget https://download.java.net/java/early_access/jdk13/21/GPL/openjdk-13-ea+21_linux-x64_bin.tar.gz
- tar xvzf openjdk-13-ea+21_linux-x64_bin.tar.gz
- export JAVA_HOME=$PWD/jdk-13
- export PATH=$PATH:$JAVA_HOME/bin
- ls $JAVA_HOME
- java -version
- javac -version
- mvn -version
- mysql --version
- sqlite3 --version
- env
- cd wrappers/java
- mvn clean
- mvn package
- cd ../..
install:
- cpanm -v --installdeps --with-recommends --notest .
- cpanm -n Devel::Cover::Report::Coveralls
- cpanm -n Devel::Cover::Report::Codecov
script: "./scripts/dev/travis_run_tests.sh"
```
And here in the output you can see that `java -version` is 1.8, which causes the tests to fail, even though Maven uses JDK 13-ea for the build.
```
java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
travis_time:end:23204870:start=1558602135904104138,finish=1558602136218922843,duration=314818705
[0Ktravis_fold:end:before_install.18
[0Ktravis_fold:start:before_install.19
[0Ktravis_time:start:16b0d9da
[0K$ javac -version
javac 1.8.0_151
travis_time:end:16b0d9da:start=1558602136223048229,finish=1558602136845508635,duration=622460406
[0Ktravis_fold:end:before_install.19
[0Ktravis_fold:start:before_install.20
[0Ktravis_time:start:346a8d88
[0K$ mvn -version
Apache Maven 3.0.5
Maven home: /usr/share/maven
Java version: 13-ea, vendor: Oracle Corporation
Java home: /home/travis/build/Ensembl/jdk-13
Default locale: en_US, platform encoding: UTF-8
```
```
What am I doing wrong, and how do I correctly set the new JDK? | 2019/05/23 | [
"https://Stackoverflow.com/questions/56272161",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5427344/"
] | As loadChildren:string is deprecated in angular 8 Change your loadChildren declarations
From
```
loadChildren: './production/production.module#ProductionModule'
```
To
```
loadChildren: () => import('./production/production.module').then(m => m.ProductionModule)
```
For more, see the official Angular [pull request](https://github.com/angular/angular/pull/30073) and [commit](https://github.com/angular/angular/commit/c61df39). | Apparently, it's not the entire `"loadChildren" : ...` option which is deprecated; it just will no longer accept strings. Instead you have to specify a function now.
The documentation is already available [here](https://fireship.io/snippets/lazy-loaded-routes-angular-v8-ivy/).
It comes down to:
```
const routes = [
{
path: 'lazy',
loadChildren : () => import('./production/production.module').then(m => m.ProductionModule),
}
];
``` | 17,807 |
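As an aside on the Travis question above: PATH is searched left to right, so `export PATH=$PATH:$JAVA_HOME/bin` appends the new JDK and the system `java` found earlier on PATH still wins; prepending it (`export PATH=$JAVA_HOME/bin:$PATH`) selects JDK 13. A sketch of that lookup order using `shutil.which` (the directories are throwaway stand-ins, not real JDK paths):

```python
import os
import shutil
import tempfile

with tempfile.TemporaryDirectory() as old_jdk, tempfile.TemporaryDirectory() as new_jdk:
    # Put an executable stub named "java" in each fake JDK bin directory.
    for d in (old_jdk, new_jdk):
        exe = os.path.join(d, "java")
        open(exe, "w").close()
        os.chmod(exe, 0o755)
    appended = os.pathsep.join([old_jdk, new_jdk])   # like PATH=$PATH:$JAVA_HOME/bin
    prepended = os.pathsep.join([new_jdk, old_jdk])  # like PATH=$JAVA_HOME/bin:$PATH
    old_wins = shutil.which("java", path=appended).startswith(old_jdk)
    new_wins = shutil.which("java", path=prepended).startswith(new_jdk)

print(old_wins, new_wins)  # True True
```

The same left-to-right rule explains why Maven (which honors `JAVA_HOME` directly) saw JDK 13 while plain `java` did not.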
11,980,524 | Within brackets, python's slice shorthand auto-generates tuples of slice objects:
```
class Foo(object):
def __getitem__(self, key):
print key
Foo()[1::, 2:20:5]
```
This prints `(slice(1, None, None), slice(2, 20, 5))`. As far as I can tell, however, this shorthand doesn't work outside brackets.
Is there any way to use slice shorthand in other contexts? I could define a dummy object that simply returns whatever it is passed to `__getitem__` -- that would at least give me a way to generate slice tuples using the shorthand syntax. Is there a more pythonic way? | 2012/08/16 | [
"https://Stackoverflow.com/questions/11980524",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1332492/"
] | NumPy has an [`s_`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.s_.html) object that will do this for you:
```none
>>> np.s_[2::2]
slice(2, None, 2)
```
You can easily make your own version of this by setting `s_ = Foo()` and then using `s_` whenever you want to create a slice. | What you see printed out is actually a good clue. You can create slice objects directly through the `slice()` built-in function. These can then be used to index lists.
```
s = slice(1, 10, 2)
numbers = list(range(20))
print numbers[s]
print numbers[1:10:2]
```
This will print the same result twice, [1, 3, 5, 7, 9].
The slice instances have a few attributes and methods. You can look at help(slice) to see more. | 17,808 |
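The dummy-object idea from the question can be sketched in a few lines, mirroring what NumPy's `s_` does (the class name `IndexHelper` is made up):

```python
class IndexHelper:
    """Return whatever is passed to []; slices come back as slice objects."""
    def __getitem__(self, key):
        return key

s_ = IndexHelper()
print(s_[2::2])         # slice(2, None, 2)
print(s_[1::, 2:20:5])  # (slice(1, None, None), slice(2, 20, 5))

numbers = list(range(20))
print(numbers[s_[1:10:2]])  # [1, 3, 5, 7, 9]
```

Because `__getitem__` receives the already-built slice (or tuple of slices), the helper gets the shorthand syntax for free.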
24,743,805 | In Ruby one can inspect a variable (object) with `p my_obj`, which prints it as deeply and in as much detail as possible. It's useful for logging and debugging. In Python I thought the equivalent was `print my_obj`, but it printed nothing more than `<requests.sessions.Session object at 0x7febcb29b390>`, which wasn't useful at all.
How do I do this? | 2014/07/14 | [
"https://Stackoverflow.com/questions/24743805",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | `dir()` will probably give you what you want:
```
for key in dir(my_obj):
print('{}: {}'.format(key, getattr(my_obj, key)))
```
There's a lot more information there than you're expecting though, if I had to guess :) | That is the most universal way to "print" an object, it's just that not every object will have a very useful printout; most probably don't, in fact.
If you're working with an object someone else created, you'll often want to look at the documentation to find out how to look at its variables and whatnot. That will often give you the prettiest output. Other answers to this question offer a more general way of inspecting an object, which can be a lot faster and easier than looking up documentation; they do a more shallow inspection, as far as I can tell, but they're a good starting point.
If you're making your own object, you'll want to define a method called `__str__()` that returns a string. Then, whenever you say `print obj`, that object's `__str__()` function will be called, and that string will be returned.
In your case, if you have a `requests.Session()` object named `my_obj`, you can apparently print it with `print(my_obj.text)`, according to the documentation.
<http://docs.python-requests.org/en/latest/user/advanced/> | 17,809 |
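The `__str__()` approach described above can be sketched with a close variant, `__repr__`, which `print` falls back to when `__str__` is not defined (the `Point` class is hypothetical):

```python
from pprint import pformat

class Point:
    """Hypothetical class showing how to control its own printout."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __repr__(self):
        # print() falls back to __repr__ when __str__ is not defined,
        # and containers always use __repr__ for their elements.
        return f"Point(x={self.x}, y={self.y})"

p = Point(1, 2)
print(p)        # Point(x=1, y=2)
print(vars(p))  # {'x': 1, 'y': 2}
print(pformat([p, p]))
```

`vars()` and `pprint` give a quick "deep" dump of plain objects without any custom methods.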
46,019,965 | I'm trying to access data on IBM COS from Data Science Experience based on this [blog post](https://www.ibm.com/blogs/bluemix/2017/06/access-analyze-data-ibm-cross-region-cloud-object-storage/).
First, I select 1.0.8 version of stocator ...
```
!pip install --user --upgrade pixiedust
import pixiedust
pixiedust.installPackage("com.ibm.stocator:stocator:1.0.8")
```
Restarted kernel, then ...
```
access_key = 'xxxx'
secret_key = 'xxxx'
bucket = 'xxxx'
host = 'lon.ibmselect.objstor.com'
hconf = sc._jsc.hadoopConfiguration()
hconf.set("fs.s3d.service.endpoint", "http://" + host)
hconf.set("fs.s3d.service.access.key", access_key)
hconf.set("fs.s3d.service.secret.key", secret_key)
file = 'mydata_file.tsv.gz'
inputDataset = "s3d://{}.service/{}".format(bucket, file)
lines = sc.textFile(inputDataset, 1)
lines.count()
```
However, that results in the following error:
```
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: java.lang.AbstractMethodError: com/ibm/stocator/fs/common/IStoreClient.setStocatorPath(Lcom/ibm/stocator/fs/common/StocatorPath;)V
at com.ibm.stocator.fs.ObjectStoreFileSystem.initialize(ObjectStoreFileSystem.java:104)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:258)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:249)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:249)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:249)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:249)
at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:53)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:249)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:249)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1927)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:932)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:378)
at org.apache.spark.rdd.RDD.collect(RDD.scala:931)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:453)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:507)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:785)
```
**Note:** My first attempt at connecting to IBM COS resulted in a different error. That attempt is captured here: [No FileSystem for scheme: cos](https://stackoverflow.com/questions/46011671/no-filesystem-for-scheme-cos) | 2017/09/03 | [
"https://Stackoverflow.com/questions/46019965",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1033422/"
] | No need to install stocator; it is already there. As Roland mentioned, a new installation would most likely collide with the pre-installed one and cause conflicts.
Try ibmos2spark:
<https://stackoverflow.com/a/46035893/8558372>
Let me know if you are still facing problems. | Chris, I usually don't use the 'http://' in the endpoint and that works for me. Not sure if that is the problem here.
Here is how I access the COS objects from DSX notebooks
```
endpoint = "s3-api.dal-us-geo.objectstorage.softlayer.net"
hconf = sc._jsc.hadoopConfiguration()
hconf.set("fs.s3d.service.endpoint",endpoint)
hconf.set("fs.s3d.service.access.key",Access_Key_ID)
hconf.set("fs.s3d.service.secret.key",Secret_Access_Key)
inputObject = "s3d://<bucket>.service/<file>"
myRDD = sc.textFile(inputObject,1)
``` | 17,816 |
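The object-path convention used in both answers can be sketched without a Spark session (bucket and file names below are placeholders):

```python
def stocator_url(bucket, path, scheme="s3d", service="service"):
    """Build the '<scheme>://<bucket>.<service>/<path>' URL stocator expects."""
    return f"{scheme}://{bucket}.{service}/{path}"

print(stocator_url("mybucket", "mydata_file.tsv.gz"))
# s3d://mybucket.service/mydata_file.tsv.gz
```

The `service` component must match the prefix used in the `fs.s3d.service.*` Hadoop configuration keys.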
50,340,906 | I kind of know why, when I do `migrate`, it gives me the message `no migrations to apply`, but I just don't know how to fix it.
This is what happens.
I added a new field named `update` into my model fields.
I did a `migration`, which created a file called `0003_xxxxx.py`, then I did a `migrate`; this worked fine.
But then because of some reason, I have to remove the `update` field from the same model.
So I removed it (more likely commented it out instead of really deleting the code), then did a `migration` and `migrate`, which removed the field in the db (created `0004_xxxxx.py`).
But, sigh... for some reason I have to add the field back again (this is why I only commented it out). But then, before doing the `migration`,
I removed the `0003_xxxx.py` and `0004_xxxx.py` files. I wanted to remove those two files because the result is the same or almost the same config as `0003_xxxx.py`, so I feel it's pointless having both `0003_xxxx.py` and `0004_xxxx.py` here... also, when I go to production, that's just another extra step for Python to run.
After I removed those two files, I did the `migration`, which creates another `0003_xxxx.py`, but when I do `migrate` it gives me the message `no migrations to apply`.
I know that by deleting the new `0003_xxxx.py`, getting the original `0003` and `0004` back, then doing another `migration` (creating `0005_xxxx.py`) and `migrate`, the changes will be made. I know this because I didn't really delete the original `0003` and `0004`; I moved them somewhere just in case something like this happened.
But why is this though? and is there a way to fix it?
Thanks in advance for any replies | 2018/05/15 | [
"https://Stackoverflow.com/questions/50340906",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2128414/"
] | Django keeps a log of applied migrations in the `django_migrations` table in your db,
something like:
[![enter image description here](https://i.stack.imgur.com/4iFvO.jpg)](https://i.stack.imgur.com/4iFvO.jpg)
Migrations form a chain structure; each one depends on its parent node. Through this table Django knows which migration files have been executed.
In your case you get `no migrations to apply` because the newly created `0003_xxxx.py` is already recorded in this table; you can fix it by deleting that record.
If you want to remove migration files, see [Squashing migrations](https://docs.djangoproject.com/en/2.0/topics/migrations/#squashing-migrations). | Or, to simply delete the last migration row from your database, you can try the following.
First, check which row you want to remove
```
SELECT * FROM "public"."django_migrations";
```
Now pick the last `id` from the above query and execute
```
DELETE FROM "public"."django_migrations" WHERE "id"=replace with your Id;
```
run `./manage.py migrate`. | 17,820 |
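The bookkeeping described in both answers can be reproduced with plain sqlite3; the table layout below is sketched from Django's schema (column details, app and migration names are assumptions for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE django_migrations (
    id INTEGER PRIMARY KEY, app TEXT, name TEXT, applied TEXT)""")
con.executemany(
    "INSERT INTO django_migrations (app, name, applied) VALUES (?, ?, ?)",
    [("myapp", "0001_initial", "2018-05-01"),
     ("myapp", "0003_add_update", "2018-05-15")],
)
# migrate skips any file whose (app, name) pair is already recorded here,
# which is why a recreated 0003 file yields "no migrations to apply".
con.execute("DELETE FROM django_migrations WHERE name = '0003_add_update'")
rows = con.execute("SELECT name FROM django_migrations").fetchall()
print(rows)  # [('0001_initial',)]
```

After the stale row is gone, the recreated migration file is no longer considered applied and `migrate` will run it.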
44,504,899 | I'm getting something like this. Can anyone please tell me how to fix this.
```
C:\Users\krush\Documents\ML using Python>pip install pocketsphinx
Collecting pocketsphinx
Using cached pocketsphinx-0.1.3.zip
Building wheels for collected packages: pocketsphinx
Running setup.py bdist_wheel for pocketsphinx: started
Running setup.py bdist_wheel for pocketsphinx: finished with status 'error'
Complete output from command C:\Users\krush\Anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\krush\\AppData\\Local\\Temp\\pip-build-cns2i_wb\\pocketsphinx\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\krush\AppData\Local\Temp\tmp3tyvnl9wpip-wheel- --python-tag cp36:
running bdist_wheel
running build_ext
building 'sphinxbase._ad' extension
swigging swig/sphinxbase/ad.i to swig/sphinxbase/ad_wrap.c
swig.exe -python -modern -Ideps/sphinxbase/include -Ideps/sphinxbase/include/sphinxbase -Ideps/sphinxbase/include/win32 -Ideps/sphinxbase/swig -outdir sphinxbase -o swig/sphinxbase/ad_wrap.c swig/sphinxbase/ad.i
error: command 'swig.exe' failed: No such file or directory
----------------------------------------
Failed building wheel for pocketsphinx
Running setup.py clean for pocketsphinx
Failed to build pocketsphinx
Installing collected packages: pocketsphinx
Running setup.py install for pocketsphinx: started
Running setup.py install for pocketsphinx: finished with status 'error'
Complete output from command C:\Users\krush\Anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\krush\\AppData\\Local\\Temp\\pip-build-cns2i_wb\\pocketsphinx\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\krush\AppData\Local\Temp\pip-x5mxeczy-record\install-record.txt --single-version-externally-managed --compile:
running install
running build_ext
building 'sphinxbase._ad' extension
swigging swig/sphinxbase/ad.i to swig/sphinxbase/ad_wrap.c
swig.exe -python -modern -Ideps/sphinxbase/include -Ideps/sphinxbase/include/sphinxbase -Ideps/sphinxbase/include/win32 -Ideps/sphinxbase/swig -outdir sphinxbase -o swig/sphinxbase/ad_wrap.c swig/sphinxbase/ad.i
error: command 'swig.exe' failed: No such file or directory
----------------------------------------
Command "C:\Users\krush\Anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\krush\\AppData\\Local\\Temp\\pip-build-cns2i_wb\\pocketsphinx\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\krush\AppData\Local\Temp\pip-x5mxeczy-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\krush\AppData\Local\Temp\pip-build-cns2i_wb\pocketsphinx\
``` | 2017/06/12 | [
"https://Stackoverflow.com/questions/44504899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4813058/"
] | Instead of copying Swig files to the Python folder, you can simply add Swig's location to the environment variables:
1. Press `Win+S`
2. Type `env` and press `Enter`
3. Double click on `Path`
4. Add the *path-to-Swig* to the last blank line
5. Click `OK` and restart your PC | I was also getting the same error while installing on Ubuntu 16.04; I executed the following commands:
```
sudo apt-get install -y python python-dev python-pip build-essential swig git libpulse-dev
sudo pip install pocketsphinx
```
source: [pocketsphinx-python](https://github.com/cmusphinx/pocketsphinx-python) | 17,823 |
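The root cause in the build log above is that `swig.exe` is not on PATH; a quick pre-flight check for build tools can be written with the stdlib (the tool list is an example, not pocketsphinx's full requirements):

```python
import shutil

def missing_build_tools(tools=("swig",)):
    """Return the tools that are not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

missing = missing_build_tools()
if missing:
    print("Install or add to PATH:", ", ".join(missing))
else:
    print("swig found at:", shutil.which("swig"))
```

Running such a check before `pip install pocketsphinx` turns the cryptic wheel-build failure into an actionable message.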
44,221,225 | In reference to this question:
[python: Two modules and classes with the same name under different packages](https://stackoverflow.com/questions/15720593/python-two-modules-and-classes-with-the-same-name-under-different-packages)
Should all modules in a package be uniquely named, regardless of nesting? PEP 8 and PEP 423 do not seem to address this. | 2017/05/27 | [
"https://Stackoverflow.com/questions/44221225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3985089/"
] | No, there is no requirement that names at different levels must be different. Each level is a separate namespace. If `foo.utils` and `foo.bar.utils` make sense in your project, just do so.
For example, the Python standard library has [`email.message`](https://docs.python.org/3/library/email.message.html) and [`email.mime.message`](https://docs.python.org/3/library/email.mime.html?highlight=email.mime.message#email.mime.message.MIMEMessage), and [`multiprocessing.connection`](https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing.connection), as well as [`multiprocessing.dummy.connection`](https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing.dummy), and many more:
```
$ ls ~/Development/Library/cpython/Lib/**/*.py | grep -v __ | grep -v test_ | xargs basename | sort | uniq -c | grep -v ' 1 ' | sort
2 abc.py
2 ascii.py
2 client.py
2 connection.py
2 constants.py
2 dump.py
2 errors.py
2 filelist.py
2 handlers.py
2 log.py
2 message.py
2 parse.py
2 parser.py
2 process.py
2 queues.py
2 server.py
2 spawn.py
2 text.py
2 tree.py
3 main.py
4 config.py
5 support.py
6 util.py
```
That's all modules that appear inside packages more than once, excluding tests, `__init__.py` and `__main__.py`. | Since packages are based on the filesystem, you cannot, in normal circumstances, have two packages with the same full path, because files/directories cannot be duplicated.
You can have identically named modules under different namespace packages, of course.
It is also possible to have the same package/module name on different paths. They are searched in order, so the first one found wins. | 17,833 |
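A quick demonstration that duplicate leaf names are fine as long as the full dotted paths differ (the `foo`/`foo.bar` package names are made up, built in a temporary directory):

```python
import importlib
import os
import sys
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "foo", "bar"))
for pkg in ("foo", os.path.join("foo", "bar")):
    open(os.path.join(root, pkg, "__init__.py"), "w").close()
with open(os.path.join(root, "foo", "utils.py"), "w") as f:
    f.write("WHO = 'foo.utils'\n")
with open(os.path.join(root, "foo", "bar", "utils.py"), "w") as f:
    f.write("WHO = 'foo.bar.utils'\n")

sys.path.insert(0, root)
a = importlib.import_module("foo.utils")
b = importlib.import_module("foo.bar.utils")
print(a.WHO, b.WHO)  # foo.utils foo.bar.utils
```

Each level is its own namespace, so both `utils` modules coexist without shadowing each other.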
26,674,433 | I am using Windows 8.1 64-bit. I have installed Canopy with an academic license. My laptop has an Nvidia GeForce GT 750M.
I have installed Theano using `python setup.py develop` from Theano's git repository. I have Visual Studio 2013 and the CUDA Toolkit (64-bit) installed. The CUDA samples are working fine.
My `.theanorc` file:
```
[global]
device = gpu
floatX = float32
[nvcc]
compiler_bindir=C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin
```
I get the following error while doing `import theano`
```
c:\users\harsh\appdata\local\enthought\canopy\app\appdata\canopy-1.4.1.1975.win-x86_64\include\pymath.h(22): warning: dllexport/dllimport conflict with "round"
c:\program files\nvidia gpu computing toolkit\cuda\v6.5\include\math_functions.h(2455): here; dllimport/dllexport dropped
mod.cu(672): warning: conversion from pointer to smaller integer
mod.cu(954): warning: statement is unreachable
mod.cu(1114): error: namespace "std" has no member "min"
mod.cu(1145): error: namespace "std" has no member "min"
mod.cu(1173): error: namespace "std" has no member "min"
mod.cu(1174): error: namespace "std" has no member "min"
mod.cu(1317): error: namespace "std" has no member "min"
mod.cu(1318): error: namespace "std" has no member "min"
mod.cu(1442): error: namespace "std" has no member "min"
mod.cu(1443): error: namespace "std" has no member "min"
mod.cu(1742): error: namespace "std" has no member "min"
mod.cu(1777): error: namespace "std" has no member "min"
mod.cu(1781): error: namespace "std" has no member "min"
mod.cu(1814): error: namespace "std" has no member "min"
mod.cu(1821): error: namespace "std" has no member "min"
mod.cu(1853): error: namespace "std" has no member "min"
mod.cu(1861): error: namespace "std" has no member "min"
mod.cu(1898): error: namespace "std" has no member "min"
mod.cu(1905): error: namespace "std" has no member "min"
mod.cu(1946): error: namespace "std" has no member "min"
mod.cu(1960): error: namespace "std" has no member "min"
mod.cu(2684): warning: conversion from pointer to smaller integer
mod.cu(3750): error: namespace "std" has no member "min"
mod.cu(3752): error: namespace "std" has no member "min"
mod.cu(3784): error: namespace "std" has no member "min"
mod.cu(3786): error: namespace "std" has no member "min"
mod.cu(3789): error: namespace "std" has no member "min"
mod.cu(3791): error: namespace "std" has no member "min"
mod.cu(3794): error: namespace "std" has no member "min"
mod.cu(3795): error: namespace "std" has no member "min"
mod.cu(3836): error: namespace "std" has no member "min"
mod.cu(3838): error: namespace "std" has no member "min"
mod.cu(4602): error: namespace "std" has no member "min"
mod.cu(4604): error: namespace "std" has no member "min"
31 errors detected in the compilation of "C:/Users/Harsh/AppData/Local/Temp/tmpxft_00001e58_00000000-10_mod.cpp1.ii".
mod.cu
['nvcc', '-shared', '-g', '-O3', '--compiler-bindir', 'C:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\bin', '-Xlinker', '/DEBUG', '-m64', '-Xcompiler', '-DCUDA_NDARRAY_CUH=d67f7c8a21306c67152a70a88a837011,/Zi,/MD', '-IC:\\Users\\Harsh\\AppData\\Local\\Enthought\\Canopy\\User\\lib\\site-packages\\theano\\sandbox\\cuda', '-IC:\\Users\\Harsh\\AppData\\Local\\Enthought\\Canopy\\User\\lib\\site-packages\\numpy\\core\\include', '-IC:\\Users\\Harsh\\AppData\\Local\\Enthought\\Canopy\\App\\appdata\\canopy-1.4.1.1975.win-x86_64\\include', '-o', 'C:\\Users\\Harsh\\AppData\\Local\\Theano\\compiledir_Windows-8-6.2.9200-Intel64_Family_6_Model_58_Stepping_9_GenuineIntel-2.7.6-64\\cuda_ndarray\\cuda_ndarray.pyd', 'mod.cu', '-LC:\\Users\\Harsh\\AppData\\Local\\Enthought\\Canopy\\User\\libs', '-LC:\\Users\\Harsh\\AppData\\Local\\Enthought\\Canopy\\App\\appdata\\canopy-1.4.1.1975.win-x86_64\\libs', '-LNone\\lib', '-LNone\\lib64', '-LC:\\Users\\Harsh\\AppData\\Local\\Enthought\\Canopy\\App\\appdata\\canopy-1.4.1.1975.win-x86_64', '-lpython27', '-lcublas', '-lcudart']
ERROR (theano.sandbox.cuda): Failed to compile cuda_ndarray.cu: ('nvcc return status', 2, 'for cmd', 'nvcc -shared -g -O3 --compiler-bindir C:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\bin -Xlinker /DEBUG -m64 -Xcompiler -DCUDA_NDARRAY_CUH=d67f7c8a21306c67152a70a88a837011,/Zi,/MD -IC:\\Users\\Harsh\\AppData\\Local\\Enthought\\Canopy\\User\\lib\\site-packages\\theano\\sandbox\\cuda -IC:\\Users\\Harsh\\AppData\\Local\\Enthought\\Canopy\\User\\lib\\site-packages\\numpy\\core\\include -IC:\\Users\\Harsh\\AppData\\Local\\Enthought\\Canopy\\App\\appdata\\canopy-1.4.1.1975.win-x86_64\\include -o C:\\Users\\Harsh\\AppData\\Local\\Theano\\compiledir_Windows-8-6.2.9200-Intel64_Family_6_Model_58_Stepping_9_GenuineIntel-2.7.6-64\\cuda_ndarray\\cuda_ndarray.pyd mod.cu -LC:\\Users\\Harsh\\AppData\\Local\\Enthought\\Canopy\\User\\libs -LC:\\Users\\Harsh\\AppData\\Local\\Enthought\\Canopy\\App\\appdata\\canopy-1.4.1.1975.win-x86_64\\libs -LNone\\lib -LNone\\lib64 -LC:\\Users\\Harsh\\AppData\\Local\\Enthought\\Canopy\\App\\appdata\\canopy-1.4.1.1975.win-x86_64 -lpython27 -lcublas -lcudart')
WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu is not available
```
The Visual Studio command prompt gives the error:
```
C:\Program Files (x86)\Microsoft Visual Studio 12.0>cd C:\Users\Harsh\Downloads\
pycuda-2012.1
C:\Users\Harsh\Downloads\pycuda-2012.1>set VS90COMNTOOLS=%VS100COMNTOOLS%
C:\Users\Harsh\Downloads\pycuda-2012.1>python setup.py build
running build
running build_py
running build_ext
building '_driver' extension
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\BIN\cl.exe /c /nologo /Ox
/MD /W3 /GS- /DNDEBUG -DPYGPU_PACKAGE=pycuda -DHAVE_CURAND=1 -DBOOST_PYTHON_SOU
RCE=1 -DPYGPU_PYCUDA=1 -DBOOST_MULTI_INDEX_DISABLE_SERIALIZATION=1 -DBOOST_THREA
D_BUILD_DLL=1 -Dboost=pycudaboost -DBOOST_ALL_NO_LIB=1 -Isrc/cpp -Ibpl-subset/bp
l_subset "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v6.5\include" -Ic
:\users\harsh\appdata\local\enthought\canopy\user\lib\site-packages\numpy\core\i
nclude -IC:\Users\Harsh\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.4.1.
1975.win-x86_64\include -Ic:\users\harsh\appdata\local\enthought\canopy\user\PC
/Tpsrc/cpp/cuda.cpp /Fobuild\temp.win-amd64-2.7\Release\src/cpp/cuda.obj
cuda.cpp
Unknown compiler version - please run the configure tests and report the results
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\INCLUDE\xlocale(337) : wa
rning C4530: C++ exception handler used, but unwind semantics are not enabled. S
pecify /EHsc
c:\users\harsh\appdata\local\enthought\canopy\app\appdata\canopy-1.4.1.1975.win-
x86_64\include\pymath.h(22) : warning C4273: 'round' : inconsistent dll linkage
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\INCLUDE\math.h(51
6) : see previous definition of 'round'
c:\users\harsh\downloads\pycuda-2012.1\src\cpp\cuda.hpp(1786) : warning C4146: u
nary minus operator applied to unsigned type, result still unsigned
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\BIN\cl.exe /c /nologo /Ox
/MD /W3 /GS- /DNDEBUG -DPYGPU_PACKAGE=pycuda -DHAVE_CURAND=1 -DBOOST_PYTHON_SOU
RCE=1 -DPYGPU_PYCUDA=1 -DBOOST_MULTI_INDEX_DISABLE_SERIALIZATION=1 -DBOOST_THREA
D_BUILD_DLL=1 -Dboost=pycudaboost -DBOOST_ALL_NO_LIB=1 -Isrc/cpp -Ibpl-subset/bp
l_subset "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v6.5\include" -Ic
:\users\harsh\appdata\local\enthought\canopy\user\lib\site-packages\numpy\core\i
nclude -IC:\Users\Harsh\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.4.1.
1975.win-x86_64\include -Ic:\users\harsh\appdata\local\enthought\canopy\user\PC
/Tpsrc/cpp/bitlog.cpp /Fobuild\temp.win-amd64-2.7\Release\src/cpp/bitlog.obj
bitlog.cpp
Unknown compiler version - please run the configure tests and report the results
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\BIN\cl.exe /c /nologo /Ox
/MD /W3 /GS- /DNDEBUG -DPYGPU_PACKAGE=pycuda -DHAVE_CURAND=1 -DBOOST_PYTHON_SOU
RCE=1 -DPYGPU_PYCUDA=1 -DBOOST_MULTI_INDEX_DISABLE_SERIALIZATION=1 -DBOOST_THREA
D_BUILD_DLL=1 -Dboost=pycudaboost -DBOOST_ALL_NO_LIB=1 -Isrc/cpp -Ibpl-subset/bp
l_subset "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v6.5\include" -Ic
:\users\harsh\appdata\local\enthought\canopy\user\lib\site-packages\numpy\core\i
nclude -IC:\Users\Harsh\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.4.1.
1975.win-x86_64\include -Ic:\users\harsh\appdata\local\enthought\canopy\user\PC
/Tpsrc/wrapper/wrap_cudadrv.cpp /Fobuild\temp.win-amd64-2.7\Release\src/wrapper/
wrap_cudadrv.obj
wrap_cudadrv.cpp
Unknown compiler version - please run the configure tests and report the results
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\INCLUDE\xlocale(337) : wa
rning C4530: C++ exception handler used, but unwind semantics are not enabled. S
pecify /EHsc
c:\users\harsh\appdata\local\enthought\canopy\app\appdata\canopy-1.4.1.1975.win-
x86_64\include\pymath.h(22) : warning C4273: 'round' : inconsistent dll linkage
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\INCLUDE\math.h(51
6) : see previous definition of 'round'
src/cpp\cuda.hpp(1786) : warning C4146: unary minus operator applied to unsigned
type, result still unsigned
c:\users\harsh\appdata\local\enthought\canopy\user\lib\site-packages\numpy\core\
include\numpy\npy_1_7_deprecated_api.h(12) : Warning Msg: Using deprecated NumPy
API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\BIN\cl.exe /c /nologo /Ox
/MD /W3 /GS- /DNDEBUG -DPYGPU_PACKAGE=pycuda -DHAVE_CURAND=1 -DBOOST_PYTHON_SOU
RCE=1 -DPYGPU_PYCUDA=1 -DBOOST_MULTI_INDEX_DISABLE_SERIALIZATION=1 -DBOOST_THREA
D_BUILD_DLL=1 -Dboost=pycudaboost -DBOOST_ALL_NO_LIB=1 -Isrc/cpp -Ibpl-subset/bp
l_subset "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v6.5\include" -Ic
:\users\harsh\appdata\local\enthought\canopy\user\lib\site-packages\numpy\core\i
nclude -IC:\Users\Harsh\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.4.1.
1975.win-x86_64\include -Ic:\users\harsh\appdata\local\enthought\canopy\user\PC
/Tpsrc/wrapper/mempool.cpp /Fobuild\temp.win-amd64-2.7\Release\src/wrapper/mempo
ol.obj
mempool.cpp
Unknown compiler version - please run the configure tests and report the results
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\INCLUDE\xlocale(337) : wa
rning C4530: C++ exception handler used, but unwind semantics are not enabled. S
pecify /EHsc
c:\users\harsh\appdata\local\enthought\canopy\app\appdata\canopy-1.4.1.1975.win-
x86_64\include\pymath.h(22) : warning C4273: 'round' : inconsistent dll linkage
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\INCLUDE\math.h(51
6) : see previous definition of 'round'
src/cpp\cuda.hpp(1786) : warning C4146: unary minus operator applied to unsigned
type, result still unsigned
c:\users\harsh\appdata\local\enthought\canopy\user\lib\site-packages\numpy\core\
include\numpy\npy_1_7_deprecated_api.h(12) : Warning Msg: Using deprecated NumPy
API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
c:\users\harsh\downloads\pycuda-2012.1\src\wrapper\mempool.cpp(256) : fatal erro
r C1001: An internal error has occurred in the compiler.
(compiler file 'f:\dd\vctools\compiler\utc\src\p2\main.c', line 228)
To work around this problem, try simplifying or changing the program near the l
ocations listed above.
Please choose the Technical Support command on the Visual C++
Help menu, or open the Technical Support help file for more information
INTERNAL COMPILER ERROR in 'C:\Program Files (x86)\Microsoft Visual Studio 12.0\
VC\BIN\cl.exe'
Please choose the Technical Support command on the Visual C++
Help menu, or open the Technical Support help file for more information
error: command '"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\BIN\cl.e
xe"' failed with exit status 1
C:\Users\Harsh\Downloads\pycuda-2012.1>python setup.py install
running install
running bdist_egg
running egg_info
writing requirements to pycuda.egg-info\requires.txt
writing pycuda.egg-info\PKG-INFO
writing top-level names to pycuda.egg-info\top_level.txt
writing dependency_links to pycuda.egg-info\dependency_links.txt
reading manifest file 'pycuda.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.cpp' under directory 'bpl-subset\bpl_subset\
boost'
warning: no files found matching '*.html' under directory 'bpl-subset\bpl_subset
\boost'
warning: no files found matching '*.inl' under directory 'bpl-subset\bpl_subset\
boost'
warning: no files found matching '*.txt' under directory 'bpl-subset\bpl_subset\
boost'
warning: no files found matching '*.h' under directory 'bpl-subset\bpl_subset\li
bs'
warning: no files found matching '*.hpp' under directory 'bpl-subset\bpl_subset\
libs'
warning: no files found matching '*.ipp' under directory 'bpl-subset\bpl_subset\
libs'
warning: no files found matching '*.pl' under directory 'bpl-subset\bpl_subset\l
ibs'
writing manifest file 'pycuda.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_py
running build_ext
building '_driver' extension
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\BIN\cl.exe /c /nologo /Ox
/MD /W3 /GS- /DNDEBUG -DPYGPU_PACKAGE=pycuda -DHAVE_CURAND=1 -DBOOST_PYTHON_SOU
RCE=1 -DPYGPU_PYCUDA=1 -DBOOST_MULTI_INDEX_DISABLE_SERIALIZATION=1 -DBOOST_THREA
D_BUILD_DLL=1 -Dboost=pycudaboost -DBOOST_ALL_NO_LIB=1 -Isrc/cpp -Ibpl-subset/bp
l_subset "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v6.5\include" -Ic
:\users\harsh\appdata\local\enthought\canopy\user\lib\site-packages\numpy\core\i
nclude -IC:\Users\Harsh\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.4.1.
1975.win-x86_64\include -Ic:\users\harsh\appdata\local\enthought\canopy\user\PC
/Tpsrc/cpp/cuda.cpp /Fobuild\temp.win-amd64-2.7\Release\src/cpp/cuda.obj
cuda.cpp
Unknown compiler version - please run the configure tests and report the results
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\INCLUDE\xlocale(337) : wa
rning C4530: C++ exception handler used, but unwind semantics are not enabled. S
pecify /EHsc
c:\users\harsh\appdata\local\enthought\canopy\app\appdata\canopy-1.4.1.1975.win-
x86_64\include\pymath.h(22) : warning C4273: 'round' : inconsistent dll linkage
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\INCLUDE\math.h(51
6) : see previous definition of 'round'
c:\users\harsh\downloads\pycuda-2012.1\src\cpp\cuda.hpp(1786) : warning C4146: u
nary minus operator applied to unsigned type, result still unsigned
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\BIN\cl.exe /c /nologo /Ox
/MD /W3 /GS- /DNDEBUG -DPYGPU_PACKAGE=pycuda -DHAVE_CURAND=1 -DBOOST_PYTHON_SOU
RCE=1 -DPYGPU_PYCUDA=1 -DBOOST_MULTI_INDEX_DISABLE_SERIALIZATION=1 -DBOOST_THREA
D_BUILD_DLL=1 -Dboost=pycudaboost -DBOOST_ALL_NO_LIB=1 -Isrc/cpp -Ibpl-subset/bp
l_subset "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v6.5\include" -Ic
:\users\harsh\appdata\local\enthought\canopy\user\lib\site-packages\numpy\core\i
nclude -IC:\Users\Harsh\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.4.1.
1975.win-x86_64\include -Ic:\users\harsh\appdata\local\enthought\canopy\user\PC
/Tpsrc/cpp/bitlog.cpp /Fobuild\temp.win-amd64-2.7\Release\src/cpp/bitlog.obj
bitlog.cpp
Unknown compiler version - please run the configure tests and report the results
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\BIN\cl.exe /c /nologo /Ox
/MD /W3 /GS- /DNDEBUG -DPYGPU_PACKAGE=pycuda -DHAVE_CURAND=1 -DBOOST_PYTHON_SOU
RCE=1 -DPYGPU_PYCUDA=1 -DBOOST_MULTI_INDEX_DISABLE_SERIALIZATION=1 -DBOOST_THREA
D_BUILD_DLL=1 -Dboost=pycudaboost -DBOOST_ALL_NO_LIB=1 -Isrc/cpp -Ibpl-subset/bp
l_subset "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v6.5\include" -Ic
:\users\harsh\appdata\local\enthought\canopy\user\lib\site-packages\numpy\core\i
nclude -IC:\Users\Harsh\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.4.1.
1975.win-x86_64\include -Ic:\users\harsh\appdata\local\enthought\canopy\user\PC
/Tpsrc/wrapper/wrap_cudadrv.cpp /Fobuild\temp.win-amd64-2.7\Release\src/wrapper/
wrap_cudadrv.obj
wrap_cudadrv.cpp
Unknown compiler version - please run the configure tests and report the results
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\INCLUDE\xlocale(337) : wa
rning C4530: C++ exception handler used, but unwind semantics are not enabled. S
pecify /EHsc
c:\users\harsh\appdata\local\enthought\canopy\app\appdata\canopy-1.4.1.1975.win-
x86_64\include\pymath.h(22) : warning C4273: 'round' : inconsistent dll linkage
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\INCLUDE\math.h(51
6) : see previous definition of 'round'
src/cpp\cuda.hpp(1786) : warning C4146: unary minus operator applied to unsigned
type, result still unsigned
c:\users\harsh\appdata\local\enthought\canopy\user\lib\site-packages\numpy\core\
include\numpy\npy_1_7_deprecated_api.h(12) : Warning Msg: Using deprecated NumPy
API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\BIN\cl.exe /c /nologo /Ox
/MD /W3 /GS- /DNDEBUG -DPYGPU_PACKAGE=pycuda -DHAVE_CURAND=1 -DBOOST_PYTHON_SOU
RCE=1 -DPYGPU_PYCUDA=1 -DBOOST_MULTI_INDEX_DISABLE_SERIALIZATION=1 -DBOOST_THREA
D_BUILD_DLL=1 -Dboost=pycudaboost -DBOOST_ALL_NO_LIB=1 -Isrc/cpp -Ibpl-subset/bp
l_subset "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v6.5\include" -Ic
:\users\harsh\appdata\local\enthought\canopy\user\lib\site-packages\numpy\core\i
nclude -IC:\Users\Harsh\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.4.1.
1975.win-x86_64\include -Ic:\users\harsh\appdata\local\enthought\canopy\user\PC
/Tpsrc/wrapper/mempool.cpp /Fobuild\temp.win-amd64-2.7\Release\src/wrapper/mempo
ol.obj
mempool.cpp
Unknown compiler version - please run the configure tests and report the results
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\INCLUDE\xlocale(337) : wa
rning C4530: C++ exception handler used, but unwind semantics are not enabled. S
pecify /EHsc
c:\users\harsh\appdata\local\enthought\canopy\app\appdata\canopy-1.4.1.1975.win-
x86_64\include\pymath.h(22) : warning C4273: 'round' : inconsistent dll linkage
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\INCLUDE\math.h(51
6) : see previous definition of 'round'
src/cpp\cuda.hpp(1786) : warning C4146: unary minus operator applied to unsigned
type, result still unsigned
c:\users\harsh\appdata\local\enthought\canopy\user\lib\site-packages\numpy\core\
include\numpy\npy_1_7_deprecated_api.h(12) : Warning Msg: Using deprecated NumPy
API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
c:\users\harsh\downloads\pycuda-2012.1\src\wrapper\mempool.cpp(256) : fatal erro
r C1001: An internal error has occurred in the compiler.
(compiler file 'f:\dd\vctools\compiler\utc\src\p2\main.c', line 228)
To work around this problem, try simplifying or changing the program near the l
ocations listed above.
Please choose the Technical Support command on the Visual C++
Help menu, or open the Technical Support help file for more information
INTERNAL COMPILER ERROR in 'C:\Program Files (x86)\Microsoft Visual Studio 12.0\
VC\BIN\cl.exe'
Please choose the Technical Support command on the Visual C++
Help menu, or open the Technical Support help file for more information
error: command '"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\BIN\cl.e
xe"' failed with exit status 1
```
The key failure is the `INTERNAL COMPILER ERROR in 'C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\BIN\cl.exe'` near the end of the log.
I have followed [Installing theano on Windows 8 with GPU enabled](https://stackoverflow.com/questions/25729969/installing-theano-on-windows-8-with-gpu-enabled)
But the **Python distutils fix** cannot be applied because I am using Canopy. Please help. | 2014/10/31 | [
"https://Stackoverflow.com/questions/26674433",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2399329/"
] | Theano does not support the Visual Studio compiler. It is needed for CUDA, but not for Theano's CPU code, which requires a g++ compiler.
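If a g++ toolchain (e.g. MinGW) is installed, distutils can be told to use it instead of MSVC via a `distutils.cfg` file. A minimal sketch — the file location shown is an assumption; it belongs next to the `distutils` package of the Python you actually run:

```ini
; e.g. <your Canopy user Python>\Lib\distutils\distutils.cfg (path is an assumption)
[build]
compiler = mingw32
```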
Is the Canopy package for the compiler correctly installed? What did you do to try to compile with the Microsoft compiler? | Windows Canopy, like all standard CPython 2.7 distributions for Windows, is compiled with Visual C++ 2008, and that is the compiler that you should use for compiling all non-trivial C extensions for it.
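Once Visual C++ 2008 (VC9) is installed, the build can be pointed directly at it rather than aliasing a newer toolset the way `set VS90COMNTOOLS=%VS100COMNTOOLS%` does in the question. A minimal sketch, assuming the default VC9 install path:

```
:: Assumes Visual C++ 2008 is installed in its default location (an assumption).
set VS90COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\Tools\
python setup.py build
```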
The following article includes some links for obtaining this compiler:
<https://support.enthought.com/entries/26864394-Windows-Unable-to-find-vcvarsall-bat-cython-f2py-other-c-extensions-> | 17,834 |
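One quick way to confirm which MSVC a given Python build expects is the `MSC v.NNNN` tag that CPython embeds in `sys.version` — `MSC v.1500` corresponds to Visual C++ 2008 (VC9). A small illustrative helper (the function name is my own, not part of any library):

```python
import re
import sys

def msc_version(version_string):
    """Extract the MSVC build tag from a CPython sys.version string.

    Returns e.g. 1500 for a Visual C++ 2008 build, or None for
    non-MSVC builds (e.g. GCC-compiled Pythons).
    """
    m = re.search(r"MSC v\.(\d+)", version_string)
    return int(m.group(1)) if m else None

# On a Windows CPython 2.7 such as Canopy this reports 1500 (VC9):
print(msc_version(sys.version) or "not an MSVC build")
```

If this prints 1500, C extensions should be compiled with VC9, not the Visual Studio 12.0 `cl.exe` seen in the log above.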