id | prompt | response_j |
---|---|---|
187726 | I am using Retrofit 2.6.0 with coroutines for my web service calls. I get the API response properly for all response codes (success and error cases). My issue is that when I disconnect the internet (Wi-Fi/mobile data) in the middle of an API call, the resulting error is not caught properly by the code I have written. The errors are most often ConnectException and SocketException.
I have tried to catch the error using an interceptor and also in the ViewModel where I initiate the call, but in both places the exception is not caught and handled.
```
//ApiService
@GET(ApiUrl.API_DASHBOARD)
suspend fun getHomeUiDetails(@Header("Authorization") authHeader: String): Response<HomeDetailsResponse>
//ConnectionBridge
suspend fun getHomeUiDetails(authToken: String): Response<HomeDetailsResponse> {
return ApiServiceGenerator.BASIC_CLIENT_CONTRACT.getHomeUiDetails(authToken)
}
// ViewModel
viewModelScope.launch(Dispatchers.IO) {
val apiResponse = ApiConnectionBridge.getHomeUiDetails(SharedPrefUtils.getAuthToken(context))
if (apiResponse.isSuccessful) {
// success case
} else {
// error case
}
}
object ApiServiceGenerator {
val BASIC_CLIENT_CONTRACT: ApiService = ApiClient
.getContract(
ApiService::class.java,
true,
BuildConfig.BASE_URL
)
}
object ApiClient {
fun <T> getContract(clazz: Class<T>, isAuth: Boolean, baseUrl: String): T {
return getRetrofitBuilder(baseUrl, getContractBuilder(isAuth)).create(clazz)
}
private fun getRetrofitBuilder(baseUrl: String, builder: OkHttpClient.Builder): Retrofit {
val gson = GsonBuilder().serializeNulls().create()
val loggingInterceptor = HttpLoggingInterceptor()
loggingInterceptor.level = HttpLoggingInterceptor.Level.BODY
val okHttpClient = OkHttpClient.Builder()
.addInterceptor { chain ->
val original = chain.request()
// Customize the request
val request = original.newBuilder()
request.header("Content-Type", "application/x-www-form-urlencoded")
var response: Response? = null
try {
response = chain.proceed(request.build())
response.cacheResponse()
// Customize or return the response
response!!
} catch (e: ConnectException) {
Log.e("RETROFIT", "ERROR : " + e.localizedMessage)
chain.proceed(original)
} catch (e: SocketException) {
Log.e("RETROFIT", "ERROR : " + e.localizedMessage)
chain.proceed(original)
} catch (e: IOException) {
Log.e("RETROFIT", "ERROR : " + e.localizedMessage)
chain.proceed(original)
} catch (e: Exception) {
Log.e("RETROFIT", "ERROR : " + e.localizedMessage)
chain.proceed(original)
}
}
// .cache(cache)
.eventListener( object : EventListener() {
override fun callFailed(call: Call, ioe: IOException) {
super.callFailed(call, ioe)
}
})
.addInterceptor(loggingInterceptor)
.readTimeout(60, TimeUnit.SECONDS)
.connectTimeout(60, TimeUnit.SECONDS)
.build()
return Retrofit.Builder()
.baseUrl(baseUrl)
.client(okHttpClient)//getUnsafeOkHttpClient()
.addConverterFactory(GsonConverterFactory.create(gson))
.build()
}
}
```
The stack trace:
```
2019-08-02 14:15:12.819 4157-4288/com.my.app E/AndroidRuntime: FATAL EXCEPTION: DefaultDispatcher-worker-3
Process: com.my.app, PID: 4157
java.net.ConnectException: Failed to connect to my_url
at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.java:248)
at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:166)
at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:257)
at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:135)
at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:114)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:126)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.logging.HttpLoggingInterceptor.intercept(HttpLoggingInterceptor.java:213)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at com.my.app.network.ApiClient$getRetrofitBuilder$okHttpClient$1.intercept(ApiClient.kt:50)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:254)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:200)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:764)
Caused by: java.net.ConnectException: failed to connect to my_url (port 80) from /:: (port 0) after 60000ms: connect failed: ENETUNREACH (Network is unreachable)
at libcore.io.IoBridge.connect(IoBridge.java:137)
at java.net.PlainSocketImpl.socketConnect(PlainSocketImpl.java:137)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:390)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:230)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:212)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:436)
at java.net.Socket.connect(Socket.java:621)
at okhttp3.internal.platform.AndroidPlatform.connectSocket(AndroidPlatform.java:73)
at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.java:246)
at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:166)
at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:257)
at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:135)
at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:114)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:126)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.logging.HttpLoggingInterceptor.intercept(HttpLoggingInterceptor.java:213)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at com.my.app.network.ApiClient$getRetrofitBuilder$okHttpClient$1.intercept(ApiClient.kt:50)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:254)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:200)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:764)
Caused by: android.system.ErrnoException: connect failed: ENETUNREACH (Network is unreachable)
at libcore.io.Linux.connect(Native Method)
at libcore.io.BlockGuardOs.connect(BlockGuardOs.java:118)
at libcore.io.IoBridge.connectErrno(IoBridge.java:168)
at libcore.io.IoBridge.connect(IoBridge.java:129)
at java.net.PlainSocketImpl.socketConnect(PlainSocketImpl.java:137)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:390)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:230)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:212)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:436)
at java.net.Socket.connect(Socket.java:621)
at okhttp3.internal.platform.AndroidPlatform.connectSocket(AndroidPlatform.java:73)
at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.java:246)
at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:166)
at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:257)
at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:135)
at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:114)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:126)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.logging.HttpLoggingInterceptor.intercept(HttpLoggingInterceptor.java:213)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at com.my.app.network.ApiClient$getRetrofitBuilder$okHttpClient$1.intercept(ApiClient.kt:50)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:254)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:200)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:764)
``` | Well, here's what I do, just to reduce copy-pasted try-catch boilerplate.
Declare your API call methods like this:
```
@GET("do/smth")
suspend fun doSomething(): SomeCustomResponse
```
In a separate file
```
// requires the retrofit2.HttpException and java.io.IOException imports
suspend fun <T : Any> handleRequest(requestFunc: suspend () -> T): kotlin.Result<T> {
    return try {
        Result.success(requestFunc.invoke())
    } catch (he: HttpException) {
        // Retrofit throws HttpException for non-2xx response codes
        Result.failure(he)
    } catch (ioe: IOException) {
        // network failures: ConnectException, SocketException, UnknownHostException, ...
        Result.failure(ioe)
    }
}
```
Usage:
```
suspend fun doSmth(): kotlin.Result<SomeCustomResponse> {
return handleRequest { myApi.doSomething() }
}
```
HTTP codes are handled by Retrofit - it just throws an HttpException if the response code is not 2xx, so that exception needs to be caught; connectivity errors such as ConnectException and SocketException are subclasses of IOException and can be caught the same way.
I know it is not a perfect solution, but let's wait for Jake to invent something better. |
187906 | **How can I threshold this blurry image to make the digits as clear as possible?**
In [a previous post](https://stackoverflow.com/questions/13391073/adaptive-threshold-of-blurry-image), I tried adaptively thresholding a blurry image (left), which resulted in distorted and disconnected digits (right):
![enter image description here](https://i.stack.imgur.com/NjcGn.png)
Since then, I've tried using a morphological closing operation as described in [this post](https://stackoverflow.com/a/11366549/1397061) to make the brightness of the image uniform:
![enter image description here](https://i.stack.imgur.com/E5ANq.png)
If I adaptively threshold this image, I don't get significantly better results. However, because the brightness is approximately uniform, I can now use an ordinary threshold:
![enter image description here](https://i.stack.imgur.com/7oqvV.png)
This is a lot better than before, but I have two problems:
1. I had to manually choose the threshold value. Although the closing operation results in uniform brightness, the level of brightness might be different for other images.
2. Different parts of the image would do better with slight variations in the threshold level. For instance, the 9 and 7 in the top left come out partially faded and should have a lower threshold, while some of the 6s have fused into 8s and should have a higher threshold.
I thought that going back to an adaptive threshold, but with a very large block size (1/9th of the image) would solve both problems. Instead, I end up with a weird "halo effect" where the centre of the image is a lot brighter, but the edges are about the same as the normally-thresholded image:
![enter image description here](https://i.stack.imgur.com/kwetf.png)
Edit: remi [suggested](https://stackoverflow.com/a/13641589/1397061) morphologically opening the thresholded image at the top right of this post. This doesn't work too well. Using elliptical kernels, only a 3x3 is small enough to avoid obliterating the image entirely, and even then there are significant breakages in the digits:
![enter image description here](https://i.stack.imgur.com/9GaA5.png)
Edit2: mmgp [suggested](https://stackoverflow.com/a/13675537/1397061) using a Wiener filter to remove blur. I adapted [this code for Wiener filtering in OpenCV](http://www.ocf.berkeley.edu/~rayver/mm/code/cvWiener2/) to OpenCV4Android, but it makes the image even blurrier! Here's the image before (left) and after filtering with my code and a 5x5 kernel:
![enter image description here](https://i.stack.imgur.com/neeZH.png)
Here is my adapted code, which filters in-place:
```
private void wiener(Mat input, int nRows, int nCols) { // I tried nRows=5 and nCols=5
Mat localMean = new Mat(input.rows(), input.cols(), input.type());
Mat temp = new Mat(input.rows(), input.cols(), input.type());
Mat temp2 = new Mat(input.rows(), input.cols(), input.type());
// Create the kernel for convolution: a constant matrix with nRows rows
// and nCols cols, normalized so that the sum of the pixels is 1.
Mat kernel = new Mat(nRows, nCols, CvType.CV_32F, new Scalar(1.0 / (double) (nRows * nCols)));
// Get the local mean of the input. localMean = convolution(input, kernel)
Imgproc.filter2D(input, localMean, -1, kernel, new Point(nCols/2, nRows/2), 0);
// Get the local variance of the input. localVariance = convolution(input^2, kernel) - localMean^2
Core.multiply(input, input, temp); // temp = input^2
Imgproc.filter2D(temp, temp, -1, kernel, new Point(nCols/2, nRows/2), 0); // temp = convolution(input^2, kernel)
Core.multiply(localMean, localMean, temp2); //temp2 = localMean^2
Core.subtract(temp, temp2, temp); // temp = localVariance = convolution(input^2, kernel) - localMean^2
// Estimate the noise as mean(localVariance)
Scalar noise = Core.mean(temp);
// Compute the result. result = localMean + max(0, localVariance - noise) / max(localVariance, noise) * (input - localMean)
Core.max(temp, noise, temp2); // temp2 = max(localVariance, noise)
Core.subtract(temp, noise, temp); // temp = localVariance - noise
Core.max(temp, new Scalar(0), temp); // temp = max(0, localVariance - noise)
Core.divide(temp, temp2, temp); // temp = max(0, localVar-noise) / max(localVariance, noise)
Core.subtract(input, localMean, input); // input = input - localMean
Core.multiply(temp, input, input); // input = max(0, localVariance - noise) / max(localVariance, noise) * (input - localMean)
Core.add(input, localMean, input); // input = localMean + max(0, localVariance - noise) / max(localVariance, noise) * (input - localMean)
}
``` | Some hints that you might try out:
* Apply the morphological opening in your original thresholded image (the one which is noisy at the right of the first picture). You should get rid of most of the background noise and be able to reconnect the digits.
* Use a different preprocessing of your original image instead of morphological closing, such as a median filter (tends to blur the edges) or [bilateral filtering](http://docs.opencv.org/modules/imgproc/doc/filtering.html?highlight=bilateralfilter#cv2.bilateralFilter), which will better preserve the edges but is slower to compute.
* As far as thresholding is concerned, you can use the CV\_OTSU flag in `cv::threshold` to determine an optimal value for a global threshold. Local thresholding might still be better, and it should also work better after the bilateral or median filter |
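The CV\_OTSU hint can be illustrated with a small pure-Python sketch of Otsu's method (illustrative only; in OpenCV you would simply pass the Otsu flag to `cv::threshold`, and the pixel values below are made up):

```python
def otsu_threshold(pixels):
    """Return the global threshold (0-255) that maximizes the
    between-class variance; this is what the Otsu flag computes."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = float(len(pixels))
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0.0    # pixel count of the "background" class (<= t)
    sum0 = 0.0  # intensity sum of the "background" class
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mean0, mean1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mean0 - mean1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A crudely bimodal "image": dark digits (~50) on a bright background (~200).
pixels = [50] * 60 + [200] * 40
t = otsu_threshold(pixels)
```

Binarizing with `p > t` then separates the two populations without a hand-picked value, which addresses problem 1 in the question; problem 2 (locally varying thresholds) still calls for an adaptive method.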
187967 | I'm trying to get Django and Apache working together using Mod\_wsgi and currently I'm getting the following errors:
```
[Thu Jul 15 12:52:38 2010] [error] [client 10.100.50.73] mod_wsgi (pid=4803): Target WSGI script '/home/webdev/websites/virtualenvs/polaris/polaris_project.py' cannot be loaded as Python module.
[Thu Jul 15 12:52:38 2010] [error] [client 10.100.50.73] mod_wsgi (pid=4803): Exception occurred processing WSGI script '/home/webdev/websites/virtualenvs/polaris/polaris_project.py'.
[Thu Jul 15 12:52:38 2010] [error] [client 10.100.50.73] Traceback (most recent call last):
[Thu Jul 15 12:52:38 2010] [error] [client 10.100.50.73] File "/home/webdev/websites/virtualenvs/polaris/polaris_project.py", line 8, in <module>
[Thu Jul 15 12:52:38 2010] [error] [client 10.100.50.73] import django.core.handlers.wsgi
[Thu Jul 15 12:52:38 2010] [error] [client 10.100.50.73] ImportError: No module named django.core.handlers.wsgi
```
My apache conf looks like
```
Alias /polaris_django/media/ "/home/webdev/websites/virtualenvs/polaris/polaris/static/"
WSGIScriptAlias /polaris_django /home/webdev/websites/virtualenvs/polaris/polaris_project.py
WSGIApplicationGroup %{GLOBAL}
<Directory "/home/webdev/websites/virtualenvs/polaris">
Order deny,allow
Allow from all
</Directory>
```
My Mod\_WSGi file looks like
```
import os, sys
sys.path.append('/home/webdev/websites/virtualenvs/polaris')
sys.path.append('/home/webdev/websites/virtualenvs/polaris/polaris/apps')
sys.path.append('/home/webdev/websites/virtualenvs/polaris/polaris/extra_settings')
os.environ['DJANGO_SETTINGS_MODULE'] = 'polaris.settings'
print >> sys.stderr, sys.path
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
```
How can I get Apache serving Django properly? | It looks like you're using a virtualenv - you need to activate it within your WSGI script to set up the paths correctly.
```
activate_this = os.path.join("path/to/my/virtualenv", "bin/activate_this.py")
execfile(activate_this, dict(__file__=activate_this))
``` |
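One caveat worth adding (my note, not part of the original answer): `execfile` exists only in Python 2. On Python 3 the same trick is to read and `exec` the file; the sketch below uses a temporary stand-in for `bin/activate_this.py`, since the real one is generated by virtualenv:

```python
import os
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    # Stand-in for the activate_this.py that virtualenv generates;
    # the real script prepends the virtualenv's site-packages to sys.path.
    activate_this = os.path.join(d, "activate_this.py")
    with open(activate_this, "w") as f:
        f.write("import sys\nsys.path.insert(0, 'fake-venv-site-packages')\n")

    # Python 3 equivalent of execfile(activate_this, dict(__file__=activate_this)):
    with open(activate_this) as f:
        exec(f.read(), {"__file__": activate_this})
```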
188563 | Hi, I'm involved with tourist lodges in Namibia. We record water readings etc. every day, input them into an Excel file, and calculate consumption per pax. The problem is that not every staff member understands Excel, so I wrote a simple Python program to input the readings into Excel automatically. It works; the only problem is that I want to save each month in a new sheet and have all the data grouped by month (e.g. January (all readings), February (all readings)). I can create a new sheet, but I cannot input data into the new sheet; it just overwrites my data from the previous months. The code looks as follows:
```
import tkinter
from openpyxl import load_workbook
from openpyxl.styles import Font
import time
import datetime
book = load_workbook('sample.xlsx')
#sheet = book.active
Day = datetime.date.today().strftime("%B")
x = book.get_sheet_names()
list= x
if Day in list: # this checks if the sheet exists to stop the creation of multiple sheets with the same name
sheet = book.active
else:
book.create_sheet(Day)
sheet = book.active
#sheet = book.active
```
To write to the sheet I use an entry widget, then save the value as follows:
```
Bh1=int(Bh1In.get())
if Bh1 == '0':
import Error
else:
sheet.cell(row=Day , column =4).value = Bh1
number_format = 'Number'
```
Maybe I'm being stupid but please help!! | You're depending on getting the active worksheet instead of accessing it by name. Simply using something like:
```
try:
    sheet = book[Day]
except KeyError:
    sheet = book.create_sheet(Day)
```
is probably all you need. |
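The pattern above (look the sheet up by name, create it only on KeyError) can be sketched with a plain dict standing in for the workbook; this is an illustrative aside and does not require openpyxl:

```python
book = {}  # sheet name -> list of readings, standing in for the workbook

def get_or_create_sheet(book, name):
    try:
        return book[name]   # reuse the existing sheet
    except KeyError:
        book[name] = []     # like book.create_sheet(name)
        return book[name]

get_or_create_sheet(book, "January").append("reading 1")
get_or_create_sheet(book, "January").append("reading 2")   # same sheet, no overwrite
get_or_create_sheet(book, "February").append("reading 3")  # new month, new sheet
```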
189097 | **Problem description:** I want to create a program that can update one whole row (or the cells of that row within a given range) with one single API request.
This is what I've seen in the **documentation** related to my problem:
```py
# Updates A2 and A3 with values 42 and 43
# Note that update range can be bigger than values array
worksheet.update('A2:B4', [[42], [43]])
```
This is how I tried to implement it into **my program**:
```py
sheet.update(f'A{rowNumber + 1}:H{rowNumber + 1}', [[str(el)] for el in list_of_auctions[rowNumber]])
```
This is what printing `[[str(el)] for el in list_of_auctions[rowNumber]]` looks like:
```py
[['068222bb-c251-47ad-8c2a-e7ad7bad2f60'], ['urlLink'], ['100'], ['250'], ['20'], [''], [' ,'], ['0']]
```
As far as I can tell, everything here is according to the docs, however I get this **output**:
```
error from callback <bound method SocketHandler.handle_message of <amino.socket.SocketHandler object at 0x00000196F8C46070>>: {'code': 400, 'message': "Requested writing within range ['Sheet1'!A1:H1], but tried writing to row [2]", 'status': 'INVALID_ARGUMENT'}
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\websocket\_app.py", line 344, in _callback
callback(*args)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\amino\socket.py", line 80, in handle_message
self.client.handle_socket_message(data)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\amino\client.py", line 345, in handle_socket_message
return self.callbacks.resolve(data)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\amino\socket.py", line 204, in resolve
return self.methods.get(data["t"], self.default)(data)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\amino\socket.py", line 192, in _resolve_chat_message
return self.chat_methods.get(key, self.default)(data)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\amino\socket.py", line 221, in on_text_message
def on_text_message(self, data): self.call(getframe(0).f_code.co_name, objects.Event(data["o"]).Event)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\amino\socket.py", line 209, in call
handler(data)
File "C:\Users\1\Desktop\python bots\auction-bot\bot.py", line 315, in on_text_message
bid_rub(link, data.message.author.nickname, data.message.author.userId, id, linkto)
File "C:\Users\1\Desktop\python bots\auction-bot\bot.py", line 162, in bid_rub
sheet.update(f'A{ifExisting + 1}:H{ifExisting + 1}', [[str(el)] for el in list_of_auctions[ifExisting]])
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\gspread\utils.py", line 592, in wrapper
return f(*args, **kwargs)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\gspread\models.py", line 1096, in update
response = self.spreadsheet.values_update(
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\gspread\models.py", line 235, in values_update
r = self.client.request('put', url, params=params, json=body)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\gspread\client.py", line 73, in request
raise APIError(response)
```
**Question:** I cannot figure out what the issue is here; I have been trying to tackle it for quite a while. | The error indicates that the data you are trying to write has more rows than the target range. Each inner list represents a single row of data in the spreadsheet.
In your example:
```
[['068222bb-c251-47ad-8c2a-e7ad7bad2f60'], ['urlLink'], ['100'], ['250'], ['20'], [''], [' ,'], ['0']]
```
It represents 8 rows of data.
```
row 1 = ['068222bb-c251-47ad-8c2a-e7ad7bad2f60']
row 2 = ['urlLink']
row 3 = ['100']
row 4 = ['250']
row 5 = ['20']
row 6 = ['']
row 7 = [' ,']
row 8 = ['0']
```
If you want to write into a single row, the format should be:
```
[['068222bb-c251-47ad-8c2a-e7ad7bad2f60', 'urlLink', '100', '250', '20', '', ' ,', '0']]
```
To solve this, move the inner brackets: change `[[str(el)] for el in list_of_auctions[rowNumber]]` to `[[str(el) for el in list_of_auctions[rowNumber]]]`, or use `itertools.chain()` to build a single list within a list.
Your code should look like this:
```
sheet.update(f'A{rowNumber + 1}:H{rowNumber + 1}', [[str(el) for el in list_of_auctions[rowNumber]]])

# or, using itertools.chain():
import itertools
sheet.update(f'A{rowNumber + 1}:H{rowNumber + 1}', [list(itertools.chain(str(el) for el in list_of_auctions[rowNumber]))])
```
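The shape difference described above can be checked in plain Python (using the example values from the question; no Sheets call is made, so this is purely illustrative):

```python
row = ['068222bb-c251-47ad-8c2a-e7ad7bad2f60', 'urlLink', '100', '250', '20', '', ' ,', '0']

# Brackets around each element: 8 rows of 1 column each (what the question's code built).
as_rows = [[str(el)] for el in row]

# One outer bracket around the whole comprehension: 1 row of 8 columns,
# which is what an A:H range on a single row expects.
as_one_row = [[str(el) for el in row]]
```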
Reference:
----------
[Writing to a single range](https://developers.google.com/sheets/api/guides/values#writing_to_a_single_range)
[Python itertools](https://docs.python.org/3/library/itertools.html) |
First of all, please bear with my questions.
What I am doing is simply performing an AJAX request that returns a string response.
Here's my php
```
<?php
header('Access-Control-Allow-Origin: *');
echo 'Peenoise Crazy';
?>
```
Http request
```
let headers = new Headers({ 'Content-Type': 'application/x-www-form-urlencoded' });
let options = new RequestOptions({ headers: headers });
return this.http.post('./app/parsers/peenoise.php', '', options)
.then(val => console.log(val))
```
This doesn't work. But if I change the url into this `http://localhost:8912/app/parsers/peenoise.php`. It works.
Questions
1. Why `./app/parsers/peenoise.php` doesn't work?
2. If I change the url to `http://localhost:8912/app/parsers/peenoise.php`. Why do I need to put `header('Access-Control-Allow-Origin: *')`? It is on the same folder of my app.
3. How to use http.post without passing an url of `http://localhost:8912/route/to/phpfolder.php`?
Any help would be appreciated. Please guide me on this. Thanks | Inject Spring Boot properties
-----------------------------
You can inject the values in your code this way:
```
@Value("${server.port}")
private int port;
@Value("${server.contextPath}")
private String contextPath;
```
Hateoas
-------
Alternatively, you could take a look at the Hateoas project, which can generate the link section for you: <http://docs.spring.io/spring-hateoas/docs/current/reference/html> |
189140 | I'm working on a WinRT application that for now consists of two pages. While in the first page the user clicks a button that directs him/her to the second page where he makes a choice that needs to be passed back to the calling page. How can I pass data back to the calling page? | This expression selects *all* nodes, but does not include in the result set any nodes that are `RejectedRecords` or that have `RejectedRecords` as an ancestor:
```
//*[not(descendant::RejectedRecords) and not(ancestor-or-self::RejectedRecords)]
```
Here is a link to the result in [XPath Tester](http://www.xpathtester.com/xpath/49de9b7282429a217788acdcd872ec3c) |
189523 | When working with Terraform, what features of Azure services are there that cannot be scripted in Terraform or require embedding ARM? | This works (see <https://swish.swi-prolog.org/p/gMgNaCKQ.pl> for a working example):
```
suffix(L,S) :- append(_,S,L).
```
This also works (see <https://swish.swi-prolog.org/p/XqCXArvz.pl> for a working example):
```
suffix( L , L ) . % every list is a suffix of itself, with the empty list [] as its prefix
suffix( [_|L] , S ) :- suffix(L,S) . % otherwise, pop the list and try again.
``` |
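For readers less familiar with Prolog, the relation those two clauses compute can be sketched in Python (an illustrative aside, not part of the original answer):

```python
def is_suffix(l, s):
    """True when s is a suffix of l, i.e. append(_, S, L) holds."""
    return len(s) <= len(l) and l[len(l) - len(s):] == s
```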
189617 | Working with breeze backed by SharePoint, as described [here](https://stackoverflow.com/q/15497258/1014822), and using TypeScript rather than JS.
In a DataService class I create an EntityManager and execute a Query:
```
private servicePath: string = '/api/PATH/';
private manager: breeze.EntityManager;
constructor() {
this.init();
}
private init(): void {
this.manager = new breeze.EntityManager(this.servicePath);
}
public ListResponses(): breeze.Promise {
var query = breeze.EntityQuery.from("Responses");
return this.manager.executeQuery(query);
}
```
I then call this from my view model, which works fine:
```
private loadResponses(): void {
this.dataservice.ListResponses().then((data) => {
this.handleResponsesLoaded(data);
}).fail((error) => {
this.handleDataError(error);
});
}
private handleResponsesLoaded(data:any): void {
for (var i = 0; i < results.length; i++){
this.extendItem(results[i]);
}
this.renderList(results, "#tp-responses-list");
}
```
But my attempt to extend each item fails, because the item's `entityAspect` is null:
```
private extendItem(item: any): void {
item.entityAspect.propertyChanged.subscribe(() => { // FAILS HERE
setTimeout(() => {
if (item.entityAspect.entityState.isModified()) {
this.dataservice.SaveChanges().then((result) => {
tracer.Trace("SaveChanged Result: " + result);
}).fail((error) => {
this.handleDataError(error);
});
}
}, 0);
});
}
```
Upon inspecting the result item, I can see it's just the plain data object, with all the properties I would expect but no Entity goodness:
![enter image description here](https://i.stack.imgur.com/z1YHL.png)
I've just started with breeze, so the best way to put the question is probably: *what have I done wrong here?* | If Breeze can't find a matching type in its metadata to what it receives as a result of a query, it simply returns the "raw" json object.
The reason that your metadata is not available is usually due to one of two explanations:
1) You are not serializing the type information in the query response. The [BreezeController] attribute or the [BreezeJsonFormatter] attributes both accomplish this.
2) The query itself is not returning types for which metadata was described. You can either create the metadata directly on the client in this case, or have it returned from the server via a "Metadata" method (see the NoDb example in the Breeze zip package for an example of the first approach).
You can also look at the JsonResultsAdapter if you want to coerce any query result into a "known" metadata type, but this is generally not necessary if you are using the [BreezeController] attribute.
Hope this helps. |
189749 | I run this command from the CLI and it works fine...
```
curl -H Content-Type:text/plain -vLk https://10.42.0.197/exec/show%20ver --user chartley:<pw omitted>
```
Now when I put it into a bash script I get the following...
```
* About to connect() to 10.42.0.197 port 443 (#0)
* Trying 10.42.0.197... connected
* Connected to 10.42.0.197 (10.42.0.197) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* skipping SSL peer certificate verification
* SSL connection using TLS_RSA_WITH_RC4_128_SHA
* Server certificate:
* subject: CN=ASA Temporary Self Signed Certificate
* start date: Jul 18 20:53:46 2013 GMT
* expire date: Jul 16 20:53:46 2023 GMT
* common name: ASA Temporary Self Signed Certificate
* issuer: CN=ASA Temporary Self Signed Certificate
* Server auth using Basic with user 'chartley'
> GET /exec/show%20version HTTP/1.1
> Authorization: Basic
> User-Agent: Firefox
> Host: 10.42.0.197
> Accept: */*
> Content-Type:text/plain
>
< HTTP/1.1 401 Unauthorized
< Date: Tue, 04 Apr 2017 22:06:53 UTC
< Connection: close
< Content-Type: text/html
< Expires: Thu, 16 Feb 1989 00:00:00 GMT
* Authentication problem. Ignoring this.
< WWW-Authenticate: Basic realm="Authentication"
<
<HEAD><TITLE>Authorization Required</TITLE></HEAD><BODY><H1>Authorization Required</H1>Browser not authentication-capable or authentication failed.</BODY>
* Closing connection #0
```
I had the curl command echoed out with variable expansion performed, and it is character for character identical to the command that works on the CLI.
What am I missing?
Here is the script
```
#!/usr/bin/bash
IFS=$'\n'
echo "Gimme yo password foo!!"
read -s pass
pass=$(echo $pass | sed 's/[(\!|\@|\#|\$|\%|\^|\&|\*|\(|\))&]/\\&/g')
if [[ "$2" =~ [:space:] ]];
then
CMD=`echo $2 | sed 's/ /\%20/g'`
#echo "space matched"
#echo "$2"
fi
if [[ "$CMD" =~ */* ]];
then
CMD=`echo $CMD | sed 's/[\/]/\%2f/g'`
#echo "Slash matched"
#echo "$2"
fi
curl -H Content-Type:text/plain -vLk https://$1/exec/$CMD --user "$USER:$pass"
```
... and it is run as such... ASA_do 10.42.0.197 "show ver"
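As an aside, the two sed substitutions only handle spaces and slashes; a generic percent-encoder in plain sh could replace both (the `urlencode` name and the character whitelist here are my own):

```shell
urlencode() {
    s=$1 out=
    while [ -n "$s" ]; do
        c=${s%"${s#?}"}     # first character of s
        s=${s#?}            # rest of s
        case $c in
            [a-zA-Z0-9._~-]) out=$out$c ;;            # unreserved characters pass through
            *) out=$out$(printf '%%%02X' "'$c") ;;    # everything else becomes %XX
        esac
    done
    printf '%s\n' "$out"
}

urlencode 'show ver'    # prints show%20ver
```

The script could then build the path with `CMD=$(urlencode "$2")` instead of the two regex branches.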
Here is the output having added "set -x" in the bash script...
```
[chartley@s324phx-syslog ~]$ ASA_do 10.42.0.197 "show version"
+ echo 'Gimme yo password foo!!'
Gimme yo password foo!!
+ read -s pass
++ echo '<omitted>'
++ sed 's/[(\!|\@|\#|\$|\%|\^|\&|\*|\(|\))&]/\\&/g'
+ pass='<pw omitted>'
+ [[ show version =~ [:space:] ]]
++ echo 'show version'
++ sed 's/ /\%20/g'
+ CMD=show%20version
+ [[ show%20version =~ */* ]]
+ curl -H Content-Type:text/plain -vLk https://10.42.0.197/exec/show%20version --user 'chartley:<pw omitted>'
* About to connect() to 10.42.0.197 port 443 (#0)
* Trying 10.42.0.197... connected
* Connected to 10.42.0.197 (10.42.0.197) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* skipping SSL peer certificate verification
* SSL connection using TLS_RSA_WITH_RC4_128_SHA
* Server certificate:
* subject: CN=ASA Temporary Self Signed Certificate
* start date: Jul 18 20:53:46 2013 GMT
* expire date: Jul 16 20:53:46 2023 GMT
* common name: ASA Temporary Self Signed Certificate
* issuer: CN=ASA Temporary Self Signed Certificate
* Server auth using Basic with user 'chartley'
> GET /exec/show%20version HTTP/1.1
> Authorization: Basic <omitted>
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.18 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 10.42.0.197
> Accept: */*
> Content-Type:text/plain
>
< HTTP/1.1 401 Unauthorized
< Date: Thu, 06 Apr 2017 20:39:38 UTC
< Connection: close
< Content-Type: text/html
< Expires: Thu, 16 Feb 1989 00:00:00 GMT
* Authentication problem. Ignoring this.
< WWW-Authenticate: Basic realm="Authentication"
<
<HEAD><TITLE>Authorization Required</TITLE></HEAD><BODY><H1>Authorization Required</H1>Browser not authentication-capable or authentication failed.</BODY>
* Closing connection #0
```
This is the script with it working using eval...
```
#!/usr/bin/bash
set -x
echo "Gimme yo password foo!!"
IFS=$'\n' read -r -s -p 'Password:' pass
pass=$(echo $pass | sed 's/[(\!|\@|\#|\$|\%|\^|\&|\*|\(|\))&]/\\&/g' | sed "s/'//g")
if [[ "$2" =~ [:space:] ]];
then
CMD=`echo $2 | sed 's/ /\%20/g'`
#echo "space matched"
#echo "$2"
fi
if [[ "$CMD" =~ */* ]];
then
CMD=`echo $CMD | sed 's/[\/]/\%2f/g'`
#echo "Slash matched"
#echo "$2"
fi
eval curl -H Content-Type:text/plain -vLk https://$1/exec/$CMD --user "$USER:$pass"
``` | The answer was to add "eval" to the curl command.
```
#!/usr/bin/bash
set -x
echo "Gimme yo password foo!!"
IFS=$'\n' read -r -s -p 'Password:' pass
pass=$(echo $pass | sed 's/[(\!|\@|\#|\$|\%|\^|\&|\*|\(|\))&]/\\&/g' | sed "s/'//g")
if [[ "$2" =~ [:space:] ]];
then
CMD=`echo $2 | sed 's/ /\%20/g'`
#echo "space matched"
#echo "$2"
fi
if [[ "$CMD" =~ */* ]];
then
CMD=`echo $CMD | sed 's/[\/]/\%2f/g'`
#echo "Slash matched"
#echo "$2"
fi
eval curl -H Content-Type:text/plain -vLk https://$1/exec/$CMD --user "$USER:$pass"
``` |
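Why eval makes the difference: the sed line escapes the password's special characters with backslashes, and without eval those backslashes reach curl as literal characters inside the password, so the ASA rejects it; eval re-parses the command line and strips the escapes. A minimal sketch with printf standing in for curl:

```shell
pass='p\!w'    # what the sed escaping produces from a real password of p!w

printf '%s\n' "$pass"          # prints p\!w -- the backslash is sent as part of the password
eval "printf '%s\n' $pass"     # prints p!w  -- eval re-parses the line and removes the escape
```

An alternative fix would be to drop the sed escaping entirely; the double quotes already around "$USER:$pass" make it unnecessary.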
190087 | I've discovered that my Phonegap/Cordova app on iOS is saving all the pictures I take with the camera and uploading them to a server.
In iPhone Settings, under the "Usage" tab, my app is listed and the disk space it uses keeps increasing, even after quitting the app.
So how could I delete these temporary files, or the picture itself, after it was uploaded to my server? I use the pictureURI to locate it for uploading.
**EDIT:** thanks. I've implemented the code and now it looks like this:
In my fileupload function I've added this line:
```
window.resolveLocalFileSystemURI(fileURI, onResolveSuccess, faillocal);
```
and the other functions look like:
```
function onResolveSuccess(fileEntry) {
console.log(fileEntry.name);
entry.remove(entrysuccess, failresolve);
}
function entrysuccess(entry) {
console.log("Removal succeeded");
}
function failresolve(error) {
alert('Error removing directory: ' + error.code);
}
function faillocal(evt) {
console.log(evt.target.error.code);
}
```
But it doesn't work. File is uploaded, but no alert, no console writing, no deleting at all... | You should be able to take the picture URI and pass it to [window.resolveLocalFileSystemURI](http://docs.phonegap.com/en/1.8.1/cordova_file_file.md.html#LocalFileSystem_resolve_local_file_system_uri_quick_example) and then in the success callback of this method you will get a [FileEntry](http://docs.phonegap.com/en/1.8.1/cordova_file_file.md.html#FileEntry) object which you can call the [remove](http://docs.phonegap.com/en/1.8.1/cordova_file_file.md.html#FileEntry_remove) method on. |
190186 | I have programmed a Neural Network in Java and am now working on the back-propagation algorithm.
I've read that batch updates of the weights will cause a more stable gradient search than an online weight update.
As a test I've created a time series function of 100 points, such that `x = [0..99]` and `y = f(x)`. I've created a Neural Network with one input and one output and 2 hidden layers with 10 neurons for testing. What I am struggling with is the learning rate of the back-propagation algorithm when tackling this problem.
I have 100 input points so when I calculate the weight change `dw_{ij}` for each node it is actually a sum:
```
dw_{ij} = dw_{ij,1} + dw_{ij,2} + ... + dw_{ij,p}
```
where `p = 100` in this case.
Now the weight updates become really huge and therefore my error `E` bounces around such that it is hard to find a minimum. The only way I got some proper behaviour was when I set the learning rate `y` to something like `0.7 / p^2`.
Is there some general rule for setting the learning rate, based on the amount of samples? | <http://francky.me/faqai.php#otherFAQs> :
**Subject: What learning rate should be used for backprop?**
In standard backprop, too low a learning rate makes the network learn very slowly. Too high a learning rate
makes the weights and objective function diverge, so there is no learning at all. If the objective function is
quadratic, as in linear models, good learning rates can be computed from the Hessian matrix (Bertsekas and
Tsitsiklis, 1996). If the objective function has many local and global optima, as in typical feedforward NNs
with hidden units, the optimal learning rate often changes dramatically during the training process, since
the Hessian also changes dramatically. Trying to train a NN using a constant learning rate is usually a
tedious process requiring much trial and error. For some examples of how the choice of learning rate and
momentum interact with numerical condition in some very simple networks, see
<ftp://ftp.sas.com/pub/neural/illcond/illcond.html>
With batch training, there is no need to use a constant learning rate. In fact, there is no reason to use
standard backprop at all, since vastly more efficient, reliable, and convenient batch training algorithms exist
(see Quickprop and RPROP under "What is backprop?" and the numerous training algorithms mentioned
under "What are conjugate gradients, Levenberg-Marquardt, etc.?").
Many other variants of backprop have been invented. Most suffer from the same theoretical flaw as
standard backprop: the magnitude of the change in the weights (the step size) should NOT be a function of
the magnitude of the gradient. In some regions of the weight space, the gradient is small and you need a
large step size; this happens when you initialize a network with small random weights. In other regions of
the weight space, the gradient is small and you need a small step size; this happens when you are close to a
local minimum. Likewise, a large gradient may call for either a small step or a large step. Many algorithms
try to adapt the learning rate, but any algorithm that multiplies the learning rate by the gradient to compute
the change in the weights is likely to produce erratic behavior when the gradient changes abruptly. The
great advantage of Quickprop and RPROP is that they do not have this excessive dependence on the
magnitude of the gradient. Conventional optimization algorithms use not only the gradient but also second-order derivatives or a line search (or some combination thereof) to obtain a good step size.
With incremental training, it is much more difficult to concoct an algorithm that automatically adjusts the
learning rate during training. Various proposals have appeared in the NN literature, but most of them don't
work. Problems with some of these proposals are illustrated by Darken and Moody (1992), who
unfortunately do not offer a solution. Some promising results are provided by by LeCun, Simard, and
Pearlmutter (1993), and by Orr and Leen (1997), who adapt the momentum rather than the learning rate.
There is also a variant of stochastic approximation called "iterate averaging" or "Polyak averaging"
(Kushner and Yin 1997), which theoretically provides optimal convergence rates by keeping a running
average of the weight values. I have no personal experience with these methods; if you have any solid
evidence that these or other methods of automatically setting the learning rate and/or momentum in
incremental training actually work in a wide variety of NN applications, please inform the FAQ maintainer
(saswss@unx.sas.com).
**References**:
* Bertsekas, D. P. and Tsitsiklis, J. N. (1996), Neuro-Dynamic
Programming, Belmont, MA: Athena Scientific, ISBN 1-886529-10-8.
* Darken, C. and Moody, J. (1992), "Towards faster stochastic gradient search," in Moody, J.E., Hanson, S.J., and Lippmann, R.P., eds., Advances in Neural Information Processing Systems 4, San Mateo, CA: Morgan Kaufmann Publishers, pp. 1009-1016.
* Kushner, H.J., and Yin, G. (1997), Stochastic Approximation Algorithms and Applications, NY: Springer-Verlag.
* LeCun, Y., Simard, P.Y., and Pearlmutter, B. (1993), "Automatic learning rate maximization by online estimation of the Hessian's eigenvectors," in Hanson, S.J., Cowan, J.D., and Giles, C.L. (eds.), Advances in Neural Information Processing Systems 5, San Mateo, CA: Morgan Kaufmann, pp. 156-163.
* Orr, G.B. and Leen, T.K. (1997), "Using curvature information for fast stochastic search," in Mozer, M.C., Jordan, M.I., and Petsche, T. (eds.), Advances in Neural Information Processing Systems 9, Cambridge, MA: The MIT Press, pp. 606-612.
**Credits**:
* Archive-name: ai-faq/neural-nets/part1
* Last-modified: 2002-05-17
* URL: <ftp://ftp.sas.com/pub/neural/FAQ.html>
* Maintainer: saswss@unx.sas.com (Warren S. Sarle)
* Copyright 1997, 1998, 1999, 2000, 2001, 2002 by Warren S. Sarle, Cary, NC, USA. |
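Tying this back to the question's setup: with `dw` summed over `p = 100` samples, the simplest rule of thumb is to divide the summed gradient by `p` (i.e., use the mean gradient), so the step size no longer scales with the number of samples and the learning rate can be tuned independently of batch size. A toy sketch (a 1-D least-squares fit standing in for the network; the numbers are illustrative):

```python
import numpy as np

# Toy 1-D least-squares problem standing in for the network:
# minimize E(w) = 0.5 * sum_i (w * x_i - y_i)^2, whose minimum is at w = 3.
x = np.linspace(0.0, 1.0, 100)
y = 3.0 * x
p = len(x)

def summed_gradient(w):
    # sum of per-sample gradients, as in dw_ij = dw_ij,1 + ... + dw_ij,p
    return float(np.sum((w * x - y) * x))

def train(lr, divide_by_p):
    w = 0.0
    for _ in range(200):
        g = summed_gradient(w)
        if divide_by_p:
            g /= p  # mean gradient: step size independent of batch size
        w -= lr * g
    return w

print(train(0.2, divide_by_p=True))               # converges to ~3.0
print(abs(train(0.2, divide_by_p=False)) > 1e6)   # True: the raw summed gradient diverges
```

The same rate that converges on the averaged gradient blows up on the summed one, which matches the bouncing `E` described in the question; the `0.7 / p^2` workaround was in effect compensating for the missing `1/p`.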
190189 | I have a MainActivity where I replace the fragments in a FrameLayout using getSupportFragmentManager().beginTransaction.replace() and the PositionsFragment as given below.
When I open PositionsFragment initially, the data is displayed perfectly, but when I switch to the dashboard and then back to PositionsFragment, the cards in the RecyclerView get doubled.
They get doubled every time I switch. I tried clearing the ArrayList with productList.clear(); but it didn't work.
MainActivity where I replace the fragments.
```
case R.id.nav_dashboard:
    getSupportFragmentManager().beginTransaction().replace(R.id.fragment_container, new DashboardFragment()).commit();
    toolbar.setTitle("Dashboard");
    break;
case R.id.nav_positions:
getSupportFragmentManager().beginTransaction().replace(R.id.fragment_container,new PositionsFragment()).commit();
toolbar.setTitle("Positions");
break;
```
PositionsFragment
```
public class PositionsFragment extends Fragment{
public static Spinner spinner;
RecyclerView recyclerView;
ProgressBar progressBar;
ArrayList<Products> productList=new ArrayList<>();
ArrayList<String> positionsArray;
FloatingActionButton fab;
public PositionsFragment() {
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
View v=inflater.inflate(R.layout.fragment_positions, container, false);
return v;
}
@Override
public void onViewCreated(@NonNull View v, @Nullable Bundle savedInstanceState) {
super.onViewCreated(v, savedInstanceState);
fab=v.findViewById(R.id.positionsFab);
progressBar=v.findViewById(R.id.progressBarPositions);
progressBar.setVisibility(View.VISIBLE);
spinner=v.findViewById(R.id.positionsSpinner);
recyclerView=v.findViewById(R.id.positionsRecyclerView);
recyclerView.setLayoutManager(new LinearLayoutManager(getActivity()));
productList.clear();
new fetchDataPositions(new OnPositionsFetched() {
@Override
public void OnPositionsFetched() {
progressBar.setVisibility(View.INVISIBLE);
positionsArray=fetchDataPositions.getArrayList();
for (int i=0;i<positionsArray.size();i++){
productList.add(new Products(//data));
}
final PositionsRecyclerAdapter recyclerAdapter=new PositionsRecyclerAdapter(getActivity(),productList,getActivity());
recyclerView.setAdapter(recyclerAdapter);
}
}).execute(url);
ArrayAdapter<CharSequence> adapter = ArrayAdapter.createFromResource(getActivity(),
R.array.positions_spinner_array, android.R.layout.simple_spinner_item);
adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
spinner.setAdapter(adapter);
}
}
You can clear the list in the adapter constructor:
```
public PositionsRecyclerAdapter(Context context, ArrayList<Products> productList, Activity activity) {
    // mProductList is the adapter's own copy of the list
    if (mProductList != null) {
        mProductList.clear();
        mProductList.addAll(productList);
    }
}
```
Or you can create a method in your adapter and call it before adding new data.
```
public void clearProductList(){
mProductList.clear();
}
``` |
190828 | How can I make this code faster? the string can contain characters such as ", .?#" and possibly others.
```
Const Nums = ['0'..'9'];
function CleanNumber(s: String): Int64;
Var z: Cardinal;
begin
for z := length(s) downto 1 do
if not (s[z] in Nums) then Delete(s,z,1);
if s = '' then
Result := 0 else
Result := StrToInt64(s);
end;
```
Results (long loop):
CL2,CL3 = HeartWare's
*32-bit, "dirty number" / "clean number"*
* Mine: 270ms, 165ms
* CL2: 220ms, 210ms
* CL3: **100ms**, 110ms
* DirtyStrToNum: 215ms, **90ms**
*64-bit, "dirty number" / "clean number"*
* Mine: 2280ms, 75ms
* CL2: 1320ms, 130ms
* CL3: **280ms**, **25ms**
* DirtyStrToNum: 1390ms, 125ms | Here are two examples that for sure are faster than the one you have (deleting a character from a string is relatively slow):
This one works by pre-allocating a string of the maximum possible length and then filling it out with the digits as I come across them in the source string. No delete for every unsupported character, and no expansion of the target string for every supported character.
```
FUNCTION CleanNumber(CONST S : STRING) : Int64;
VAR
I,J : Cardinal;
C : CHAR;
T : STRING;
BEGIN
SetLength(T,LENGTH(S));
J:=LOW(T);
FOR I:=LOW(S) TO HIGH(S) DO BEGIN
C:=S[I];
IF (C>='0') AND (C<='9') THEN BEGIN
T[J]:=C;
INC(J)
END
END;
IF J=LOW(T) THEN
Result:=0
ELSE BEGIN
SetLength(T,J-LOW(T)); // or T[J]:=#0 [implementation-specific]
Result:=StrToInt64(T)
END
END;
```
This one works by simple multiplication of the end result by 10 and adding the corresponding digit value.
```
{$IFOPT Q+}
{$DEFINE OverflowEnabled }
{$ELSE }
{$Q+ If you want overflow checking }
{$ENDIF }
FUNCTION CleanNumber(CONST S : STRING) : Int64;
VAR
I : Cardinal;
C : CHAR;
BEGIN
Result:=0;
FOR I:=LOW(S) TO HIGH(S) DO BEGIN
C:=S[I];
IF (C>='0') AND (C<='9') THEN Result:=Result*10+(ORD(C)-ORD('0'))
END
END;
{$IFNDEF OverflowEnabled } {$Q-} {$ENDIF }
{$UNDEF OverflowEnabled }
```
Also note that I don't use IN or CharInSet as these are much slower than a simple inline >= and <= comparison.
Another comment I could make is the use of LOW and HIGH on the string variable. This makes it compatible with both 0-based strings (mobile compilers) and 1-based strings (desktop compilers). |
I am using the following code to display an image preview. In the code below the image preview and the choose button are separate, but I want them combined, like in the image below:
[![enter image description here](https://i.stack.imgur.com/dTeXr.png)](https://i.stack.imgur.com/dTeXr.png)
As shown in the image above, first it shows the add button. Then, once an image is selected, it displays a preview.
Here is the code I used:
**HTML**
```
<div id="imagePreview"></div>
<input id="uploadFile" type="file" name="image" class="img" />
```
**Script**
```
<script type="text/javascript">
$(function() {
$("#uploadFile").on("change", function()
{
var files = !!this.files ? this.files : [];
if (!files.length || !window.FileReader) return; // no file selected, or no FileReader support
if (/^image/.test( files[0].type)){ // only image file
var reader = new FileReader(); // instance of the FileReader
reader.readAsDataURL(files[0]); // read the local file
reader.onloadend = function(){ // set image data as background of div
$("#imagePreview").css("background-image", "url("+this.result+")");
}
}
});
});
</script>
```
**CSS**
```
#imagePreview {
width: 180px;
height: 180px;
background-position: center center;
background-size: cover;
-webkit-box-shadow: 0 0 1px 1px rgba(0, 0, 0, .3);
display: inline-block;
}
``` | 1. Hide the input
2. Attach a click handler to the image placeholder
3. Trigger the input's click manually
HIH
```js
$(function() {
$("#uploadFile").on("change", function()
{
var files = !!this.files ? this.files : [];
if (!files.length || !window.FileReader) return; // no file selected, or no FileReader support
if (/^image/.test( files[0].type)){ // only image file
var reader = new FileReader(); // instance of the FileReader
reader.readAsDataURL(files[0]); // read the local file
reader.onloadend = function(){ // set image data as background of div
$("#imagePreview").css("background-image", "url("+this.result+")");
}
}
});
});
$('#imagePreview').click(function(){
$('#uploadFile').click();
});
```
```css
#imagePreview {
width: 180px;
height: 180px;
background-position: center center;
background-size: cover;
-webkit-box-shadow: 0 0 1px 1px rgba(0, 0, 0, .3);
display: inline-block;
background-image: url('http://via.placeholder.com/350x150');
}
#uploadFile{
display: none
}
```
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="imagePreview"></div>
<input id="uploadFile" type="file" name="image" class="img" />
``` |
191224 | I am using a selector to build a custom @Html.EditorFor (called @Html.FullFieldEditor). It determines the type of input to generate (textbox, drop down, radio buttons, check boxes, etc.). I have been trying to hook it into one for a radio button list, thusly:
```
@Html.FullFieldEditor(m => m.MyModel.MyRadioButton, new Dictionary<string, object> { { "data_bind", "myRadioButton" } })
```
or like this:
```
@Html.FullFieldEditor(m => m.MyModel.MyRadioButton, new { data_bind = "checked: myRadioButton" })
```
But with no luck.
I was trying to fix my `selector.cshtml` code but only wound up making a horrible mess. Here is the code that has worked for me BEFORE I was trying to implement `knockout.js`:
```
@{
var supportsMany = typeof (IEnumerable).IsAssignableFrom(ViewData.ModelMetadata.ModelType);
var selectorModel = (Selector)ViewData.ModelMetadata.AdditionalValues["SelectorModelMetadata"];
var fieldName = ViewData.TemplateInfo.GetFullHtmlFieldName("");
var validationClass = ViewData.ModelState.IsValidField(fieldName) ? "" : "input-validation-error";
// Loop through the items and make sure they are Selected if the value has been posted
if(Model != null)
{
foreach (var item in selectorModel.Items)
{
if (supportsMany)
{
var modelStateValue = GetModelStateValue<string[]>(Html, fieldName) ?? ((IEnumerable)Model).OfType<object>().Select(m => m.ToString());
item.Selected = modelStateValue.Contains(item.Value);
}
else
{
var modelStateValue = GetModelStateValue<string>(Html, fieldName);
if (modelStateValue != null)
{
item.Selected = modelStateValue.Equals(item.Value, StringComparison.OrdinalIgnoreCase);
}
else
{
Type modelType = Model.GetType();
if (modelType.IsEnum)
{
item.Selected = item.Value == Model.ToString();
}
}
}
}
}
}
@functions
{
public MvcHtmlString BuildInput(string fieldName,
SelectListItem item, string inputType, object htmlAttributes)
// UPDATE: Trying to do it above
{
var id = ViewData.TemplateInfo.GetFullHtmlFieldId(item.Value);
var wrapper = new TagBuilder("div");
wrapper.AddCssClass("selector-item");
var input = new TagBuilder("input");
input.MergeAttribute("type", inputType);
input.MergeAttribute("name", fieldName);
input.MergeAttribute("value", item.Value);
input.MergeAttribute("id", id);
input.MergeAttributes(new RouteValueDictionary(htmlAttributes));
// UPDATE: and trying above, but see below in the
// @foreach...@BuildInput section
input.MergeAttributes(Html.GetUnobtrusiveValidationAttributes(fieldName, ViewData.ModelMetadata));
if(item.Selected)
input.MergeAttribute("checked", "checked");
wrapper.InnerHtml += input.ToString(TagRenderMode.SelfClosing);
var label = new TagBuilder("label");
label.MergeAttribute("for", id);
label.InnerHtml = item.Text;
wrapper.InnerHtml += label;
return new MvcHtmlString(wrapper.ToString());
}
/// <summary>
/// Get the raw value from model state
/// </summary>
public static T GetModelStateValue<T>(HtmlHelper helper, string key)
{
ModelState modelState;
if (helper.ViewData.ModelState.TryGetValue(key, out modelState) && modelState.Value != null)
return (T)modelState.Value.ConvertTo(typeof(T), null);
return default(T);
}
}
@if (ViewData.ModelMetadata.IsReadOnly)
{
var readonlyText = selectorModel.Items.Where(i => i.Selected).ToDelimitedString(i => i.Text);
if (string.IsNullOrWhiteSpace(readonlyText))
{
readonlyText = selectorModel.OptionLabel ?? "Not Set";
}
@readonlyText
foreach (var item in selectorModel.Items.Where(i => i.Selected))
{
@Html.Hidden(fieldName, item.Value)
}
}
else
{
if (selectorModel.AllowMultipleSelection)
{
if (selectorModel.Items.Count() < selectorModel.BulkSelectionThreshold)
{
<div class="@validationClass">
@foreach (var item in selectorModel.Items)
{
@BuildInput(fieldName, item, "checkbox") // throwing error here if I leave this as is (needs 4 arguments)
//But if I do this:
//@BuildInput(fieldName, item, "checkbox", htmlAttributes) // I get does not exit in current context
}
</div>
}
else
{
@Html.ListBox("", selectorModel.Items)
}
}
else if (selectorModel.Items.Count() < selectorModel.BulkSelectionThreshold)
{
<div class="@validationClass">
@*@if (selectorModel.OptionLabel != null)
{
@BuildInput(fieldName, new SelectListItem { Text = selectorModel.OptionLabel, Value = "" }, "radio")
}*@
@foreach (var item in selectorModel.Items)
{
@BuildInput(fieldName, item, "radio")//same here
}
</div>
}
else
{
@Html.DropDownList("", selectorModel.Items, selectorModel.OptionLabel)
}
}
```
Any help is greatly appreciated.
**EDIT**
I am trying to minimize existing `JS` code (over 1500 lines) to show/hide with KO (which seems able to cut down the code considerably). I know I've got choices (`visible`, `if`, etc.) with KO, and assuming what I wanted to accomplish with KO was doable I could go with `visible`. In any event, I can't get that far until I get past the binding hurdle.
Here is an example of some code I am using to show/hide with plain `JS`:
```
$(document).ready(function () {
$("input[name$='MyModel.MyRadioButton']").click(function () {
var radio_value = $(this).val();
if (radio_value == '1') {
$("#MyRadioButton_1").show();
$("#MyRadioButton_2").hide();
$("#MyRadioButton_3").hide();
}
    else if (radio_value == '2') {
        $("#MyRadioButton_1").hide();
        $("#MyRadioButton_2").show();
        $("#MyRadioButton_3").hide();
    }
    else if (radio_value == '3') {
        $("#MyRadioButton_1").hide();
        $("#MyRadioButton_2").hide();
        $("#MyRadioButton_3").show();
    }
});
$("#MyRadioButton_1").hide();
$("#MyRadioButton_2").hide();
$("#MyRadioButton_3").hide();
});
```
I figured KO could minimize the above. Again, I'm looking at 20-30 inputs, most with more than 3 choices (a few have 10 choices in a Drop Down). This is getting hard to maintain at 1500 lines and growing.
And then in my `view` I've got this going on:
```
<div id="MyRadioButton_1">
@Helpers.StartingCost(MyModel.Choice1, "1")
</div>
<div id="MyRadioButton_2">
@Helpers.StartingCost(MyModel.Choice2, "2")
</div>
<div id="MyRadioButton_3">
    @Helpers.StartingCost(MyModel.Choice3, "3")
</div>
```
The `view` code above will change slightly with KO, but again its the `JS` I am trying to cut down on.
**EDIT 2**
This is part of the code for `FullFieldEditor`. Some parts are left out for brevity (such as code for `RequiredFor`, `ToolTipFor` and `SpacerFor`).
```
public static MvcHtmlString FullFieldEditor<T, TValue>(this HtmlHelper<T> html, Expression<Func<T, TValue>> expression)
{
return FullFieldEditor(html, expression, null);
}
public static MvcHtmlString FullFieldEditor<T, TValue>(this HtmlHelper<T> html, Expression<Func<T, TValue>> expression, object htmlAttributes)
{
var metadata = ModelMetadata.FromLambdaExpression(expression, html.ViewData);
if (!metadata.ShowForEdit)
{
return MvcHtmlString.Empty;
}
if (metadata.HideSurroundingHtml)
{
return html.EditorFor(expression);
}
var wrapper = new TagBuilder("div");
wrapper.AddCssClass("field-wrapper");
var table = new TagBuilder("table");
table.Attributes["border"] = "0";
table.Attributes["width"] = "100%";//added this to even out table columns
var tbody = new TagBuilder("tbody");
var tr = new TagBuilder("tr");
var td1 = new TagBuilder("td");
td1.Attributes["width"] = "40%";
td1.Attributes["valign"] = "top";
var label = new TagBuilder("div");
label.AddCssClass("field-label");
label.AddCssClass("mylabelstyle");
label.InnerHtml += html.MyLabelFor(expression);
td1.InnerHtml = label.ToString();
var td2 = new TagBuilder("td");
td2.Attributes["width"] = "50%";
td2.Attributes["valign"] = "top";
var input = new TagBuilder("div");
input.AddCssClass("field-input");
input.InnerHtml += html.EditorFor(expression);
td2.InnerHtml = input.ToString();
var td3 = new TagBuilder("td");
td3.Attributes["width"] = "5%";
td3.Attributes["valign"] = "top";
if (metadata.IsRequired && !metadata.IsReadOnly)
{
td3.InnerHtml += html.RequiredFor(expression);
}
var td4 = new TagBuilder("td");
td4.Attributes["width"] = "5%";
td4.Attributes["valign"] = "middle";
if (!string.IsNullOrEmpty(metadata.Description))
{
td4.InnerHtml += html.TooltipFor(expression);
}
else td4.InnerHtml += html.SpacerFor(expression);
td4.InnerHtml += html.ValidationMessageFor(expression);
tr.InnerHtml = td1.ToString() + td2.ToString() + td3.ToString() + td4.ToString();
tbody.InnerHtml = tr.ToString();
table.InnerHtml = tbody.ToString();
wrapper.InnerHtml = table.ToString();
return new MvcHtmlString(wrapper + Environment.NewLine);
}
```
**UPDATE 3**
The options are not working. Option 1 will not even show `data-bind` in the `<input>`. Option 2 will not work since it's just checking if the field is required (the code just shows a "required" image if it is).
When I tried your first suggestion before your **"UPDATE2"** (`input.MergeAttributes(new RouteValueDictionary(htmlAttributes));`), this was the output:
```
<div class="field-input" data_bind="checked: MyRadioButton">
<div class="">
<div class="selector-item">
<input id="MyModel_MyRadioButton_Choice1" name="MyModel.MyRadioButton" type="radio" value="Choice1">
<label for="MyModel_MyRadioButton_Choice1">I am thinking about Choice 1.</label>
</div>
<!--Just showing one radio button for brevity-->
</div>
</div>
```
Since I merged the attribute with the `input` part of `TagBuilder`, which is outputting the `field-input` `<div>`, that is where it's being placed (which is logical). Notice that it should be `data-bind` but is showing as `data_bind` in the `field-input` class. This is how I have the `FullFieldEditor`:
```
@Html.FullFieldEditor(m => m.MyModel.MyRadioButton, new Dictionary<string, object> { { "data_bind", "myRadioButton" } })
```
What it should be showing up as is this, I think:
```
<div class="field-input">
<div class="">
<div class="selector-item">
<!-- "data-bind" should be showing up in the following INPUT, correct?-->
<input data-bind="checked: MyRadioButton" id="MyModel_MyRadioButton_Choice1" name="MyModel.MyRadioButton" type="radio" value="Choice1">
<label for="MyModel_MyRadioButton_Choice1">I am thinking about Choice 1.</label>
</div>
<!--Just showing one radio button for brevity-->
</div>
</div>
```
What I suspect is that I have to get that `htmlAttributes` into the `Selector.cshtml` above, and not in the `HtmlFormHelper.cs` file. The `Selector.cshtml` is what is making the determination between showing, for example, a drop down list, a radio button list or a checkbox list (among others). The `Selector.cshtml` is a template in the `Shared\EditorTemplates` folder.
For background: I have dozens of forms representing hundreds of inputs over dozens of pages (or wizards). I am using the `@Html.FullFieldEditor` because it was easier to maintain than having spaghetti code for each type of input (drop down, checkbox, radio buttons, etc.).
**UPDATE 4**
Still not working.
I tried this in the `Selector.cshtml` (its the `BuildInput` function)code and was able to get "data-bind" into the `<input>` tag for each radio button in the list:
```
input.MergeAttribute("data-bind", htmlAttributes);
```
and then I did this lower down in the same file:
```
@foreach (var item in selectorModel.Items)
{
@BuildInput(fieldName, item, "radio", "test")
}
```
and my HTML output is this:
```
<div class="selector-item">
<input data-bind="test" id="MyModel_MyRadioButton_Choice1" name="MyModel.MyRadioButton" type="radio" value="Choice1">
<label for="MyModel_MyRadioButton_Choice1">Choice 1</label>
</div>
```
**Which is what is leading me to believe it's the `Selector.cshtml` file, not the `HtmlFormHelper.cs` file.**
I am going to open up the bounty to everyone 50+. | **UPDATE3**
First good call on the underscore bit, I totally forgot about that. Your bit of code looks almost right, but should actually be this:
```
@Html.FullFieldEditor(m => m.MyModel.MyRadioButton, new Dictionary<string, object> { { "data_bind", "checked:myRadioButton" } })
```
So since you are dynamically selecting between checkbox and text, you'll have to do a couple of things. If its a checkbox, you'll have to use the code above. If its a textbox, you'll have to use:
```
@Html.FullFieldEditor(m => m.MyModel.MyTextBox, new Dictionary<string, object> { { "data_bind", "value:MyTextBox" } })
```
**UPDATE2**
So I updated your code where I ***think*** the data-bind belongs in your html (marked option1 and option2). What would be helpful is if you gave me a snippet of the html being generated, along with where you need the data bind.
```
public static MvcHtmlString FullFieldEditor<T, TValue>(this HtmlHelper<T> html, Expression<Func<T, TValue>> expression)
{
return FullFieldEditor(html, expression, null);
}
public static MvcHtmlString FullFieldEditor<T, TValue>(this HtmlHelper<T> html, Expression<Func<T, TValue>> expression, object htmlAttributes)
{
var metadata = ModelMetadata.FromLambdaExpression(expression, html.ViewData);
if (!metadata.ShowForEdit)
{
return MvcHtmlString.Empty;
}
if (metadata.HideSurroundingHtml)
{
return html.EditorFor(expression);
}
var wrapper = new TagBuilder("div");
wrapper.AddCssClass("field-wrapper");
var table = new TagBuilder("table");
table.Attributes["border"] = "0";
//added this to even out table columns
table.Attributes["width"] = "100%";
var tbody = new TagBuilder("tbody");
var td1 = new TagBuilder("td");
td1.Attributes["width"] = "40%";
td1.Attributes["valign"] = "top";
var label = new TagBuilder("div");
label.AddCssClass("field-label");
label.AddCssClass("mylabelstyle");
label.InnerHtml += html.MyLabelFor(expression);
td1.InnerHtml = label.ToString();
var td2 = new TagBuilder("td");
td2.Attributes["width"] = "50%";
td2.Attributes["valign"] = "top";
var input = new TagBuilder("div");
input.AddCssClass("field-input");
// option1
input.InnerHtml += html.EditorFor(expression, htmlAttributes);
td2.InnerHtml = input.ToString();
var td3 = new TagBuilder("td");
td3.Attributes["width"] = "5%";
td3.Attributes["valign"] = "top";
if (metadata.IsRequired && !metadata.IsReadOnly)
{
// option2
td3.InnerHtml += html.RequiredFor(expression, htmlAttributes);
}
var td4 = new TagBuilder("td");
td4.Attributes["width"] = "5%";
td4.Attributes["valign"] = "middle";
if (!string.IsNullOrEmpty(metadata.Description))
{
td4.InnerHtml += html.TooltipFor(expression);
}
else
{
td4.InnerHtml += html.SpacerFor(expression);
}
td4.InnerHtml += html.ValidationMessageFor(expression);
var tr = new TagBuilder("tr");
tr.InnerHtml = td1.ToString() + td2.ToString() + td3.ToString() + td4.ToString();
tbody.InnerHtml = tr.ToString();
table.InnerHtml = tbody.ToString();
wrapper.InnerHtml = table.ToString();
return new MvcHtmlString(wrapper + Environment.NewLine);
}
```
**UPDATE**
While I still believe below is the "correct" answer, here is what your tagbuilder is missing so you can pass custom attributes along:
```
input.MergeAttributes(new RouteValueDictionary(htmlAttributes));
```
**Original Answer**
I have a feeling that what is going to happen by trying to mix Razor and Knockout is that the Razor stuff will render, and then when Knockout is attached, the values in the Knockout viewmodel will override whatever was in the Razor view.
Here is my suggestion if you are trying to refactor to knockout:
1. Create an html only view (no razor).
2. Add your knockout bindings to it.
3. Pass your model data to knockout in the form of a json object using [Json.NET](http://json.codeplex.com/). You can do this in a variety of ways, but the easiest being simply stuffing the data on the view inside of a `<script>` tag that your javascript can find.
4. Use [Knockout Mapping](http://knockoutjs.com/documentation/plugins-mapping.html) to load the json object into the viewmodel. |
191280 | I have been tasked with writing a WinForms C# application that allows users to run ad-hoc queries. I have searched online and found a lot of tools that can be purchased ([EasyQuery](http://devtools.korzh.com/query-builder-net/)) but at this time buying one is out of the question.
So I am trying to write this myself. At this point I have created a treeview that is populated at run-time with the tables/columns and I can select the columns that are checked by the users. But now I have to figure out how to dynamically `JOIN` the tables that they have selected.
Partial table structure is below:
```
Table - Roles - PK = RoleId
RoleId
RoleName
Table - Center PK = CenterId/RoleId/Prefix
CenterId
RoleId
Prefix
Table - Employee - PK = EmployeeID
EmployeeId
Name
CenterId
Dept
Org
Table - Prof - PK = EmployeeId/Profile
EmployeeId
Profile
```
I have a total of 6 tables that can be joined on a variety of fields, but since the users need to join on the fly I need to determine the `JOIN` when they want to generate the SQL. At this point I don't know the best way to proceed to generate these `JOINs`.
I even thought about creating a table in the database with the `JOINs` listed for each table/column and then I could just build it from there but I am at a loss.
I have also tried something similar to this but I don't want to start down the wrong path if there are suggestions for a different way:
```
private void GetJoins()
{
string joinList = string.Empty;
foreach (TreeNode tn in TablesTreeView.Nodes)
{
if (tn.Checked)
if (tn.Nodes.Count > 0) // this would be parent items only
{
foreach (TreeNode childNode in tn.Nodes)
{
// for first child checked
// check the next checked parent nodes for matching checked fields
// if next parent node has same field name checked then add the JOIN
}
}
}
}
```
Does this seem like it is on the right track or can you suggest another way?
I am aware that this is going to be an incredibly difficult task and I haven't even gotten to the dynamic `WHERE` clause yet. I am just looking for advice on the best way to create the `JOINs` on an ad-hoc basis. | I can appreciate your urge to write this from scratch; after all, it's challenging and fun! But don't make the mistake of wasting valuable resources writing something that has already been written many times over. Creating a functional and secure query tool is much more complex than it may seem on the surface.
[SQL Server 2008 Management Studio Express](http://www.microsoft.com/download/en/details.aspx?id=7593) is free last time I checked.
[Versabanq Squel](http://versabanq.com/products/squel.php) is a robust and free sql query tool.
[There are many others](https://stackoverflow.com/questions/5170/sql-server-management-studio-alternatives)
[And even more here](http://www.databasejournal.com/features/mysql/article.php/3880961/Top-10-MySQL-GUI-Tools.htm) |
191357 | Two different ways to define a Kähler metric on a complex manifold are:
1) The fundamental form $\omega = g(J\cdot,\cdot)$ is closed, ie, $d\omega=0$;
2) The complex structure $J$ is parallel with respect to the Levi-Civita connection of $g$, ie $\nabla J=0$.
Using the formula
$$d\omega(X,Y,Z) = X\omega(Y,Z) - Y\omega(X,Z) + Z\omega(X,Y) - \omega([X,Y],Z) + \omega([X,Z],Y) - \omega([Y,Z],X),$$
I could easily prove that $\nabla J = 0$ implies $d\omega = 0$ (recall that $\nabla J = 0$ means $\nabla\_X(JY)=J\nabla\_X Y$).
What I'm trying to understand is why $d \omega=0 \Rightarrow \nabla J =0$. There's a proof in Kobayashi-Nomizu vol. 2, a consequence of the formula
$$4g((\nabla\_X J)Y,Z) = 6 d \omega (X,JY,JZ) - 6 d \omega(X,Y,Z) + g(N(Y,Z),JX)$$
but it mentions the Nijenhuis tensor $N$. I'm looking for a proof that doesn't use that. I think a proof is possible by wisely using the Koszul formula for the Levi-Civita connection together with the formula for $d\omega$ above. Does anyone know a proof that doesn't mention $N$ explicitly? | A complete proof is given in Voisin's book, Hodge Theory and Complex Algebraic Geometry: [Theorem 3.13](http://books.google.com/books?id=dAHEXVcmDWYC&q=Theorem+3.13#v=snippet&q=Theorem%203.13&f=false).
191791 | I am having trouble using the RestSharp client in a Windows service.
When the API is down the connection is lost. But once the API runs again, the rest client keeps throwing the same error.
Even if I set up a new instance of the RestClient.
Anyone with the same problem and a working solution or proposal? | Since your `li` elements are floated, you have to add `overflow: hidden;` (or `auto`) to the `ul` so that it will wrap the floated `li`s (otherwise it will have 0px height and therefore not be visible). Also, you have to apply the background color to the `ul`:
```css
nav li {
list-style-type: none;
display: inline-block;
padding: 1em;
background-color: aqua;
float: right;
}
nav a {
text-decoration: none;
color: white;
background-color: black;
padding: 1em;
font-size: 1.2em;
}
img#sushi {}
nav ul {
overflow: hidden;
background-color: aqua;
}
```
```html
<a href=""><img src="../_images/imgres.jpg" id="Sushi" width="50" height="50" alt="Sushi"></a>
<nav role="navigation">
<ul>
<li><a href="">Home</a></li>
<li><a href="">About</a></li>
<li><a href="">Contact Us</a></li>
</ul>
</nav>
```
P.S.: The image link in your question leads to a page that shows nothing... |
191890 | After setting up WinQual and WER for the first time, I intentionally inserted a crash in a release build, expecting/hoping to get the WER dialogue, but instead I still get the dialogue containing "runtime error! The application has requested the runtime to terminate in an unusual way...".
Everything seems to be working correctly regarding the setup of WinQual (along with all the supporting pieces: symbol server, source server, WinQual account, submitted mapping files; I verified their presence in my WinQual account). Now I want to verify that dump files are created and submitted to WinQual, and that I can retrieve them for debugging.
I verified that my PC's (XP Pro SP3) error reporting is enabled (System Properties > Error Reporting). I figured the hard part would be setting up everything above, not getting the program to actually show the WER dialogue. Is there some modification to the exe or the PC needed? | It's good to know I may not be (completely) crazy. You're right that external issues were causing problems for the WER dialogue.
I changed the crash to the code above, just in case my version was too brutal, and ran the application on three machines. It appears that the presence of Visual Studio and/or just-in-time debugging, on XP and Win7, was affecting the WER dialogue. For anyone interested, this is what I saw:
* XP with Visual Studio. Asked to choose a debugger and if I chose No, the program exited without the WER dialogue.
* XP without Visual Studio. Displayed WER dialogue and sent the error report (yeah).
* Win7 with Visual Studio. Did not crash at all.
* Win7 without Visual Studio. I have not tested yet but suspect it will behave correctly.
So as you implied, a combination of the crashing code and unrepresentative testing environment were thwarting my test.
Thanks so much! |
191984 | I have this code. I enter the project key from the Google console as the senderId and get an error:
service not available.
Which steps would you recommend I double-check in setting up the registration key?
```
private void registerInBackground() {
new AsyncTask<Void, Void, String>() {
@Override
protected String doInBackground(Void... params) {
String msg = "";
try {
if (gcm == null) {
gcm = GoogleCloudMessaging.getInstance(context);
}
regId = gcm.register(SENDER_ID);
msg = "Device registered, registration ID=" + regId;
// You should send the registration ID to your server over
// HTTP, so it
// can use GCM/HTTP or CCS to send messages to your app.
sendRegistrationIdToBackend();
saveRegIdToDb();
// For this demo: we don't need to send it because the
// device will send
// upstream messages to a server that echo back the message
// using the
// 'from' address in the message.
// Persist the regID - no need to register again.
storeRegistrationId(context, regId);
} catch (IOException ex) {
msg = "Error :" + ex.getMessage();
// If there is an error, don't just keep trying to register.
// Require the user to click a button again, or perform
// exponential back-off.
}
return msg;
}
@Override
protected void onPostExecute(String msg) {
// mDisplay.append(msg + "\n");
}
}.execute(null, null, null);
}
``` | Your port name is wrong. Windows CE requires port names to be suffixed with a colon. The exception message probably told you that the requested port name was not found.
Change the code to this:
```
mySerialPort.PortName = "COM2:"
``` |
192567 | I am reading values from something in Python. The values come in a strange format. The documentation says:
>
>
> ```
> Color is represented by a single value from 0 to 8355711. The RGB(r,g,b) function calculates
> ((R*127 x 2^16) + (G*127 x 2^8) + B*127), where R, G and B are values between 0.00 and 1.00
>
> ```
>
>
So, a red color has the value of `16712965`. I would love to know how to 'unpack' those values as a tuple or something, but am struggling with the math. If this is not possible, a way to convert that value to an RGB value somehow would be great. Please help! Thanks | The [PnP Configuration Manager API](http://msdn.microsoft.com/en-us/library/windows/hardware/ff549717(v=vs.85).aspx) is your friend here:
* [CM\_Locate\_DevNode](http://msdn.microsoft.com/en-us/library/windows/hardware/ff538742%28v=vs.85%29.aspx) opens a device handle given a device ID;
* [CM\_Get\_Parent](http://msdn.microsoft.com/en-us/library/windows/hardware/ff538610%28v=vs.85%29.aspx) finds the parent device;
* [CM\_Get\_Device\_ID\_Size](http://msdn.microsoft.com/en-us/library/windows/hardware/ff538441%28v=vs.85%29.aspx) and [CM\_Get\_Device\_ID](http://msdn.microsoft.com/en-us/library/windows/hardware/ff538405%28v=vs.85%29.aspx) take the device handle and return the device ID. |
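For the color-packing formula quoted in the question, here is a minimal Python sketch of packing and unpacking (the helper names `pack_rgb`/`unpack_rgb` are made up for illustration; note that the sample value `16712965` in the question exceeds the documented maximum of 8355711, so it may come from a different scale):

```python
def pack_rgb(r, g, b):
    """Pack three floats in [0.0, 1.0] into one integer, per the quoted
    formula: (R*127 * 2^16) + (G*127 * 2^8) + B*127."""
    return (round(r * 127) << 16) + (round(g * 127) << 8) + round(b * 127)

def unpack_rgb(value):
    """Invert pack_rgb: each component occupies its own byte (0-127)."""
    r7 = (value >> 16) & 0xFF
    g7 = (value >> 8) & 0xFF
    b7 = value & 0xFF
    return (r7 / 127, g7 / 127, b7 / 127)

print(pack_rgb(1.0, 1.0, 1.0))  # 8355711 (the documented maximum)
print(unpack_rgb(8323072))      # (1.0, 0.0, 0.0), i.e. pure red under this scheme
```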
192719 | ![THIS IS WHAT IT LOOKS LIKE](https://i.stack.imgur.com/0VwLT.png)
I would like there to be space in between the navigation bar and the image, but it isn't working.
Any help would be much appreciated! Here is my code:
```
addSubview(profileImageView)
profileImageView.anchor(top: topAnchor, left: self.leftAnchor, bottom: nil, right: nil, paddingTop: 75, paddingLeft: 95, paddingBottom: 12, paddingRight: 12, width: 180, height: 180)
profileImageView.layer.cornerRadius = 180 / 2
profileImageView.clipsToBounds = true
```
Here is declared variable:
```
let profileImageView: UIImageView = {
let iv = UIImageView()
iv.backgroundColor = .red
return iv
}()
``` | It seems you have missed adding your service to the providers array in your module.
```js
@NgModule({
imports: [],
providers: [FormsLibService],
  declarations: [FormsLibComponent], // services go in providers, not declarations
  exports: [FormsLibComponent],
})
``` |
192768 | So the Radiance skill says that your abilities are boosted, but which abilities are boosted?
![Radiance skill](https://i.stack.imgur.com/xQzCd.png)
Is it just melee? Does it include grenades? What use is this skill on its own, without any upgrades? | Radiance greatly speeds up cooldown times for your melee ability (energy drain, scorch) and grenade. You also drop at least one light orb for every enemy you kill while Radiance is active.
I believe cooldown times are the only thing boosted by the base Radiance ability. I've never noticed any damage boost per grenade or melee ability while using Radiance. It simply lets you use a lot more of them in a short time span.
[Source](http://destiny.wikia.com/wiki/Sunsinger) for more sunsinger/radiance information.
**Edit:** ran a quick test last night: hit a dreg with a fusion grenade, then I activated Radiance and hit another dreg of the same level with another fusion grenade. The grenade did the same damage to both enemies. Similar tests with scorch produced the same result. |
195047 | How can one optimize this query:
```
declare @MyParam nvarchar(100) = 25846987;
select top 100 * from MySelectTable
where
(MyParam = @MyParam)
OR
(@MyParam = 0 and MyParam in (SELECT MyParam FROM aMassiveSlowTable WHERE Id = 'random1'))
OR
(@MyParam = 1 and MyParam in (SELECT MyParam FROM aMassiveSlowTable WHERE Id = 'random2'))
OR
(@MyParam = 2 and MyParam in (SELECT MyParam FROM aMassiveSlowTable WHERE Id = 'random3'))
```
When i use only this part:
```
declare @MyParam nvarchar(100) = 25846987;
select top 100 * from MySelectTable
where
(MyParam = @MyParam)
```
It returns in 1 second.
When using all the parameters, it takes about 5 minutes.
I believe it's because it's scanning aMassiveSlowTable when all it has to do is match MyParam = @MyParam.
How do I make it skip all the other comparisons if @MyParam matches MyParam? I tried using CASE statements but they don't work with **IN** clauses. I tried rearranging the ANDs in the parentheses and even adding additional filtering to *aMassiveSlowTable*.
If @MyParam does not match MyParam, it's ok if the query takes a little longer. | You're right that you should be adding the onClicks to your SVGs after the DOM is rendered, and that that work should be done in a directive.
The hard part (and this is a recurring issue) is that there's no event that tells you when you should start adding those onClicks. Angular has no way of telling you, since it doesn't know when the DOM is ready. You'll have to add something to the directive that figures this out heuristically, by using jQuery to watch the DOM periodically with a setTimeout and figure out when it's ready for you. |
195959 | I'm implementing a React app with Redux Toolkit, and I get these errors when fetching data from Firestore.
>
> A non-serializable value was detected in an action, in the path: `payload.0.timestamps.registeredAt`
>
>
> A non-serializable value was detected in an action, in the path: `payload.0.profile.location`.
>
>
>
The former's data type is [firebase.firestore.Timestamp](https://firebase.google.com/docs/reference/js/firebase.firestore.Timestamp), and the latter is [GeoPoint](https://firebase.google.com/docs/reference/android/com/google/firebase/firestore/GeoPoint?hl=en).
I think both might not be serializable, but I want to fetch those data and show them to users.
How can I overcome those errors?
According to [this issue](https://github.com/giantmachines/redux-websocket/issues/101), I might write something like this:
```
const store = configureStore({
reducer: rootReducer,
middleware: [
...getDefaultMiddleware({
serializableCheck: {
ignoredActionPaths: ['meta.arg', 'meta.timestamp'], // replace the left to my path
},
}),
reduxWebsocketMiddleware,
// ... your other middleware
],
});
```
But I'm not sure if it's safe, and even if so, I don't know how to write the path in a dynamic way since my payload is an array.
I'd really appreciate it if anyone could give me a clue. | Check it out at
[Getting an error "A non-serializable value was detected in the state" when using redux toolkit - but NOT with normal redux](https://stackoverflow.com/questions/61704805/getting-an-error-a-non-serializable-value-was-detected-in-the-state-when-using)
Following Rahad Rahman's answer, I did the following in store.ts:
```
export default configureStore({
reducer: {
counter: counterReducer,
searchShoes: searchShoesReducer,
},
middleware: (getDefaultMiddleware) => getDefaultMiddleware({
serializableCheck: false
}),
});
``` |
195992 | I'm stuck with an error using mongoose and mongodb that won't let me locate things in my database because of cast errors.
I have a user schema
```
const schema = new mongoose.Schema({
_id: mongoose.Schema.Types.ObjectId,
name: { type: String, min: 1, required: true }
});
export default mongoose.model('user',schema);
```
In another file I have this, where user\_id gets its value from an input field:
```
User.findOne({ name: user_id })
.then(user => {
if(user) {
console.log("found user");
console.log(user);
} else {
console.log("user not found");
}
})
});
```
The problem is that I get the cast error. Also, in the error I can see that user\_id is getting the correct value, i.e. 'testing', the name of the user in the database that I'm entering in the input field.
```
(node:1127) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 2): CastError: Cast to string failed for value "{ user_id: 'testing' }" at path "name" for model "user"
```
If I change the first line of code to
```
User.findOne({})
```
The query finds one and prints out
```
{ _id: 5a952b07327915e625aa0fee, name: 'testing' }
```
but if I change the code to either
```
User.findOne({ name: user_id })
```
and enter in the search box 'testing', or
```
User.findOne({ _id: user_id })
```
and then enter in the id '5a952b07327915e625aa0fee', I get the error. I also get a similar error when using
```
User.findById(user_id)
(node:1166) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 2): CastError: Cast to ObjectId failed for value "{ user_id: '5a952adf327915e625aa0fed' }" at path "_id" for model "user"
```
Thanks | Try the following:
```
User.findById('5a952adf327915e625aa0fed')
.then(user => console.log(user)) // user can be undefined
.catch(e => console.error(e))
```
or
```
User.findOne({name: 'testing'})
.then(user => console.log(user)) // user can be undefined
.catch(e => console.error(e))
``` |
196187 | I have a situation whereby an Oracle Standby (managed by Data Guard) is really out of date. The redo logs are now being properly synced; however, there is a big gap.
I have the current SCNs for primary and standby but cannot find out how to get RMAN 9.2 to create an incremental backup from the specific SCN.
Does anyone have any experience with this?
TIA | ### Correct results?
First off: correctness. You want to produce an array of unique elements? Your current query does not do that. The function `uniq()` from the [**intarray** module](https://www.postgresql.org/docs/current/intarray.html) only promises to:
>
> remove adjacent duplicates
>
>
>
Like [instructed in the manual](https://www.postgresql.org/docs/current/intarray.html#INTARRAY-FUNC-TABLE), you would need:
```
SELECT l.d + r.d, uniq(sort(array_agg_mult(r.arr)))
FROM ...
```
Also gives you *sorted* arrays - assuming you want that, you did not clarify.
I see you [*have* `sort()` in your fiddle](http://sqlfiddle.com/#!15/2f8c70/2/0), so this may just be a typo in your question.
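To see why `sort()` is needed before `uniq()`, here is a quick Python analogy (illustrative only; `uniq_adjacent` is a made-up helper mimicking intarray's `uniq()`, which only drops *adjacent* duplicates):

```python
from itertools import groupby

def uniq_adjacent(xs):
    """Remove adjacent duplicates only, like intarray's uniq()."""
    return [key for key, _group in groupby(xs)]

print(uniq_adjacent([1, 2, 2, 1]))          # [1, 2, 1]  -- the 1 survives twice
print(uniq_adjacent(sorted([1, 2, 2, 1])))  # [1, 2]     -- sorting first fixes it
```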
### Postgres 9.5 or later
Either way, since Postgres 9.5, `array_agg()` has the capabilities of my `array_agg_mult()` built in out of the box, and it is much faster, too:
* [Selecting data into a Postgres array](https://stackoverflow.com/a/11763245/939860)
* [Is there something like a zip() function in PostgreSQL that combines two arrays?](https://stackoverflow.com/a/12414884/939860)
There have also been other performance improvements for array handling.
### Query
The main purpose of `array_agg_mult()` is to aggregate multi-dimensional arrays, but you only produce 1-dimensional arrays anyway. So I would at least try this alternative query:
```
SELECT l.d + r.d AS d_sum, array_agg(DISTINCT elem) AS result_arr
FROM left2 l
JOIN right2 r USING (t1)
, unnest(r.arr) elem
GROUP BY 1
ORDER BY 1;
```
Which also addresses your question:
>
> Can the aggregate function remove duplicates directly?
>
>
>
Yes, it can, with `DISTINCT`. But that's not faster than `uniq()` for integer arrays, which has been optimized for integer arrays, while `DISTINCT` is generic for all qualifying data types.
Doesn't require the `intarray` module. **However**, the result is not necessarily sorted. Postgres uses varying algorithms for `DISTINCT`. Big sets are typically hashed, which leaves the result unsorted unless you add explicit `ORDER BY`. If you need sorted arrays, you *could* add `ORDER BY` to the aggregate function directly:
```
array_agg(DISTINCT elem ORDER BY elem)
```
But that's typically *slower* than feeding pre-sorted data to `array_agg()` (one big sort versus many small sorts). So I would sort in a subquery and *then* aggregate:
```
SELECT d_sum, uniq(array_agg(elem)) AS result_arr
FROM (
SELECT l.d + r.d AS d_sum, elem
FROM left2 l
JOIN right2 r USING (t1)
, unnest(r.arr) elem
ORDER BY 1, 2
) sub
GROUP BY 1
ORDER BY 1;
```
This was the fastest variant in my cursory test on Postgres 9.4.
[**SQL Fiddle**](http://sqlfiddle.com/#!17/ed90a/1) based on the one you provided.
### Index
I don't see much potential for any index here. The only option would be:
```
CREATE INDEX ON right2 (t1, arr);
```
Only makes sense if you get index-only scans out of this - which will happen if the underlying table `right2` is substantially wider than just these two columns and your setup qualifies for [index-only scans. Details in the Postgres Wiki.](https://wiki.postgresql.org/wiki/Index-only_scans) |
196954 | A friend and I want to book a house for Labor Day weekend and we found a place on Craigslist that looks pretty nice. I'm pretty well-traveled myself, but neither of us have rented out a house before. Before you all suggest sites like Airbnb or HomeAway, I've been to them and others. Craigslist is just another outlet, out of the many, I've looked at.
My friend emailed the owner who is asking $200/night and a $700 security deposit. I then asked him to see how payment is to be made. Obviously, paying cash is a big no-no, especially with a refundable security deposit. I'm also wary about personal checks, wire transfers, etc.
Anyway, the owner requested a *bank deposit* for payment and I'm not so sure about that, it sounds kind of fishy to me.
At the very minimum I'd like to use PayPal or a credit card. Obviously Airbnb, Roomorama, or HomeAway are the better alternatives, but I'm assuming that's not an option.
So I guess what I'm asking is:
A) does this sound like a scam and if not
B) what other ways might I suggest to the owner that I pay such that I can guarantee both his/her and my security? | It sounds fishy to me too. Not as fishy as Western Union or MoneyGram, because I could just *guarantee* those are scams. For a transaction done on a site that doesn't offer assurances of their own (so this wouldn't apply to AirBnB) I don't think I would do anything other than a credit card, which I could reverse-charge for fraud.
Hard as it is to believe, these scammers often use the same alias and the same address multiple times. Google to see if there are complaints at scam sites. You can also use 411.org or similar to see if the name on the bank account matches the person living at the address—not surefire, but a help. |
197201 | I was hoping that someone might be able to shed some light on a problem I'm having. I have a lab that I created that is a multi-region Hub and Spoke network. There are 5 Regional Hubs along with one central hub. Each Region has 4 spokes.
My problem comes when I try to use Phase 3 for DMVPN. I think I know what is happening, I'm just not quite sure how to go about fixing it. Basically, I will try to ping from a computer off of the spoke to the Central Hub. The first couple of pings go through, and then no further pings will go through. When the first pings are sent through, they should be traversing the entire network (Host-Spoke-Regional Hub-Central Hub). As NHRP Mappings are established, it attempts to go point-to-point for that connection and it breaks the network since the Tunnels are on different subnets. Here is the relevant network information:
All Routers are running EIGRP (AS 1). All neighborships are created properly and the routing table is populated as I would expect.
The Regional to Central Hub network is Tunnel 0. 172.16.0.0/24 network.
The Regional to Spoke network is Tunnel 1. 172.16.1.0/24 network.
All outside interfaces are on the 192.168.1.0/24 network.
Inside interfaces follow the following scheme: 10.region.spoke.0/24 (Central is 10.0.0.0/24, Regional Hubs are 10.region.0.0/24).
Any insight that you can provide is greatly appreciated. | So we got CenterHubx1->RegionHubx5->Spokesx4->Computer
Computer pings -> Hub,
does NHRP lookup on Spoke, sets up a new tunnel to CenterHub.
Problem: Spoke VPN is in subnet 172.16.1/24 and Hub in 172.16.0/24.
The Spoke should not try to set up a direct connection to the CenterHub, only to other spokes of its RegionHub.
To prevent this, **use different NHRP network-id**s for different tunnel subnets.
Quote:
>
> NHRP network ID is used to define the NHRP domain for an NHRP
> interface
>
>
> |
197281 | I'm a new learner on Python and SQLAlchemy, and I met a curious problem as below.
```
user = Table('users', meta, autoload=True, autoload_with=engine)
```
then I
```
print(user.columns)
```
it works fine; the output is user.ID, user.Name, etc. But then:
```
Session = sessionmaker(bind=engine)
session = Session()
session.query(user).order_by(user.id)
```
shows error:
```
AttributeError: 'Table' object has no attribute 'id'
```
I changed the "id" to "Name"; it's the same error.
I also tried the `filter_by` method, with the same error.
Why did this happen? | You could use:
```
session.query(user).order_by(user.c.id)
``` |
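For completeness, here is a minimal self-contained sketch of the `.c` accessor, assuming SQLAlchemy is installed (using an in-memory SQLite database; the table and column names are illustrative):

```python
# Columns of a Core/reflected Table live under the .c accessor,
# not as direct attributes (users.id raises AttributeError).
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, select

engine = create_engine("sqlite://")  # throwaway in-memory database
meta = MetaData()
users = Table(
    "users", meta,
    Column("id", Integer, primary_key=True),
    Column("Name", String),
)
meta.create_all(engine)

with engine.connect() as conn:
    conn.execute(users.insert(), [{"Name": "bob"}, {"Name": "alice"}])
    result = conn.execute(select(users.c.Name).order_by(users.c.Name))
    names = [row.Name for row in result]

print(names)  # ['alice', 'bob']
```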
197368 | I think I've done everything listed as a pre-req for this, but I just can't get the instances to appear in Systems Manager as managed instances.
I've picked an AMI which I believe should have the agent installed by default.
```
ami-032598fcc7e9d1c7a
PS C:\Users\*> aws ec2 describe-images --image-ids ami-032598fcc7e9d1c7a
{
"Images": [
{
"ImageLocation": "amazon/amzn2-ami-hvm-2.0.20200520.1-x86_64-gp2",
"Description": "Amazon Linux 2 AMI 2.0.20200520.1 x86_64 HVM gp2",
```
I've also created my own Role, and included the following policy which i've used previously to get instances into Systems Manager.
[![Role](https://i.stack.imgur.com/uRHaJ.png)](https://i.stack.imgur.com/uRHaJ.png)
Finally I've attached the role to the instances.
[![role attached to instance](https://i.stack.imgur.com/SSrca.png)](https://i.stack.imgur.com/SSrca.png)
I've got Systems Manager set to a 30 min schedule and have waited this out, but the instances don't appear. I've clearly missed something here and would appreciate suggestions as to what.
Does the agent use some sort of backplane to communicate, or should I have enabled some sort of communication with base in the security groups?
Could this be because the instances have private IPs only? Previous working examples had public IPs, but I don't want that for this cluster. | Besides the role for ec2 instances, SSM also needs to be able to assume role to securely run commands on the instances. You only did the first step. All the steps are described in [AWS documentation for SSM](https://docs.aws.amazon.com/systems-manager/latest/userguide/getting-started-restrict-access-quickstart.html).
**However**, I strongly recommend you use the **Quick Setup** feature in System Manager to setup everything for you in no time!
In AWS Console:
1. Go to **Systems Manager**
2. Click on **Quick Setup**
3. Leave all the defaults
4. In the **Targets** box at the bottom, select **Choose instances manually** and tick your ec2 instance(s)
5. Finish the setup
6. It will automatically create **AmazonSSMRoleForInstancesQuickSetup** Role and assign it to the selected ec2 instance(s) and also create proper AssumeRole for SSM
7. Go to EC2 Console, find that ec2 instance(s), right-click and reboot it by choosing **Instance State > Reboot**
8. Wait for a couple of minutes
9. Refresh the page and try to **Connect** via **Session Manager** tab
**Notes**:
* It's totally fine and recommended to create your ec2 instances in private subnets if you don't need them to be accessed from internet. However, make sure the private subnet has internet access itself via NAT. It's a hidden requirement of SSM!
* Some of the AmazonLinux2 images like **amzn2-ami-hvm-2.0.20200617.0-x86\_64-gp2** does not have proper SSM Agent pre-installed. So, recreate your instance using a different AMI and try again with the above steps if it didn't work.
[![enter image description here](https://i.stack.imgur.com/UIcPc.png)](https://i.stack.imgur.com/UIcPc.png) |
197445 | I'm new to JBoss, but I have multiple web applications, each using Spring/Hibernate and other open-source libraries and portlets, so currently each WAR file includes those JAR files. How do I move these JARs to a common location so that I don't have to put them in each WAR file? I guess the location is `server/default/lib`, but I'm not sure.
Also, how is it different to have those JARs at `WEB-INF/lib` vs. `JBOSS/server/default/lib`? Will I face any classloader issues?
Also, I have static data stored in static fields like `Singleton`; will those be shared across all WAR files? | **Classloading:**
You're right, put the `.jar`s in `JBOSS/server/<configuration>/lib`, or `JBOSS/lib`.
JBoss AS comes with bundled Hibernate libs which are tested with that AS version.
See `jboss-6.0.0-SNAPSHOT\server\default\conf\jboss-service.xml`:
```
<server>
<!-- Load all jars from the JBOSS_HOME/server/<config>/lib directory and
the shared JBOSS_HOME/common/lib directory. This can be restricted to
specific jars by specifying them in the archives attribute.
TODO: Move this configuration elsewhere
-->
<classpath codebase="${jboss.server.lib.url}" archives="*"/>
<classpath codebase="${jboss.common.lib.url}" archives="*"/>
</server>
```
Also see:
* <http://community.jboss.org/wiki/classloadingconfiguration>
* <http://community.jboss.org/wiki/JbossClassLoadingUseCases>
**Difference between `WEB-INF/lib` and `JBOSS/server/default/lib`:**
Libs in `WEB-INF/lib` come with your WAR and are only visible within that WAR.
If you have other modules, e.g. an EJB JAR, the libs will not be visible from them and you'll get a `ClassNotFoundException` or (if you have the class in multiple places) a `ClassCastException`.
Libs in `JBOSS-AS/server/<config>/lib` are visible to the whole server, and thus to all deployed apps and their modules. However (IIRC) they don't take precedence, so if you also bundle that lib e.g. in a WAR, but not in an EJB JAR, you can end up using two different versions, which is undesirable (and will likely lead to the aforementioned `ClassCastException`).
Class loading behavior may be tweaked several ways, see e.g. [JBoss wiki](http://community.jboss.org/wiki/ClassLoadingConfiguration).
**Static data:**
Don't rely on static fields in Java EE; that brings trouble. For instance, the same class can be loaded by different classloaders, so there will be multiple instances of these static values.
If you want to share data among multiple WARs, use external storage: a database, a file (with synchronization if you write to it), JBoss Cache, etc. |
197585 | I am trying to typeset the following equation
[![enter image description here](https://i.stack.imgur.com/9N5dR.png)](https://i.stack.imgur.com/9N5dR.png)
```
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
\begin{equation}
\left\{\begin{array}{l}
x_{i+1}=\left\{\begin{array}{ll}
\left(4 a x_{i}\left(1-x_{i}\right)+2 b y_{i}\right) \bmod 1 & \text { for } y_{i}<0.5 \\
\left(4 a x_{i}\left(1-x_{i}\right)+2 b\left(1-y_{i}\right)\right) \bmod 1 & \text { for } y_{i} \geq 0.5 \end{array}\\
y_{i+1}=\left\begin{cases}\left(4 a y_{i}\left(1-y_{i}\right)+2 b x_{i}\right) \bmod 1 & \text { for } x_{i}<0.5 \\
\left(4 a y_{i}\left(1-y_{i}\right)+2 b\left(1-x_{i}\right)\right) \bmod 1 & \text { for } x_{i} \geq 0.5\end{cases} \right.
\end{array}\right.
\end{equation}
\end{document}
```
But unfortunately I am getting the following errors
```
Missing delimiter (. inserted). ^^I y_{i+1}=\left\begin{cases}
Missing delimiter (. inserted). ^^I y_{i+1}=\left\begin{cases}
```
Somebody please help me to correct these errors | A bit off-topic, however:
* Fahrenheit is not a standard unit; instead, it is correct to use degrees Celsius.
* If for some reason you persist in using it, then it is sensible to define it with the `siunitx` package:
```
\documentclass{article}
\usepackage{siunitx} % <---
\DeclareSIUnit{\fahrenheit}{^\circ\mkern-1mu\mathrm{F}} % <---
\begin{document}
proposed solution: \qty{350}{\fahrenheit};
by your solution: 350$^\circ$F
\end{document}
```
since its use gives a typographically more correct form than your intended definition would:
[![enter image description here](https://i.stack.imgur.com/d2egd.png)](https://i.stack.imgur.com/d2egd.png) |
197727 | I'm trying to convert a Fahrenheit temperature into Celsius.
Doing the following, I always get zero:
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Celcius_Farenheit_Converter
{
class Program
{
static double Celcius(double f)
{
double c = 5/9*(f - 32);
return c;
}
static void Main(string[] args)
{
string text = "enter a farenheit tempature";
double c = Celcius(GetTempature(text));
Console.WriteLine("the tempature in Celicus is {0}", c);
Console.ReadKey(true);
}
static double GetTempature(string text)
{
Console.WriteLine(text);
bool IsItTemp = false;
double x = 0;
do
{
IsItTemp = double.TryParse(Console.ReadLine(), out x);
} while (!IsItTemp);
return x;
}
}
}
```
can you help me fix it? | `5/9` performs an integer division—that is, it always discards the fractional part—so it will always return 0.
`5.0/9.0` performs floating-point division, and will return the expected 0.55555...
Try this instead:
```
static double Celcius(double f)
{
double c = 5.0/9.0 * (f - 32);
return c;
}
```
**Further Reading**
* [/ Operator (C# Reference)](http://msdn.microsoft.com/en-us/library/3b1ff23f.aspx) |
197750 | I want to replace the values selected by `RandomChoice` with 1 and all other values in the matrix with 0. I want the output as a binary matrix.
```
mat = Table[Subscript[m, i, j], {i, 6}, {j, 3}];
mat // MatrixForm
n = 18*0.5;
k = RandomChoice[{mat[[1, 1]], mat[[1, 2]], mat[[1, 3]], mat[[2, 1]],
mat[[2, 2]], mat[[2, 3]], mat[[3, 1]], mat[[3, 2]], mat[[3, 3]],
mat[[4, 1]], mat[[4, 2]], mat[[4, 3]], mat[[5, 1]], mat[[5, 2]],
mat[[5, 3]], mat[[6, 1]], mat[[6, 2]], mat[[6, 3]]}, n]
I think your question is as much about techniques of evaluation control as it is about code generation. I will show several different methods to achieve the first result you asked for, which will illustrate different approaches and techniques of evaluation control.
### Working with unevaluated parts
You can, if you want to, work with parts of your expression, carefully avoiding evaluation leaks. This method is perhaps one of the most difficult and least economical, since it requires a good knowledge of evaluation control, and even then can be tedious and error-prone, if one doesn't stay really focused.
I will first show the solution step by step, and then assemble it into a function. We will start with the lists of variables and values, but first let us give some of the variables some global values, to test that our solution will not capture global bindings:
```
ClearAll[a, b, c, d];
a = 10;
b = 20;
{{a, b, c, d}, {1, 2, 3, 4}}
(* {{10, 20, c, d}, {1, 2, 3, 4}} *)
```
The idiom with `MapThread` that you suggested is a good one, and we will use it. But not with `Set` directly. We could use a temporary wrapper, and replace it with `Set` afterwards, but I will pick a different road. Here is what we would like to do:
```
MapThread[Hold[#1 = #2] &, {{a, b, c, d}, {1, 2, 3, 4}}]
(* {Hold[10 = 1], Hold[20 = 2], Hold[c = 3], Hold[d = 4]} *)
```
But this doesn't work. The first step in the right direction will be to insert `Unevaluated`:
```
MapThread[Hold[#1 = #2] &, Unevaluated@{{a, b, c, d}, {1, 2, 3, 4}}]
(* {Hold[10 = 1], Hold[20 = 2], Hold[c = 3], Hold[d = 4]} *)
```
This, however, is not good enough. The premature evaluation happens also in `Function` during the parameter-passing stage, and thus we need this:
```
step1 = MapThread[
Function[Null, Hold[#1 = #2], HoldAll],
Unevaluated @ {{a, b, c, d}, {1, 2, 3, 4}}
]
(* {Hold[a = 1], Hold[b = 2], Hold[c = 3], Hold[d = 4]} *)
```
We are good now, but what we need is a single `Hold` around the assignment list, while we have a `List` of assignments, each wrapped in an individual `Hold`. The useful trick here is to use `Thread[#, Hold]&`, so that:
```
step2 = Thread[#, Hold] &@step1
(* Hold[{a = 1, b = 2, c = 3, d = 4}] *)
```
This is closer to what we need, but now we want to wrap `Module` around the variable / assignment list, and get it into `Hold`. One of the simplest ways to do this is by using rules, which would be a version of a so-called [injector pattern](https://mathematica.stackexchange.com/a/1937/81):
```
step3 = Replace[step2, Hold[a_List] :> Hold[Module[a, someBody[]]]]
(* Hold[Module[{a = 1, b = 2, c = 3, d = 4}, someBody[]]] *)
```
which is the final result.
As a function, the resulting code would look like this:
```
ClearAll[genMT];
SetAttributes[genMT, HoldAll];
genMT[vars : {___Symbol}, vals_List] :=
Composition[
Replace[Hold[a_List] :> Hold[Module[a, someBody[]]]],
Thread[#, Hold] &
] @ MapThread[
Function[Null, Hold[#1 = #2], HoldAll],
Unevaluated @ {vars, vals}
]
```
Where I used an operator form for `Replace`. So, for example, we have
```
genMT[{a, b, c, d}, {1, 2, 3, 4}]
(* Hold[Module[{a = 1, b = 2, c = 3, d = 4}, someBody[]]] *)
```
Note that this method is hard. We had to use several advanced techniques (evaluation control, a trick with `Thread`, injector pattern) in combination. I have shown this method to illustrate the "brute-force" approach, which is certainly possible, but is not the most economical or elegant one. One reason why this approach is hard is that we don't use rules and pattern-matching to the full extent.
### Using holding symbolic containers for expression transformations
There is an alternative way to build an expression we need. I frequently find it more convenient than other methods, since it often gives you more control, and makes it easier to control evaluation.
The method is to replace iteration (where it is necessary) with recursion, and use symbolic containers with `Hold*` attributes to transform unevaluated expressions into suitable form. For the case at hand, it may look like this:
```
ClearAll[a, b, c, d, gen];
SetAttributes[gen, HoldAll];
gen[vars : {___Symbol}, vals_List] := gen[{}, vars, vals];
gen[{a___}, {v_, rvars___}, {val_, rvals___}] :=
gen[{a, Set[v, val]}, {rvars}, {rvals}];
gen[a_List, {}, {}] := Hold[Module[a, someBody[]]];
```
We can test:
```
a = 1;
b = 2;
gen[{a, b, c, d}, {1, 2, 3, 4}]
(* Hold[Module[{a = 1, b = 2, c = 3, d = 4}, someBody[]]] *)
```
The way this works is that the `gen` function gradually picks elements from the lists of variable names and values and accumulates the assignments built out of them. When the lists are exhausted, we obtain a list of assignments in an unevaluated form, and can then generate the code we need straightforwardly.
### Using Block trick
The method I will describe here is completely different. It is based on the ability of `Block` to make (almost) any symbol "forget" its definitions temporarily. This may be used to non-trivially change the evaluation order. For the case at hand, it will be one of the most economical methods.
Here is the code:
```
ClearAll[genB];
SetAttributes[genB, HoldAll];
genB[{vars___Symbol}, vals_List] :=
Block[{vars, Set, Module, Hold},
SetAttributes[Module, HoldRest];
Hold[Module[MapThread[Set, {{vars}, vals}], someBody[]]]
]
```
The way this works is that it temporarily switches off the standard behavior of `Set`, `Module`, and `Hold`, making them just symbolic wrappers during the evaluation of the body of the `Block`. This is what we need here for code generation. I had to attach a temporary `HoldRest` to `Module` inside `Block`, so that the body of the `Module` (`someBody[]`) is kept unevaluated.
We can test:
```
genB[{a, b, c, d}, {1, 2, 3, 4}]
(* Hold[Module[{a = 1, b = 2, c = 3, d = 4}, someBody[]]] *)
```
### More advanced techniques
One can base more formal, universal and advanced techniques on methods I have described above, by generalizing and automating them to fit one's needs.
For example, some time ago I developed a code generator based on the generalized `Block` trick used above. I posted a seriously simplified version of this technique [here](https://stackoverflow.com/questions/8741671/unevaluated-form-of-ai/8746584#8746584), and with the help of the `FrozenCodeEval` function defined there, the solution to your problem becomes trivial:
```
FrozenCodeEval[
Hold[Module[MapThread[Set, {{a, b, c, d}, {1, 2, 3, 4}}], someBody[]]],
{MapThread}
]
(* Hold[Module[{a = 1, b = 2, c = 3, d = 4}, someBody[]]] *)
```
Basically, here you just tell `FrozenCodeEval` which heads should evaluate (in this case, only `MapThread`), and all the rest are kept "frozen". I have also illustrated this technique (among some others) in [this answer](https://mathematica.stackexchange.com/a/22259/81).
Another possible (but IMO somewhat less powerful) technique is to use rules. I have illustrated this technique in [this answer](https://stackoverflow.com/questions/6214946/how-to-dynamically-generate-mathematica-code/6215394#6215394).
Yet another technique can be based on generalizing the holding symbolic container approach I described above. One good example of this approach is the `LetL` macro (nested `With` replacement), which is described in [this answer](https://mathematica.stackexchange.com/a/10451/81), and particularly its [more complex extension](https://mathematica.stackexchange.com/a/64944/81). This method is the one I've preferred recently.
One can also base code generation on introducing several (many) inert heads (like `mySet`), and replacing them with some "actual" heads (like `Set`) at the end, after the code has been generated. This was one of your original suggestions, and is certainly also a possibility.
Finally, one may implement a macro system, which would provide macros in a way similar to `Lisp`. However, that would require introducing read-time, which currently sort-of exists (parsing code to expressions), but can't really be used much. This is possible to do if one implements an alternative code loader, that would parse code to symbolic expression and give the user access to that parsed code before actually evaluating it.
### Why I think that `Inactive` is in general not an answer
This is certainly an opinionated topic, but I have personally seen (almost) no case where `Inactive` would solve a problem of evaluation control (outside math-related functionality) in a way that would be simpler than other means of evaluation control.
From the viewpoint of evaluation control, the problem of `Inactive` is that while it inactivates a given head, it does not hold the arguments of that head, so they can freely evaluate. And the effort needed to prevent them from doing so will most of the time invalidate all the benefits of using `Inactivate`, as compared to `Hold`, `Unevaluated` etc.
For example, in our case (taking your code):
```
Hold[Module[assignments, Print[a, b, c, d]]] //. {
   assignments -> MapThread[Inactive[Set], {{a, b, c, d}, {1, 2, 3, 4}}]
  } // Activate

(* Hold[Module[{10 = 1, 20 = 2, c = 3, d = 4}, Print[a, b, c, d]]] *)
```
The output shows that the global bindings to `a` and `b` were picked up in this method. And avoiding that will make the code no less complex than the first alternative I have shown above.
### Summary
There exist several (many) techniques, allowing one to dynamically generate Mathematica code. The ones I have illustrated are based on
* Direct manipulations with unevaluated parts of expressions
* Using symbolic holding containers for expression transformations
* Using `Block` trick
One can come up with more advanced / general / universal frameworks for code generation. The ones I mentioned are based on
* `Block` trick (code "freezing")
* Symbolic holding containers
* Repeated rule application to held expressions
* Inert intermediate (temporary) heads
* (Read-time) macros
Which one to use depends on the case. For simple cases, some of the simpler techniques may be just enough. If you need to routinely generate code, you may be better off with some heavier but more general and systematic approach. |
197897 | I would like to change the `GridView` item background in `onCreateView()`, after setting the adapter. I tried this way:
```
final int numVisibleChildren = gv_categories.getAdapter().getCount();
for ( int i = 0; i < numVisibleChildren; i++ ) {
int positionOfView = i + 1;
try {
if (positionOfView == Integer.valueOf(m_transactionItem.getIcon())) {
View view = gv_categories.getChildAt(i);
int nColor = Color.parseColor(m_transactionItem.getcolor());
if(view != null) {
view.setBackgroundColor(nColor);
}
}
} catch(Exception e) {
Log.e("GridviewItemErr", e.toString());
}
}
```
But `view` is always `null`. This is my custom adapter, which contains an `ImageView` and a `TextView`:
```
public class CategoryAdapter extends BaseAdapter {
private Context mContext;
int id = 0;
private LayoutInflater inflater;
ViewHolder holder;
public static class ViewHolder {
private TextView text;
private ImageView icon;
private int mId;
public int getId() {
return mId;
}
public void setId(int id) {
mId = id;
}
}
public CategoryAdapter(Context c) {
mContext = c;
inflater = (LayoutInflater) c.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
}
public int getCount() {
return mAllIcons.length;
}
public Object getItem(int position) {
return 0;
}
public long getItemId(int position) {
return 0;
}
// create a new ImageView for each item referenced by the Adapter
@Override
public View getView(int position, View convertView, ViewGroup parent) {
if (convertView == null) {
id++;
holder = new ViewHolder();
convertView = inflater.inflate(R.layout.categories_grid_item, null);
holder.icon = (ImageView) convertView.findViewById(R.id.ivGrid_Item_Category);
//holder.icon.setId(id);
holder.icon.setScaleType(ImageView.ScaleType.FIT_CENTER);
holder.text = (TextView) convertView.findViewById(R.id.tvGrid_Item_Category);
//holder.text.setId(id);
holder.setId(id);
} else {
holder = (ViewHolder) convertView.getTag();
}
holder.icon.setImageResource(mAllIcons[position]);
holder.text.setText(mCategNames[position]);
convertView.setTag(holder);
return convertView;
}
private Integer[] mAllIcons = {
R.drawable.icon_1_prijevoz_neselktirane,
R.drawable.icon_2_putovanje_neselktirane,
R.drawable.icon_3_hoteli_neselektirane,
R.drawable.icon_4_odrzavanje_neselktirane,
R.drawable.icon_5_zabava_neselktirane,
R.drawable.icon_6_kucne_potrepstine_neselektirane,
R.drawable.icon_7_odjeca_neselektirane,
R.drawable.icon_8_trgovina_neselektirane,
R.drawable.icon_9_rezije_neselktirane,
R.drawable.icon_10_restorani_neselktirane,
R.drawable.icon_11_slobodno_vrijeme_neselktirane,
R.drawable.icon_12_luksuz_neselktirane,
R.drawable.icon_13_odmor_i_rekreacija_neselktirane,
R.drawable.icon_14_ostalo_neselektirane,
R.drawable.icon_15_prihod_neselektirane,
R.drawable.icon_16_stalni_prihodi_neselektirane,
R.drawable.icon_17_default_neselektirane
};
private String[] mCategNames = {
"Prijevoz", "Putovanje",
"Hoteli", "Održavanje",
"Zabava", "Kućne potrepštine",
"Odjeća", "Trgovina",
"Režije", "Restorani",
"Slobodno vrijeme", "Luksuz",
"Odmor i rekreacija",
"Ostalo", "Prihod",
"Stalni prihod", "Default",
};
}
```
I tried many other ways to do this but I can't figure out how to get the "visible" GridView items.
**NOTE:**
The background needs to be changed when fragment is created, not in `onItemClickedListener` or `onScrollListener` or like... | I think that your problem arises from the fact that you are trying to reference the grid's views before your ImageAdapter had created them, if you wish really to set the background color at the time of `onCreateView()` and not at the time of calling `getView()`, I would suggest that you use `getViewTreeObserver().addOnGlobalLayoutListener` , to make sure that the gridview had been properly inflated before calling getChildAt(i), your code would look something like this:
```
gv_categories.getViewTreeObserver().addOnGlobalLayoutListener(new ViewTreeObserver.OnGlobalLayoutListener() {
@Override
public void onGlobalLayout() {
//At this point the layout is complete and the
//dimensions of myView and any child views are known.
final int numVisibleChildren = gv_categories.getAdapter().getCount();
for ( int i = 0; i < numVisibleChildren; i++ ) {
int positionOfView = i + 1;
try {
if (positionOfView == Integer.valueOf(m_transactionItem.getIcon())) {
View view = gv_categories.getChildAt(i);
int nColor = Color.parseColor(m_transactionItem.getcolor());
if(view != null) {
view.setBackgroundColor(nColor);
}
}
} catch(Exception e) {
Log.e("GridviewItemErr", e.toString());
}
}
} // closes onGlobalLayout()
});
``` |
198065 | The following is done comparing ColdFusion 9.0.1 against Railo 4.0.2.002.
I have the following (truncated) as the init for a cfc
```
component {
public myComponent function init(
required string inSetting1,
required string inSetting2
) {
return this;
}
}
```
If I run the following, then it invokes correctly
```
objInstance = new myComponent(
inSetting1 = 'xxx',
inSetting2 = 'yyy'
);
```
However, I am attempting to handle a misconfiguration where this object is requested without all required arguments. So far I have the following:
```
try {
// inSetting2 is not defined == will cause error
objInstance = new myComponent(
inSetting1 = 'xxx'
);
} catch (coldfusion.runtime.MissingArgumentException e) {
// This catches the error for CF 9.0.1
} catch (expression e) {
// This catches the error for Railo 4.0.2
debug(e); // get the error dump for stack trace below
}
```
So far, in place of `expression`, I have tried the following to no avail, based on the opening `railo.runtime.type.UDFImpl.defineArguments(UDFImpl.java:171)` from the stack trace (which is how I retrieved the specific type for CF9):
* railo
* railo.runtime
* railo.runtime.type
* railo.runtime.type.UDFImpl
* railo.runtime.type.UDFImpl.defineArguments
Now, while I could just do `catch (expression e)` or even `catch (any e)`, I would much rather be able to catch the specific errors, for the simple reason that in some locations within the components I am currently updating to work with Railo, the code relies on being able to determine the reason for failure, so as to differentiate configuration errors (missing arguments) from deployment errors (missing template).
So, is anyone aware of a method by which I could make the above try/catch look specifically for an equivalent of `coldfusion.runtime.MissingArgumentException` (or, for that matter, a method by which I could check the required arguments prior to attempting to invoke/run the method(s) in question)?
Edit: I forgot to include the stack trace; here is the Railo stack trace. (I have already tried the alternatives to `expression` listed above.)
```
The parameter environment to function init is required but was not
passed in. at
railo.runtime.type.UDFImpl.defineArguments(UDFImpl.java:171):171 at
railo.runtime.type.UDFImpl._call(UDFImpl.java:369):369 at
railo.runtime.type.UDFImpl.callWithNamedValues(UDFImpl.java:275):275
at railo.runtime.ComponentImpl._call(ComponentImpl.java:608):608 at
railo.runtime.ComponentImpl._call(ComponentImpl.java:490):490 at
railo.runtime.ComponentImpl.callWithNamedValues(ComponentImpl.java:1800):1800
at
railo.runtime.functions.other._CreateComponent.call(_CreateComponent.java:49):49
at
mso.dev_test315.mycomponenttest_cfc$cf._1(/var/www/html/myComponentTest.cfc:133):133
at
mso.dev_test315.mycomponenttest_cfc$cf.udfCall(/var/www/html/myComponentTest.cfc):-1
at railo.runtime.type.UDFImpl.implementation(UDFImpl.java:103):103 at
railo.runtime.type.UDFImpl._call(UDFImpl.java:371):371 at
railo.runtime.type.UDFImpl.callWithNamedValues(UDFImpl.java:275):275
at railo.runtime.ComponentImpl._call(ComponentImpl.java:608):608 at
railo.runtime.ComponentImpl._call(ComponentImpl.java:490):490 at
railo.runtime.ComponentImpl.callWithNamedValues(ComponentImpl.java:1796):1796
at railo.runtime.tag.Invoke.doComponent(Invoke.java:209):209 at
railo.runtime.tag.Invoke.doEndTag(Invoke.java:182):182 at
mxunit.framework.testcase_cfc$cf._2(/var/www/html/mxunit/framework/TestCase.cfc:115):115
at
mxunit.framework.testcase_cfc$cf.udfCall(/var/www/html/mxunit/framework/TestCase.cfc):-1
at railo.runtime.type.UDFImpl.implementation(UDFImpl.java:103):103 at
railo.runtime.type.UDFImpl._call(UDFImpl.java:371):371 at
railo.runtime.type.UDFImpl.callWithNamedValues(UDFImpl.java:275):275
at railo.runtime.ComponentImpl._call(ComponentImpl.java:608):608 at
railo.runtime.ComponentImpl._call(ComponentImpl.java:490):490 at
railo.runtime.ComponentImpl.callWithNamedValues(ComponentImpl.java:1800):1800
at
railo.runtime.util.VariableUtilImpl.callFunctionWithNamedValues(VariableUtilImpl.java:749):749
at
railo.runtime.PageContextImpl.getFunctionWithNamedValues(PageContextImpl.java:1521):1521
at
mxunit.framework.decorators.dataproviderdecorator_cfc$cf._1(/var/www/html/mxunit/framework/decorators/DataProviderDecorator.cfc:31):31
at
mxunit.framework.decorators.dataproviderdecorator_cfc$cf.udfCall(/var/www/html/mxunit/framework/decorators/DataProviderDecorator.cfc):-1
at railo.runtime.type.UDFImpl.implementation(UDFImpl.java:103):103 at
railo.runtime.type.UDFImpl._call(UDFImpl.java:371):371 at
railo.runtime.type.UDFImpl.call(UDFImpl.java:284):284 at
railo.runtime.ComponentImpl._call(ComponentImpl.java:607):607 at
railo.runtime.ComponentImpl._call(ComponentImpl.java:490):490 at
railo.runtime.ComponentImpl.call(ComponentImpl.java:1781):1781 at
railo.runtime.util.VariableUtilImpl.callFunctionWithoutNamedValues(VariableUtilImpl.java:723):723
at
railo.runtime.PageContextImpl.getFunction(PageContextImpl.java:1506):1506
at
mxunit.framework.testsuiterunner_cfc$cf._1(/var/www/html/mxunit/framework/TestSuiteRunner.cfc:99):99
at
mxunit.framework.testsuiterunner_cfc$cf.udfCall(/var/www/html/mxunit/framework/TestSuiteRunner.cfc):-1
at railo.runtime.type.UDFImpl.implementation(UDFImpl.java:103):103 at
railo.runtime.type.UDFImpl._call(UDFImpl.java:371):371 at
railo.runtime.type.UDFImpl.call(UDFImpl.java:284):284 at
railo.runtime.type.scope.UndefinedImpl.call(UndefinedImpl.java:774):774
at
railo.runtime.util.VariableUtilImpl.callFunctionWithoutNamedValues(VariableUtilImpl.java:723):723
at
railo.runtime.PageContextImpl.getFunction(PageContextImpl.java:1506):1506
at
mxunit.framework.testsuiterunner_cfc$cf._1(/var/www/html/mxunit/framework/TestSuiteRunner.cfc:52):52
at
mxunit.framework.testsuiterunner_cfc$cf.udfCall(/var/www/html/mxunit/framework/TestSuiteRunner.cfc):-1
at railo.runtime.type.UDFImpl.implementation(UDFImpl.java:103):103 at
railo.runtime.type.UDFImpl._call(UDFImpl.java:371):371 at
railo.runtime.type.UDFImpl.call(UDFImpl.java:284):284 at
railo.runtime.ComponentImpl._call(ComponentImpl.java:607):607 at
railo.runtime.ComponentImpl._call(ComponentImpl.java:490):490 at
railo.runtime.ComponentImpl.call(ComponentImpl.java:1781):1781 at
railo.runtime.util.VariableUtilImpl.callFunctionWithoutNamedValues(VariableUtilImpl.java:723):723
at
railo.runtime.PageContextImpl.getFunction(PageContextImpl.java:1506):1506
at
mxunit.framework.testsuite_cfc$cf._1(/var/www/html/mxunit/framework/TestSuite.cfc:131):131
at
mxunit.framework.testsuite_cfc$cf.udfCall(/var/www/html/mxunit/framework/TestSuite.cfc):-1
at railo.runtime.type.UDFImpl.implementation(UDFImpl.java:103):103 at
railo.runtime.type.UDFImpl._call(UDFImpl.java:371):371 at
railo.runtime.type.UDFImpl.call(UDFImpl.java:284):284 at
railo.runtime.ComponentImpl._call(ComponentImpl.java:607):607 at
railo.runtime.ComponentImpl._call(ComponentImpl.java:490):490 at
railo.runtime.ComponentImpl.call(ComponentImpl.java:1781):1781 at
railo.runtime.util.VariableUtilImpl.callFunctionWithoutNamedValues(VariableUtilImpl.java:723):723
at
railo.runtime.PageContextImpl.getFunction(PageContextImpl.java:1506):1506
at
mxunit.framework.remotefacade_cfc$cf._1(/var/www/html/mxunit/framework/RemoteFacade.cfc:76):76
at
mxunit.framework.remotefacade_cfc$cf.udfCall(/var/www/html/mxunit/framework/RemoteFacade.cfc):-1
at railo.runtime.type.UDFImpl.implementation(UDFImpl.java:103):103 at
railo.runtime.type.UDFImpl._call(UDFImpl.java:371):371 at
railo.runtime.type.UDFImpl.call(UDFImpl.java:284):284 at
railo.runtime.ComponentImpl._call(ComponentImpl.java:607):607 at
railo.runtime.ComponentImpl._call(ComponentImpl.java:498):498 at
railo.runtime.ComponentImpl.call(ComponentImpl.java:1789):1789 at
railo.runtime.ComponentWrap.call(ComponentWrap.java:165):165 at
railo.runtime.net.rpc.server.ComponentController._invoke(ComponentController.java:56):56
at
railo.runtime.net.rpc.server.ComponentController.invoke(ComponentController.java:34):34
at __138.mxunit.framework.remotefacade_wrap.executeTestCase(Unknown
Source):-1 at sun.reflect.GeneratedMethodAccessor61.invoke(Unknown
Source):-1 at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown
Source):-1 at java.lang.reflect.Method.invoke(Unknown Source):-1 at
org.apache.axis.providers.java.RPCProvider.invokeMethod(RPCProvider.java:397):397
at
org.apache.axis.providers.java.RPCProvider.processMessage(RPCProvider.java:186):186
at
org.apache.axis.providers.java.JavaProvider.invoke(JavaProvider.java:323):323
at
org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:32):32
at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118):118 at
org.apache.axis.SimpleChain.invoke(SimpleChain.java:83):83 at
org.apache.axis.handlers.soap.SOAPService.invoke(SOAPService.java:454):454
at org.apache.axis.server.AxisServer.invoke(AxisServer.java:281):281
at
railo.runtime.net.rpc.server.RPCServer.doPost(RPCServer.java:312):312
at
railo.runtime.ComponentPage.callWebservice(ComponentPage.java:783):783
at railo.runtime.ComponentPage.call(ComponentPage.java:155):155 at
railo.runtime.PageContextImpl.doInclude(PageContextImpl.java:801):801
at
railo.runtime.PageContextImpl.doInclude(PageContextImpl.java:753):753
at
railo.runtime.listener.ModernAppListener._onRequest(ModernAppListener.java:183):183
at
railo.runtime.listener.MixedAppListener.onRequest(MixedAppListener.java:18):18
at
railo.runtime.PageContextImpl.execute(PageContextImpl.java:2255):2255
at
railo.runtime.PageContextImpl.execute(PageContextImpl.java:2222):2222
at
railo.runtime.engine.CFMLEngineImpl.serviceCFML(CFMLEngineImpl.java:310):310
at railo.loader.servlet.CFMLServlet.service(CFMLServlet.java:29):29 at
javax.servlet.http.HttpServlet.service(HttpServlet.java:728):728 at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305):305
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210):210
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222):222
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123):123
at
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472):472
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171):171
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99):99
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118):118
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407):407
at
org.apache.coyote.ajp.AjpProcessor.process(AjpProcessor.java:200):200
at
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589):589
at
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310):310
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown
Source):-1 at
java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source):-1
at java.lang.Thread.run(Unknown Source):-1
``` | A simpler workaround than what you are currently using:
```
try
{
// standard function call
}
catch (expression e)
{
if ( NOT refind('The parameter \S+ to function \S+ is required but was not passed in',e.message) )
rethrow;
}
```
Of course, this relies on the wording of the error message, so is somewhat brittle.
You could alternatively do a less precise but more flexible version, just checking for the keywords:
```
if ( NOT find('parameter',e.message) AND find('required',e.message) )
``` |
198152 | ```
#include <cstdlib>
#include <ctime>
#include <iostream>
using namespace std;

int main()
{
srand((unsigned)time(0));
int random_integer;
int lowest=0, highest=10;
int range=(highest-lowest)+1;
for(int index=0; index<20; index++){
random_integer = (rand() % range) + lowest/(RAND_MAX + 1.0);
cout << random_integer << endl;
}
}
```
I am getting output from 0 to 10, i.e. 11 possible numbers, but I don't want to get the number 10, just the numbers 0 to 9, i.e. 10 random numbers. What should I do? | The modulo operation `x % c` returns the remainder of the division of the number `x` by `c`. If you do `x % 10`, then there are `10` possible return values: `0 1 2 3 4 5 6 7 8 9`.
Note that generating random numbers by using `rand()` with `%` produces skewed results and those numbers are not uniformly distributed.
Here's a simple C-style function that generates a random number in the interval from `min` to `max`, inclusive:
```
int irand(int min, int max) {
return ((double)rand() / ((double)RAND_MAX + 1.0)) * (max - min + 1) + min;
}
```
Note that the numbers generated by this function are uniformly distributed:
```
int occurences[8] = {0};
srand(time(0));
for (int i = 0; i < 100000; ++i)
++occurences[irand(1,7)];
for (int i = 1; i <= 7; ++i)
cout << occurences[i] << ' ';
```
output: `14253 14481 14210 14029 14289 14503 14235`
Also have a look at:
[Generate a random number within range?](https://stackoverflow.com/q/2999075/1168156)
[Generate random numbers uniformly over an entire range](https://stackoverflow.com/q/288739/1168156)
[What is the best way to generate random numbers in C++?](https://stackoverflow.com/q/9471604/1168156) |
EDIT: No longer getting an undefined URI once I passed $(this).attr('rel') to a variable. Still getting the 500 Server error though.
EDIT: [Here is all the code on github](https://github.com/WayneHaworth/nodetest2.git) (just in case I have missed something!)
EDIT: Here is the how the function is called in global.js
```
$('#userList table tbody').on('click', 'td a.linkedituser', editUser);
```
And here is the html.
```
<div id="wrapper">
<div id="userInfo">
<h2>User Info</h2>
<p><strong>Name:</strong> <span id='userInfoName'></span><br />
<strong>Age:</strong> <span id='userInfoAge'></span><br />
<strong>Gender:</strong> <span id='userInfoGender'></span><br />
<strong>Location:</strong> <span id='userInfoLocation'></span></p>
</div>
<h2>User List</h2>
<div id="userList">
<table>
<thead>
<tr>
<th>UserName</th>
<th>Email</th>
<th>Delete</th>
<th>Edit</th>
</tr>
</thead>
</table>
</div>
<h2>Edit User</h2>
<div id="editUser">
<fieldset>
<input id="editUserName" type="text" placeholder="Username" /><input id=
"editUserEmail" type="text" placeholder="Email" /><br />
<input id="editUserFullname" type="text" placeholder="Full Name" /><input id=
"editUserAge" type="text" placeholder="Age" /><br />
<input id="editUserLocation" type="text" placeholder="Location" /><input id=
"editUserGender" type="text" placeholder="Gender" /><br />
<button id="btnEditUser">Edit User</button>
</fieldset>
</div>
</div>
```
I am trying to submit a form as a PUT request to edit a user's information.
I receive the following error in my Chrome console when I submit the form.
```
[Error] Failed to load resource: the server responded with a status of 500 (Internal Server Error) (undefined, line 0)
```
The right of the error shows the ID just fine so I imagine this is something to do with the router end?
```
http://localhost:3000/users/edituser/53fa0acbc069f52257312d2c
```
Here is my client side function.
```
function editUser(event) {
event.preventDefault();
var thisUserID = $(this).attr('rel');
var arrayPosition = userListData.map(function(arrayItem) { return arrayItem._id; }).indexOf(thisUserID);
var thisUserObject = userListData[arrayPosition];
$('#editUserName').val(thisUserObject.email);
$('#editUserEmail').val(thisUserObject.email);
$('#editUserFullname').val(thisUserObject.fullname);
$('#editUserAge').val(thisUserObject.age);
$('#editUserLocation').val(thisUserObject.location);
$('#editUserGender').val(thisUserObject.gender);
var editUser = {
'username' : $('#editUserName').val(),
'email' : $('#editUserEmail').val(),
'fullname' : $('#editUserFullname').val(),
'age' : $('#editUserAge').val(),
'location' : $('#editUserLocation').val(),
'gender' : $('#editUserGender').val(),
}
//just a test to prove that I can read the id from the db
$('body > h1').text($(this).attr('rel'));
$('#btnEditUser').on('click', function() {
$.ajax({
type: 'PUT',
url: '/users/edituser/' + thisUserID,
data: JSON.stringify(editUser),
dataType: 'json',
success: function(msg) { alert('Success'); },
error: function(err) {alert('Error'); }
}).done(function(response) {
if (response.msg === '') {
}
else {
alert('Error: ' + response.msg);
}
populateTable();
});
});
};
```
Here is my router on the server side inside /routes/users.js.
```
router.put('/edituser/:id', function(req, res) {
var db = req.db;
var userToEdit = req.params('id');
db.collection('userlist').findById(userToEdit, function(err, result) {
res.send((result === 1) ? { msg: '' } : { msg: 'error: ' + err });
});
});
```
Can anyone help? Thanks! | You are calling `req.params` as if it were a function, but it is an object, and the `id` comes from the URL. So replace `var userToEdit = req.params('id');` with:
```
var userToEdit = req.params.id;
```
Or you can use `req.param('id')` which goes through [all the methods](http://expressjs.com/api#req.param) to find the value.
Also do `res.json(..)` instead of `res.send(..)`. |
198796 | I'm using Rails 3.0.10 and I'm currently using CanCan for determining the different abilities that users have within the application.
I am using Devise for the user authentication itself.
Is there a good way for an admin user to "become" another user temporarily using CanCan? This would be especially useful to become specific users who may be experiencing unique issues pertaining to their account.
So in simple terms, I just want to be able to sign in as any given user to see what they see. Not sure if there is a CanCan or Devise feature out there.
**UPDATE**
I just came across this:
[How-To: Sign in as another user if you are an Admin](https://github.com/plataformatec/devise/wiki/How-To%3a-Sign-in-as-another-user-if-you-are-an-admin).
I haven't tested it yet, but it may be the answer I was looking for!! Feel free to provide some insight if this is the approach you took, or if you have another idea! | I found what I was looking for, and there's a gem for it!
[switch\_user](https://github.com/flyerhzm/switch_user)
It was really easy to use. I just had to add `gem 'switch_user'` to my Gemfile.
Then add `<%= switch_user_select %>` to my layout. Exactly what I wanted! |
199011 | I am working with a dataset which contains two tables: one that gives the value of a number of variables at the lowest level, and one that shows all child-parent relations which I should use to calculate the totals.
My data looks something like this:
```
> # table that shows child-parent relation
> ID <- c(5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100)
> PARENT_ID <- c(NA, 5, 10, 10, 5, 5, 5, 60, 60, 60, 60)
> df_relation <- data.frame(ID, PARENT_ID)
> df_relation
ID PARENT_ID
1 5 NA
2 10 5
3 20 10
4 30 10
5 40 5
6 50 5
7 60 5
8 70 60
9 80 60
10 90 60
11 100 60
> # For instance, ID 70 + 80 + 90 + 100 gives the value for ID 60
>
> # table that contains values at the lowest level
> ID <- c(20, 30, 40, 50, 70, 80, 90, 100)
> VALUE <- c(14, 1, 4329, 98503, 272, 667, 712, 55)
> df_value <- data.frame(ID, VALUE)
> df_value
ID VALUE
1 20 14
2 30 1
3 40 4329
4 50 98503
5 70 272
6 80 667
7 90 712
8 100 55
>
> # add values to relation table
> df <- left_join(df_relation, df_value, by = "ID")
> df
ID PARENT_ID VALUE
1 5 NA NA
2 10 5 NA
3 20 10 14
4 30 10 1
5 40 5 4329
6 50 5 98503
7 60 5 NA
8 70 60 272
9 80 60 667
10 90 60 712
11 100 60 55
>
> # compute totals (ID = 5, 10 & 60)
> # ID 60 = 70 + 80 + 90 + 100
> # ID 10 = 20 + 30
> # ID 5 = 10 + 40 + 50 + 60
```
Off course, I can do this manually, but I also have a number of additional tables which have even more child-parent levels. So I would rather have a function which could do the job.
Would anyone know how to do this? | It looks like you need to use recursion here:
```
sum_children <- function(ID, data)
{
result <- 0;
children <- which(data$PARENT_ID == ID)
if(length(children) == 0)
return(data$VALUE[which(data$ID == ID)])
else
for(i in seq_along(children))
result <- result + sum_children(data$ID[children[i]], data)
return(result)
}
```
This function will take an ID whose child sums you wish to calculate and return the result.
```
sum_children(60, df)
#> [1] 1706
```
It will work to any depth. For example, each node with a value in your example is ultimately descended from ID 5. Therefore the child sum of 5 should be the same as the total of `VALUE`
```
sum(df$VALUE, na.rm = T)
#> [1] 104553
sum_children(5, df)
#> [1] 104553
```
And in fact, you can quickly get the result for all descendants for each ID by doing:
```
df$child_sum <- sapply(df$ID, sum_children, df)
df
#> ID PARENT_ID VALUE child_sum
#> 1 5 NA NA 104553
#> 2 10 5 NA 15
#> 3 20 10 14 14
#> 4 30 10 1 1
#> 5 40 5 4329 4329
#> 6 50 5 98503 98503
#> 7 60 5 NA 1706
#> 8 70 60 272 272
#> 9 80 60 667 667
#> 10 90 60 712 712
#> 11 100 60 55 55
``` |
199299 | Here's my scenario: I'm a developer that inherited (unbeknownst to me) three servers located within my office. I also inherited the job of being the admin of the servers with a distinct lack of server administration knowledge and google / ServerFault as a reference point. Luckily, I've never actually had to come into contact physically with the machines or address any issues as they've always 'just worked'.
All three machines are located within the same data room and serve the following purpose:
`Machine1` - IIS 8.0 hosting a number of internal applications
`Machine2` - SQL Server 2008 R2 data store for the internal applications
`Machine3` - SQL Server 2008 R2 mirror store of `Machine2`
All three have external hard drives connected that complete back ups frequently.
I've been informed that all three need to move from one data room to another within the same premises. I won't be completing the physical moving of the hardware; that'll be handled by a competent mover.
Apart from completing a full back up of each, what considerations do I need to make prior to hypothetically flicking the power switch and watching my world move?
I'm aware that it's far from ideal having all three located in the same room / premises but that's past the scope of this question. | It's quite difficult to tell and borderline "too broad" for our format. The most important thing you need to check is if you need to reconfigure your network in any way of if they can keep running with the same addresses. Even if they can keep the same addresses, make sure they are not configured via DHCP and/or verify the DHCP server will be available at the new location.
Side note: As you already stated, having the SQL server and its mirror at the same location is far from ideal. However, having the backup drives at the same location is **really** dangerous. You need to have your backup in a different physical location.
199312 | I am writing a program in Java that works with PostgreSQL. Records in the database are in different languages, such as English, French and Spanish. While searching, I'm getting a NullPointerException when the program converts a record from the database (a customer name) with toLowerCase(). My problem is that I don't know the language of the processed record, because the characters look similar.
Question: is there an alternative to toLowerCase() that compares characters without specifying a locale, just comparing character by character, returning false if one differs from another. Simply put: if a Chinese character is different from an English one, return false.
Something like a = a returns true, but a = 诶 returns false, without giving me a NullPointerException. | `.equals()` does what you want. You must be trying to call a method (`toLowerCase()`?) on a `null` string.
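To make that concrete, here is a minimal null-safe comparison sketch (the class name and sample values are hypothetical):

```java
import java.util.Objects;

public class NullSafeCompare {
    public static void main(String[] args) {
        String fromDb = null; // e.g. a customer name that was NULL in the database

        // fromDb.toLowerCase() would throw a NullPointerException here.
        // Calling equals() on a literal is null-safe: a null argument is just "not equal".
        System.out.println("a".equals(fromDb)); // false, no exception

        // Objects.equals tolerates null on either side and compares
        // strings character by character, with no locale involved.
        System.out.println(Objects.equals("a", "a")); // true
        System.out.println(Objects.equals("a", "诶")); // false
        System.out.println(Objects.equals(fromDb, "诶")); // false
    }
}
```

Putting the literal (or any known non-null value) on the left of `equals()` is the classic idiom; `Objects.equals` covers the case where both sides may be null.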
200076 | I want to search all names that start with a particular letter, that means when I type one character in search bar it will show all names that start with that character but unfortunately it also shows the names that contain this character.I want to show only those names that start with a particular character that I type in search bar . Please show me the solution.
```
- (void)searchBar:(UISearchBar *)searchBars textDidChange:(NSString *)search
{
if(search.length == 0)
{
isFiltered = FALSE;
}
else{
isFiltered=true;
itemData =[[NSMutableArray alloc] init];
for (Item *itemsearch in listOfItemForStock) {
NSString *stringId = [NSString stringWithFormat:@"%i",itemsearch.iditem];
NSString *stringName = [NSString stringWithFormat:@"%@",itemsearch.name];
NSRange idRange = [stringId rangeOfString:search options:(NSCaseInsensitiveSearch)];
NSRange nameRange = [stringName rangeOfString:search options:(NSCaseInsensitiveSearch)];
if(idRange.location!=NSNotFound || nameRange.location != NSNotFound){
[self.itemData addObject:itemsearch];
}
}
}
[self.tableView reloadData];
}
```
| Change the name check to a prefix check. Replace the nameRange variable with a BOOL that only indicates whether the stringName begins with the search string...
```
/// ...
BOOL nameMatches = [[stringName lowercaseString] hasPrefix:[search lowercaseString]];
if(idRange.location!=NSNotFound || nameMatches){
//..
``` |
200688 | My route config looks like this
```
public static void RegisterRoutes(RouteCollection routes)
{
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.MapRoute(
name: "Default",
url: "{controller}/{action}/{id}",
defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
);
}
```
I have my user controller divided by action into separate files, so all `GET` operations are in the `GetUserController.cs` file, `PUT` operations in `PutUserController.cs` and so on...
The `GET` file has a partial class like this
```
[RoutePrefix("api/users/{userId:Guid}/locations")]
public partial class UsersController : MyCustomApiController
{
[Route("{locationId:Guid}/list")]
[HttpPost]
public async Task<IHttpActionResult> GetUsers(Guid userId, Guid locationId, [FromBody] Contracts.UserRequest request)
{
}
}
```
The `PUT` file has a partial class like this
```
public partial class UsersController : MyCustomApiController
{
[Route("{locationId:Guid}/insert")]
[HttpPut]
public async Task<IHttpActionResult> InsertUser(Guid userId, Guid locationId, [FromBody] Contracts.UserRequest request)
{
}
}
```
No matter what I do, I always get a 404 Error. I am testing with Postman using `Content-Type` as `application/json`
The URL I am using is
`http://localhost:52450/api/users/3F3E0740-1BCB-413A-93E9-4C9290CB2C22/locations/4F3E0740-1BCB-413A-93E9-4C9290CB2C22/list` with a `POST` since I couldn't use `GET` to post a complex type for the first method
and
`http://localhost:52450/api/users/3F3E0740-1BCB-413A-93E9-4C9290CB2C22/locations/4F3E0740-1BCB-413A-93E9-4C9290CB2C22/insert` with a `PUT`
What other routes do I need to setup in the route config if at all?
**EDIT**
Strangely another controller which is also a partial class seems to work with the same configuration
```
[RoutePrefix("api/products")]
public partial class ProductController : MyCustomApiController
{
[Route("insert")]
[HttpPut]
public async Task<IHttpActionResult> InsertProduct([FromBody] InsertProductRequest request)
{
}
}
```
This is the *global.asax.cs* that wires up everything.
```
protected void Application_Start(object sender, EventArgs e)
{
AreaRegistration.RegisterAllAreas();
GlobalConfiguration.Configure(WebApiConfig.Register);
RouteConfig.RegisterRoutes(RouteTable.Routes);
FilterConfig.RegisterFilters(GlobalFilters.Filters);
// Only allow Tls1.2!
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
}
``` | Here is an alternative with a stored proc that uses `Loop`:
```
DROP PROCEDURE IF EXISTS proc_loop_test;
CREATE PROCEDURE proc_loop_test()
BEGIN
DECLARE int_val INT DEFAULT 0;
test_loop : LOOP
IF (int_val = 10) THEN
LEAVE test_loop;
END IF;
INSERT INTO `user_acc`(`playRegion`, `firsttimelogin`) VALUES
(RAND() * (6)+1,1) ;
SET int_val = int_val +1;
END LOOP;
END;
call proc_loop_test;
select * from `user_acc`;
```
I limited the loop to 10 iterations just for testing.
[Functional Example](http://rextester.com/SAJXX26044) that you can play around with
[Credit for this solution](https://dba.stackexchange.com/questions/62812/how-to-do-while-loops) |
200751 | I have one table:
```
A B
---------
1 Date1
1 Date1
1 Date1
1 Date2
2 Date1
3 Date3
4 Date2
4 Date1
1 Date3
SELECT A, count(A) FROM `table` WHERE B BETWEEN 'Date1' AND 'Date2'
GROUP BY A
ORDER BY A
```
When I do normal query:
```
A B
----------
1 1
1 1
1 1
2 1
3 1
4 1
4 1
```
I need to count the number of distinct days per value. Is there a simple query for this? Please help.
I need data like:
```
A B
----------
1 3
2 1
3 1
4 2
``` | Seems you need count(distinct B)
```
SELECT A, count(distinct B)
FROM `table` WHERE B BETWEEN 'Date1' AND 'Date2'
GROUP BY A
ORDER BY A;
``` |
200901 | I am trying to align each child DIRECTLY after one another (vertically). Apparently `align-items: flex-start` doesn't do that completely because there is some auto-spacing between each child.
Below is a snippet of the result I get. The children align themselves along their parent's available space, which is not what I like to achieve. What I want is each child to align directly after its previous sibling (vertically, as in the snippet).
Thanks in advance!
EDIT: `flex-flow: column wrap` and `align-content: flex-start` both did the trick. I forgot, however, to add `align-self: flex-end` to the last child, which doesn't work when either of the two solutions above is applied.
```css
* {
box-sizing: border-box;
}
#container {
position: fixed;
top: 0; right: 0; bottom: 0; left: 0;
display: flex;
align-items: flex-start;
flex-flow: row wrap;
}
#container > div {
width: 100%;
margin: 10px;
border: 2px solid #ff0000;
}
#container > div:nth-child(1) { height: 5%; }
#container > div:nth-child(2) { height: 10%; }
#container > div:nth-child(3) { height: 20%; align-self: flex-end; }
```
```html
<div id="container">
<div></div>
<div></div>
<div></div>
</div>
``` | You need to set [`align-content: flex-start`](https://developer.mozilla.org/en/docs/Web/CSS/align-content) on the parent element because the initial value is `stretch`.
>
> **stretch**
>
>
> If the combined size of the items along the cross axis is less than the size of the alignment container, any auto-sized items have their size increased equally (not proportionally), while still respecting the constraints imposed by max-height/max-width (or equivalent functionality), so that the combined size exactly fills the alignment container along the cross axis.
>
>
>
```css
* {
box-sizing: border-box;
}
#container {
position: fixed;
top: 0; right: 0; bottom: 0; left: 0;
display: flex;
align-items: flex-start;
flex-flow: row wrap;
align-content: flex-start;
}
#container > div {
width: 100%;
margin: 10px;
border: 2px solid #ff0000;
}
#container > div:nth-child(1) { height: 5%; }
#container > div:nth-child(2) { height: 10%; }
#container > div:nth-child(3) { height: 20%; }
```
```html
<div id="container">
<div></div>
<div></div>
<div></div>
</div>
```
Update: Another solution is to set `flex-direction: column`; then you can use `margin` to position a specific flex item.
```css
* {
box-sizing: border-box;
}
#container {
position: fixed;
top: 0;
right: 0;
bottom: 0;
left: 0;
display: flex;
flex-direction: column;
}
#container > div {
width: calc(100% - 20px);
margin: 10px;
border: 2px solid #ff0000;
}
#container > div:nth-child(1) {
height: 5%;
}
#container > div:nth-child(2) {
height: 10%;
}
#container > div:nth-child(3) {
height: 20%;
margin-top: auto;
}
```
```html
<div id="container">
<div></div>
<div></div>
<div></div>
</div>
``` |
201368 | I am receiving long string of checked html checkbox values (Request.Form["mylist"] return Value1,Value2,Value3....) on the form post in ASP.NET 2.0 page.
Now I simply want to loop these but I don't know what is the best practice to loop this array of string. I am trying to do something like this:
```
foreach (string Item in Request.Form["mylist"]){
Response.Write(Request.Form["mylist"][Item] + "<hr>");
}
```
But it does not work. | You have to split the comma separated string. Try
```
string myList = Request.Form["myList"];
if (string.IsNullOrEmpty(myList))
{
Response.Write("Nothing selected.");
return;
}
foreach (string item in myList.Split(new char[] {','}, StringSplitOptions.RemoveEmptyEntries))
{
Response.Write(item + "<hr>");
}
``` |
201981 | I'm just starting out with Python web data in Python 3.6.1. I was learning sockets and I had a problem with my code which I couldn't figure out. The website in my code works fine, but when I run this code I get a 400 Bad Request error. I am not really sure what the problem with my code is. Thanks in advance.
```
import socket
mysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mysock.connect(('data.pr4e.org', 80))
mysock.send(('GET http://data.pr4e.org/romeo.txt HTTP/1.0 \n\n').encode())
while True:
data = mysock.recv(512)
if ( len(data) < 1 ):
break
print (data)
mysock.close()
``` | >
>
> ```
> GET http://data.pr4e.org/romeo.txt HTTP/1.0 \n\n
>
> ```
>
>
Welcome to the wonderful world of HTTP, where most users think this is an easy protocol since it is human readable, but in fact it can be a very complex protocol. Given your request above, there are several problems:
* The path should be not a full URL but only `/romeo.txt`. Full URL's will be used only when doing a request to a proxy.
* The line end must be `\r\n` not `\n`.
* There should be no space after `HTTP/1.0` before the end of the line.
* While a Host header is only required with HTTP/1.1 many servers (including the one you are trying to access) need it also with HTTP/1.0 since they have multiple hostnames on the same IP address and need to distinguish which name you want.
With this in mind the data you send should be instead
```
GET /romeo.txt HTTP/1.0\r\nHost: data.pr4e.org\r\n\r\n
```
And I've tested that it works perfectly with this modification.
But, given that HTTP is not as simple as it might look, I really recommend using a library like requests for accessing the target. If this looks like too much overhead to you, please study the [HTTP standard](https://www.ietf.org/rfc/rfc2616.txt) to implement it properly instead of just guessing how HTTP works from some examples - and guessing it wrong.
Note also that servers differ in how forgiving they are regarding broken implementations like yours. Thus, what once worked with one server might not work with the next server or even with the same server after some software upgrade. Using a robust and well tested and maintained library instead of doing everything on your own might thus save you lots of troubles later too. |
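To make the corrected request concrete, here is a sketch; `build_request` is my own helper name, not part of the question's code, and the actual network call is left commented out so the formatting is the focus:

```python
def build_request(host: str, path: str) -> bytes:
    """Build a minimal HTTP/1.0 request with CRLF line endings,
    no trailing space after the version, and a Host header."""
    lines = [
        f"GET {path} HTTP/1.0",
        f"Host: {host}",
        "",  # blank line ends the header block
        "",
    ]
    return "\r\n".join(lines).encode()

request = build_request("data.pr4e.org", "/romeo.txt")
print(request)  # b'GET /romeo.txt HTTP/1.0\r\nHost: data.pr4e.org\r\n\r\n'

# To actually fetch the file (requires network access):
# import socket
# with socket.create_connection(("data.pr4e.org", 80)) as mysock:
#     mysock.sendall(request)
#     while True:
#         data = mysock.recv(512)
#         if len(data) < 1:
#             break
#         print(data)
```

Building the request from a list of header lines makes the CRLF discipline explicit, which is exactly where the original code went wrong.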
202052 | How do you solve mixins with reflux using ES6?
Like this:
```
mixins: [Reflux.listenTo(myStore, "onChange")]
^
```
Results in error "Unexpected token" with arrow shown above.
React v 0.14.7
Reflux v 0.4.0 | Because the Edit button is disabled with :
```
event.target.disabled = true;
```
You can add in the .cNeD click event :
```
$(".editBtn").prop('disabled', false);
```
[DEMO from codepen.io](http://codepen.io/anon/pen/VajRbj)
**Edit :**
To enable only one edit button (see u\_mulder comment):
```
$(this).closest('.container').find(".editBtn").prop('disabled', false);
```
[DEMO from codepen.io](http://codepen.io/anon/pen/aNZMVz) |
202110 | >
> **Possible Duplicate:**
>
> [Loading from JAR as an InputStream?](https://stackoverflow.com/questions/2614095/loading-from-jar-as-an-inputstream)
>
>
>
Is it possible for me to load a jar file from an input stream (from a url connection, for example), load it into a classloader, and execute it?
Thanks for your time. | Yes; [`URLClassLoader`](http://docs.oracle.com/javase/7/docs/api/java/net/URLClassLoader.html) is intended for this purpose. It can load classes from an array of URLs.
```
URL externalJar = new URL("http://example.com/app.jar");
URL localJar = new URL("file:///C:/Documents/app.jar");
URLClassLoader cl = new URLClassLoader(new URL[]{ externalJar, localJar });
Class<?> clazz = cl.loadClass("SomeClass"); // you now can load classes
```
If your `InputStream` is not based on a `URL`, you could instead write the stream's contents to a temporary jar file which you then load using the above approach. (You *can* load a jar entirely in memory, meaning no temp file is created, but this method takes considerably more effort to do right, because you will need define a custom ClassLoader). |
202201 | I am not sure whether I have a correct model of the Java initialization mechanism in my head. Each class follows these steps:
1. Initialize superclass, if there exists one
2. Initialize yourself
Initialize follows the following:
1. Initialize static and instance variables which have their values set outside of initializer.
2. Run static initialization block
3. Run constructor block
Is that precise/correct? | Two problems: `sin` and `cos` take arguments in *radians*, not degrees; and you probably want to rotate your polygon about its centre, not about (0, 0).
To fix the first problem, change:
```
angle = 45;
```
to:
```
angle = 45.0 * 2.0 * M_PI / 360.0;
```
To fix the second problem you need to first calculate the centroid of your polygon, then do your translations of the coordinates of the vertices relative to the centroid. |
202266 | I have been using Latex (with TexStudio) for some time to keep a research notebook and made some observations:
* After the overall style of a document has more or less settled, one wants to worry about formatting as little as possible. All the Latex markup for even simple tasks (like including an image) becomes tedious.
* Liberal use of `\newcommand` helps facilitate repetitive tasks, but due to difficulties of Latex syntax (like lack of named arguments and abstruse compiler errors), it's not always straightforward. Also, sometimes updating old `\newcommand`s requires going back and changing every instance of previous usage.
* Rearranging parts of the notebook (as some sections sprawl more than expected or the logical structure of the project evolves) sometimes takes a bit of finesse.
It is often said that Latex separates style from content. It certainly does a much better job of this than alternatives like Word, but it doesn't, really - for example, if I have some information that needs to be presented as a table, I end up having the information interwoven with low-level markup for table structure.
I got the idea that I could decide on some kind of pure-data format, perhaps with minimal semantic tags (for instance, to indicate species names so they can later be italicized), to record only my research progress. I could then write a script (eg. in Python) to generate Latex source for a report on this. In theory, this would be analogous to how CoffeeScript is compiled into JavaScript. It would also introduce an extra layer of separation between recorded research (which I would like to write once, and never modify) and stylistic changes (moving sections around, renaming a chapter, changing the formatting of tables).
Are there any major problems with this idea? Has this already been done? Am I massively underestimating how much work it would be? | You might want to take a look at [Pandoc](http://johnmacfarlane.net/pandoc/), which describes itself as the "swiss-army knife" of document conversion. With Pandoc, you're able to create documents in various languages, most notable ones with Markdown-style syntax and are able to easily create LaTeX documents from them.
It also supports a heap of customization options which should allow you to suit it to your specific task and I presume it's way easier doing it this way than writing up a new solution from the ground up; you'd end up trying to reinvent the wheel.
Try taking a look at the [README](http://johnmacfarlane.net/pandoc/README.html) and the [scripting guide](http://johnmacfarlane.net/pandoc/scripting.html) provided on the official web page for more information. |
202397 | I have the following script to recursive copy data and create a log file of the destination, could any assist please, I would like to pause for 10 seconds after each file is copied so each file allocated a different created time stamp.
```
$Logfile ='File_detaisl.csv'
$SourcePath = Read-Host 'Enter the full path containing the files to copy'
""
""
$TargetPath = Read-Host 'Enter the destination full path to copy the files'
""
#$str1FileName = "File_Details.csv"
Copy-Item -Path $SourcePath -Destination $TargetPath -recurse -Force
Get-ChildItem -Path $TargetPath -Recurse -Force -File | Select-Object Name,DirectoryName,Length,CreationTime,LastWriteTime,@{N='MD5 Hash';E={(Get-FileHash -Algorithm MD5 $_.FullName).Hash}},@{N='SHA-1 Hash';E={(Get-FileHash -Algorithm SHA1 $_.FullName).Hash}} | Sort-Object -Property Name,DirectoryName | Export-Csv -Path $TargetPath$Logfile
``` | `Copy-Item` has a `-PassThru` parameter that outputs each item that is currently processed. By piping to `ForEach-Object` you can add a delay after each file.
```powershell
Copy-Item -Path $SourcePath -Destination $TargetPath -recurse -Force -PassThru |
Where-Object PSIsContainer -eq $false |
ForEach-Object { Start-Sleep 10 }
```
The `Where-Object` is there to exclude folders from the `ForEach-Object` processing. For folder items the `PSIsContainer` property is `$true` and for files it is `$false`. |
202520 | Fresh install of Magento 1.9.1.
Magento is ignoring the attribute position set in Catalogue->Attributes->Manage Attributes->Manage Labels/Options for a configurable product drop down. Instead it is using the Product ID to determine list order.
Have compared the following files/functions and, apart from a small tax calculation, none of the code has changed since 1.7.0.2.
Mage/Catalog/Model/Product/Type/Configuarable.php:
```
public function getConfigurableAttributes($product = null)
```
Mage/Catalog/Model/Product/Option.php:
```
public function getProductOptionCollection(Mage_Catalog_Model_Product $product)
```
Mage/Catalog/Block/Product/View/Type/Configuarable.php:
```
public function getJsonConfig()
```
I have also tested on a copy database of a live site and all the attribute sorting is based on Product ID.
To replicate:
1. Create an attribute - Colour
2. Add labels - Black, Red, Green, Blue
3. Save attribute.
4. Create configurable and simple associated products with the attributes in the above order.
Edit attribute and change label positions. Blue 0, Green 1, Red 3, Black 4
When viewing product Magento still sorts the attributes by Product ID and ignores positions. | The answer by Meogi works but is not the perfect answer as it will only sort the options on the frontend. Try creating an order from the admin panel for a configurable product. You will still get the incorrectly sorted attribute option list.
Instead, you can copy app/code/core/Mage/Catalog/Model/Resource/Product/Type/Configurable/Attribute/Collection.php to local folder app/code/local/Mage/Catalog/Model/Resource/Product/Type/Configurable/Attribute/Collection.php and apply this patch:
```
Index: app/code/local/Mage/Catalog/Model/Resource/Product/Type/Configurable/Attribute/Collection.php
===================================================================
--- app/code/local/Mage/Catalog/Model/Resource/Product/Type/Configurable/Attribute/Collection.php
+++ app/code/local/Mage/Catalog/Model/Resource/Product/Type/Configurable/Attribute/Collection.php
@@ -301,7 +301,28 @@
}
}
}
+ /**
+ * Mage 1.9+ fix for configurable attribute options not sorting to position
+ * @author Harshit <support@cubixws.co.uk>
+ */
+ $sortOrder = 1;
+ foreach ($this->_items as $item) {
+ $productAttribute = $item->getProductAttribute();
+ if (!($productAttribute instanceof Mage_Eav_Model_Entity_Attribute_Abstract)) {
+ continue;
+ }
+ $options = $productAttribute->getFrontend()->getSelectOptions();
+ foreach ($options as $option) {
+ if (!$option['value']) {
continue;
}
+ if (isset($values[$item->getId() . ':' . $option['value']])) {
+ $values[$item->getId() . ':' . $option['value']]['order'] = $sortOrder++;
+ }
+ }
+ }
+ usort($values, function ($a, $b) {
+ return $a['order'] - $b['order'];
+ });
+
foreach ($values as $data) {
$this->getItemById($data['product_super_attribute_id'])->addPrice($data);
}
```
If you are hesitant of copying across a core file to local folder then I can create a quick module, `<rewrite>` this Collection.php file and just override the \_loadPrices() function and introduce this fix. |
202543 | I have written a script that seems not to be working, but MySQL does not return any error. It is supposed to get data from `database1.table` to update `database2.table.column`
```
<?php
$dbh1 = mysql_connect('localhost', 'tendesig_zink', 'password') or die("Unable to connect to MySQL");
$dbh2 = mysql_connect('localhost', 'tendesig_zink', 'password', true) or die("Unable to connect to MySQL");
mysql_select_db('tendesig_zink_dev', $dbh1);
mysql_select_db('tendesig_zink_production', $dbh2);
$query = " UPDATE
tendesig_zink_dev.euid0_hikashop_product,
tendeig_zink_production.euid0_hikashop_product
SET
tendesig_zink_dev.euid0_hikashop_product.product_quantity = tendesig_zink_production.euid0_hikashop_product.product_quantity
WHERE
tendesig_zink_dev.euid0_hikashop_product.product_id = tendesig_zink_production.euid0_hikashop_product.product_id";
if (mysql_query($query, $dbh1 ))
{
echo "Record inserted";
}
else
{
echo "Error inserting record: " . mysql_error();
}
?>
``` | You can't just issue a cross-database update statement from PHP like that!
You will need to execute a query to read data from the source db (execute that on the source database connection: `$dbh2` in your example) and then separately write and execute a query to insert/update the target database (execute the insert/update query on the target database connection: `$dbh1` in your example).
Essentially, you'll end up with a loop that reads data from the source, and executes the update query on each iteration, for each value you're reading from the source. |
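As a sketch of that read-then-update loop, shown here with Python's `sqlite3` purely so the example is self-contained and runnable (with the deprecated `mysql_*` API the same shape would be a `mysql_query` SELECT on `$dbh2` followed by a `mysql_query` UPDATE on `$dbh1` for each row):

```python
import sqlite3

# Two independent connections stand in for the two MySQL databases.
source = sqlite3.connect(":memory:")   # plays tendesig_zink_production ($dbh2)
target = sqlite3.connect(":memory:")   # plays tendesig_zink_dev ($dbh1)

for conn, qty in ((source, 5), (target, 0)):
    conn.execute(
        "CREATE TABLE euid0_hikashop_product "
        "(product_id INTEGER PRIMARY KEY, product_quantity INTEGER)"
    )
    conn.executemany(
        "INSERT INTO euid0_hikashop_product VALUES (?, ?)",
        [(1, qty), (2, qty)],
    )

# 1) Read the rows from the source connection...
rows = source.execute(
    "SELECT product_id, product_quantity FROM euid0_hikashop_product"
).fetchall()

# 2) ...and run one UPDATE per row on the target connection.
for product_id, quantity in rows:
    target.execute(
        "UPDATE euid0_hikashop_product SET product_quantity = ? WHERE product_id = ?",
        (quantity, product_id),
    )

print(target.execute(
    "SELECT product_quantity FROM euid0_hikashop_product ORDER BY product_id"
).fetchall())
# [(5,), (5,)]
```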
202713 | I am using Azure AD for user authentication, with user flows for signup and login.
We also have some custom attributes which are set through the user flow itself.
I am trying to update those custom attributes for a user using his access token.
Reading through the Azure AD documentation, I came across the Azure AD Graph APIs,
but the access token of the user comes back as invalid (the request is shown below).
I am assuming that we need some administrator token here, but based on the requirements we need
to use the user access token itself.
```
(PATCH) https://graph.windows.net/{myorganization}/users/{user_object_id}?api-version=1.6
Authorization: bearer --user access_token which we get after using the login user flow---
Request body:
{
"someCustomAttribute": "some updated value"
}
Response:
{
"odata.error": {
"code": "Authentication_ExpiredToken",
"message": {
"lang": "en",
"value": "Your access token has expired. Please renew it before submitting the request."
}
  }
}
```
I made sure that the token is valid and is not expired (tested using passport). | This is one option using `dplyr` based on the output of your code:
```
df %>%
group_by(File) %>%
mutate(ID_new = seq(1, n()) + first(ID_raw) - 1)
# A tibble: 18 x 5
# Groups: File [3]
ID_raw Val File ID_diff ID_new
<int> <int> <fct> <int> <dbl>
1 1 1 A 4 1
2 1 1 A 4 2
3 1 1 A 4 3
4 1 1 A 4 4
5 5 2 A 2 5
6 5 2 A 2 6
7 7 8 A 1 7
8 5 54 B 2 5
9 5 54 B 2 6
10 7 23 B 2 7
11 7 23 B 2 8
12 9 88 B 4 9
13 9 88 B 4 10
14 9 88 B 4 11
15 9 88 B 4 12
16 3 32 C 3 3
17 3 32 C 3 4
18 3 32 C 3 5
``` |
202822 | I have an object list that retrieves multiple values from a database, like so:
```
List<Object> listRequest = daoManager.getLaptopsForRequest(BigInteger.valueOf(itemTransItem.getAssetType().getId()), BigInteger.valueOf(approverId));
```
The result of which looks like this (printed out on the console, via for each):
```
{asset_type_id=1, inventory_id=1, from_dt=2015-09-18 18:04:55.77, id=1, asset_id=1, status=1, thru_dt=null}
{asset_type_id=1, inventory_id=1, from_dt=2015-09-18 18:04:55.77, id=2, asset_id=2, status=1, thru_dt=null}
{asset_type_id=1, inventory_id=1, from_dt=2015-09-18 18:04:55.77, id=3, asset_id=3, status=1, thru_dt=null}
```
What's the quickest and/or most efficient way to get only the object where asset_id = 2, or an array of asset_id values (1 and 2), and put the results in another array?
I contemplated casting each object as a string, and then turning each string into an array (split by the comma), *and then* turning each item of the array into a further array (or a hashmap) by using the `=`, but that seems like a long, long, complex way of nested for loops that might fail (see comparing array of assets).
Perhaps there's another quicker / less complex way to do this that I'm missing? Any suggestions? Thanks.
EDIT: For reference, here's the `getLaptopsForRequest` function:
```
public List getLaptopsForRequest(BigInteger asset_type_id, BigInteger party_id){
SQLQuery query = sessionFactory.getCurrentSession().createSQLQuery(laptopsForRequestSql);
query.setResultTransformer(Criteria.ALIAS_TO_ENTITY_MAP);
List forRequest = query.setBigInteger(0, asset_type_id).setBigInteger(1, party_id).list();
return forRequest;
}
```
It returns a list of the results of the query. As this code has been in place, I'm *not* allowed to edit it. | You have to create a new class, usually called a ViewModel, and pass the data in that class:
```
public class MyViewModel
{
public IEnumerable<Telephone_Search.Models.tbl_users> users;
public IEnumerable<OtherType> otherThings;
public string SomeOtherProp {get;set;}
}
```
In the view:
```
@model MyViewModel
foreach (var usr in Model.users) {...}
foreach (var ot in Model.otherThings) {...}
<span>@Model.SomeOtherProp</span>
```
In the controller:
```
var test = from a in db.tbl_users
where a == 2
select a;
var otherTypes = from x in db.tbl_otherTypes where x.Prop > 10 select x;
return this.View(new MyViewModel
{
users = test,
otherThings = otherTypes,
SomeOtherProp = "stackoverflow answer"
});
``` |
202920 | I am upgrading to a 400 Amp service in Washington State. My power company is Mason County PUD 1. The AHJ is Washington State L&I. The panel I'm planning on using is the Siemens MC0816B1400RLTM. What size/type wires should I use between the mast head and the line side of the meter? How do I connect these wires to the power company's service wires? This is max 14' run (10' of mast, 2' on either end).
The Siemens spec sheet says the max wire on the line side of the meter is (1)600kcmil OR (2)350kcmil for the two hots and (1) 500kcmil OR (2) 250kcmil on the neutral. In their meter requirements, the power company says "Lugs must accept 350 MCM aluminum wire." They say this is the largest wire they carry. I just confirmed with them that they will have six service wires coming to me from the power pole.
I haven't been able to find a consistent answer to what wire I need. I'm thinking AL XHHW-2.
If there is some way I can join the power company's parallel lines into one, I think I would use 600 MCM for hot and 500 MCM for neutral. According to <https://www.wireandcableyourway.com/xhhw-2/>, 600 MCM is 385 amps at 90 C. Assuming I never use the full 400 Amps, does this work? What gizmo would I use to bond each of the two wires from the power company with my one wire that leads to the mast head?
Or should I continue their parallel runs through the mast all the way to the meter? Again sticking with AL XHHW-2, the same place says 4/0 is 205 Amps at 90 C. So, in parallel, does that make it 410 Amps? Is that sufficient? Or is it close enough that I'd be better off going with parallel 250 MCM? Again, what gizmo do I use to bond each of the six wires from the power company to one of my six going into my mast?
I had been planning on a 3" dia. mast. Is that sufficient? More than sufficient? | First off, the utility generally makes the hookups at the service masthead
--------------------------------------------------------------------------
The first order of business is a point of clarification. According to what I have been able to find, your utility is conventional in that they make the final connection from the service drop to the service entrance conductors coming out of the weatherhead; the 18" tails are required so the linesperson can make those splices comfortably.
As to sizing...
---------------
This leaves us with mast and conduit sizing. A 3" rigid (galvanized) conduit mast is generally considered adequate for a 400A service entrance, at least from the utility service requirements I have read, as physical strength is also a factor here, not just conduit fill (although a smaller conduit wouldn't work anyway, as you'll see below, due to the need to upsize the wires).
This brings us to wire sizing, which is a bit non-intuitive as parallel wires still need to have their ampacities *derated* for fill as per NEC 310.15(B)(3)(a), but we also have to apply the 83% demand factor for residential services provided by 310.15(B)(7) point 1. Furthermore, meter socket lugs aren't rated for 90°C, so we have to use the 75°C column instead. This means your 4/0 is right out as two 4/0 wires in parallel can't even get us to 400A to begin with before we apply the derating.
So, we must upsize the wire. While 250kcmil seems like an attractive next stop, and two 250kcmil wires in parallel can manage 410A on 75°C terminations, we have to take 20% off the top to account for the fact there are 6 wires in the conduit. This puts us down at 328A for the parallel combination, just short of the 332A required by the 83% rule for a 400A nominal service entrance.
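Spelling that arithmetic out, with 205 A as the 75°C ampacity of 250 kcmil aluminum and the 80% adjustment factor for 4 to 6 current-carrying conductors:

$$2 \times 205\ \mathrm{A} \times 0.80 = 328\ \mathrm{A} \;<\; 0.83 \times 400\ \mathrm{A} = 332\ \mathrm{A}$$

so the paralleled 250 kcmil combination misses the requirement by 4 A.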
As a result, we find ourselves looking at 4 300kcmil Al XHHW-2 wires in the service entrance mast, 2 per leg, along with 2 250kcmil wires for the neutrals since the line neutral lug on your meter base can't take anything larger. These give us a healthy 368A after the derate is applied, which is more than sufficient to meet our 332A requirement. (Were this *not* a residential service, you'd have to provide for the full 400A nominal requirement, which'd require another size bump to paralleled 350kcmil conductors.)
You are correct that 600kcmil is the right size for using a single wire per leg for the service entrance, though -- this gives you 340A at 75°C, just enough to meet our 332A requirement. Furthermore, it is about $0.50/ft cheaper at the time of this writing than the parallel conductors, and also takes up slightly less conduit area, albeit not enough to actually matter. However, the larger conductors can be more difficult to pull through the conduit, which may be an issue, although the reduced surface area compared to the paralleled conductors means you have less chance of skinning the wire, all the same. |
203145 | What I want to achieve is to have a static Container in the left half and a scroll-able list of items on the right half of the screen:
```
|----------------------|
| | |
| Container | ListView |
| | |
|----------------------|
```
It is for a desktop app, so I like this layout. My implementation: (appStatWidgets is a list of containers with text)
```
Row(
children: [
Container(height: 400, width: 400, color: Colors.green),
Expanded(
child: ListView(
children: appStatWidgets,
),
)
]),
```
But if i do this, the ListView is invisible and I get:
`Vertical viewport was given unbounded height.`
But I thought, that
Expanded would fix that.
What works is putting SizedBox instead of Expanded, but that gives me static size and I would like it to be flexible. | ```
Row(
children: [
Container(height: 400, width: 400, color: Colors.green),
Expanded(
child: SizedBox(
  width: 100,
  height: 100,
  child: ListView(
children: appStatWidgets,
),
)
)
]),
```
Like this |
203445 | I'm trying to build a React Native app with the following file structure:
```
Kurts-MacBook-Pro-2:lucy-app kurtpeek$ tree -L 1
.
├── README.md
├── __tests__
├── android
├── app.json
├── assets
├── index.js
├── ios
├── node_modules
├── package.json
├── src
└── yarn.lock
```
The `package.json` is
```
{
"name": "app",
"version": "0.0.1",
"private": true,
"scripts": {
"android": "concurrently 'emulator @Nexus_5X_API_27_x86' 'yarn android-noavd'",
"android-noavd": "react-native run-android",
"android-px": "concurrently 'emulator @Pixel_2_API_27' 'yarn android-noavd'",
"android:release": "cross-env ENVFILE=.env.release yarn run android",
"android:staging": "cross-env ENVFILE=.env.staging yarn run android",
"build:android:dev": "cross-env ENVFILE=.env ./android/gradlew assembleRelease -p ./android/",
"build:android:release": "cross-env ENVFILE=.env.release ./android/gradlew assembleRelease -p ./android/",
"build:android:staging": "cross-env ENVFILE=.env.staging ./android/gradlew assembleRelease -p ./android/",
"clean": "concurrently 'rimraf ./android/build/' 'rimraf ./ios/build/' 'rimraf node_modules/' 'yarn cache clean'",
"codepush": "yarn codepush:ios; yarn codepush:android",
"codepush:android": "code-push release-react Lucy-Eng/LucyApp-Android android",
"codepush:ios": "code-push release-react Lucy-Eng/LucyApp-iOS ios --plistFile ios/LucyApp/Info.plist",
"codepush:ls:apps": "code-push app ls",
"codepush:ls:deploys": "echo iOS && code-push deployment ls Lucy-Eng/LucyApp-iOS; echo ANDROID && code-push deployment ls Lucy-Eng/LucyApp-Android",
"codepush:promote:android": "code-push promote Lucy-Eng/LucyApp-Android Staging Production",
"codepush:promote:ios": "code-push promote Lucy-Eng/LucyApp-iOS Staging Production",
"ios": "react-native run-ios --simulator='iPhone 7'",
"ios8": "react-native run-ios --simulator='iPhone 8'",
"ios:release": "cross-env ENVFILE=.env.release yarn run ios",
"ios:staging": "cross-env ENVFILE=.env.staging yarn run ios",
"iosx": "react-native run-ios --simulator='iPhone X'",
"lint": "eslint .",
"log:android": "react-native log-android",
"log:ios": "react-native log-ios",
"react-devtools": "react-devtools",
"start": "./node_modules/react-native/local-cli/cli.js start",
"test": "jest"
},
"dependencies": {
"analytics-react-native": "^1.2.0",
"immutability-helper": "^2.5.0",
"libphonenumber-js": "^1.1.10",
"lodash": "^4.17.4",
"moment": "^2.19.0",
"moment-timezone": "^0.5.14",
"prop-types": "^15.6.0",
"querystring": "^0.2.0",
"raven-for-redux": "^1.3.0",
"react": "^16.2.0",
"react-native": "^0.53.3",
"react-native-android-keyboard-adjust": "^1.1.1",
"react-native-code-push": "^5.3.2",
"react-native-config": "^0.11.5",
"react-native-country-picker-modal": "^0.5.1",
"react-native-datepicker": "^1.6.0",
"react-native-intercom": "^8.0.0",
"react-native-keyboard-aware-scroll-view": "^0.4.4",
"react-native-markdown-renderer": "^3.1.0",
"react-native-material-kit": "git://github.com/xinthink/react-native-material-kit#95b0980",
"react-native-material-menu": "^0.2.3",
"react-native-modal": "^4.1.1",
"react-native-onesignal": "^3.0.6",
"react-native-phone-input": "^0.2.1",
"react-native-router-flux": "4.0.0-beta.27",
"react-native-sentry": "^0.35.3",
"react-native-smart-splash-screen": "^2.3.5",
"react-native-snackbar": "^0.4.3",
"react-native-swiper": "^1.5.13",
"react-native-vector-icons": "^4.4.0",
"react-navigation": "^1.5.11",
"react-redux": "^5.0.6",
"redux": "^3.7.2",
"redux-devtools-extension": "^2.13.2",
"redux-form": "^7.3.0",
"redux-logger": "^3.0.6",
"redux-persist": "^4.10.1",
"redux-thunk": "^2.2.0",
"reselect": "^3.0.1",
"validator": "^10.2.0"
},
"devDependencies": {
"babel-core": "^6.26.3",
"babel-eslint": "^8.0.1",
"babel-jest": "21.2.0",
"babel-preset-react-native": "4.0.0",
"code-push-cli": "^2.1.6",
"concurrently": "^3.5.1",
"cross-env": "^5.1.4",
"enzyme": "^3.1.1",
"enzyme-adapter-react-16": "^1.0.4",
"eslint": "^4.8.0",
"eslint-config-airbnb": "^15.1.0",
"eslint-import-resolver-reactnative": "^1.0.2",
"eslint-plugin-import": "^2.7.0",
"eslint-plugin-jsx-a11y": "^5.1.1",
"eslint-plugin-react": "^7.4.0",
"eslint-plugin-react-native": "^3.1.0",
"jest": "21.2.1",
"react-devtools": "^3.1.0",
"react-dom": "^16.0.0",
"react-test-renderer": "16.0.0-beta.5",
"rimraf": "^2.6.2"
},
"jest": {
"preset": "react-native",
"setupTestFrameworkScriptFile": "<rootDir>src/test-config/enzyme-config.js"
},
"rnpm": {
"assets": [
"./assets/fonts/"
]
}
}
```
and there is an `ios/Podfile` like so:
```
target 'LucyApp' do
pod 'React', :path => '../node_modules/react-native', :subspecs => [
'Core',
'BatchedBridge',
'DevSupport', # Include this to enable In-App Devmenu if RN >= 0.43
'RCTText',
'RCTNetwork',
'RCTWebSocket', # needed for debugging
# 'RCTBridge',
# Add any other subspecs you want to use in your project
]
# Explicitly include Yoga if you are using RN >= 0.42.0
pod 'yoga', :path => '../node_modules/react-native/ReactCommon/yoga'
# Third party deps podspec link
pod 'Intercom'
pod 'CodePush', :path => '../node_modules/react-native-code-push'
pod 'SentryReactNative', :path => '../node_modules/react-native-sentry'
# Add new pods below this line
end
```
When I try to build this app in Xcode, I get an import error from `SentryReactNative`:
[![enter image description here](https://i.stack.imgur.com/IdtHH.png)](https://i.stack.imgur.com/IdtHH.png)
Similarly, when I try to run the simulator using `yarn ios`, I get the following error:
```
In file included from /Users/kurtpeek/Documents/Dev/lucy/lucy-app/node_modules/react-native-sentry/ios/RNSentry.m:1:
In file included from /Users/kurtpeek/Documents/Dev/lucy/lucy-app/node_modules/react-native-sentry/ios/RNSentry.h:4:
/Users/kurtpeek/Documents/Dev/lucy/lucy-app/node_modules/react-native/React/Base/RCTBridge.h:12:9: fatal error: 'React/RCTBridgeDelegate.h' file not found
#import <React/RCTBridgeDelegate.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~
** BUILD FAILED **
The following commands produced analyzer issues:
Analyze Base/RCTModuleMethod.mm normal x86_64
(1 command with analyzer issues)
The following build commands failed:
CompileC /Users/kurtpeek/Documents/Dev/lucy/lucy-app/ios/build/Build/Intermediates.noindex/Pods.build/Debug-iphonesimulator/SentryReactNative.build/Objects-normal/x86_64/RNSentryEventEmitter.o /Users/kurtpeek/Documents/Dev/lucy/lucy-app/node_modules/react-native-sentry/ios/RNSentryEventEmitter.m normal x86_64 objective-c com.apple.compilers.llvm.clang.1_0.compiler
(1 failure)
Installing build/Build/Products/Debug-iphonesimulator/LucyApp.app
An error was encountered processing the command (domain=NSPOSIXErrorDomain, code=2):
Failed to install the requested application
An application bundle was not found at the provided path.
Provide a valid path to the desired application bundle.
Print: Entry, ":CFBundleIdentifier", Does Not Exist
Command failed: /usr/libexec/PlistBuddy -c Print:CFBundleIdentifier build/Build/Products/Debug-iphonesimulator/LucyApp.app/Info.plist
Print: Entry, ":CFBundleIdentifier", Does Not Exist
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
```
Following <https://facebook.github.io/react-native/docs/integration-with-existing-apps.html#configuring-cocoapods-dependencies>, I thought at first that I might need to add `'RCTBridge'` to the `subspec`s of `'React'`, which is the reason for the commented-out line in the `Podfile`. However, if I uncomment that line and try to `pod install`, I get a "CocoaPods could not find compatible versions" error:
[![enter image description here](https://i.stack.imgur.com/mDSuc.png)](https://i.stack.imgur.com/mDSuc.png)
How can I update the imports to make the app build? | Try this -
* In Xcode, go to the project scheme (Product -> Scheme -> Manage Scheme -> double click your project).
* Click on the 'Build' option at the left pane.
* Uncheck 'Parallelize Build' under Build Options.
* Then in Targets section, click '+' button then search for 'React'.
Select it and click 'Add'.
* 'React' should now appear under Targets section. Click and drag it to
the top so that it will be the first item in the list (before your project).
* Clean the project and build.
For more options check the issue on RN repo -
<https://github.com/facebook/react-native/issues/12042> |
203688 | I want the `mimetypes` library to return the mimetype from a file name. For example:
```py
>>> mimetypes.types_map['.docx']
'application/vnd.openxmlformats-officedocument.wordprocessingml.document'
```
But, currently I got an exception:
```py
>>> mimetypes.types_map['.docx']
Traceback (most recent call last):
File "<console>", line 1, in <module>
KeyError: '.docx'
```
Also I tried `guess_type` func:
```py
>>> mimetypes.guess_type('.docx')
(None, None)
```
---
Full code:
```py
import mimetypes
mimetypes.init()
mimetypes.types_map['.docx'] # throws an Exception
mimetypes.guess_type('.docx') # returns (None, None)
```
---
Python: 3.8.10
Windows 10 build ver. 19044.2006
On my system I have Ubuntu WSL 2 installed. When executing my **Full code** there, no exceptions are raised and the mimetype is returned as expected (also Python 3.8.10). | You need to pass a file name before the dot; `guess_type()` expects a filename, not a bare extension.
```
>>> mimetypes.guess_type('.docx')
(None, None)
>>> mimetypes.guess_type('filename.docx')
('application/vnd.openxmlformats-officedocument.wordprocessingml.document', None)
```
Docs: <https://docs.python.org/3/library/mimetypes.html> |
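If the platform's type table doesn't know the extension at all (which can happen on Windows, where the table is seeded from the registry), the mapping can be registered explicitly with `mimetypes.add_type`; a minimal sketch:

```python
import mimetypes

# Register the mapping ourselves so the lookup no longer depends on the OS table.
mimetypes.add_type(
    'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
    '.docx',
)

print(mimetypes.guess_type('report.docx')[0])
# application/vnd.openxmlformats-officedocument.wordprocessingml.document
```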
203745 | I have a line like this, and I want to replace the spaces within the quotes with underscores.
```
"Hello How are you" Mr. John
```
with
```
"Hello_How_are_you" Mr. John
```
I would like to have a Perl one-liner for this.
Thanks
SR | When you use `cPickle.load()` you are trying to unpickle (deserialize) a previously pickled file into a Python object.
To pickle (serialize) an object to a file you should use `cPickle.dump()`. |
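A minimal sketch of the two directions, using Python 3's `pickle` (which absorbed `cPickle`); the file name is arbitrary:

```python
import os
import pickle  # Python 2's cPickle lives on as the C-accelerated backend of pickle
import tempfile

data = {"numbers": [1, 2, 3], "label": "example"}
path = os.path.join(tempfile.mkdtemp(), "obj.pkl")

# pickle.dump() serializes (pickles) the object to the file...
with open(path, "wb") as f:
    pickle.dump(data, f)

# ...and pickle.load() deserializes (unpickles) it back into an object.
with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored == data)  # True
```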
203909 | Lately I've been reading a little bit (mostly here and on Wikipedia) about [exotic matter](http://en.wikipedia.org/wiki/Exotic_matter), particularly matter with a [negative mass](http://en.wikipedia.org/wiki/Negative_mass). [This question](https://physics.stackexchange.com/q/44934/) appears to be the original question on the site of the likelihood (or lack there of) of the existence of negative mass. It appears that the current consensus is that based on the lack of observational support, as well as expected properties that seem to not play nicely with our current understanding of the universe, it's extremely unlikely that such matter exists.
My question is more nuanced than the one that I linked to above, and hopefully distinct. What is the motivation for negative mass particles as a concept? Are there any predictions of GR, QM, or any of the attempts to unify them that would seem to point to their existence? Because there is no observational support for negative mass, I'm curious if there is a theoretical framework that predicts/suggests its existence, or if the idea is simply the result of playing around with known physical laws, just to see what mathematical oddities pop up. | Mostly it's to throw physics at the wall and see what sticks.
You see, there are several important [energy conditions](http://en.wikipedia.org/wiki/Energy_condition) in GR. These energy conditions allow you to derive several important theorems, see [here](http://strangebeautiful.com/papers/curiel-primer-energy-conds.pdf) for a list.
Many of these theorems are about the topology and causal structure of spacetime. For example, assuming the averaged null energy condition allows you to rule out traversable wormholes -- which is highly desirable if you want to get rid of time machines! A very common and very cute argument goes like this: any positive energy density bends light like a *convergent* lens. But a wormhole does the opposite: it must, because of its topology, act as a *divergent* lens. So there has to be a negative energy density somewhere.
The problem, however, is that there are known physical configurations that violate some of these conditions. A famous example are squeezed vacuum states where energy fluctuations may be positive or negative. Another is the Casimir effect, where there is a constant negative energy density in the region between the plates. The problem of negative energies in squeezed vacuum states may be dealt with by observing that certain "quantum inequalities" are obeyed. They allow for short lived concentrations of energy density as long as the energy you "borrowed" is paid back later, with interest. For the Casimir effect it doesn't quite work, but the magnitude of the energy density you get is very small -- it comes with a coefficient of $\pi^2\over720$.
Both of these are quantum, but there are classical examples as well. A [scalar field non-minimally coupled to gravity](http://arxiv.org/abs/gr-qc/0003025) can produce such violations. Being classical, there is no quantum inequality to constrain it. This example is the most serious because it violates *all* the energy conditions mentioned above. The existence of the Casimir effect and squeezed vacuum states quickly convinced people that the pointwise conditions cannot hold and must be replaced by their "averaged" counterparts. This fixes the squeezed vacuum problem, and there are [indications](http://arxiv.org/abs/hep-th/0506136) it fixes the Casimir effect too. But it doesn't work for the non minimally coupled scalar field example, which, while speculative, doesn't seem a priori ruled out.
That's the main crux of the issue: negative masses don't seem to be *ruled out*, and by all accounts they should be. If they were, we'd be able to prove that all sorts of nasty things like traversable wormholes and warp drives are impossible. Conversely, if negative masses are in fact possible, we must understand how to ensure causality holds in a universe where it's possible to make time machines. |
I have an animating set of `UIImageView`s that represent a current value which continually keeps ticking upwards, ideally never stopping. Also in the view is a scrollView or two, and whenever I'm scrolling or zooming in the scrollView, the animation stops, and starts up again when the scrollView stops moving completely. I believe this is due to a threading issue, since redrawing elements all happens on the main thread. At first I tried UIView animation and then even Core Animation, to no effect... Is there a way to have my cake and eat it too?
Any and all help would be appreciated
code follows
```
- (void)TestJackpotAtRate:(double)rateOfchange
{
double roc = rateOfchange;
for (int i = 0; i < [_jackPotDigits count]; ++i)
{
roc = rateOfchange/(pow(10, i));
UIImageView *jackpotDigit = [_jackPotDigits objectAtIndex:i];
float foreveryNseconds = 1/roc;
NSDictionary *dict = @{@"interval" : [NSNumber numberWithFloat:foreveryNseconds],
@"jackPotDigit" : jackpotDigit
};
[NSTimer scheduledTimerWithTimeInterval:foreveryNseconds target:self selector:@selector(AscendDigit:) userInfo:dict repeats:YES];
}
}
-(void)AscendDigit:(NSTimer*)timer
{
NSDictionary *dict = [timer userInfo];
NSTimeInterval interval = [(NSNumber*)[dict objectForKey:@"interval"] floatValue];
UIImageView *jackpotDigit = [dict objectForKey:@"jackPotDigit"];
float duration = (interval < 1) ? interval : 1;
if (jackpotDigit.frame.origin.y < -230 )
{
NSLog(@"hit");
[timer invalidate];
CGRect frame = jackpotDigit.frame;
frame.origin.y = 0;
[jackpotDigit setFrame:frame];
[NSTimer scheduledTimerWithTimeInterval:interval target:self selector:@selector(AscendDigit:) userInfo:dict repeats:YES];
}
[UIView animateWithDuration:duration delay:0 options:UIViewAnimationOptionAllowUserInteraction animations:^
{
CGRect frame = [jackpotDigit frame];
double yDisplacement = 25;
frame.origin.y -= yDisplacement;
[jackpotDigit setFrame:frame];
}
completion:^(BOOL finished)
{
}];
}
``` | As [danypata](https://stackoverflow.com/users/2315974/danypata/ "danypata") pointed out in my comments via this thread [My custom UI elements are not being updated while UIScrollView is scrolled](https://stackoverflow.com/questions/4109898/my-custom-ui-elements-are-not-being-updated-while-uiscrollview-is-scrolled/4136988#4136988), it has something to do with the NSTimer thread rather than the animation thread, or possibly both. In any case, when scrolling, it seems all scroll events get exclusive use of the main run loop. The solution is to put the timer you are using to do your animation into the same run loop mode, which is [UITrackingRunLoopMode](http://developer.apple.com/library/ios/#documentation/UIKit/Reference/UIApplication_Class/Reference/Reference.html#//apple_ref/doc/constant_group/Run_Loop_Mode_for_Tracking), so it too gets use of the main loop when scrolling and...
```
NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:foreveryNseconds target:self selector:@selector(AscendDigit:) userInfo:dict repeats:YES];
[[NSRunLoop currentRunLoop] addTimer:timer forMode:NSRunLoopCommonModes];
```
Tada. |
204876 | In Python 3.3.1, this works:
```
i = 76
def A():
global i
i += 10
print(i) # 76
A()
print(i) # 86
```
This also works:
```
def enclosing_function():
i = 76
def A():
nonlocal i
i += 10
print(i) # 76
A()
print(i) # 86
enclosing_function()
```
But this doesn't work:
```
i = 76
def A():
nonlocal i # "SyntaxError: no binding for nonlocal 'i' found"
i += 10
print(i)
A()
print(i)
```
The documentation for the `nonlocal` keyword [states](http://docs.python.org/3/reference/simple_stmts.html#grammar-token-nonlocal_stmt) (emphasis added):
>
> The nonlocal statement causes the listed identifiers to refer to
> previously bound variables in the **nearest enclosing scope**.
>
>
>
In the third example, the "nearest enclosing scope" just happens to be the global scope. So why doesn't it work?
PLEASE READ THIS BIT
--------------------
I do notice that the documentation goes on to state (emphasis added):
>
> The [`nonlocal`] statement allows encapsulated code to
> rebind variables outside of the local scope **besides the global
> (module) scope**.
>
>
>
but, strictly speaking, this doesn't mean that what I'm doing in the third example shouldn't work. | The search order for names is LEGB, i.e Local, Enclosing, Global, Builtin. So the global scope is not an enclosing scope.
**EDIT**
From the [docs](http://docs.python.org/3/reference/simple_stmts.html#nonlocal):
>
> The nonlocal statement causes the listed identifiers to refer to
> previously bound variables in the nearest enclosing scope. This is
> important because the default behavior for binding is to search the
> local namespace first. The statement allows encapsulated code to
> rebind variables outside of the local scope besides the global
> (module) scope.
>
>
> |
205342 | I'm currently working on a MySQL query, but I've been having a bit of trouble with it and am looking for help. I have been struggling with it for a while now and have searched for a solution on the Internet and on different coding forums, but, alas, still nothing.
Basically, I have a MySQL table as follows:
```
Table name: 'data'
'id' SMALLINT(5) auto_increment
'value_1' SMALLINT(5)
'value_2' SMALLINT(5)
```
The column 'value_2' has duplicate values, and so I need to find the distinct values to process the results. So I created a MySQL query to do that.
```
SELECT DISTINCT value_1 FROM data WHERE value_2!="0"
```
The problem I'm having is that I want to order the results not by the id nor by any other field, but by the number of rows that had each distinct value in that column ('value_1').
Could anyone please assist me? | What you'll need is `GROUP BY` and `COUNT`.
```
SELECT `value_1`, COUNT(`value_1`) AS `count` FROM `data` WHERE `value_2` != '0' GROUP BY `value_1` ORDER BY `count`;
```
For more information, see: <http://dev.mysql.com/doc/refman/5.1/en/counting-rows.html>. |
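To make the behaviour concrete, here is the same `GROUP BY`/`COUNT` shape run against an in-memory SQLite table with made-up rows (SQLite is used only so the sketch is self-contained; the query text mirrors the MySQL one above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE data (id INTEGER PRIMARY KEY, value_1 INTEGER, value_2 INTEGER)"
)
rows = [(7, 1), (7, 1), (7, 0), (3, 1), (3, 1), (3, 1), (9, 1)]
conn.executemany("INSERT INTO data (value_1, value_2) VALUES (?, ?)", rows)

# Count how many rows carry each distinct value_1, ignoring value_2 = 0,
# and order the result by that count.
result = conn.execute(
    "SELECT value_1, COUNT(value_1) AS cnt "
    "FROM data WHERE value_2 != 0 "
    "GROUP BY value_1 ORDER BY cnt"
).fetchall()
print(result)  # [(9, 1), (7, 2), (3, 3)]
```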
205812 | I want to use AzCopy to copy a blob from account A to account B. But instead of using access key for the source, I only have access to the Shared Access Key. I've tried appending the SAS after the URL, but it throws a 404 error. This is the syntax I tried
```
AzCopy "https://source-blob-object-url?sv=blah-blah-blah-source-sas" "https://dest-blob-object-url" /destkey:base64-dest-access-key
```
The error I got was
```
Error parsing source location "https://source-blob-object-url?sv=blah-blah-blah-source-sas":
The remote server returned an error: (404) Not Found.
```
How can I get AzCopy to use the SAS URL? Or that it doesn't support SAS?
**Update:**
With the SourceSAS and FilePattern options, I'm still getting the 404 error. This is the command I use:
```
AzCopy [source-container-url] [destination-container-url] [file-pattern] /SourceSAS:"?sv=2013-08-15&sr=c&si=ReadOnlyPolicy&sig=[signature-removed]" /DestKey:[destination-access-key]
```
This will get me a 404 Not Found. If I change the signature to make it invalid, AzCopy will throw a 403 Forbidden instead. | You're correct. A copy operation using SAS on both source and destination blobs is only supported when the source and destination blobs are in the same storage account. This has been covered (though as a one-liner only) in this blog post from the storage team: <http://blogs.msdn.com/b/windowsazurestorage/archive/2013/11/27/windows-azure-storage-release-introducing-cors-json-minute-metrics-and-more.aspx>. From the post:
>
> Copy blob now allows Shared Access Signature (SAS) to be used for the
> destination blob if the copy is within the same storage account.
>
>
>
**UPDATE**
So I tried it, and one thing I realized is that it is meant for copying all blobs from one container to another. Based on my trial and error, a few things you need to keep in mind are:
* The source SAS is for the source container and not the blob. Also ensure that you have both **`Read`** and **`List`** permissions on the blob container in the SAS.
* If you want to copy a single file, please ensure that it is specified as the "filepattern" parameter.
Based on these, can you please try the following:
```
AzCopy "https://<source account>.blob.core.windows.net/<source container>?<source container sas with read/list permission>" "https://<destination account>.blob.core.windows.net/<destination container>" "<source blob name to copy>" /DestKey:"destination account key"
```
**UPDATE 2**
>
> Error parsing source location [container-location]: Object reference
> not set to an instance of an object.
>
>
>
I was able to recreate the error. I believe the reason for this error is the version of the storage client library (and thus the REST API) that is used to create the SAS token. If I try to list the contents of a blob container using a SAS token created with version 3.x of the library, this is the output I get:
```
<?xml version="1.0" encoding="utf-8"?>
<EnumerationResults ServiceEndpoint="https://cynapta.blob.core.windows.net/" ContainerName="vhds">
<Blobs>
<Blob>
<Name>test.vhd</Name>
<Properties>
<Last-Modified>Fri, 17 May 2013 15:23:39 GMT</Last-Modified>
<Etag>0x8D02129A4ACFFD7</Etag>
<Content-Length>10486272</Content-Length>
<Content-Type>application/octet-stream</Content-Type>
<Content-Encoding />
<Content-Language />
<Content-MD5>uflK5qFmBmek/zyqad7/WQ==</Content-MD5>
<Cache-Control />
<Content-Disposition />
<x-ms-blob-sequence-number>0</x-ms-blob-sequence-number>
<BlobType>PageBlob</BlobType>
<LeaseStatus>unlocked</LeaseStatus>
<LeaseState>available</LeaseState>
</Properties>
</Blob>
</Blobs>
<NextMarker />
</EnumerationResults>
```
However, if I try to list the contents of a blob container using a SAS token created with version 2.x of the library, this is the output I get:
```
<?xml version="1.0" encoding="utf-8"?>
<EnumerationResults ContainerName="https://cynapta.blob.core.windows.net/vhds">
<Blobs>
<Blob>
<Name>test.vhd</Name>
<Url>https://cynapta.blob.core.windows.net/vhds/test.vhd</Url>
<Properties>
<Last-Modified>Fri, 17 May 2013 15:23:39 GMT</Last-Modified>
<Etag>0x8D02129A4ACFFD7</Etag>
<Content-Length>10486272</Content-Length>
<Content-Type>application/octet-stream</Content-Type>
<Content-Encoding />
<Content-Language />
<Content-MD5>uflK5qFmBmek/zyqad7/WQ==</Content-MD5>
<Cache-Control />
<x-ms-blob-sequence-number>0</x-ms-blob-sequence-number>
<BlobType>PageBlob</BlobType>
<LeaseStatus>unlocked</LeaseStatus>
<LeaseState>available</LeaseState>
</Properties>
</Blob>
</Blobs>
<NextMarker />
</EnumerationResults>
```
Notice the difference in `<EnumerationResults>` XElement.
Now `AzCopy` uses version 2.1.0.4 of the storage client library. As part of the copying operation it first lists the blobs in the source container using the SAS token. As we saw above, the XML returned is different in the two versions, so storage client library 2.1.0.4 fails to parse the XML returned by the storage service. Because it fails to parse the XML, it is not able to create a `Blob` object and thus you get the `NullReferenceException`.
**Solution:**
One possible solution to this problem is to create a SAS token using version 2.1.0.4 of the library. I tried doing that and was able to successfully copy the blob. Do give it a try. That should fix the problem you're facing. |
206748 | Is @react-native-firebase/admob deprecated? Or why doesn't it work?
I am using @react-native-firebase/admob (<https://rnfb-docs.netlify.app/admob/usage>).
Everything works fine until I use "admob()". When I add admob() to the code, this error appears:
"TypeError: (0, \_admob.default) is not a function"
Does someone know why?
My code below (basic usage):
```
import React from 'react';
import { Text, View} from 'react-native';
import admob, { MaxAdContentRating } from '@react-native-firebase/admob';
import { InterstitialAd, RewardedAd, BannerAd, TestIds } from '@react-native-firebase/admob';
import { BannerAdSize} from '@react-native-firebase/admob';
class App extends React.Component{
componentDidMount(){
// this was taken from the official page: https://rnfb-docs.netlify.app/admob/usage#installation
admob()
.setRequestConfiguration({
// Update all future requests suitable for parental guidance
maxAdContentRating: MaxAdContentRating.PG,
// Indicates that you want your content treated as child-directed for purposes of COPPA.
tagForChildDirectedTreatment: true,
// Indicates that you want the ad request to be handled in a
// manner suitable for users under the age of consent.
tagForUnderAgeOfConsent: true,
})
.then(() => {
// Request config successfully set!
});
}
render(){
return(
<View style={{
alignItems:"center",
justifyContent:"center",
height:"100%"}}>
<Text style={{color:"black"}}>
Hola
</Text>
<BannerAd
unitId={TestIds.BANNER}
size={BannerAdSize.FULL_BANNER} />
</View>
)
}
}
export default App;
``` | Despite @CodeJoe's answer, I still got confused by the different documentation sites for React Native Firebase that were around, hence I spent lots of time and energy getting around it.
I opened an issue [here](https://github.com/invertase/react-native-firebase/issues/5580) where it is confirmed that Google removed the AdMob package as of `v11.5.0`.
>
> AdMob is no longer in Firebase APIs, so this module no longer wraps it, there is no documentation to correct.
> However it did exist as recently as v11.5.0 here and if you browse the repository at that point, you may consider the e2e tests for AdMob at the time a primer on "How To Use AdMob" <https://github.com/invertase/react-native-firebase/tree/0217bff5cbbf233d7915efb4dbbdfe00e82dff23/packages/admob/e2e>
>
>
>
---
Please don't be like me; check the correct documentation website:
Correct
-------
**<https://rnfirebase.io>**
Wrong Wrong Wrong, this refers to an older version
--------------------------------------------------
**<https://rnfb-docs.netlify.app>**
---
>
> The internet has a long memory, so there are stale copies of the docs out and about yes, but rnfirebase.io is always the official and current doc site
>
>
>
Admob was removed completely from the firebase ecosystem by Google so it does not exist here no. There are some community packages for it, our v11.5 version that has it, and we hope to peel our implementation out and update it for the community but it takes time and we are unfortunately still backlogged on official firebase apis, so it has not happened yet
---
So for an AdMob solution I would use another library, and use react-native-firebase for the solutions they currently provide
Alternative Library (August 2021)
---------------------------------
**DISCLAIMER**
* React Native Firebase is still a great library for the other packages it provides (Firebase, Analytics...), and the AdMob version 11.5 is still a solution. These are just suggestions for alternatives for AdMob.
* [react-native-admob-alpha](https://www.npmjs.com/package/react-native-admob-alpha) Simple and fresh library, recently updated.
* [react-native-admob-native-ads](https://www.npmjs.com/package/react-native-admob-native-ads) Another brand new library, they implement **Admob Native Advanced Ads**.
* [Expo AdMob (Available also for bare React-Native Projects)](https://docs.expo.dev/versions/latest/sdk/admob/) |
207041 | I have JSON like below; how can I make it correct with JavaScript or some PHP script? I think it should have quotation marks around the keys.
like
```
{ "1":[ {"id":"171113524", "title":"xxxx", "category":"4",
{ 1:[ {id:"171113524", title:"xxxx", category:"4",
start:"20160913062500", stop:"20160913093000"} , {id:"171115415",
title:"xxxx", category:"1",
start:"20160913093000", stop:"20160913100000"} , {id:"171115421",
title:"xxxx", category:"2", start:"20160913100000",
stop:"20160913104702"} , {id:"171115471", title:"xxxx
", category:"6", start:"20160913104702",
stop:"20160913110000"} , {id:"17111049", title:"xxxx",
category:"4", start:"20160913110000", stop:"20160913110500"} ,
{id:"17111335", title:"xxxx", category:"4",
start:"20160913110500", stop:"20160913111200"} , {id:"17111354",
title:"xxxx", category:"4",
start:"20160913111200", stop:"20160913111900"}
```
My AJAX/Javascript is like below
```
$.ajax({
url: "http://domain/corsproxy?url=http://json.json/json.json",
type: "GET",
timeout: 3000,
dataType: "json",
success: function(parse) {
var strHtml = '';
$.each(parse, function(key, value) {
strHtml += ' <span class="location">' + value.title + '</span><br />';
});
document.getElementById("results").innerHTML =
strHtml;
},
error: function(jqXHR, exception) {
console.log(jqXHR);
}
});
``` | I was working on a similar requirement; I tried out a number of combinations before resolving the exception.
I am not sure why Liberty does not provide the client implementation classes.
You could try including jersey-client through your Maven pom.xml:
```
<dependency>
<groupId>org.glassfish.jersey.core</groupId>
<artifactId>jersey-client</artifactId>
<version>2.23.1</version>
</dependency>
<dependency>
<groupId>org.eclipse.persistence</groupId>
<artifactId>org.eclipse.persistence.moxy</artifactId>
<version>2.6.1</version>
</dependency>
```
In Server.xml remove the features
```
<feature>jaxrs-2.0</feature>
<feature>jaxrsClient-2.0</feature>
```
and just add below feature.
```
<feature>beanValidation-1.1</feature>
``` |
207102 | For reasons that aren't worth going into here, Google have been indexing one of my sites with an unnecessary query string in the URL. I'd like to modify my htaccess file to 301 redirect all those requests to the URL without the unneeded part of the query string, but retain any other parts of the query string which may (or may not) be part of the address to make sure other functionality is retained.
For example, the following URLs
```
http://example.com/product.php?query=12345
http://example.com/index.php?section=ABC123&query=56789
http://example.com/search.php?query=987654&term=keyword
```
Would become
```
http://example.com/product.php
http://example.com/index.php?section=ABC123
http://example.com/search.php?term=keyword
```
Effectively stripping 'query=XXXXX' plus the ? or &, but making sure other queries are retained and in correct syntax after the removal (i.e., just removing ?query=98765 from the third example wouldn't work, as it would leave the link as search.php&term=keyword rather than search.php?term=keyword).
Was looking through some other Stack answers and found this:
```
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{QUERY_STRING} (.*)?&?query=[^&]+&?(.*)? [NC]
RewriteCond %{REQUEST_URI} /(.*)
RewriteRule .* %3?%1%2? [R=301,L]
```
But that doesn't seem to work, and my knowledge of htaccess & regex isn't good enough to figure out why.
Thanks in advance for any pointers. | Try this (i tested it with your examples and it is working fine):
```
RewriteEngine on
RewriteCond %{QUERY_STRING} ^(.*)&?query=[^&]+&?(.*)$ [NC]
RewriteRule ^/?(.*)$ /$1?%1%2 [R=301,L,NE]
```
Also, you should add a condition to apply this rule only on existing files (that's up to you).
If you want it:
```
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} -f
RewriteCond %{QUERY_STRING} ^(.*)&?query=[^&]+&?(.*)$ [NC]
RewriteRule ^/?(.*)$ /$1?%1%2 [R=301,L,NE]
``` |
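For reference, the intended transformation itself (independent of mod_rewrite) can be sketched in Python, using the example URLs from the question:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def strip_query_param(url, param="query"):
    """Drop one parameter from the query string, keeping the others intact."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k != param]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_query_param("http://example.com/product.php?query=12345"))
# http://example.com/product.php
print(strip_query_param("http://example.com/index.php?section=ABC123&query=56789"))
# http://example.com/index.php?section=ABC123
print(strip_query_param("http://example.com/search.php?query=987654&term=keyword"))
# http://example.com/search.php?term=keyword
```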
207518 | Is it possible to use IF ELSE inside WHERE clause something like this...
```
WHERE transactionId like @transactionId
AND transactionType like @transactionType
AND customerId like @customerId
IF(@transactionType='1')
AND qTable.statusId like 'SIGNED'
ELSE
AND rTable.statusId like 'ACTIVE'
``` | ```
WHERE transactionId like @transactionId
AND transactionType like @transactionType
AND customerId like @customerId
AND
((@transactionType='1' AND qTable.statusId like 'SIGNED')
OR
(@transactionType <> '1' AND rTable.statusId like 'ACTIVE') )
``` |
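A quick way to convince yourself the OR-of-ANDs predicate behaves like the IF/ELSE: a toy sqlite sketch with a hypothetical flattened table (`qStatus`/`rStatus` standing in for `qTable.statusId`/`rTable.statusId`):

```python
import sqlite3

# Hypothetical flattened table so the predicate can be tested in one place.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, transactionType TEXT, qStatus TEXT, rStatus TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    (1, '1', 'SIGNED',  'ACTIVE'),  # type 1 and SIGNED      -> matches
    (2, '1', 'PENDING', 'ACTIVE'),  # type 1 but not SIGNED  -> no match
    (3, '2', 'SIGNED',  'ACTIVE'),  # type <> 1 and ACTIVE   -> matches
    (4, '2', 'SIGNED',  'CLOSED'),  # type <> 1, not ACTIVE  -> no match
])
rows = con.execute("""
    SELECT id FROM t
    WHERE (transactionType = '1' AND qStatus LIKE 'SIGNED')
       OR (transactionType <> '1' AND rStatus LIKE 'ACTIVE')
    ORDER BY id
""").fetchall()
print([r[0] for r in rows])  # [1, 3]
```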
207737 | I've got this little ServiceStack message:
```
[Route("/server/time", "GET")]
public class ServerTime : IReturn<ServerTime>
{
public DateTimeOffset DateTime { get; set; }
public TimeZoneInfo TimeZone { get; set; }
}
```
And its respective service handler is as follow:
```
public object Get(ServerTime request)
{
return new ServerTime
{
DateTime = DateTimeOffset.Now,
TimeZone = TimeZoneInfo.Local,
};
}
```
The client test code looks like this:
```
var client = new JsonServiceClient("http://localhost:54146/");
var response = client.Get<ServerTime>(new ServerTime());
```
But response.TimeZoneInfo is ALWAYS empty...
Also the metadata for the service (JSON) does not show it:
(Sample request in the JSON metadata page)
```
POST /json/reply/ServerTime HTTP/1.1
Host: localhost
Content-Type: application/json
Content-Length: length
{"DateTime":"\/Date(-62135596800000)\/"}
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: length
{"DateTime":"\/Date(-62135596800000)\/"}
```
The XML and CSV formats, on the other hand, seems to handle it correctly:
```
POST /xml/reply/ServerTime HTTP/1.1
Host: localhost
Content-Type: application/xml
Content-Length: length
<ServerTime xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.datacontract.org/2004/07/PtsSampleService.ServiceModel">
<DateTime xmlns:d2p1="http://schemas.datacontract.org/2004/07/System">
<d2p1:DateTime>0001-01-01T00:00:00Z</d2p1:DateTime>
<d2p1:OffsetMinutes>0</d2p1:OffsetMinutes>
</DateTime>
<TimeZone xmlns:d2p1="http://schemas.datacontract.org/2004/07/System" i:nil="true" />
</ServerTime>
HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: length
<ServerTime xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.datacontract.org/2004/07/PtsSampleService.ServiceModel">
<DateTime xmlns:d2p1="http://schemas.datacontract.org/2004/07/System">
<d2p1:DateTime>0001-01-01T00:00:00Z</d2p1:DateTime>
<d2p1:OffsetMinutes>0</d2p1:OffsetMinutes>
</DateTime>
<TimeZone xmlns:d2p1="http://schemas.datacontract.org/2004/07/System" i:nil="true" />
</ServerTime>
```
**Why am I asking this instead of using the "XML client"?**
This is all about consistency. If the API is not consistent over all possible clients, then it cannot be relied upon! I will either have to drop the JSON formatter (which I can't do because I want to consume it in JavaScript) or I'll have to split the many TimeZoneInfo fields individually... Or find a way to make the JSON serializer to handle it!
And, in fact, XML doesn't work either. The `XmlServiceClient` gives me this error:
>
> {"Error in line 1 position 290. Element ':AdjustmentRules' contains
> data from a type that maps to the name
> '<http://schemas.datacontract.org/2004/07/System:ArrayOfTimeZoneInfo.AdjustmentRule>'.
> The deserializer has no knowledge of any type that maps to this name.
> Consider using a DataContractResolver or add the type corresponding to
> 'ArrayOfTimeZoneInfo.AdjustmentRule' to the list of known types - for
> example, by using the KnownTypeAttribute attribute or by adding it to
> the list of known types passed to DataContractSerializer."}
>
>
>
Does anyone know why it's not handled by default? | There does not seem to be an elegant way to pass `TimeZoneInfo` to the client, so I just created a DTO for it unsurprisingly named `TimeZoneInformation`
```
[Serializable]
public class TimeZoneInformation
{
public string Id { get; set; }
public TimeSpan BaseUtcOffset { get; set; }
public string DisplayName { get; set; }
public string DaylightName { get; set; }
public string StandardName { get; set; }
public bool SupportsDST { get; set; }
public static implicit operator TimeZoneInformation(TimeZoneInfo source)
{
return new TimeZoneInformation
{
Id = source.Id,
DisplayName = source.DisplayName,
BaseUtcOffset = source.BaseUtcOffset,
DaylightName = source.DaylightName,
StandardName = source.StandardName,
SupportsDST = source.SupportsDaylightSavingTime,
};
}
public static implicit operator TimeZoneInfo(TimeZoneInformation source)
{
return TimeZoneInfo.FindSystemTimeZoneById(source.Id);
}
}
```
And my service method now looks like this:
```
public object Get(ServerTime request)
{
return new ServerTime
{
DateTime = DateTimeOffset.Now,
TimeZoneInfo = TimeZoneInfo.Local,
};
}
```
And the metadata for the call is:
```
POST /json/reply/ServerTime HTTP/1.1
Host: localhost
Content-Type: application/json
Content-Length: length
{"DateTime":"\/Date(-62135596800000)\/","TimeZoneInfo":{"Id":"String","BaseUtcOffset":"PT0S","DisplayName":"String","DaylightName":"String","StandardName":"String","SupportsDST":false}}
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: length
{"DateTime":"\/Date(-62135596800000)\/","TimeZoneInfo":{"Id":"String","BaseUtcOffset":"PT0S","DisplayName":"String","DaylightName":"String","StandardName":"String","SupportsDST":false}}
```
An actual call to the method returns this JSON:
```
{"DateTime":"\/Date(1371925432883-0300)\/","TimeZoneInfo":{"Id":"E. South America Standard Time","BaseUtcOffset":"-PT3H","DisplayName":"(UTC-03:00) Brasília","DaylightName":"Horário brasileiro de verão","StandardName":"Hora oficial do Brasil","SupportsDST":true}}
```
And XML:
```
<ServerTime><DateTime><d2p1:DateTime>2013-06-22T21:24:22.2741641Z</d2p1:DateTime><d2p1:OffsetMinutes>-180</d2p1:OffsetMinutes></DateTime><TimeZoneInfo><_x003C_BaseUtcOffset_x003E_k__BackingField>-PT3H</_x003C_BaseUtcOffset_x003E_k__BackingField><_x003C_DaylightName_x003E_k__BackingField>Horário brasileiro de verão</_x003C_DaylightName_x003E_k__BackingField><_x003C_DisplayName_x003E_k__BackingField>(UTC-03:00) Brasília</_x003C_DisplayName_x003E_k__BackingField><_x003C_Id_x003E_k__BackingField>E. South America Standard Time</_x003C_Id_x003E_k__BackingField><_x003C_StandardName_x003E_k__BackingField>Hora oficial do Brasil</_x003C_StandardName_x003E_k__BackingField><_x003C_SupportsDST_x003E_k__BackingField>true</_x003C_SupportsDST_x003E_k__BackingField></TimeZoneInfo></ServerTime>
```
And all Typed Clients deserialize it perfectly.
I am not 100% happy, but as happy as possible with this...
Some may ask me why I don't send only the ID. The Timezone ID is specific to windows. Since I must talk to disparate clients I can't send an ID an hope they could reconstruct the server's timezone data accordingly. I'm already sending the OFFSET on the DateTimeOffset returned to the clients, but for some scenarios it is not enough, as the offset alone is not sufficient information to determine the timezone or if daylight savings time was into effect. So sending everything that a client could need to interpret the server's date and time correctly was the best solution for this specific application. |
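As a sanity check that the payload above is plain JSON any client can consume (fields trimmed here for brevity), a small Python sketch; JavaScript's `JSON.parse` behaves the same way. The WCF-style `\/Date(...)\/` value wraps UTC epoch milliseconds, with the offset suffix being informational:

```python
import json
import re
from datetime import datetime, timezone

# The JSON payload from above, trimmed to a few fields for brevity.
payload = ('{"DateTime":"\\/Date(1371925432883-0300)\\/",'
           '"TimeZoneInfo":{"Id":"E. South America Standard Time",'
           '"BaseUtcOffset":"-PT3H","SupportsDST":true}}')
res = json.loads(payload)

# Pull the UTC epoch milliseconds out of the WCF-style date string.
ms = int(re.search(r"/Date\((-?\d+)", res["DateTime"]).group(1))
when = datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

print(res["TimeZoneInfo"]["Id"])        # E. South America Standard Time
print(when.strftime("%Y-%m-%d %H:%M"))  # 2013-06-22 18:23
```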
207999 | I just set up the DataTorrent RTS (Apache Apex) platform and ran the pi demo.
I want to consume "avro" messages from kafka and then aggregate and store the data into hdfs.
Can I get example code for this with Kafka? | Here is code for a complete working application that uses the new Kafka input operator and the file output operator from Apex Malhar. It converts the byte arrays to Strings and writes them out to HDFS using rolling files with a bounded size (1K in this example); until the file size reaches the bound, it will have a temporary name with the `.tmp` extension. You can interpose additional operators between these two as suggested by DevT in <https://stackoverflow.com/a/36666388>:
```
package com.example.myapexapp;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import org.apache.apex.malhar.kafka.AbstractKafkaInputOperator;
import org.apache.apex.malhar.kafka.KafkaSinglePortInputOperator;
import org.apache.hadoop.conf.Configuration;
import com.datatorrent.api.annotation.ApplicationAnnotation;
import com.datatorrent.api.StreamingApplication;
import com.datatorrent.api.DAG;
import com.datatorrent.lib.io.ConsoleOutputOperator;
import com.datatorrent.lib.io.fs.AbstractFileInputOperator.FileLineInputOperator;
import com.datatorrent.lib.io.fs.AbstractFileOutputOperator;
@ApplicationAnnotation(name="MyFirstApplication")
public class KafkaApp implements StreamingApplication
{
@Override
public void populateDAG(DAG dag, Configuration conf)
{
KafkaSinglePortInputOperator in = dag.addOperator("in", new KafkaSinglePortInputOperator());
in.setInitialPartitionCount(1);
in.setTopics("test");
in.setInitialOffset(AbstractKafkaInputOperator.InitialOffset.EARLIEST.name());
//in.setClusters("localhost:2181");
in.setClusters("localhost:9092"); // NOTE: need broker address, not zookeeper
LineOutputOperator out = dag.addOperator("out", new LineOutputOperator());
out.setFilePath("/tmp/FromKafka");
out.setFileName("test");
out.setMaxLength(1024); // max size of rolling output file
// create stream connecting input adapter to output adapter
dag.addStream("data", in.outputPort, out.input);
}
}
/**
* Converts each tuple to a string and writes it as a new line to the output file
*/
class LineOutputOperator extends AbstractFileOutputOperator<byte[]>
{
private static final String NL = System.lineSeparator();
private static final Charset CS = StandardCharsets.UTF_8;
private String fileName;
@Override
public byte[] getBytesForTuple(byte[] t) { return (new String(t, CS) + NL).getBytes(CS); }
@Override
protected String getFileName(byte[] tuple) { return fileName; }
public String getFileName() { return fileName; }
public void setFileName(final String v) { fileName = v; }
}
``` |
208133 | As is known, some blocking calls like `read` and `write` can return -1 and set `errno` to `EINTR`, and we need to handle this.
My question is: does this also apply to non-blocking calls, e.g., a socket set to `O_NONBLOCK`?
Since some articles and sources I have read said non-blocking calls don't need bother with this, but I have found no authoritative reference about it. If so, does it apply cross different implementations? | I cannot give you a definitive answer to this question, and the answer may further vary from system to system, but I would expect a non-blocking socket to never fail with `EINTR`. If you take a look at the man pages of various systems for the following socket functions `bind()`, `connect()`, `send()`, and `receive()`, or look those up in the POSIX standard, you'll notice something interesting: All these functions except one may return `-1` and set `errno` to `EINTR`. The one function that is not documented to ever fail with `EINTR` is `bind()`. And `bind()` is also the only function of that list that will never block by default. So it seems that only blocking functions may fail because of `EINTR`, including `read()` and `write()`, yet if these functions never block, they also will never fail with `EINTR` and if you use `O_NONBLOCK`, those functions will never block.
It would also make no sense from a logical perspective. E.g. consider you are using blocking I/O and you call `read()` and this call has to block, but while it was blocking, a signal is sent to your process and thus the read request is unblocked. How should the system handle this situation? Claiming that `read()` did succeed? That would be a lie, it did not succeed because no data was read. Claiming it did succeed, but zero bytes of data were read? This wouldn't be correct either, since a "zero read result" is used to indicate end-of-stream (or end-of-file), so your process would assume that no data was read, because the end of a file has been reached (or a socket/pipe has been closed at the other end), which simply isn't the case. The end-of-file (or end-of-stream) has not been reached; if you call `read()` again, it will be able to return more data. So that would also be a lie. Your expectation is that this read call either succeeds and reads data or fails with an error. Thus the read call has to fail and return `-1` in that case, but what `errno` value shall the system set? All the other error values indicate a critical error with the file descriptor, yet there was no critical error and indicating such an error would also be a lie. That's why `errno` is set to `EINTR`, which means: *"There was nothing wrong with the stream. Your read call just failed, because it was interrupted by a signal. If it wasn't interrupted, it may still have succeeded, so if you still care for the data, please try again."*
If you now switch to non-blocking I/O, the situation of above never arises. The read call will never block and if it cannot read data immediately, it will fail with an error `EAGAIN` (POSIX) or `EWOULDBLOCK` (unofficial, on Linux both are the same error, just alternative names for it), which means: *"There is no data available right now and thus your read call would have to block and wait for data arriving, but blocking is not allowed, so it failed instead."* So there is an error for every situation that may arise.
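To see this concretely, a small Python sketch (assuming a Unix-like system): `recv()` on a non-blocking socket with no data pending fails immediately with `EAGAIN`/`EWOULDBLOCK`, and succeeds as soon as data is available:

```python
import errno
import select
import socket

# On a non-blocking socket with nothing to read, recv() fails right away with
# EAGAIN/EWOULDBLOCK instead of blocking (a blocked call is what could have
# been interrupted by a signal and failed with EINTR).
a, b = socket.socketpair()
a.setblocking(False)

try:
    a.recv(1024)
    err = None
except OSError as e:
    err = e.errno

print(err in (errno.EAGAIN, errno.EWOULDBLOCK))  # True

b.send(b"ping")                # make data available...
select.select([a], [], [], 5)  # ...and wait until the socket is readable
data = a.recv(1024)
print(data)                    # b'ping'
a.close()
b.close()
```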
Of course, even with non-blocking I/O, the read call may have been temporarily interrupted by a signal but why would the system have to indicate that? Every function call, whether this is a system function or one written by the user, may be temporarily interrupted by a signal, really every single one, no exception. If the system would have to inform the user whenever that happens, all system functions could possibly fail because of `EINTR`. However, even if there was a signal interruption, the functions usually perform their task all the way to the end, that's why this interruption is irrelevant. The error `EINTR` is used to tell the caller that the action he has requested was not performed because of a signal interruption, but in case of non-blocking I/O, there is no reason why the function should not perform the read or the write request, unless it cannot be performed right now, but then this can be indicated by an appropriate error.
To confirm my theory, I took a look at the kernel of MacOS (10.8), which is still largely based on the FreeBSD kernel and it seems to confirm the suspicion. If a read call is currently not possible, as no data are available, the kernel checks for the `O_NONBLOCK` flag in the file descriptor flags. If this flag is set, it fails immediately with `EAGAIN`. If it is not set, it puts the current thread to sleep by calling a function named `msleep()`. The function is [documented here](http://www.gsp.com/cgi-bin/man.cgi?section=9&topic=msleep) (as I said, OS X uses plenty of FreeBSD code in its kernel). This function causes the current thread to sleep until it is explicitly woken up (which is the case if data becomes ready for reading) or a timeout has been hit (e.g. you can set a receive timeout on sockets). Yet the thread is also woken up, if a signal is delivered, in which case `msleep()` itself returns `EINTR` and the next higher layer just passes this error through. So it is `msleep()` that produces the `EINTR` error, but if the `O_NONBLOCK` flag is set, `msleep()` is never called in the first place, hence this error cannot be returned.
Of course that was MacOS/FreeBSD, other systems may be different, but since most systems try to keep at least a certain level of consistency among these APIs, if a system breaks the assumption that non-blocking I/O calls can never fail because of `EINTR`, this is probably not by intention and may even get fixed if you report it.
208166 | Is there a real way to get Netbeans to load and work faster?
It is too slow and gets worse when you have been coding for some time. It eats all my RAM.
---
I am on a Windows machine, specifically Windows Server 2008 Datacenter Edition x64,
4Gb of RAM, 3Ghz Core 2 Duo processor, etc. I am using the x64 JDK. I use the NOD32 Antivirus since for me it is the best in machine performance.
In Task Manager netbeans.exe only shows no more than 20 Mb, java.exe more than 600Mb.
My project is a J2EE web application, more than 500 classes, only the project libraries not included (externals). And when I say slow, I mean Netbeans freezes for 3, 4, 5 minutes or more.
Is my project just too large for Netbeans, if it has to read all files to get the state of files like error warnings, svn status and more? Can I disable all this? Is it possible to set it to scan only when I open a file?
My CPU use is normally at 30 percent with all my tools opened, I mean Netbeans, MS SQL Manager, Notepad, XMLSpy, Task Manager, Delphi, VirtualBox. Netbeans eats more RAM than my virtualized systems.
In Linux it is as slow as in Windows in the same machine (Ubuntu 8.04 x64).
It is true that the Netbeans team improved startup speed but when it opens it begins to cache ALL.
I have used some JVM parameters to set high memory usage and others:
`"C:\Program Files\NetBeans Dev\bin\netbeans.exe" -J-Xms32m -J-Xmx512m -J-Xverify:none -J-XX:+CMSClassUnloadingEnabled`
But it is still slow. | Very simple solution to the problem when your **NetBeans** or **Eclipse** IDE seems to be using too much memory:
1. **Disable the plugins you are not using.**
2. **close the projects you are not working on.**
I was facing a similar problem with Netbeans 7.0 on my Linux Mint as well as my Ubuntu box.
Netbeans was using > 700 MiB space and 50-80% CPU.
Then I decided to do some clean up.
I had 30 plugins installed, and I was not using most of them.
So, I disabled the plugins I was not using, a whopping 19 of them. Now memory usage is down to 400+ MiB and CPU usage is down to 10%, at most 50%.
Now my life is much easier. |
208441 | As my client software uses lat/lon coordinates when communicating with my (spherical mercator) PostGIS database, I decided to ST\_Transform every geometry to WGS-84.
However, I noticed that ST\_Distance for WGS-84 returns units as degrees (I need meters). So I decided to use ST\_DistanceSpheroid(geom1, geom2, 'SPHEROID["WGS 84",6378137,298.257223563]'), but this method seems to be extremely slow.
Therefore I switched back to using spherical mercator again, transforming my input and output to and from WGS-84, as that performs a lot better.
Am i using the correct method or is this a known issue? | WGS-84 is unprojected data. It uses a geodetic coordinate system, which means points are located on a spherical (ellipsoidal to be exact) modelisation of the earth.
As a consequence, Euclidean geometry is not valid for this kind of data.
The PostGIS «geometry» data type and associated functions work with planar coordinates and Euclidean geometry computations. If you want to use them you have to project your data to a specific coordinate system, which by definition is only locally accurate.
Non-planar geometry computation will always be way slower than planar geometry computation. See the difference between distance computation on a plane and the Haversine formula on a sphere as an example:
<http://en.wikipedia.org/wiki/Euclidean_distance>
<http://en.wikipedia.org/wiki/Haversine_formula>
That said, recent version of PostGIS have a «geography» data type and some associated functions for those who must use unprojected geodata.
How do you choose between the two ?
If you just store and retrieve your data from the database, using latlon (geography or wgs84 srid) will be fine.
If you have a dataset with worldwide distribution, you may be interested in using the geography type to be able to deal with all your data in the same manner, without having to deal with multiple projections. Computations will be slower though, and the set of functions available in PostGIS is narrower.
If you do processing on your data, need it to be fast and want to use PostGIS's extensive set of functions, you have to project your data, and do dynamic coordinate transformation on input/output with st\_transform, just as you do.
So yes, you're right, but the right decision still depends on the specific use case. |
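To make the cost difference tangible, here is the Haversine formula from the link above as a short Python sketch; note this is the spherical approximation, not the (even more expensive) ellipsoidal computation that `ST_DistanceSpheroid` performs:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lon1, lat1, lon2, lat2, r=6371008.8):
    """Great-circle distance in meters on a sphere of mean Earth radius r."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    dlon, dlat = lon2 - lon1, lat2 - lat1
    h = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(h))

# One degree of latitude is roughly 111.2 km:
print(round(haversine_m(0.0, 0.0, 0.0, 1.0) / 1000, 1))  # 111.2
```

Compare this trigonometry-per-pair against a planar distance (one subtraction, two squares, one square root) and the performance gap you observed follows directly.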
208442 | I am relatively new to Backbone.js and having difficulty rendering a subView. I have subViews in other parts of the app working properly, but I cant even render simple text in this one.
View:
```
Feeduni.Views.UnifeedShow = Backbone.View.extend({
template: JST['unifeeds/show'],
tagName: "section",
className: "unifeed-show",
render: function() {
var content = this.template({ unifeed: this.model });
this.$el.html(content);
var subView;
var that = this;
this.model.stories().each(function(stories) {
subView = new Feeduni.Views.StoriesShow({ model: stories });
that.subViews.push(subView);
that.$el.find(".show-content").append(subView.render().$el);
});
return this;
},
});
```
Subview:
```
Feeduni.Views.StoriesShow = Backbone.View.extend({
template: JST['stories/show'],
tagName: "div",
className: 'stories-show',
render: function() {
this.$el.text("Nothing shows up here");
return this;
},
});
```
Model:
```
Feeduni.Models.Unifeed = Backbone.Model.extend({
urlRoot: "/api/uninews",
stories: function() {
this._stories = this._stories || new Feeduni.Subsets.StoriesSub([], {
parentCollection: Feeduni.all_unifeeds
});
return this._stories;
},
});
```
The text "Nothing shows up here" should be displaying in the "show content" element, but all I get is this:
```
<section class="unifeed-show">
<article class="show-content">
</article>
</section>
``` | An addition to another answer:
just add single quotes around the variable values used in the query
```
DECLARE @IO_dy AS VARCHAR(100) = '>9C604-M'
DECLARE @style_dy AS VARCHAR (100) = 'S1415MBS06'
DECLARE @query AS VARCHAR(8000)
DECLARE @con AS VARCHAR(8000)
SET @con = STUFF((SELECT distinct ',' + QUOTENAME(Size_id)
FROM iPLEXSTY_SIQ
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
SET @query = 'SELECT *
FROM (
SELECT DISTINCT a.Po_no,a.Article_id,a.Season_id,a.Customer_id,a.Destn_id,b.planned_dt, (c.Description + c.Resource_id) AS comb_size,a.Qty,a.Size_id
from iPLEXSTY_SIQ a
INNER JOIN iPLEX_BULK_PO_DET b on b.upload_batch_id = a.Batch_id
INNER JOIN iPLEXCOLORS c on c.Seq_no = a.Seq_no
WHERE IO_no = '''+@IO_dy+''' AND Style_id = '''+@style_dy+''' -- ERROR HERE
GROUP BY a.Po_no,a.Article_id,a.Season_id,a.Customer_id,a.Destn_id,b.planned_dt,(c.Description + c.Resource_id),a.Qty,A.Size_id
) as s
PIVOT
(
SUM(Qty)
FOR Size_id IN (' +@con+ ')
)AS pvt'
EXEC(@query)
```
and also, as another answer explains, set values for the varchar variables using the `default` keyword |
208818 | Let's say I have a Mongoose model like this:
```
var AdSchema = new mongoose.Schema({
vehicleDescription: String,
adDescription: String,
...
});
```
I want to search both fields for the same key word, for example
find all WHERE vehicleDescription contains the word TEST, OR adDescription contains the word TEST.
It needs to be case-insensitive also.
Currently I can query one field like this:
```
var query = {};
query['adDescription'] = ({ $regex: search.keyWord, $options: 'i' });
```
But I need to search both fields at the same time.
Thanks. | You can search on more than one field at the same time by specifying the fields and values in a `{field 1: value 1}, {field 2: value 2}, ..., {field n: value n}` format in the `find()` function. Additionally, you can also specify various conditions that the results must satisfy, such as AND, OR, etc.
In your case, a sample code in Node.js would look like this
```
var result = db.collection('AdSchema').find({
$or: [ {vehicleDescription : { $regex: search.keyWord, $options: 'i' }}, { adDescription: { $regex: search.keyWord, $options: 'i' } } ]
});
```
For more detail, refer the [documentation page](https://docs.mongodb.com/manual/tutorial/query-documents/) for MongoDB. |
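For illustration only, the matching semantics of that `$or` + `$regex` query can be sketched in plain Python, with no MongoDB involved (field names taken from the question):

```python
import re

def matches(doc, keyword):
    # Mimics: {$or: [{vehicleDescription: {$regex: kw, $options: 'i'}},
    #                {adDescription:      {$regex: kw, $options: 'i'}}]}
    pattern = re.compile(keyword, re.IGNORECASE)
    return any(pattern.search(doc.get(field, ""))
               for field in ("vehicleDescription", "adDescription"))

ads = [
    {"vehicleDescription": "TEST car", "adDescription": "cheap"},
    {"vehicleDescription": "van", "adDescription": "a test ad"},
    {"vehicleDescription": "truck", "adDescription": "no match"},
]
hits = [ad for ad in ads if matches(ad, "test")]
```

A document matches as soon as either field matches, which is exactly what `$or` expresses on the server side.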
208845 | ```
/**
* A JavaScript value representing a signed integer.
*/
class V8_EXPORT Integer : public Number {
public:
static Local<Integer> New(Isolate* isolate, int32_t value);
static Local<Integer> NewFromUnsigned(Isolate* isolate, uint32_t value);
int64_t Value() const;
V8_INLINE static Integer* Cast(v8::Value* obj);
private:
Integer();
static void CheckCast(v8::Value* obj);
};
```
The above code is from Google's V8 engine. Example initialization of this is:
```
Handle<Value> x = Integer::New(42);
```
From what I can see in the source code, they marked the constructor as private and want you to use the New function to create instances of this class. Isn't this against the standard C++ design patterns? Why didn't they just overload the constructor instead of making static functions for creation? This is usually the kind of thing you see when people try to port a library from one language to another (the only one I can think of off the top of my head right now is Xamarin's iOS kit).
I've tried to Google around for a name for this type of convention but was unable to find anything on it really. | There are two types of handles. One of them is a "Local" handle. As shown in the code, local handles have the class `Handle<SomeType>`.
<https://developers.google.com/v8/embed>
>
> Note: The handle stack is not part of the C++ call stack, but the
> handle scopes are embedded in the C++ stack. Handle scopes can only be
> stack-allocated, not allocated with new.
>
>
>
<https://developers.google.com/v8/get_started>
>
> * A handle is a pointer to an object. All V8 objects are accessed using handles, they are necessary because of the way the V8 garbage
> collector works.
>
>
> |
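As an aside on the question itself: the pattern being asked about (a private constructor plus static creation functions) is usually called a *named constructor* or *static factory*. A rough Python sketch of the idea, purely illustrative, since Python can only discourage direct construction rather than forbid it:

```python
class Integer:
    _token = object()  # crude guard against direct construction

    def __init__(self, value, token=None):
        if token is not Integer._token:
            raise TypeError("use Integer.new() or Integer.new_from_unsigned()")
        self._value = value

    @classmethod
    def new(cls, value):
        # The factory controls how instances are made, which is the point
        # of V8's Integer::New taking an Isolate* as creation context.
        return cls(int(value), token=cls._token)

    @classmethod
    def new_from_unsigned(cls, value):
        return cls(int(value) & 0xFFFFFFFF, token=cls._token)

    def value(self):
        return self._value

x = Integer.new(42)
```

The benefit is that creation is forced through a function that can take extra context and decide how and where the object lives, which a plain overloaded constructor could not express as clearly.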
209885 | I am trying to change one bit of an unsigned char variable to 1. I keep getting a segmentation fault though. Here is the code fragment that fails:
```
unsigned char bitvector[16];
int addbit(unsigned char *bitv,int bit){
int a = bit/CHAR_BIT; //part of char array we want. CHAR_BIT is 8
bitv[a] |= 1 <<bit;
    return 1;
}
```
...
```
if(checkbit(pointer->bitvector,i)==0){
//checkbit works great! bitv[0] has the value of 71,
//a turns to be 0 when I call it
addbit(pointer->bitvector,i);
```
Compile:
make: \*\*\* [all] Segmentation fault (core dumped) | As your question is not clear, I made the assumption that you wanted to change bit n of a word of (16x8 bits). Here is a working example:
```
#include <stdio.h>   /* needed for printf */

#define CHAR_BIT 8   /* normally provided by <limits.h> */
int addbit(unsigned char *bitv,int bit)
{
int a = bit/CHAR_BIT; //part of char array we want. CHAR_BIT is 8
bitv[a] |= 1 <<(bit%CHAR_BIT);
return 1;
}
int main()
{
unsigned char bitvector[16]={0};
addbit(bitvector,9);
for (int i=0;i<16;i++)
printf ("%x ",bitvector[i]);
}
```
It displays: 0 2 0 0 0 0 ... |
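The same byte-index and bit-offset arithmetic, sketched in Python for clarity; the missing `% CHAR_BIT` in the question is exactly what the fix adds:

```python
CHAR_BIT = 8

def add_bit(bitv, bit):
    # byte index = bit // CHAR_BIT, offset inside that byte = bit % CHAR_BIT.
    # Shifting by the raw bit number (as in the question) moves the 1 past
    # the width of a byte, which is why the modulo matters.
    bitv[bit // CHAR_BIT] |= 1 << (bit % CHAR_BIT)

def check_bit(bitv, bit):
    return (bitv[bit // CHAR_BIT] >> (bit % CHAR_BIT)) & 1

vec = bytearray(16)
add_bit(vec, 9)   # sets bit 1 of byte 1, i.e. vec[1] == 2
```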
209996 | I am trying to send an activation email and a reset email in Django. I am using the Django registration module, but after registration the page returned back to the registration page and did not send any email to me.
Part of **settings.py** :
```
DEBUG = False
ALLOWED_HOSTS = ['mysite.com', 'www.mysite.com']
EMAIL_BACKEND ='django.core.mail.backends.smtp.EmailBackend'
DEFAULT_FROM_EMAIL = 'info@mysite.com'
EMAIL_HOST_USER = 'info@mysite.com'
EMAIL_HOST_PASSWORD = 'my pass'
EMAIL_USE_TLS = True
EMAIL_PORT = 1025
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites',
'posts',
'registration',
'django_countries',
'crispy_forms',
'django_forms_bootstrap',
'captcha',
]
ACCOUNT_ACTIVATION_DAYS = 7
REGISTRATION_AUTO_LOGIN = True
SITE_ID = 1
```
After clicking on the register button, the browser waits for 2 seconds on this page.
[![enter image description here](https://i.stack.imgur.com/vhH2L.png)](https://i.stack.imgur.com/vhH2L.png) | You forgot to add
```
EMAIL_HOST = 'smtp.mysite.com'
```
to send email. |
210099 | Is there any way to use `dispatch_after` within a loop? I have the delay function below:
```
func delay(delay:Double, closure:()->()) {
dispatch_after(
dispatch_time(
DISPATCH_TIME_NOW,
Int64(delay * Double(NSEC_PER_SEC))
),
dispatch_get_main_queue(), closure)
}
```
I want to execute it inside of a loop like this:
```
while true {
self.delay(1.0) {
// Do something repeatedly
}
}
```
But I can't seem to get it working. Is it possible to do this? | Use a timer-type dispatch source to call the closure repeatedly. Example:
```
import Cocoa
import XCPlayground
func withTimerInterval(seconds: NSTimeInterval, block: dispatch_block_t) -> dispatch_source_t {
let source = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, dispatch_get_main_queue())
let interval = Int64(seconds * NSTimeInterval(NSEC_PER_SEC))
dispatch_source_set_timer(source, dispatch_time(DISPATCH_TIME_NOW, interval), UInt64(interval), 20 * NSEC_PER_MSEC)
dispatch_source_set_event_handler(source, block)
dispatch_resume(source)
return source
}
XCPlaygroundPage.currentPage.needsIndefiniteExecution = true
let timer = withTimerInterval(1) {
print("hello again")
}
```
Cancel the timer like this:
```
dispatch_source_cancel(timer)
``` |
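For comparison, the same repeat-until-cancelled idea in a Python sketch, using a plain thread plus an `Event` in place of a GCD dispatch source:

```python
import threading
import time

def every(interval, fn):
    """Call fn() every `interval` seconds until the returned Event is set."""
    cancelled = threading.Event()

    def loop():
        # Event.wait doubles as both the sleep and the cancellation check.
        while not cancelled.wait(interval):
            fn()

    threading.Thread(target=loop, daemon=True).start()
    return cancelled

count = {"n": 0}
cancel = every(0.02, lambda: count.__setitem__("n", count["n"] + 1))
time.sleep(0.2)
cancel.set()   # the counterpart of dispatch_source_cancel
```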
210491 | [![Here is a picture of the module](https://i.stack.imgur.com/LwuPs.jpg)](https://i.stack.imgur.com/LwuPs.jpg)
I am trying to get the right pinout for this module from a Canon MX492. I would like to know which pin goes to red, black, green and white for a USB connector. I don't have the whole printer to test what goes where and I'm fairly new to the whole hardware hacking thing. This has eight pins on a ribbon cable. I would like to use it for an RPi. All I know is it is possibly a USB module.
Edit:
[![enter image description here](https://i.stack.imgur.com/loOCV.jpg)](https://i.stack.imgur.com/loOCV.jpg)
Edit: Chip markings.
Marvell
88W8782U-NAP2
P14V750A2JW
1436 B0P
TV C005
[![enter image description here](https://i.stack.imgur.com/J2N2u.jpg)](https://i.stack.imgur.com/J2N2u.jpg)
[![enter image description here](https://i.stack.imgur.com/r13MR.jpg)](https://i.stack.imgur.com/r13MR.jpg)
[![enter image description here](https://i.stack.imgur.com/QOBly.jpg)](https://i.stack.imgur.com/QOBly.jpg) | So - the pages of the datasheet you've posted don't show any USB interface pins ... but:
- [this Marvell Databrief](https://www.marvell.com/wireless/assets/marvell_avastar_88W8782.pdf) says is has a USB 2.0 interface
- [this Panasonic module](https://na.industrial.panasonic.com/sites/default/pidsa/files/panasonic_pan9020_data_sheet_rev.0.3_10.30.14.pdf) which uses the Marvell 88W8782 indicates that it has a USB option, and that the USB D-/D+ pins are shared with SD\_CMD & SD\_DAT[0] respectively.
So:
USB D+ (green) -> pin 53 (SD\_DAT[0])
USB D- (white) -> pin 52 (SD\_CMD)
I don't see anything on that board which looks like a regulator, and that IC appears to run on 3.3V, so you'll need a separate 3.3V regulator (because your RPi can't source enough current on the 3.3V rail) and feed it to pins 1 & 2.
Do not connect the USB 5V (red) to anything other than the input to that regulator or the smoke will come out somewhere...
You can probably hook Ground/0V (USB black) to anything connected to the shielding can.
Note that the pin numbers I'm referring to are the IC pins, not the connector.
You could probably beep those through to the IC with a continuity tester. |
210839 | I want to have a `FormData` ([documentation](https://microsoft.github.io/PowerBI-JavaScript/interfaces/_node_modules_typedoc_node_modules_typescript_lib_lib_dom_d_.formdata.html#get)) interface with specified required fields. So I want to use TypeScript to check whether my formData has all the required fields.
```
export interface AddRequest {
image: Blob;
username: string;
}
// This is invalid
export interface AddRequestApi extends FormData {
image: FormDataEntryValue;
username: FormDataEntryValue;
}
```
so I can do:
```
export const mapRequest = (request: AddRequest): AddRequestApi => {
const { image, username } = request;
const formData = new FormData();
formData.append('image', image);
formData.append('username', username);
// I want my ts compiler to check if returned formData has required fields
// that should be stated in AddRequestApi type (or interface)
return formData;
};
``` | As @Fabian Lauer said:
>
> The approach of multiple .append() calls means the code can only be checked at runtime, not compile time (this means TS can't check it)
>
>
>
Now, there are some ways runtime type-check can be achieved with TS.
Check out this blog
<https://levelup.gitconnected.com/how-we-use-our-typescript-type-information-at-runtime-6e95b801cfeb>
>
> Validation — using another equally as awesome tool, Another JSON Schema Validator or ajv, we can validate any object at runtime simply by passing in the object and our schema. We’ll even get error outputs in a format that we can use programmatically for things such as showing errors on a form against the field that is invalid, automatically fixing invalid properties, etc.
>
>
> |
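The core idea of ajv-style runtime validation can be sketched without any library: check the incoming object against a small schema before trusting it. A minimal Python illustration (the schema and field names are hypothetical, mirroring the question's `AddRequest`):

```python
def validate(obj, schema):
    """Return a list of error strings; an empty list means the object is valid."""
    errors = []
    for field, expected_type in schema.items():
        if field not in obj:
            errors.append(f"missing required field: {field}")
        elif not isinstance(obj[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    return errors

# Hypothetical schema mirroring the AddRequest interface from the question.
add_request_schema = {"image": bytes, "username": str}

ok_errors = validate({"image": b"\x89PNG", "username": "ada"}, add_request_schema)
bad_errors = validate({"username": 42}, add_request_schema)
```

A real validator adds formats, nesting, and error paths, but the shape is the same: the check happens at runtime, where the compiler cannot reach.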
210911 | I've created 2 projects:
1. Normal, basic ASP.NET MVC 4 application
2. Basic ASP.NET WebAPI application
What I did is I added my custom message handler, derived from `DelegatingHandler` to both of them. Here it is:
```
public class MyHandler : DelegatingHandler
{
protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
{
return base.SendAsync(request, cancellationToken);
}
}
```
I registered it in `global.asax` in both:
```
GlobalConfiguration.Configuration.MessageHandlers.Add(new MyHandler());
```
I put a breakpoint in
```
return base.SendAsync(request, cancellationToken);
```
The difference between ASP.NET MVC and ASP.NET WebAPI is that when I call ASP.NET MVC application (`http://localhost:4189/Something`) **the breakpoint is not triggered**. When I call Web API however (`http://localhost:7120/api/values`), the breakpoint **is triggered**.
Why is that? Are there any differences in those application types execution flows?
In addition, when I try to request a normal `Controller`, not an `ApiController`, of the WebAPI application, like `http://localhost:7120/Home`, the breakpoint **is not triggered**. | As shown below in the famous `ASP.NET Web API: HTTP Message Lifecycle` diagram by Microsoft, ASP.NET Web API has an additional extensibility point known as `Message Handlers` (section `HTTP Message Handlers`).
So far, ASP.NET MVC (5 currently) doesn't provide such an extensibility point.
[[Link to the full size PDF poster]](http://www.asp.net/media/4071077/aspnet-web-api-poster.pdf)
[![enter image description here](https://i.stack.imgur.com/Zkeyj.jpg)](https://i.stack.imgur.com/Zkeyj.jpg)
The only extensibility point that the MVC and WebAPI share (or at least conceptually) are Filters. Beware that MVC and WebAPI have different ways to register filters and this puts many people in a similar situation as the one you are in right now. |
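Conceptually, message handlers form a chain of responsibility: each handler may inspect the request, delegate inward, and then inspect the response on the way back out. A language-neutral sketch of that pipeline, written here in Python:

```python
class Handler:
    def __init__(self, inner=None):
        self.inner = inner

    def send(self, request):
        # Default behaviour: delegate to the inner handler,
        # like calling base.SendAsync in a DelegatingHandler.
        return self.inner.send(request) if self.inner else request

class LoggingHandler(Handler):
    def __init__(self, inner, log):
        super().__init__(inner)
        self.log = log

    def send(self, request):
        self.log.append(f"-> {request}")       # inspect the request going in
        response = super().send(request)
        self.log.append(f"<- {response}")      # inspect the response coming out
        return response

class ControllerHandler(Handler):
    def send(self, request):
        return f"200 OK ({request})"           # the innermost "dispatcher"

log = []
pipeline = LoggingHandler(ControllerHandler(), log)
response = pipeline.send("GET /api/values")
```

In Web API every request flows through this chain before reaching controller dispatch, which is why the handler fires for `ApiController` routes but not for MVC ones.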
211049 | I have a file `index.txt` with data like:
```
2013/10/13-121 f19f26f09691c2429cb33456cf64f867
2013/10/17-131 583d3936c814c1bf4e663fe1688fe4a3
2013/10/20-106 0f7082e2bb7224aad0bd7a6401532f56
2013/10/10-129 33f7592a4ad22f9f6d63d6a17782d023
......
```
And a second file in CSV format with data like:
```
2013/10/13-121, DLDFWSXDR, 15:33, 18:21, Normal, No, 5F2B0121-4F2B-481D-B79F-2DC827B85093/22551965
2013/10/17-131, DLDFWXDR, 11:05, 15:08, Normal, No, 5F2B0121-4F2B-481D-B79F-2DC827B85093/22551965
2013/10/20-106, DLSDXDR, 12:08, 13:06, Normal, No, 5F2B0121-4F2B-481D-B79F-2DC827B85093/22551965
2013/10/10-129, DLXDAE, 15:33, 18:46, Normal, No, 5F2B0121-4F2B-481D-B79F-2DC827B85093/22551965
```
Now I need a solution to add the MD5SUM at the end of the CSV when the ID (first column) is a match in the index file, or vice versa, so at the end the file should look like this:
```
2013/10/13-121, DLDFWSXDR, 15:33, 18:21, Normal, No, 5F2B0121-4F2B-481D-B79F-2DC827B85093/22551965, f19f26f09691c2429cb33456cf64f867
2013/10/17-131, DLDFWXDR, 11:05, 15:08, Normal, No, 5F2B0121-4F2B-481D-B79F-2DC827B85093/22551965, 583d3936c814c1bf4e663fe1688fe4a3
2013/10/20-106, DLSDXDR, 12:08, 13:06, Normal, No, 5F2B0121-4F2B-481D-B79F-2DC827B85093/22551965, 0f7082e2bb7224aad0bd7a6401532f56
2013/10/10-129, DLXDAE, 15:33, 18:46, Normal, No, 5F2B0121-4F2B-481D-B79F-2DC827B85093/22551965, 33f7592a4ad22f9f6d63d6a17782d023
``` | I am summarizing the comments to a complete answer. Note that [@MarkPlotnick](https://unix.stackexchange.com/users/49439/mark-plotnick) was the first to point toward the right solution.
As you can see in the `ls -lL` output, the file pointed to by your link is a **socket**, not a regular file or a pipe.
```
~$ ls -lL /dev/log
srw-rw-rw- 1 root root 0 Aug 23 07:13 /dev/log
```
Look at the first character of the output. That `s` means that the file is a socket.
You cannot use the redirection mechanism `>` of `bash` (or, AFAIK, any other shell) to write to a socket, because the shell will try to *open* the file, and `open` does not support sockets. See [man open](https://linux.die.net/man/2/open) for details.
You have to use a program that connects to a socket. See [man connect](https://linux.die.net/man/2/connect) for details.
As an example, you can use `netcat` or `socat` (see [How can I communicate with a Unix domain socket via the shell on Debian Squeeze?](https://unix.stackexchange.com/questions/26715/how-can-i-communicate-with-a-unix-domain-socket-via-the-shell-on-debian-squeeze)).
For sake of completeness, you can use the redirection on pipes.
```
~$ mkfifo /tmp/fifo
~$ ls -l /tmp/fifo
prw-rw-rw- 1 root root 0 27 ago 15.04 /tmp/fifo
~$ echo "hello" > /tmp/fifo
```
Look at the first character of the `ls` output. That `p` means that the file is a pipe. |
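To experiment with connected sockets from code, Python's `socketpair()` gives two already-connected Unix-domain endpoints; writing goes through `send` on a connected socket, not through `open` and redirection:

```python
import socket

# Two already-connected endpoints (a Unix-domain pair on POSIX systems);
# no filesystem path and no listen/accept needed for a quick experiment.
a, b = socket.socketpair()
a.sendall(b"hello\n")     # send() on a connected socket, not open()+write()
msg = b.recv(64)
a.close()
b.close()
```

Tools like `netcat` and `socat` do essentially this `connect`-then-`send` dance for you from the shell.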
211129 | I am trying to use GameObject.Find within a prefab to reference an object that exists in the scene. The name of the object is exactly the same as the actual object.
```
LoseLives loseLives;
GameObject loseLivesing;
private void Awake()
{
loseLivesing = GameObject.Find("MasterManager");
}
private void Start()
{
loseLives = loseLivesing.GetComponent<LoseLives>();
target = Waypoints.waypoints[0];
}
loseLives.LoseLife();
```
[Null Reference](https://i.stack.imgur.com/y2Ok6.png)
I get a null reference when "loseLives.LoseLife();" runs.
Any help with solving the no reference problem would be greatly appreciated!
If you think there is a better way to create the connection between a script on a prefab and a script on a in-scene gameObject could you please explain it or point me to a tutorial, because in all my research I did not have any luck.
Thank you! | First of all I need to admit, I'm not familiar with the `arma` framework or the memory layout; least of all whether the syntax `result += slice(i) * weight` evaluates lazily.
The two primary problems, and their solutions, anyway lie in the memory layout and the memory-to-arithmetic computation ratio.
The statement `a += b*c` is problematic because it needs to read `b` and `a`, write `a`, and uses up to two arithmetic operations (two, if the architecture does not combine multiplication and accumulation).
If the memory layout is of form `float tensor[rows][columns][channels]`, the problem is converted to making `rows * columns` dot products of length `channels` and should be expressed as such.
If it's `float tensor[c][h][w]`, it's better to unroll the loop to `result+= slice(i) + slice(i+1)+...`. Reading four slices at a time reduces the memory transfers by 50%.
It might even be better to process the results in chunks of 4\*N results (reading from all the 150 channels/slices) where N<16, so that the accumulators can be allocated explicitly or implicitly by the compiler to SIMD registers.
There's a possibility of a minor improvement by padding the slice count to multiples of 4 or 8, by compiling with `-ffast-math` to enable fused multiply accumulate (if available) and with multithreading.
The constraints indicate the need to perform 13.5GFlops, which is a reasonable number in terms of arithmetic (for many modern architectures) but also it means at least 54 Gb/s memory bandwidth, which could be relaxed with fp16 or 16-bit fixed point arithmetic.
**EDIT**
Knowing the memory order to be `float tensor[150][45][35000]` or `float tensor[kSlices][kRows * kCols == kCols * kRows]` suggests to me to try first unrolling the outer loop by 4 (or maybe even 5, as 150 is not divisible by 4 requiring special case for the excess) streams.
```
void blend(int kCols, int kRows, float const *tensor, float *result, float const *w) {
// ensure that the cols*rows is a multiple of 4 (pad if necessary)
// - allows the auto vectorizer to skip handling the 'excess' code where the data
// length mod simd width != 0
// one could try even SIMD width of 16*4, as clang 14
// can further unroll the inner loop to 4 ymm registers
auto const stride = (kCols * kRows + 3) & ~3;
// try also s+=6, s+=3, or s+=4, which would require a dedicated inner loop (for s+=2)
for (int s = 0; s < 150; s+=5) {
auto src0 = tensor + s * stride;
auto src1 = src0 + stride;
auto src2 = src1 + stride;
auto src3 = src2 + stride;
auto src4 = src3 + stride;
auto dst = result;
for (int x = 0; x < stride; x++) {
// clang should be able to optimize caching the weights
// to registers outside the innerloop
auto add = src0[x] * w[s] +
src1[x] * w[s+1] +
src2[x] * w[s+2] +
src3[x] * w[s+3] +
src4[x] * w[s+4];
// clang should be able to optimize this comparison
// out of the loop, generating two inner kernels
if (s == 0) {
dst[x] = add;
} else {
dst[x] += add;
}
}
}
}
```
**EDIT 2**
Another starting point (before adding multithreading) would be consider changing the layout to
```
float tensor[kCols][kRows][kSlices + kPadding]; // padding is optional
```
The downside now is that `kSlices = 150` can't anymore fit all the weights in registers (and secondly kSlices is not a multiple of 4 or 8). Furthermore the final reduction needs to be horizontal.
The upside is that reduction no longer needs to go through memory, which is a big thing with the added multithreading.
```
void blendHWC(float const *tensor, float const *w, float *dst, int n, int c) {
// each thread will read from 4 positions in order
// to share the weights -- finding the best distance
// might need some iterations
auto src0 = tensor;
auto src1 = src0 + c;
auto src2 = src1 + c;
auto src3 = src2 + c;
for (int i = 0; i < n/4; i++) {
vec8 acc0(0.0f), acc1(0.0f), acc2(0.0f), acc3(0.0f);
// #pragma unroll?
    // stride by the vec8 width; the local vector must not shadow the parameter w
    for (auto j = 0; j < c; j += 8) {
      vec8 wj(w + j);
      acc0 += wj * vec8(src0 + j);
      acc1 += wj * vec8(src1 + j);
      acc2 += wj * vec8(src2 + j);
      acc3 += wj * vec8(src3 + j);
}
vec4 sum = horizontal_reduct(acc0,acc1,acc2,acc3);
sum.store(dst); dst+=4;
}
}
```
These `vec4` and `vec8` are some custom SIMD classes, which map to SIMD instructions either through intrinsics, or by virtue of the compiler being able to compile `using vec4 = float __attribute__((vector_size(16)));` to efficient SIMD code. |
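As a correctness reference for any optimized kernel, the whole blend is just a weighted sum over the slice axis. A tiny dependency-free Python sketch (toy 2-slice, 2x2 data with hypothetical values):

```python
def blend(tensor, weights):
    # result[r][c] = sum over s of weights[s] * tensor[s][r][c]
    slices, rows, cols = len(tensor), len(tensor[0]), len(tensor[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for s in range(slices):
        ws = weights[s]
        for r in range(rows):
            row_in, row_out = tensor[s][r], out[r]
            for c in range(cols):
                row_out[c] += ws * row_in[c]
    return out

t = [[[1.0, 2.0], [3.0, 4.0]],      # slice 0
     [[10.0, 20.0], [30.0, 40.0]]]  # slice 1
w = [0.5, 0.1]
out = blend(t, w)
```

Whatever unrolling or SIMD layout the fast version uses, its output should agree with this naive contraction.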
211166 | So this is one of my methods which fetches a user's profile:
```
public static UserProfile getUserProfile(String uid) {
if (StringUtils.isBlank(uid)) {
return null;
}
Session session = HibernateUtil.getSession();
UserProfile userProfile = null;
try {
userProfile = (UserProfile) session.createQuery("from " + UserProfile.class.getSimpleName() + " where uid = :uid").setParameter("uid", uid).uniqueResult();
} catch (Exception e) {
e.printStackTrace();
} finally {
if(session.isOpen()) {
session.close();
}
}
if (userProfile != null) {
return userProfile;
}
return null;
}
```
This is mapped to www.example.com/getUserProfile - I feel like this is such a hackable API. Anybody could start guessing UIDs; the whole database is structured around UIDs. All of the API is in the front end. Anybody can see what methods I have and what each method accepts as an argument. Am I overthinking here, or am I really missing something? Firebase provides a UID upon a successful login. How do I really verify that the user is actually who the user says he/she is? What does that even do for me, that I've verified the user? Like, okay, I've verified the user who is actually a hacker; I still don't have anything. | The issue is that right here:
```
this.div.innerHTML += content;
```
When you assign a value to `.innerHTML`, the entire previous value is overwritten with the new value. Even if the new value contains the same HTML string as the previous value, any DOM event bindings on elements in the original HTML will have been lost. The solution is to not use `.innerHTML` and instead use `.appendChild`. To accomplish this in your case (so that you don't lose the existing content), you can create a "dummy" element that you *could* use `.innerHTML` on, but because of performance issues with `.innerHTML`, it's better to set non-HTML content up with the `.textContent` property of a DOM object.
You were also going to have troubles inside `close()` locating the correct `parentNode` and node to remove, so I've updated that.
```js
class Popup {
constructor(content){
this.div = document.createElement("div");
this.div.className = "block";
this.div.style.position = "fixed";
this.div.style.margin = "auto auto";
//caption
this.caption = document.createElement("div");
this.caption.style.textAlign = "right";
//closeButton
this.closeButton = document.createElement("button");
this.closeButton.textContent = "X";
this.closeButton.addEventListener("click", this.close);
this.caption.appendChild(this.closeButton);
this.div.appendChild(this.caption);
// Create a "dummy" wrapper that we can place content into
var dummy = document.createElement("div");
dummy.textContent = content;
// Then append the wrapper to the existing element (which won't kill
// any event bindings on DOM elements already present).
this.div.appendChild(dummy);
document.body.appendChild(this.div);
}
close() {
var currentPopup = document.querySelector(".block");
currentPopup.parentNode.removeChild(currentPopup);
delete this;
}
}
var popup = new Popup("hm hello");
``` |
211280 | I've got a `mailto:` link in a page here including `subject=` and `body=` parameters but I'm not sure on how to correctly escape the data in the parameters.
The page is encoded in `utf-8` so I guess all special chars like German umlauts should be encoded into `utf-8` representations for the URL too?
At the moment I'm using `rawurlencode()` as `urlencode()` would insert `+` characters in all locations where spaces should have been, but I'm unsure if this is the correct way to do so. | You just need to `rawurlencode()` the link at the end of the email address, according to the W3C standards.
There is an example on the PHP Manual for urlencode (search for mailto on that page):
<http://php.net/manual/en/function.urlencode.php> |
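The same distinction exists in other languages, which makes it easy to see what each PHP function produces. In Python, `quote` behaves like `rawurlencode()` (spaces become `%20`, per RFC 3986) and `quote_plus` like `urlencode()` (spaces become `+`); the address below is a hypothetical example:

```python
from urllib.parse import quote, quote_plus

subject = "Grüße aus Köln"    # UTF-8 text with German umlauts

raw = quote(subject)          # rawurlencode-style: spaces become %20
plus = quote_plus(subject)    # urlencode-style: spaces become +

mailto = f"mailto:someone@example.com?subject={raw}"
```

The umlauts end up as percent-encoded UTF-8 byte sequences either way; only the space handling differs, and `%20` is what `mailto:` links want.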
212343 | I want to use lodash debounce in my text input (React Native App).
Why isn't my debounce working correctly? The input value inside the TextInput disappears/re-appears as I hit backspace.
```
handleSearch(text, bool){
// search logic here
}
<TextInput
value={value}
onChangeText={_.debounce((text) =>{this.handleSearch(text, true);this.setFlat(true);this.renderDropDown(false)},200)}
onKeyPress={({ nativeEvent }) => {
if (nativeEvent.key === 'Backspace' && value !== "") {
this.handleClearText();
}
if (nativeEvent.key === 'Backspace' && value === "") {
this.handleClearFilter();
}
}}
/>
``` | You're not using the callback the way it was meant. If you keep it like this, that piece of code is executed every time your component is rendered, regardless of whether that render was fired because of that event.
You should move that code to a method/function, and assign that method/function as the callback. Something like this:
```
const handleTextChange = _.debounce((text) =>{
this.handleSearch(text,true);
this.setFlat(true);
this.renderDropDown(false)
},200);
/* .... */
<TextInput
value={value}
  onChangeText={handleTextChange}
/>
``` |
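For reference, the debounce behaviour itself (only the last call inside the quiet window fires) is small enough to sketch in Python:

```python
import threading
import time

def debounce(wait, fn):
    """Return a wrapper that delays fn; only the last call in a burst runs."""
    timer = None
    lock = threading.Lock()

    def wrapper(*args):
        nonlocal timer
        with lock:
            if timer is not None:
                timer.cancel()                 # a newer call supersedes it
            timer = threading.Timer(wait, fn, args)
            timer.start()

    return wrapper

calls = []
search = debounce(0.05, calls.append)
for text in ("t", "te", "tes", "test"):        # a quick typing burst
    search(text)
time.sleep(0.3)                                # wait out the debounce window
```

This also shows why the wrapper must be created once and reused: building a fresh debounced function on every call (or every render) means no call ever sees the pending timer of the previous one.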
212367 | A few years back I used UCINET for some social network analysis. These days I'd like to use SNA again - but this time I prefer a unified analysis framework - which for me is R.
I have looked at the sna and statnet documentation but am a bit overwhelmed.
What I'd like to do: First: Load a bipartite/incidence matrix pulled directly from e.g. a websurvey (often valued). Convert this matrix to two adjacency matrices (affiliation by affiliation and cases by cases). It could also be a directed, valued cases-by-cases matrix.
Second: Load a file (also from e.g. websurvey data) of vertice attributes.
Third: Then plot the graph with e.g. vertice size according to some centrality measure, colored and labeled by some vertice attributes, with only edges with value over a certain threshold being drawn.
This is a mini incidence matrix:
```
data <- structure(list(this = c(0, 1, 0, 1, 1, 2, 0, 1, 3),
that = c(1, 1, 3, 0, 0, 0, 2, 1, 0),
phat = c(0, 0, 2, 1, 0, 0, 1, 2, 0)),
.Names = c("this", "that", "phat"),
row.names = c("a", "b", "c", "d", "e", "f", "g", "h", "i"),
class = "data.frame")
```
with some attribute data:
```
att <-structure(list(sex = structure(c(1L, 1L, 2L, 2L, 1L, 2L, 1L,
1L, 1L), .Label = c("F", "M"), class = "factor"), agegr = c(1L,
1L, 3L, 1L, 3L, 1L, 1L, 3L, 1L), place = structure(c(1L, 2L,
1L, 1L, 1L, 1L, 2L, 2L, 1L), .Label = c("Lower", "Upper"),
class = "factor")), .Names = c("sex",
"agegr", "place"), row.names = c(NA, -9L), class = "data.frame")
```
p.s. maybe SNA would be a good tag for this post? I just don't have the necessary SO goodwill :-) | This is a good question, and provides some opportunity for further exploration of SNA in R. I am more familiar with the [igraph package](http://igraph.sourceforge.net/index.html), so I will answer your question using the functions in that library.
The first part of your question has a fairly straightforward solution:
```
# Convert data frame to graph using incidence matrix
G<-graph.incidence(as.matrix(data),weighted=TRUE,directed=FALSE)
summary(G)
# Vertices: 12
# Edges: 30
# Directed: TRUE
# No graph attributes.
# Vertex attributes: type, name.
# Edge attributes: weight.
```
This returns a graph object with undirected and weighted edges from the incidence matrix. To generate the affiliations graphs from the bipartite graph you have two options. The quick and easy one is this:
```
proj<-bipartite.projection(G)
```
This will return a list with each projection indexed as $proj1 and $proj2; the unfortunate thing is that these projections do not contain the edge weights that you would normally want when performing this manipulation. To do this, the best solution is to simply perform the matrix multiplication yourself.
```
# Create the matrices, and set diagonals to zero
M<-as.matrix(data)
affil.matrix<-M%*%t(M)
diag(affil.matrix)<-0
cases.matrix<-t(M)%*%M
diag(cases.matrix)<-0
# Create graph objects from matrices
affil.graph<-graph.adjacency(affil.matrix,weighted=TRUE,mode="undirected")
cases.graph<-graph.adjacency(cases.matrix,weighted=TRUE,mode="undirected")
```
Generating the plots with the attribute data is a bit trickier and requires more coding, but I recommend looking through some of the [igraph examples](http://igraph.sourceforge.net/screenshots.html), or even [some of my own](http://www.drewconway.com/zia/wp-content/uploads/redirects/sna_in_r.html) as there is plenty there to get you started. Good luck! |
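The projection step itself is just the two matrix products shown above; a dependency-free Python sketch with a tiny 3x2 incidence matrix (hypothetical values) makes the bookkeeping visible:

```python
def transpose(m):
    return [list(col) for col in zip(*m)]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

# Tiny incidence matrix: 3 cases x 2 affiliations.
m = [[0, 1],
     [1, 1],
     [2, 0]]

cases = matmul(m, transpose(m))      # cases-by-cases projection (M %*% t(M))
affil = matmul(transpose(m), m)      # affiliation-by-affiliation (t(M) %*% M)
for i in range(len(cases)):
    cases[i][i] = 0                  # zero the diagonals, as in the R code
for i in range(len(affil)):
    affil[i][i] = 0
```

The off-diagonal entries are the valued co-memberships, which is exactly what the weighted projection graphs carry as edge weights.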
212655 | I have to show progress graphs exactly in the following way, where the percentage would be in the center of the circular graph
![circular progress indicator](https://i.stack.imgur.com/iouR5.png)
How can I do this using javascript/jQuery?
Can it be done using Google Chart? | There's a plugin for this at:
<http://anthonyterrien.com/knob/>
>
> [Demo](http://anthonyterrien.com/demo/knob/)
>
>
> ### jQuery Knob
>
>
> * canvas based ; no png or jpg sprites.
> * touch, mouse and mousewheel, keyboard events implemented.
> * downward compatible ; overloads an input element...
>
>
> |
212752 | Hi friends, I am new to the thread concept.
Why is threadTask.IsAlive always false?
I have tried with ThreadState but I always got "Unstarted".
```
foreach (DataRow TaskRow in dtCurrentTask.Rows)
{
sTaskId = TaskRow["TaskId"].ToString();
sTaskAssigned = TaskRow["TaskAssigned"].ToString();
threadTask = new Thread(() => RunTask(sTaskId));
threadTask.Name = "thread_" + sTaskId;
Console.WriteLine(threadTask.Name + " is alive :" + threadTask.IsAlive);
if (!threadTask.IsAlive || sTaskAssigned == "0")
{
Console.WriteLine("Starting thread :" + threadTask.Name);
Thread.Sleep(7000);
threadTask.Start();
UpdateTaskStatus(true);
}
}
``` | In the loop, you are creating a new instance for the threadTask variable each time. So whenever you check IsAlive, it refers to the newly created instance from the current iteration and not a previously started thread, therefore it always returns false. |
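The same behaviour is easy to reproduce outside .NET; in Python, `is_alive()` on a freshly constructed, never-started thread is likewise always false:

```python
import threading
import time

t = threading.Thread(target=lambda: time.sleep(0.2))
before = t.is_alive()   # False: the object exists but was never started
t.start()
during = t.is_alive()   # True while the target is still running
t.join()
after = t.is_alive()    # False again once the thread has finished
```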
212835 | I want to decode a **.pcap** file using the **jnetpcap** library.
I am using Eclipse to run the project and have set up all the environment variables.
When I call the Java class separately I can decode the pcap file, but when I call that Java class through a JSP, I get the error below.
================================================================
```
SEVERE: Servlet.service() for servlet [jsp] in context with path [/ProjectName] threw exception [javax.servlet.ServletException: java.lang.NoClassDefFoundError: org/jnetpcap/packet/JPacketHandler] with root cause
java.lang.NoClassDefFoundError: org/jnetpcap/packet/JPacketHandler
at org.apache.jsp.NFA_005fHome_jsp._jspService(NFA_005fHome_jsp.java:148)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:728)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:432)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:390)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:334)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:728)
```
================================================================
Waiting for reply
Thanks,
Laxdeep. | A `java.lang.NoClassDefFoundError` that appears only when running through a JSP means the jnetpcap classes are on your Eclipse build path but not on the web application's runtime classpath.
1. Copy `jnetpcap.jar` into the project's `WEB-INF/lib` folder (or into Tomcat's shared `lib` directory) and redeploy — the servlet container loads classes only from those locations, not from the IDE's classpath.
2. jnetpcap also ships a native library (`jnetpcap.dll` / `libjnetpcap.so`), which must be on the `java.library.path` of the JVM that runs Tomcat.
Your current setup cannot work while the jar is visible only to Eclipse. |
212910 | I have a view with some jQuery functions that read values from HTML elements (both elements already on the page and elements not yet available). One of these elements is a dropdown; when I select a value, another view is added at the bottom of the page, inside a div. When I try to access an element added by this view, I get "undefined". What can I do?
It is in the div #ProductDetail that I add elements, and it's these elements I can't access.
Update 1 (trying to be clearer):
- I have a page with some HTML elements (several inputs, a dropdown)
- I have JavaScript methods on this page to access HTML elements present or not yet present on the page
- I have a div, #ProductDetail
- When I select a value in the dropdown, I receive a view, and this view is added to the #ProductDetail div
- When I try to access the HTML elements present from the start, that works
- When I try to access the HTML elements added to #ProductDetail, that does not work; I get an "undefined" element
```
$.ajax({
type: "POST",
url: "/Product/Edition",
data: {
id: getId()
},
success: function(data) {
$("#divDisplayDialogProduct").html(data);
$('#ProductType').change(function() {
$.ajax({
type: "POST",
url: "/Product/ShowDetail",
data: { id: $('#ProductType').val() },
success: function(data) { $("#ProductDetail").html(data); },
error: function(XMLHttpRequest, textStatus, errorThrown) { }
})
});
},
error: function(XMLHttpRequest, textStatus, errorThrown) {
}
})
``` | Based on your comment response:
You still need the `#` to reference an element by ID, like this:
```
function getDVDNameVO() {
return $('#MyNewElement').val();
}
``` |
213068 | In my Rails application I need to display a large number of results, using will\_paginate for pagination. But since the controller calls methods defined in the model, and the query conditions live in that model, I can't see where to apply pagination.
My models file
```
class Tweets<ActiveRecord::Base
attr_accessible :id, :tweet_created_at, :tweet_id, :tweet_text, :tweet_source, :user_id, :user_name, :user_sc_name, :user_loc, :user_img, :longitude, :latitude, :place, :country
def self.for_coordinates(coordinates)
bbox = { min_lat: coordinates.latitude - 1.0, max_lat: coordinates.latitude + 1.0,
min_lng: coordinates.longitude - 1.0, max_lng: coordinates.longitude + 1.0
}
Tweets.where("(longitude BETWEEN ? and ?) AND (latitude BETWEEN ? and ?) OR (user_loc LIKE ?) " ,
bbox[:min_lng], bbox[:max_lng], bbox[:min_lat], bbox[:max_lat], "%#{coordinates.city}%" )
end
def self.for_user_location(city)
@tweets= Tweets.where("user_loc LIKE ?", "%#{city}%")
end
end
```
My code for the controller
```
class TweetsController<ApplicationController
def index
city = params[:show]
search_term = params[:text]
search_term.gsub!(/\s/, '%')
city_coordinates = Coordinates.where('city=?', city)
if (city_coordinates.count == 1 && city_coordinates.first.valid_location?)
@tweets = Tweets.for_coordinates(city_coordinates.first) & Tweets.where("tweet_text LIKE?" , "%#{search_term}%")
elsif (city_coordinates.count != 1 )
@tweets = Tweets.for_user_location(city) & Tweets.where("tweet_text LIKE?" , "%#{search_term}%")
else
Tweets.where("tweet_text LIKE? ", "%#{search_term}%").paginate(page: params[:page], per_page: 50)
end
end
end
```
Since the controller calls a method defined in the model, I am stuck on how to apply pagination with will\_paginate. Any help please? | Call `paginate` on the results, not inside the `Model`:
```
(Tweets.for_coordinates(city_coordinates.first) & Tweets.where("tweet_text LIKE?" , "%#{search_term}%")).paginate(page: params[:page], per_page: 50)
```
You will have to `require 'will_paginate/array'`, because the `&` of two relations returns a plain `Array`, and that require adds `#paginate` to `Array`. Also note that the `else` branch in your controller never assigns its result to `@tweets`. |
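Under the hood, paginating a plain array is just a bounded slice over 1-based pages — a rough sketch with hypothetical names, not will\_paginate's actual implementation:

```ruby
# Return the items belonging to the given 1-based page.
def paginate_array(items, page:, per_page:)
  offset = (page - 1) * per_page
  items.slice(offset, per_page) || [] # past-the-end pages come back empty
end

tweets = (1..120).to_a
second_page = paginate_array(tweets, page: 2, per_page: 50) # items 51..100
```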
213241 | While designing my Node.js application's front end, I'm having trouble figuring out why my anchor links are not clickable. The code itself worked fine before I styled the app. Is there a CSS property I need to configure to fix this? This concerns the `.player ul li` links.
HTML:
```
<!DOCTYPE html>
<html lang="en">
<head>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.1/css/bootstrap.min.css">
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/js/bootstrap.min.js"></script>
<link href='http://fonts.googleapis.com/css?family=Bree+Serif' rel='stylesheet' type='text/css'>
<link rel="stylesheet" href="/style.css">
<link href='http://fonts.googleapis.com/css?family=Open+Sans' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Indie+Flower' rel='stylesheet' type='text/css'>
<script src="http://connect.soundcloud.com/sdk.js"></script>
<script src="http://code.jquery.com/jquery-1.11.1.js"></script>
<script src="https://cdn.socket.io/socket.io-1.2.0.js"></script>
<!-- <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.1/css/bootstrap.min.css"> -->
<meta charset="UTF-8">
<title>Open Mic Chatroom</title>
</head>
<header>
<nav class="navbar navbar-default">
<div class="container-fluid">
<div class="navbar-header">
<!-- <a class="navbar-brand" href="#">
<img alt="Brand" src="...">
</a> -->
<h1>OpenMic</h1>
</div>
</div>
</nav>
</header>
<body>
<div class="container">
<div class="row">
<div class="musicians col-sm-3 col-md-3">
<!-- <div class="musicians col-sm-6 col-md-4"> -->
<!-- <div class="col-sm-6 col-md-4"> -->
<!-- <div class="thumbnail musicians-box"> -->
<!-- <img src="..." alt="..."> -->
<div class="caption">
<h2>Musicians Online</h2>
<p>
<div id="chatters">
</div>
</p>
<!-- <p><a href="#" class="btn btn-primary" role="button">Button</a> <a href="#" class="btn btn-default" role="button">Button</a></p> -->
<!-- </div> -->
</div>
</div>
<div class="chatroom col-sm-8 col-md-8">
<!-- <div class="chatroom col-sm-6 col-md-4">
--><!-- <div class="col-sm-6 col-md-4">-->
<!-- <div class="thumbnail chatroom-box"> -->
<!-- <img src="..." alt="..."> -->
<div class="caption">
<h2>Messages</h2>
<div id="messagesContainer"></div>
<!-- </div> -->
</div>
<form action="" id="chat_form">
<input id="info" autocomplete="off" />
<input type="submit" value="Submit">
</form>
</div>
</div>
</div>
<!-- <h3>Messages</h3>
<div id="messagesContainer"></div>
<form action="" id="chat_form">
<input id="info" autocomplete="off" />
<input type="submit" value="Submit">
</form> -->
<!-- SoundCloud Recording Feature -->
<div class="row player">
<div class="soundcloud-player">
<ul>
<div id="player">
<!-- <button type="button" id="startRecording" class="btn btn-default btn-lg">
<span class="glyphicon glyphicon-record" aria-hidden="true"></span>
</button> -->
<li id="startRecording"><a href="#">Start Recording</a></li>
</div>
<!-- <button type="button" id="stopRecording" class="btn btn-default btn-lg">
<span class="glyphicon glyphicon-stop" aria-hidden="true"></span>
</button> -->
<li id="stopRecording"><a href="#">Stop Recording</a></li>
<!-- <button type="button" id="playBack" class="btn btn-default btn-lg">
<span class="glyphicon glyphicon-play" aria-hidden="true"></span>
</button> -->
<li id="playBack"><a href="#">Play Recorded Sound</a></li>
<!-- <button type="button" id="upload" class="btn btn-default btn-lg">
<span class="glyphicon glyphicon-open" aria-hidden="true"></span>
</button> -->
<li id="upload"><a href="#">Upload</a></li>
</ul>
<p class="status"></p>
<div class="audioplayer">
</div>
</div>
</div>
<!-- <script src="/socket.io/socket.io.js"></script> -->
</div>
<script>
$(document).ready(function() {
var trackUrl;
//establish connection
var socket = io.connect('http://localhost:3000');
$('#chat_form').on('submit', function (e) {
e.preventDefault();
var message = $('#info').val();
socket.emit('messages', message);
$('#info').val('');
});
socket.on('messages', function (data) {
console.log("new message received");
$('#messagesContainer').append(data);
$('#messagesContainer').append('<br>');
// insertMessage(data);
});
socket.on('add_chatter', function (name) {
var chatter = $('<li>' + name + '</li>').data('name', name);
$('#chatters').append(chatter);
});
// //Embed SoundCloud player in the chat
// socket.on('track', function (track) {
// console.log("new track", track);
// SC.oEmbed(track, {auto_play: true}, function (oEmbed) {
// console.log('oEmbed response: ' + oEmbed);
// });
// SC.stream(track, function (sound) {
// console.log("streaming", sound);
// sound.play();
// });
// // socket.on('remove chatter', function (name) {
// // $('#chatters li[data-name=]' + name + ']').remove();
// });
//SOUNDCLOUD RECORDING FEATURE
function updateTimer(ms) {
// update the timer text. Used when user is recording
$('.status').text(SC.Helper.millisecondsToHMS(ms));
}
//Connecting with SoundCloud
console.log("calling SoundCloud initialize");
SC.initialize({
client_id: "976d99c7318c9b11fdbb3f9968d79430",
redirect_uri: "http://localhost:3000/auth/soundcloud/callback"
});
SC.connect(function() {
SC.record({
start: function() {
window.setTimeout(function() {
SC.recordStop();
SC.recordUpload({
track: { title: 'This is my sound' }
});
}, 5000);
}
});
//Adds SoundCloud username to chat app
console.log("connected to SoundCloud");
SC.get('/me', function(me) {
console.log("me", me);
socket.emit('join', me.username);
});
// SC.get('/me/tracks', {}, function(tracks) {
// var myTracks = $("<div>");
// var heading = $("<h1>").text("Your tracks");
// var list = $("<ul>");
// tracks.forEach(function (single) {
// var listItem = $("<li>").text(single.permalink);
// listItem.on("click", function () {
// SC.oEmbed(single.permalink_url, {
// auto_play: true
// }, function(oEmbed) {
// console.log("oEmbed", oEmbed);
// $('.status').html(oEmbed.html);
// });
// });
// list.append(listItem);
// });
// $("body").append(myTracks.append(heading, list));
// });
// username = prompt("What is your username?");
// socket.emit('join', username);
// });
//Start recording link
$('#startRecording a').click(function (e) {
$('#startRecording').hide();
$('#stopRecording').show();
e.preventDefault();
SC.record({
progress: function(ms, avgPeak) {
updateTimer(ms);
}
});
});
//Stop Recording link
$('#stopRecording a').click(function (e) {
e.preventDefault();
$('#stopRecording').hide();
$('#playBack').show();
$('upload').show();
SC.recordStop();
});
//Playback recording link
$('#playBack a').click(function (e) {
e.preventDefault();
updateTimer(0);
SC.recordPlay({
progress: function (ms) {
updateTimer(ms);
}
});
});
//Uploaded SoundCloud Track after recorded
$('#upload').click(function (e) {
e.preventDefault();
SC.connect({
connected: function() {
console.log("CONNECTED");
$('.status').html("Uploading...");
SC.recordUpload({
track: {
title: 'My Recording',
sharing: 'public'
}
},
//When uploaded track, provides link user, but not everyone
function (track) {
// console.log("TRACK", track);
setTimeout(function () {
SC.oEmbed(track.permalink_url, {auto_play: true}, function(oEmbed) {
console.log("THIS IS oEMBED", oEmbed);
// console.log(oEmbed.html);
socket.emit('messages', oEmbed.html );
$('.status').html(oEmbed.html);
});
}, 8000);
// $('.status').html("Uploaded: <a href='" + track.permalink_url + "'>" + track.permalink_url + "</a>");
});
}
});
});
});
});
</script>
</body>
</html>
```
CSS:
```
body {
background: url('microphone.svg') no-repeat center center fixed;
background-position: 4% 100%;
}
h1, h2, h3 {
font-family: 'Bree Serif', serif;
}
body {
font-family: 'Open Sans', sans-serif;
}
h1 {
color: white;
padding-left: 5%;
padding-bottom: 5%;
letter-spacing: 4px;
}
.navbar-default {
background-color: #000000;
/* background-image: url('SoundWave.jpg');
background-position: center;
opacity: 0.8;*/
}
input, button, select, textarea {
line-height: 3;
}
.chatroom, .musicians {
border-style: solid;
border-radius: 6%;
border-color: black;
margin-left: 20px;
height: 500px;
background-color: #F2F2F2;
opacity: 0.8;
}
.musicians {
height: fixed;
}
input#info {
margin: 47% 1%;
width: 656px;
border-radius: 10%;
}
.visualSoundContainer__teaser {
height: 100px;
}
.row {
padding-top: 2%;
}
.player {
text-align: center;
}
.player ul li {
z-index: 30;
}
The `z-index: 30` on `.player ul li` has no effect, because `z-index` only applies to positioned elements. Meanwhile the oversized `margin: 47% 1%` on `input#info` very likely makes the chatroom's content overflow its 500px container and sit on top of the player links, so clicks never reach the anchors. Give the list items `position: relative` so the `z-index` takes effect, and shrink the input's margin. |
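For the `.player ul li` links specifically: `z-index` is ignored on statically positioned elements, and the oversized vertical margin on `input#info` can push the chatroom's content out over the player row, intercepting clicks. A minimal sketch of the stylesheet changes (the margin value is illustrative):

```css
/* z-index only takes effect on positioned elements */
.player ul li {
    position: relative;
    z-index: 30;
}

/* Keep the input from overflowing the 500px chatroom box onto the player */
input#info {
    margin: 4% 1%;
    width: 656px;
    border-radius: 10%;
}
```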
213314 | I am having some issues with `<tr>` background colors for rows that share the same `id`.
Here is my table.
```
<table class="table">
<tr id="2552">
<td>Roll No 1</td>
<td>800</td>
</tr>
<tr id="2552">
<td>Roll No 1</td>
<td>700</td>
</tr>
<tr id="4444">
<td>Roll No 11</td>
<td>800</td>
</tr>
<tr id="4444">
<td>Roll No 11</td>
<td>900</td>
</tr>
<tr id="7676">
<td>Roll No 12</td>
<td>800</td>
</tr>
<tr id="7676">
<td>Roll No 12</td>
<td>900</td>
</tr>
</table>
```
What I want:
Every two `<tr>` elements share the same `id`, but these ids are dynamic.
I want rows with the same `id` to get the same `background-color`, different from rows with other ids.
In this table 3 ids are used, so there can be 3 different colors.
I have tried many jQuery snippets but failed.
Please help me | I developed a dynamic, row-id-based color assignment and I hope it solves your problem. (Note that duplicate `id` attributes are invalid HTML; a `class` or `data-*` attribute would be more robust.) [Code Reference](http://jsfiddle.net/a2pq65hL/1/)
```
var table = document.getElementsByTagName("table")[0];
var uniqueIds = [];
var colors = [];

// Collect each distinct row id in document order
for (var i = 0; i < table.rows.length; i++) {
    if (uniqueIds.indexOf(table.rows[i].id) === -1) {
        uniqueIds.push(table.rows[i].id);
    }
}

// Generate one random hex color per distinct id
for (var i = 0; i < uniqueIds.length; i++) {
    colors.push('#' + Math.floor(Math.random() * 16777215).toString(16));
}

// Paint every row with the color assigned to its id
for (var i = 0; i < table.rows.length; i++) {
    table.rows[i].style.backgroundColor = colors[uniqueIds.indexOf(table.rows[i].id)];
}
```
[![enter image description here](https://i.stack.imgur.com/WOZoP.jpg)](https://i.stack.imgur.com/WOZoP.jpg) |
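One caveat with the snippet above: `Math.floor(Math.random()*16777215).toString(16)` is nondeterministic and can emit fewer than six hex digits. If reproducible colors matter, a fixed palette keyed by the distinct ids is a small variant (the palette values here are arbitrary):

```javascript
// Map each distinct id (in first-seen order) to a palette color, cycling if needed.
function colorsForIds(ids) {
  var palette = ['#ffdddd', '#ddffdd', '#ddddff', '#ffffdd', '#ddffff'];
  var unique = [];
  ids.forEach(function (id) {
    if (unique.indexOf(id) === -1) unique.push(id);
  });
  var map = {};
  unique.forEach(function (id, i) {
    map[id] = palette[i % palette.length];
  });
  return map;
}

var byId = colorsForIds(['2552', '2552', '4444', '4444', '7676', '7676']);
// Rows sharing an id share a color; different ids get different colors
```

Then paint the rows with `table.rows[i].style.backgroundColor = byId[table.rows[i].id]`.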