qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---|
62,674,091 | This is my first attempt at using Docker. I followed the instructions [here](https://docs.docker.com/engine/install/ubuntu/) to install Docker on an Ubuntu instance.
When I reached the `apt-cache madison docker-ce` step, it showed many versions, a series of version 5.19.xx and another series of version 18.06.xx. Partial output:
```
docker-ce | 5:19.03.12~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 5:19.03.11~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
...
docker-ce | 5:18.09.0~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 18.06.3~ce~3-0~ubuntu | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 18.06.2~ce~3-0~ubuntu | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
...
```
Should I be installing version 5 or 18? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/936293/"
] | There is a `truncate()` function for the double type which returns the integer part, discarding the fractional part. We can subtract that from the original double to get the fraction.
```
double myDouble = 4734.602654867;
double fraction = myDouble - myDouble.truncate();
print(fraction); // --> prints 0.602654867
```
*Edit:*
If we want 4 digits specifically from the fractional part, we can do this:
```
int result = (fraction*10000).truncate();
print(result); // --> prints 6026
```
To do all this in one line, we can do it like this:
```
int result = ((myDouble - myDouble.truncate())*10000).truncate(); // <-- 6026
``` | I got a simpler and straightforward answer, especially if you still want to use the fractional part as a decimal, and not as a string:
```dart
double kunle = 300.0/7.0; // the decimal number
int s = kunle.floor(); // takes out the integer part
double fractionalPart = kunle-s; // takes out the fractional part
print(fractionalPart); // print the fractional part
``` |
62,674,091 | This is my first attempt at using Docker. I followed the instructions [here](https://docs.docker.com/engine/install/ubuntu/) to install Docker on an Ubuntu instance.
When I reached the `apt-cache madison docker-ce` step, it showed many versions, a series of version 5.19.xx and another series of version 18.06.xx. Partial output:
```
docker-ce | 5:19.03.12~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 5:19.03.11~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
...
docker-ce | 5:18.09.0~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 18.06.3~ce~3-0~ubuntu | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 18.06.2~ce~3-0~ubuntu | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
...
```
Should I be installing version 5 or 18? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/936293/"
] | ```
final double number = 4734.602654867;
final String result = number.toStringAsFixed(5).split('.').last.substring(0,4);
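// Note: toStringAsFixed rounds at the 5th decimal before the digits are taken,
// so the result can differ from pure truncation near a rounding boundary.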
``` | I got a simpler and straightforward answer, especially if you still want to use the fractional part as a decimal, and not as a string:
```dart
double kunle = 300.0/7.0; // the decimal number
int s = kunle.floor(); // takes out the integer part
double fractionalPart = kunle-s; // takes out the fractional part
print(fractionalPart); // print the fractional part
``` |
62,674,091 | This is my first attempt at using Docker. I followed the instructions [here](https://docs.docker.com/engine/install/ubuntu/) to install Docker on an Ubuntu instance.
When I reached the `apt-cache madison docker-ce` step, it showed many versions, a series of version 5.19.xx and another series of version 18.06.xx. Partial output:
```
docker-ce | 5:19.03.12~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 5:19.03.11~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
...
docker-ce | 5:18.09.0~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 18.06.3~ce~3-0~ubuntu | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 18.06.2~ce~3-0~ubuntu | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
...
```
Should I be installing version 5 or 18? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/936293/"
] | ```
final double number = 4734.602654867;
final String result = number.toStringAsFixed(5).split('.').last.substring(0,4);
``` | ```
void main() {
double d = 4734.602654867;
print(((d - d.floor()) * 10000).floor()); // --> prints 6026
}
```
The [floor](https://www.tutorialspoint.com/dart_programming/dart_programming_floor_method.htm) method returns the largest integer not greater than the number; subtracting it from the original value leaves the fractional part |
62,674,091 | This is my first attempt at using Docker. I followed the instructions [here](https://docs.docker.com/engine/install/ubuntu/) to install Docker on an Ubuntu instance.
When I reached the `apt-cache madison docker-ce` step, it showed many versions, a series of version 5.19.xx and another series of version 18.06.xx. Partial output:
```
docker-ce | 5:19.03.12~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 5:19.03.11~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
...
docker-ce | 5:18.09.0~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 18.06.3~ce~3-0~ubuntu | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 18.06.2~ce~3-0~ubuntu | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
...
```
Should I be installing version 5 or 18? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/936293/"
] | You can do that using `split()`
Like this:
```dart
var s = 4734.602654867;
var a = s.toString().split('.')[1].substring(0,4); // here a = "6026" (a String)
```
Hope this solves your issue. | ```
final double number = 4734.602654867;
final String result = number.toStringAsFixed(5).split('.').last.substring(0,4);
``` |
62,674,091 | This is my first attempt at using Docker. I followed the instructions [here](https://docs.docker.com/engine/install/ubuntu/) to install Docker on an Ubuntu instance.
When I reached the `apt-cache madison docker-ce` step, it showed many versions, a series of version 5.19.xx and another series of version 18.06.xx. Partial output:
```
docker-ce | 5:19.03.12~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 5:19.03.11~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
...
docker-ce | 5:18.09.0~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 18.06.3~ce~3-0~ubuntu | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 18.06.2~ce~3-0~ubuntu | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
...
```
Should I be installing version 5 or 18? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/936293/"
] | You can do that using `split()`
Like this:
```dart
var s = 4734.602654867;
var a = s.toString().split('.')[1].substring(0,4); // here a = "6026" (a String)
```
Hope this solves your issue. | ```
void main() {
double d = 4734.602654867;
print(((d - d.floor()) * 10000).floor()); // --> prints 6026
}
```
The [floor](https://www.tutorialspoint.com/dart_programming/dart_programming_floor_method.htm) method returns the largest integer not greater than the number; subtracting it from the original value leaves the fractional part |
62,674,091 | This is my first attempt at using Docker. I followed the instructions [here](https://docs.docker.com/engine/install/ubuntu/) to install Docker on an Ubuntu instance.
When I reached the `apt-cache madison docker-ce` step, it showed many versions, a series of version 5.19.xx and another series of version 18.06.xx. Partial output:
```
docker-ce | 5:19.03.12~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 5:19.03.11~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
...
docker-ce | 5:18.09.0~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 18.06.3~ce~3-0~ubuntu | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 18.06.2~ce~3-0~ubuntu | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
...
```
Should I be installing version 5 or 18? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/936293/"
] | Something like
```dart
import 'dart:math' show pow;
var number = 4734.602654867;
var wantedDigits = 4;
var fraction = (number % 1 * pow(10, wantedDigits)).floor();
print(fraction);
```
should work.
[Dartpad example](https://dartpad.dev/7bf7aa276288a84c2685fc393524e613). | I got a simpler and straightforward answer, especially if you still want to use the fractional part as a decimal, and not as a string:
```dart
double kunle = 300.0/7.0; // the decimal number
int s = kunle.floor(); // takes out the integer part
double fractionalPart = kunle-s; // takes out the fractional part
print(fractionalPart); // print the fractional part
``` |
62,674,091 | This is my first attempt at using Docker. I followed the instructions [here](https://docs.docker.com/engine/install/ubuntu/) to install Docker on an Ubuntu instance.
When I reached the `apt-cache madison docker-ce` step, it showed many versions, a series of version 5.19.xx and another series of version 18.06.xx. Partial output:
```
docker-ce | 5:19.03.12~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 5:19.03.11~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
...
docker-ce | 5:18.09.0~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 18.06.3~ce~3-0~ubuntu | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 18.06.2~ce~3-0~ubuntu | https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
...
```
Should I be installing version 5 or 18? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/936293/"
] | There is a `truncate()` function for the double type which returns the integer part, discarding the fractional part. We can subtract that from the original double to get the fraction.
```
double myDouble = 4734.602654867;
double fraction = myDouble - myDouble.truncate();
print(fraction); // --> prints 0.602654867
```
*Edit:*
If we want 4 digits specifically from the fractional part, we can do this:
```
int result = (fraction*10000).truncate();
print(result); // --> prints 6026
```
To do all this in one line, we can do it like this:
```
int result = ((myDouble - myDouble.truncate())*10000).truncate(); // <-- 6026
``` | You can do that using `split()`
Like this:
```dart
var s = 4734.602654867;
var a = s.toString().split('.')[1].substring(0,4); // here a = "6026" (a String)
```
Hope this solves your issue. |
62,674,115 | I have a `checkbox` and if `checkbox.Value = False` I want to disable my TextInput object. On the internet there were some suggestions, but the methods which were used are not working for me, because the methods are not found.
I tried it with the `.Valid` method:
```
Dim tf As TextInput
Dim checkbox As CheckBox
Sub checkbox_click()
Set checkbox = ActiveDocument.FormFields("checkbox").CheckBox
Set tf = ActiveDocument.FormFields("textfield").TextInput
If checkbox.Value = False Then
tf.Valid = False
End If
End Sub
```
But that doesn't work for some reason. I found `tf.Enabled = False` on the internet, but this method is unknown in my case. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674115",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12704433/"
] | You need something more like this:
```
Dim ff As FormField
Dim checkbox As CheckBox
.
.
Set checkbox = ActiveDocument.FormFields("checkbox").CheckBox
Set ff = ActiveDocument.FormFields("textfield")
If checkbox.Value = False Then
ff.Enabled = False
End If
```
With legacy FormField objects, some of the properties you need are associated with the FormField itself, and others are associated with child objects of the FormField, such as the FormField.Checkbox.
So the problem here is that `tf` is a `FormField.TextInput` object, but `.Enabled` is a property of the `FormField` object.
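As a one-line sketch of that point (using the asker's field name), set the property on the form field itself:
```
ActiveDocument.FormFields("textfield").Enabled = False ' Enabled lives on FormField
```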
Not relevant to your question, but just as an observation, FormFields do not have Word events associated with them in the normal sense of VBA Events. The settings of each field tell Word to run a named Sub "on entry" and/or "on exit" - that's it. No actual click events. No problem with using names that make those things look like events, but I just thought I'd mention it. | Whilst the internet can be helpful, you should also use the Object Browser in the VBE along with Help. Five seconds spent doing a search for FormField would have given you the answer.
```
Dim tf As TextInput
Dim checkbox As CheckBox
Sub checkbox_click()
Set checkbox = ActiveDocument.FormFields("checkbox").CheckBox
If checkbox.Value = False Then
ActiveDocument.FormFields("textfield").Enabled = False
End If
End Sub
``` |
62,674,125 | I am working on an application that loads some SAS datasets into SQL using C#. I use the SAS OLE DB adapter to load the SAS dataset into a C# data table, and that works. However, the date values, which are stored as strings in the SAS file, are loaded as 10-digit numbers and kept as strings in the data table.
Hence, I tried to convert them to datetime; however, I get only the date value, and the time value is 00:00:00 for all records. The source has the date and time values for each record. Below is the code that I use to convert the string to a datetime string.
```
public string ComputeDate(string sSourceData)
{
string sProcessedDate = string.Empty;
int iDateValue = ((Convert.ToInt32(sSourceData) / 86400) + 21916);
Double dDate = Convert.ToInt32(iDateValue);
DateTime dtDateValue = DateTime.FromOADate(dDate);
sProcessedDate = dtDateValue.ToString();
return sProcessedDate;
}
```
The source data sample and the data that is loaded into the data table are shown below:
[![Source and DataTable](https://i.stack.imgur.com/52tmC.png)](https://i.stack.imgur.com/52tmC.png)
The output after converting the OADate is below
[![Final Output](https://i.stack.imgur.com/10b94.png)](https://i.stack.imgur.com/10b94.png)
It would be great if someone could provide a solution for getting the output along with the original time value from the source.
Thanks in advance. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674125",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4380772/"
] | Integer ratios such as
```
Convert.ToInt32(sSourceData) / 86400
```
will yield an integer.
Ensure the result contains the day fraction (`[0..1]`), which is the time part in `SQL / C#`:
```
Double dDate = Convert.ToInt32(sSourceData) / 86400.0 + 21916.0;
``` | Found the solution for my problem.
Instead of setting the datatype as Int, I need to set the datatype as double, so that the timestamp part of the OADate comes through as decimal values.
This way the output is a proper datetime.
```
public string ComputeDate(string sSourceData)
{
string sProcessedDate = string.Empty;
Double iDateValue = ((Convert.ToDouble(sSourceData) / 86400) + 21916);
DateTime dtDateValue = DateTime.FromOADate(iDateValue);
sProcessedDate = dtDateValue.ToString();
return sProcessedDate;
}
``` |
62,674,140 | How can I get the output I expected in texts like the example below?
```
x<-c("Commerce recommend erkanexample.com.tr. This site erkanexample.com. erkandeneme.com is widely. The company name is apple.commerce is coma. spread")
x<-gsub("(.com)\\S+", "",x)
x
[1] "Commerce r erkanexample This site erkanexample erkandeneme.com is widely. The name is apple is"
expected
[1] "Commerce recommend This site. is widely. The company name is apple.commerce is coma. spread"
>
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674140",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11441610/"
] | Integer ratios such as
```
Convert.ToInt32(sSourceData) / 86400
```
will yield an integer.
Ensure the result contains the day fraction (`[0..1]`), which is the time part in `SQL / C#`:
```
Double dDate = Convert.ToInt32(sSourceData) / 86400.0 + 21916.0;
``` | Found the solution for my problem.
Instead of setting the datatype as Int, I need to set the datatype as double, so that the timestamp part of the OADate comes through as decimal values.
This way the output is a proper datetime.
```
public string ComputeDate(string sSourceData)
{
string sProcessedDate = string.Empty;
Double iDateValue = ((Convert.ToDouble(sSourceData) / 86400) + 21916);
DateTime dtDateValue = DateTime.FromOADate(iDateValue);
sProcessedDate = dtDateValue.ToString();
return sProcessedDate;
}
``` |
62,674,155 | I have two data frames here.
```
SCORE_df <- data.frame("Participant" = rep(1:2, times=1, each=7), "Score" = as.integer(runif(14, 0, 100)), "Time" = c('17:00:00', '17:00:01', '17:00:02', '17:00:03', '17:00:04', '17:00:05', '17:00:06', '19:50:30', '19:50:31', '19:50:32', '19:50:33', '19:50:34', '19:50:35', '19:50:36'))
TIME_df <- data.frame("Participant" = c(1,2), "Start" = c('17:00:02', '19:50:31'), "End" = c('17:00:05', '19:50:33'))
> SCORE_df
Participant Score Time
1 1 56 17:00:00
2 1 77 17:00:01
3 1 27 17:00:02
4 1 78 17:00:03
5 1 46 17:00:04
6 1 22 17:00:05
7 1 35 17:00:06
8 2 26 19:50:30
9 2 64 19:50:31
10 2 29 19:50:32
11 2 29 19:50:33
12 2 90 19:50:34
13 2 0 19:50:35
14 2 51 19:50:36
> TIME_df
Participant Start End
1 1 17:00:02 17:00:05
2 2 19:50:31 19:50:33
```
I want to use the Start and End information in TIME\_df to subset SCORE\_df data -- i.e., only retain the Score data "within" the Start to End time (inclusive) for "each" participant.
The lines below do the wrong subsetting.
```
for(p in 1:nrow(SCORE_df)){
ppt <- SCORE_df$Participant[p]
SCORE_df_trimmed <- with(SCORE_df[which(SCORE_df$Participant==ppt),], SCORE_df[strptime(Time, "%H:%M:%S") >= strptime(TIME_df$Start[TIME_df$Participant==ppt], "%H:%M:%S") & strptime(Time, "%H:%M:%S") <= strptime(TIME_df$End[TIME_df$Participant==ppt], "%H:%M:%S"),])
}
> SCORE_df_trimmed
Participant Score Time
2 1 77 17:00:01
3 1 27 17:00:02
4 1 78 17:00:03
9 2 64 19:50:31
10 2 29 19:50:32
11 2 29 19:50:33
```
I would truly appreciate it if anyone could pinpoint the error in the lines mentioned above. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674155",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4416394/"
] | An option with `data.table`:
```r
library(data.table)
setDT(SCORE_df)
setDT(TIME_df)
# POSIXct works with a date reference, so I add in just any date
SCORE_df[, Time := as.POSIXct(paste0("2000-01-01 ", Time), tz = "UTC")]
TIME_df[, Start := as.POSIXct(paste0("2000-01-01 ", Start), tz = "UTC")]
TIME_df[, End := as.POSIXct(paste0("2000-01-01 ", End), tz = "UTC")]
SCORE_df[TIME_df,
on = .(Participant = Participant, Time >= Start, Time <= End),
.(Participant, Score, Time = x.Time)]
# Participant Score Time
# 1: 1 49 2000-01-01 17:00:02
# 2: 1 75 2000-01-01 17:00:03
# 3: 1 8 2000-01-01 17:00:04
# 4: 1 7 2000-01-01 17:00:05
# 5: 2 49 2000-01-01 19:50:31
# 6: 2 59 2000-01-01 19:50:32
# 7: 2 13 2000-01-01 19:50:33
``` | You can use this:
```
library(lubridate)
SCORE_df$Time <- hms(SCORE_df$Time)
TIME_df$Start <- hms(TIME_df$Start)
TIME_df$End <- hms(TIME_df$End)
library(dplyr)
SCORE_df1 <- SCORE_df %>%
filter(Participant == 1 & Time >= TIME_df$Start[1] & Time <= TIME_df$End[1])
SCORE_df2 <- SCORE_df %>%
filter(Participant == 2 & Time >= TIME_df$Start[2] & Time <= TIME_df$End[2])
rbind(SCORE_df1, SCORE_df2)
```
```
Participant Score Time
1 1 82 17H 0M 2S
2 1 35 17H 0M 3S
3 1 46 17H 0M 4S
4 1 42 17H 0M 5S
5 2 44 19H 50M 31S
6 2 61 19H 50M 32S
7 2 69 19H 50M 33S
``` |
62,674,155 | I have two data frames here.
```
SCORE_df <- data.frame("Participant" = rep(1:2, times=1, each=7), "Score" = as.integer(runif(14, 0, 100)), "Time" = c('17:00:00', '17:00:01', '17:00:02', '17:00:03', '17:00:04', '17:00:05', '17:00:06', '19:50:30', '19:50:31', '19:50:32', '19:50:33', '19:50:34', '19:50:35', '19:50:36'))
TIME_df <- data.frame("Participant" = c(1,2), "Start" = c('17:00:02', '19:50:31'), "End" = c('17:00:05', '19:50:33'))
> SCORE_df
Participant Score Time
1 1 56 17:00:00
2 1 77 17:00:01
3 1 27 17:00:02
4 1 78 17:00:03
5 1 46 17:00:04
6 1 22 17:00:05
7 1 35 17:00:06
8 2 26 19:50:30
9 2 64 19:50:31
10 2 29 19:50:32
11 2 29 19:50:33
12 2 90 19:50:34
13 2 0 19:50:35
14 2 51 19:50:36
> TIME_df
Participant Start End
1 1 17:00:02 17:00:05
2 2 19:50:31 19:50:33
```
I want to use the Start and End information in TIME\_df to subset SCORE\_df data -- i.e., only retain the Score data "within" the Start to End time (inclusive) for "each" participant.
The lines below do the wrong subsetting.
```
for(p in 1:nrow(SCORE_df)){
ppt <- SCORE_df$Participant[p]
SCORE_df_trimmed <- with(SCORE_df[which(SCORE_df$Participant==ppt),], SCORE_df[strptime(Time, "%H:%M:%S") >= strptime(TIME_df$Start[TIME_df$Participant==ppt], "%H:%M:%S") & strptime(Time, "%H:%M:%S") <= strptime(TIME_df$End[TIME_df$Participant==ppt], "%H:%M:%S"),])
}
> SCORE_df_trimmed
Participant Score Time
2 1 77 17:00:01
3 1 27 17:00:02
4 1 78 17:00:03
9 2 64 19:50:31
10 2 29 19:50:32
11 2 29 19:50:33
```
I would truly appreciate it if anyone could pinpoint the error in the lines mentioned above. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674155",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4416394/"
] | An option with `data.table`:
```r
library(data.table)
setDT(SCORE_df)
setDT(TIME_df)
# POSIXct works with a date reference, so I add in just any date
SCORE_df[, Time := as.POSIXct(paste0("2000-01-01 ", Time), tz = "UTC")]
TIME_df[, Start := as.POSIXct(paste0("2000-01-01 ", Start), tz = "UTC")]
TIME_df[, End := as.POSIXct(paste0("2000-01-01 ", End), tz = "UTC")]
SCORE_df[TIME_df,
on = .(Participant = Participant, Time >= Start, Time <= End),
.(Participant, Score, Time = x.Time)]
# Participant Score Time
# 1: 1 49 2000-01-01 17:00:02
# 2: 1 75 2000-01-01 17:00:03
# 3: 1 8 2000-01-01 17:00:04
# 4: 1 7 2000-01-01 17:00:05
# 5: 2 49 2000-01-01 19:50:31
# 6: 2 59 2000-01-01 19:50:32
# 7: 2 13 2000-01-01 19:50:33
``` | Using functions from the `tidyverse` you can try
```
library(tidyverse)
TIME_df %>%
mutate_at(vars(Start, End), lubridate::hms) %>%
pmap(., ~SCORE_df %>%
mutate(Time = lubridate::hms(Time)) %>%
filter(Time >= ..2 & Time <= ..3)) %>%
bind_rows()
Participant Score Time
1 1 21 17H 0M 2S
2 1 19 17H 0M 3S
3 1 83 17H 0M 4S
4 1 92 17H 0M 5S
5 2 23 19H 50M 31S
6 2 65 19H 50M 32S
7 2 70 19H 50M 33S
```
Without any transformation you can try
```
SCORE_df %>%
left_join(pivot_longer(TIME_df, c(Start, End)), by=c("Participant", "Time" = "value")) %>%
group_by(Participant) %>%
slice(which(name=="Start"):which(name=="End"))
# A tibble: 7 x 4
# Groups: Participant [2]
Participant Score Time name
<dbl> <int> <chr> <chr>
1 1 21 17:00:02 Start
2 1 19 17:00:03 NA
3 1 83 17:00:04 NA
4 1 92 17:00:05 End
5 2 23 19:50:31 Start
6 2 65 19:50:32 NA
7 2 70 19:50:33 End
``` |
62,674,160 | I wrote a JAX-RS test project. After running it in Eclipse with WildFly 11, I opened the URL in IE and received a 404 error.
The code is like below:
Package name: Test\_rs\_01.
HelloWorldResource.java
```
package org.jboss.jaxrs.rest;
import javax.ws.rs.*;
@Path("helloworld")
public class HelloWorldResource {
@GET
@Path("helloworld")
public String helloworld() {
return "Hello World!";
}
}
```
RestApplication.java
```
package rest.rest;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;
@ApplicationPath("/rest")
public class RestApplication extends Application {
}
```
I tried this URL, http://localhost:8080/Test\_\_rs\_01/rest/helloworld/helloworld, and a 404 error happened. Can someone help? Thx. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9269017/"
] | An option with `data.table`:
```r
library(data.table)
setDT(SCORE_df)
setDT(TIME_df)
# POSIXct works with a date reference, so I add in just any date
SCORE_df[, Time := as.POSIXct(paste0("2000-01-01 ", Time), tz = "UTC")]
TIME_df[, Start := as.POSIXct(paste0("2000-01-01 ", Start), tz = "UTC")]
TIME_df[, End := as.POSIXct(paste0("2000-01-01 ", End), tz = "UTC")]
SCORE_df[TIME_df,
on = .(Participant = Participant, Time >= Start, Time <= End),
.(Participant, Score, Time = x.Time)]
# Participant Score Time
# 1: 1 49 2000-01-01 17:00:02
# 2: 1 75 2000-01-01 17:00:03
# 3: 1 8 2000-01-01 17:00:04
# 4: 1 7 2000-01-01 17:00:05
# 5: 2 49 2000-01-01 19:50:31
# 6: 2 59 2000-01-01 19:50:32
# 7: 2 13 2000-01-01 19:50:33
``` | You can use this:
```
library(lubridate)
SCORE_df$Time <- hms(SCORE_df$Time)
TIME_df$Start <- hms(TIME_df$Start)
TIME_df$End <- hms(TIME_df$End)
library(dplyr)
SCORE_df1 <- SCORE_df %>%
filter(Participant == 1 & Time >= TIME_df$Start[1] & Time <= TIME_df$End[1])
SCORE_df2 <- SCORE_df %>%
filter(Participant == 2 & Time >= TIME_df$Start[2] & Time <= TIME_df$End[2])
rbind(SCORE_df1, SCORE_df2)
```
```
Participant Score Time
1 1 82 17H 0M 2S
2 1 35 17H 0M 3S
3 1 46 17H 0M 4S
4 1 42 17H 0M 5S
5 2 44 19H 50M 31S
6 2 61 19H 50M 32S
7 2 69 19H 50M 33S
``` |
62,674,160 | I wrote a JAX-RS test project. After running it in Eclipse with WildFly 11, I opened the URL in IE and received a 404 error.
The code is like below:
Package name: Test\_rs\_01.
HelloWorldResource.java
```
package org.jboss.jaxrs.rest;
import javax.ws.rs.*;
@Path("helloworld")
public class HelloWorldResource {
@GET
@Path("helloworld")
public String helloworld() {
return "Hello World!";
}
}
```
RestApplication.java
```
package rest.rest;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;
@ApplicationPath("/rest")
public class RestApplication extends Application {
}
```
I tried this URL, http://localhost:8080/Test\_\_rs\_01/rest/helloworld/helloworld, and a 404 error happened. Can someone help? Thx. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9269017/"
] | An option with `data.table`:
```r
library(data.table)
setDT(SCORE_df)
setDT(TIME_df)
# POSIXct works with a date reference, so I add in just any date
SCORE_df[, Time := as.POSIXct(paste0("2000-01-01 ", Time), tz = "UTC")]
TIME_df[, Start := as.POSIXct(paste0("2000-01-01 ", Start), tz = "UTC")]
TIME_df[, End := as.POSIXct(paste0("2000-01-01 ", End), tz = "UTC")]
SCORE_df[TIME_df,
on = .(Participant = Participant, Time >= Start, Time <= End),
.(Participant, Score, Time = x.Time)]
# Participant Score Time
# 1: 1 49 2000-01-01 17:00:02
# 2: 1 75 2000-01-01 17:00:03
# 3: 1 8 2000-01-01 17:00:04
# 4: 1 7 2000-01-01 17:00:05
# 5: 2 49 2000-01-01 19:50:31
# 6: 2 59 2000-01-01 19:50:32
# 7: 2 13 2000-01-01 19:50:33
``` | Using functions from the `tidyverse` you can try
```
library(tidyverse)
TIME_df %>%
mutate_at(vars(Start, End), lubridate::hms) %>%
pmap(., ~SCORE_df %>%
mutate(Time = lubridate::hms(Time)) %>%
filter(Time >= ..2 & Time <= ..3)) %>%
bind_rows()
Participant Score Time
1 1 21 17H 0M 2S
2 1 19 17H 0M 3S
3 1 83 17H 0M 4S
4 1 92 17H 0M 5S
5 2 23 19H 50M 31S
6 2 65 19H 50M 32S
7 2 70 19H 50M 33S
```
Without any transformation you can try
```
SCORE_df %>%
left_join(pivot_longer(TIME_df, c(Start, End)), by=c("Participant", "Time" = "value")) %>%
group_by(Participant) %>%
slice(which(name=="Start"):which(name=="End"))
# A tibble: 7 x 4
# Groups: Participant [2]
Participant Score Time name
<dbl> <int> <chr> <chr>
1 1 21 17:00:02 Start
2 1 19 17:00:03 NA
3 1 83 17:00:04 NA
4 1 92 17:00:05 End
5 2 23 19:50:31 Start
6 2 65 19:50:32 NA
7 2 70 19:50:33 End
``` |
62,674,182 | Gradle has a new feature that listens to file system events instead of touching the filesystem to detect changes to local files.
It can be enabled on the command line with `--watch-fs`. How can it be enabled persistently in a file that I would check in source control? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674182",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/300053/"
] | As of today, with Gradle 6.5 you can enable the experimental feature by putting the following in `gradle.properties` in the project root, or in your `~/.gradle/gradle.properties` file:
```
org.gradle.unsafe.watch-fs=true
```
The feature will eventually be enabled by default, and the property name will change (losing the `unsafe` part). | Blog post from May 2020 gives the property as this:
`org.gradle.vfs.watch=true`
<https://blog.gradle.org/introducing-file-system-watching>
and it's confirmed on the Build Properties list here (for Gradle 7.0 and up):
<https://docs.gradle.org/current/userguide/build_environment.html#sec:gradle_configuration_properties>
![explainer of the url shared](https://i.stack.imgur.com/zHBPu.png) |
62,674,191 | Is there an attribute or capability in C# to automatically memoize static function results, based on input parameters, without writing all the logic for it myself?
For example in Python you can depend on functools:
```
import functools
@functools.lru_cache()
def my_function(param1):
...
```
Is there some similar package or built-in for C#?
```
[Memoizable]
public static MyFunction(MyType param1)
{
...
```
Results should be cached to avoid re-calculating for the same inputs. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674191",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3310334/"
] | Nope, it does not exist. There are external tools that do so, but there is nothing automatic in C# for this. You will have to program it yourself if you need it, or use some external tool that does code rewriting. | ```
public class SampleClass
{
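// Cache of previously computed results, keyed by input.
// Note: Dictionary is not thread-safe; for concurrent callers,
// ConcurrentDictionary<int,int>.GetOrAdd would be a safer choice.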
private readonly IDictionary<int,int> _cache =
new Dictionary<int, int>();
public int Calculate(int input)
{
if (!_cache.ContainsKey(input))
{
int result = input*2;
_cache.Add(input,result);
}
return _cache[input];
}
}
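// Usage sketch: the first call computes and caches, the second hits the cache.
// var memo = new SampleClass();
// memo.Calculate(21); // computes 42 and stores it
// memo.Calculate(21); // returns 42 from _cache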
``` |
62,674,192 | Many times I make a data checkpoint in my code, so I can start from there the next day without running the code again. I do that by saving the Python objects into a dictionary and then pickling it, e.g.
`saved_dict = { 'A' : 'a', 'B' : ['b'], 'C' : 3} etc`
And when I load it I do:
`A = saved_dict['A']`
`B = saved_dict['B']`
... etc
I wonder if there is a way to do that in an automated way, maybe with a for loop, instead of writing them all one by one.
Any ideas ? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674192",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6346825/"
] | You can assign directly to [globals()](https://docs.python.org/3/library/functions.html#globals), without the need of `exec`, which could be a source of security issues.
```
>>> saved_dict = { 'A' : 'a', 'B' : ['b'], 'C' : 3}
>>> for k,v in saved_dict.items():
... globals()[k] = v
...
>>> A
'a'
>>> B
['b']
>>> C
3
``` | Note that this is a terrible idea and there are definitely better solutions for your actual problem (for example, not storing a dict). But I shall entertain this:
```py
for k in saved_dict:
exec(f"{k} = {saved_dict[k]}")
```
Note that you need `exec` and not `eval`, as assignments are not expressions.
This is an atrocity and I'm going to hell for this crime. |
62,674,197 | I have a Siemens S7-1500 PLC that did not include the license for the OPC-UA server. Consequently, I'm looking for alternatives, because I need to link this PLC's information to an OPC-UA server that will then be consulted by another OPC-UA client.
So far I have explored the Python library python-opcua, but I am new to IoT and OPC-UA, and I was hoping to get some guidance for continuing this implementation.
Best regards
Alejandro | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674197",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13846249/"
] | You can assign directly to [globals()](https://docs.python.org/3/library/functions.html#globals), without the need of `exec`, which could be a source of security issues.
```
>>> saved_dict = { 'A' : 'a', 'B' : ['b'], 'C' : 3}
>>> for k,v in saved_dict.items():
... globals()[k] = v
...
>>> A
'a'
>>> B
['b']
>>> C
3
``` | Note that this is a terrible idea and there are definitely better solutions for your actual problem (for example, not storing a dict). But I shall entertain this:
```py
for k in saved_dict:
exec(f"{k} = {saved_dict[k]}")
```
Note that you need `exec` and not `eval`, as assignments are not expressions.
This is an atrocity and I'm going to hell for this crime. |
62,674,204 | I tried to write a procedure that returns a table.
The procedure should return a list with one employee's info when the `emp_num` parameter equals `SSN`; otherwise it should return a list with all employees' info.
First I create a record as below:
```
CREATE OR REPLACE TYPE emp_record IS OBJECT(emp_lname VARCHAR2(30),
emp_ssn CHAR(9),
emp_sal NUMBER(6));
```
Then I create a table of type `emp_record` as below:
`CREATE OR REPLACE TYPE emp_table IS TABLE OF emp_record;`
Then I create a procedure `get_employee` with two parameters:
```
CREATE OR REPLACE PROCEDURE get_employee(emp_num IN NUMBER , output_emp OUT emp_table) AS
CURSOR emp_cur IS
SELECT LNAME,SSN,SALARY
FROM EMPLOYEE
WHERE NVL((emp_num = SSN),(SSN = SSN));
BEGIN
IF NOT (emp_cur%ISOPEN) THEN
OPEN emp_cur;
END IF;
LOOP
FETCH emp_cur BULK COLLECT INTO output_emp;
EXIT WHEN output_emp.count=0;
CLOSE emp_cur;
END LOOP;
END;
```
And when I run that code, the error below appears:
**[Warning] ORA-24344: success with compilation error
6/20 PL/SQL: ORA-00907: missing right parenthesis
4/1 PL/SQL: SQL Statement ignored
(1: 0): Warning: compiled but with compilation errors** | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13846063/"
] | There is a syntax error in your code here:
```
WHERE
NVL(
(EMP_NUM = SSN),
(SSN = SSN)
);
```
`NVL` takes two columns/constants as input parameters, not a boolean expression.
You want a `WHERE` condition where either `EMP_NUM` is null or, if it is not null, it is equal to `SSN`. You need to use something like this:
```
WHERE
NVL(EMP_NUM,SSN) = SSN;
```
or better, use an `OR` condition as follows:
```
WHERE EMP_NUM IS NULL OR EMP_NUM = SSN;
``` | Please refer to the [Oracle documentation](https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions105.htm) for the [NVL](https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions105.htm) syntax. Based on the similarity of syntax, I feel you probably want to use [DECODE](https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions040.htm) instead. If so, [CASE](https://docs.oracle.com/cd/B19306_01/server.102/b14200/expressions004.htm) is the way to go, for example as sketched below.
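A minimal sketch of the `CASE` form of that filter (assuming the same `EMPLOYEE` table and `emp_num` parameter as above):
```
-- a NULL emp_num matches every row; otherwise only the matching SSN
WHERE SSN = CASE
              WHEN emp_num IS NULL THEN SSN
              ELSE emp_num
            END
``` |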
62,674,216 | In my project, I used the @Configuration, @EnableAutoConfiguration, @ComponentScan and @ImportResource annotations. I did not use @SpringBootApplication, but the application builds successfully without the @SpringBootApplication annotation. I don't understand why the @RestController class is not invoked.
```
@Configuration
@EnableAutoConfiguration(exclude = {
//removed default db config
DataSourceAutoConfiguration.class, XADataSourceAutoConfiguration.class})
@ComponentScan(basePackages = { "com.test.debasish.dummy" }, excludeFilters = {
@ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE, value = Test.class)})
@ImportResource( value = {"classpath*:*beans*.xml"})
public class TestApplication{
public static void main(String[] args) {
SpringApplication.run(TestApplication.class, args);
}
}
@RestController
public class TestController {
private static final String template = "Hello, %s!";
private final AtomicLong counter = new AtomicLong();
@GetMapping("/test")
@ResponseBody
public Greeting getResource(@RequestParam(name="name", required=false, defaultValue="Stranger") String name) {
return new Greeting(counter.incrementAndGet(), String.format(template, name));
}
}
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674216",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4837425/"
] | There is a syntax error in your code here:
```
WHERE
NVL(
(EMP_NUM = SSN),
(SSN = SSN)
);
```
`NVL` takes two columns/constants as input parameters, not a boolean expression.
You want a `WHERE` condition where either `EMP_NUM` is null or, if it is not null, it is equal to `SSN`. You need to use something like this:
```
WHERE
NVL(EMP_NUM,SSN) = SSN;
```
or better, use an `OR` condition as follows:
```
WHERE EMP_NUM IS NULL OR EMP_NUM = SSN;
``` | Please refer to the [Oracle documentation](https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions105.htm) for the [NVL](https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions105.htm) syntax. Based on the similarity of syntax, I feel you probably want to use [DECODE](https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions040.htm) instead. If so, [CASE](https://docs.oracle.com/cd/B19306_01/server.102/b14200/expressions004.htm) is the way to go. |
62,674,228 | I want to test the type of the first object in a signature. The following shows some ways I have found that work. But why does a smart match on a Type (2nd of the 3 tests below) not work?
Is there a better way than stringifying and testing for the string equivalent of the Type?
(Below is the use case I am working on)
```
raku -e "sub a( |c ) { say so |c[0].WHAT.raku ~~ /'Rat'/, so |c[0].WHAT ~~ Rat, so |c[0].^name ~~ /'Rat'/ };a(3/2);a(2)"
TrueFalseTrue
FalseFalseFalse
# OUTPUT:
#TrueFalseTrue
#FalseFalseFalse
```
I am writing a `proto sub handle`, and most of the subs have similar signatures, e.g. `multi sub handle( Pod $node, MyObj $p, Int $level --> Str)`.
So most of the multi subs do different things depending on what is in $node. However, how do I handle the cases when `handle` is called with `Nil` or a plain string? I am thinking about something like
```
proto handle(|c) {
if |c[0].^name ~~ /'Str'/ { # code for string }
else { {*} }
}
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674228",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6293949/"
] | >
> A better way to introspect ...
>
>
>
In general, a better way to do *anything* in any programming language is to *not* introspect if you can avoid it.
In general, in Raku, you *can* avoid manual introspection. See the section **Introspection** toward the end of this answer for further discussion.
>
> ... a capture
>
>
>
The best tool for getting the functionality that introspection of a capture provides is to use a signature. That's their main purpose in life.
>
> I want to test the type of the first object in a signature
>
>
>
Use signatures:
```
proto handle(|) {*}
multi handle( Pod $node ) { ... }
multi handle( Str $string ) { ... }
multi handle( Nil ) { ... }
```
>
> The following shows some ways I have found that work.
>
>
>
While they do what you want, they are essentially ignoring all of Raku's signature features. They reduce the signature to just a binding to the capture as a single structure; and then use manual introspection of that capture in the routine's body.
There's almost always a simpler and better way to do such things using signatures.
>
> why does [`|c[0].WHAT ~~ Rat`, with `c[0] == 3/2`] not work?
>
>
>
I'll simplify first, then end up with what your code is doing:
```
say 3/2 ~~ Rat; # True
say (3/2) ~~ Rat; # True
say (3/2).WHAT ~~ Rat; # True
say |((3/2).WHAT ~~ Rat); # True
say (|(3/2).WHAT) ~~ Rat; # False
say |(3/2).WHAT ~~ Rat; # False
```
The last case is because `|` has a higher [precedence](https://docs.raku.org/language/operators#Operator_precedence) than `~~`.
>
> Is there a better way than stringifying and testing for the string equivalent of the Type?
>
>
>
OMG yes.
Use the types, Luke.
(And in your use case, do so using signatures.)
Introspection
=============
Compared to code that manually introspects incoming data in the body of a routine, appropriate use of signatures will typically:
* Read better;
* Generate better low-level code;
* Be partially or fully evaluated during the *compile* phase.
If a language and its compiler have addressed a use case by providing a particular feature, such as signatures, then using that feature instead of introspection will generally lead to the above three benefits.
Languages/compilers can be broken into four categories, namely those that:
1. Do not do or allow any introspection;
2. Allow the *compiler* to introspect, but not *devs*;
3. Allow both the compiler and devs to introspect, but aim to make it a last resort, at least for devs;
4. Enable and encourage devs to introspect.
Raku(do) are in the third category. In the context of this SO, signatures are the primary feature that all but eliminates any need for a dev to manually introspect. | You can simply smartmatch to a type:
```
raku -e "sub a( *@c ) { say @c[0] ~~ Rat };a(3/2);a(2)"
True
False
```
Also, I am using a [slurpy](https://docs.raku.org/type/Signature#index-entry-slurpy_argument) here and not a capture, which is another alternative. Anyway, with a single argument you're probably better off using [type captures](https://docs.raku.org/type/Signature#index-entry-Type_capture):
```
raku -e "sub a( ::T $ ) { say ::T ~~ Rat };a(3/2);a(2)"
True
False
``` |
62,674,228 | I want to test the type of the first object in a signature. The following shows some ways I have found that work. But why does a smart match on a Type (2nd of the 3 tests below) not work?
Is there a better way than stringifying and testing for the string equivalent of the Type?
(Below is the use case I am working on)
```
raku -e "sub a( |c ) { say so |c[0].WHAT.raku ~~ /'Rat'/, so |c[0].WHAT ~~ Rat, so |c[0].^name ~~ /'Rat'/ };a(3/2);a(2)"
TrueFalseTrue
FalseFalseFalse
# OUTPUT:
#TrueFalseTrue
#FalseFalseFalse
```
I am writing a `proto sub handle`, and most of the subs have similar signatures, e.g. `multi sub handle( Pod $node, MyObj $p, Int $level --> Str)`.
So most of the multi subs do different things depending on what is in $node. However, how do I handle the cases when `handle` is called with `Nil` or a plain string? I am thinking about something like
```
proto handle(|c) {
if |c[0].^name ~~ /'Str'/ { # code for string }
else { {*} }
}
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674228",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6293949/"
] | You can simply smartmatch to a type:
```
raku -e "sub a( *@c ) { say @c[0] ~~ Rat };a(3/2);a(2)"
True
False
```
Also, I am using a [slurpy](https://docs.raku.org/type/Signature#index-entry-slurpy_argument) here and not a capture, which is another alternative. Anyway, with a single argument you're probably better off using [type captures](https://docs.raku.org/type/Signature#index-entry-Type_capture):
```
raku -e "sub a( ::T $ ) { say ::T ~~ Rat };a(3/2);a(2)"
True
False
``` | You can pull stuff out of a Capture in the signature.
```raku
# ( |C ( ::Type $a, +@b ) )
proto handle( | ( ::Type, +@ ) ) {
if Type ~~ Str {
…
} else {
{*}
}
}
```
Basically a `::Foo` before a parameter (or instead of it) is similar to `.WHAT` on that parameter.
It also becomes usable as a type descriptor.
```raku
sub foo ( ::Type $a ) {
my Type $b = $a;
}
```
---
It is an incredibly bad idea to compare types based on their name.
```raku
my $a = anon class Foo { has $.a }
my $b = anon class Foo { has $.b }
say $a.WHAT =:= $b.WHAT; # False
say $a.^name eq $b.^name; # True
```
As far as Raku is concerned it is entirely a coincidence that two types happen to have the same name.
If you do use the names, your code will be confused about the reality of the situation. |
62,674,228 | I want to test the type of the first object in a signature. The following shows some ways I have found that work. But why does a smart match on a Type (2nd of the 3 tests below) not work?
Is there a better way than stringifying and testing for the string equivalent of the Type?
(Below is the use case I am working on)
```
raku -e "sub a( |c ) { say so |c[0].WHAT.raku ~~ /'Rat'/, so |c[0].WHAT ~~ Rat, so |c[0].^name ~~ /'Rat'/ };a(3/2);a(2)"
TrueFalseTrue
FalseFalseFalse
# OUTPUT:
#TrueFalseTrue
#FalseFalseFalse
```
I am writing a `proto sub handle`, and most of the subs have similar signatures, e.g. `multi sub handle( Pod $node, MyObj $p, Int $level --> Str)`.
So most of the multi subs do different things depending on what is in $node. However, how do I handle the cases when `handle` is called with `Nil` or a plain string? I am thinking about something like
```
proto handle(|c) {
if |c[0].^name ~~ /'Str'/ { # code for string }
else { {*} }
}
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674228",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6293949/"
] | >
> A better way to introspect ...
>
>
>
In general, a better way to do *anything* in any programming language is to *not* introspect if you can avoid it.
In general, in Raku, you *can* avoid manual introspection. See the section **Introspection** toward the end of this answer for further discussion.
>
> ... a capture
>
>
>
The best tool for getting the functionality that introspection of a capture provides is to use a signature. That's their main purpose in life.
>
> I want to test the type of the first object in a signature
>
>
>
Use signatures:
```
proto handle(|) {*}
multi handle( Pod $node ) { ... }
multi handle( Str $string ) { ... }
multi handle( Nil ) { ... }
```
>
> The following shows some ways I have found that work.
>
>
>
While they do what you want, they are essentially ignoring all of Raku's signature features. They reduce the signature to just a binding to the capture as a single structure; and then use manual introspection of that capture in the routine's body.
There's almost always a simpler and better way to do such things using signatures.
>
> why does [`|c[0].WHAT ~~ Rat`, with `c[0] == 3/2`] not work?
>
>
>
I'll simplify first, then end up with what your code is doing:
```
say 3/2 ~~ Rat; # True
say (3/2) ~~ Rat; # True
say (3/2).WHAT ~~ Rat; # True
say |((3/2).WHAT ~~ Rat); # True
say (|(3/2).WHAT) ~~ Rat; # False
say |(3/2).WHAT ~~ Rat; # False
```
The last case is because `|` has a higher [precedence](https://docs.raku.org/language/operators#Operator_precedence) than `~~`.
>
> Is there a better way than stringifying and testing for the string equivalent of the Type?
>
>
>
OMG yes.
Use the types, Luke.
(And in your use case, do so using signatures.)
Introspection
=============
Compared to code that manually introspects incoming data in the body of a routine, appropriate use of signatures will typically:
* Read better;
* Generate better low-level code;
* Be partially or fully evaluated during the *compile* phase.
If a language and its compiler have addressed a use case by providing a particular feature, such as signatures, then using that feature instead of introspection will generally lead to the above three benefits.
Languages/compilers can be broken into four categories, namely those that:
1. Do not do or allow any introspection;
2. Allow the *compiler* to introspect, but not *devs*;
3. Allow both the compiler and devs to introspect, but aim to make it a last resort, at least for devs;
4. Enable and encourage devs to introspect.
Raku(do) are in the third category. In the context of this SO, signatures are the primary feature that all but eliminates any need for a dev to manually introspect. | You can pull stuff out of a Capture in the signature.
```raku
# ( |C ( ::Type $a, +@b ) )
proto handle( | ( ::Type, +@ ) ) {
if Type ~~ Str {
…
} else {
{*}
}
}
```
Basically a `::Foo` before a parameter (or instead of it) is similar to `.WHAT` on that parameter.
It also becomes usable as a type descriptor.
```raku
sub foo ( ::Type $a ) {
my Type $b = $a;
}
```
---
It is an incredibly bad idea to compare types based on their name.
```raku
my $a = anon class Foo { has $.a }
my $b = anon class Foo { has $.b }
say $a.WHAT =:= $b.WHAT; # False
say $a.^name eq $b.^name; # True
```
As far as Raku is concerned it is entirely a coincidence that two types happen to have the same name.
If you do use the names, your code will be confused about the reality of the situation. |
62,674,236 | I have a backend app in Django (Python) and it is served at http://localhost:8000.
I have an Angular frontend which is served at http://localhost:4200.
I have disabled CORS on Django.
On hitting the login API at http://localhost:8000/auth/login/, I am getting a valid response
along with the Set-Cookie header.
[![enter image description here](https://i.stack.imgur.com/Xcz3X.png)](https://i.stack.imgur.com/Xcz3X.png)
Here is my angular code to print the cookies:
```
this.http.post<any>('http://localhost:8000/auth/login/', this.LoginForm, { observe: 'response' }).subscribe(response => {
console.log("response is ", response);
var cookies = this.cookieService.getAll();//('cookies');
console.log("cookies is :", cookies);
```
It prints an empty object to the console.
How do I make this work? I want to use cookies for authentication. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674236",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1109919/"
] | You are trying to set cross-domain cookies, which will not work straight away. There are a few steps to follow to be able to do that.
1. Set `withCredentials: true` when making the authentication request from Angular
`this.http.post<any>('http://localhost:8000/auth/login/', this.LoginForm, { observe: 'response', withCredentials: true })`
2. Configure your server to return the following CORS headers: `Access-Control-Allow-Credentials: true` and `Access-Control-Allow-Origin: http://localhost:4200`
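With the [django-cors-headers](https://github.com/adamchainz/django-cors-headers) package, a sketch of step 2 could look like the following (the setting names assume a recent version of that package; adapt them to whatever CORS mechanism you actually use):
```
# settings.py -- make Django emit the two headers from step 2
CORS_ALLOWED_ORIGINS = ["http://localhost:4200"]  # Access-Control-Allow-Origin
CORS_ALLOW_CREDENTIALS = True                     # Access-Control-Allow-Credentials
```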
**Note**
One of the cookies that you are setting is `HttpOnly`. As such, you cannot access it from Javascript (see [documentation](https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies#Security)).
You may not need to access the cookies with JS anyway. If you just want to send the cookies in the next API requests, just pass `withCredentials: true` to the other `HttpClient` API calls:
```
this.http.get('http://localhost:8000/path/to/get/resource',
{ withCredentials: true }).subscribe(response => {
``` | ### Set-Cookies:
In the example in the Question, both client and server are in the same domain, localhost.
On deployment, this may not be the case.
Let us assume the domains as below,
* Client : client1.client.com
* Server: server1.server.com
A http request from the Angular web app in `client1.client.com` to <https://server1.server.com/api/v1/getSomething> has `Set-Cookie: JSESSIONID=xyz` in the response header.
The cookie will be set on `server1.server.com` and NOT on `client1.client.com`.
You can enter `server1.server.com` in the URL bar and see the cookie being set.
### withCredentials:
There is no need for the angular app to read the cookie and send it in the following requests. `withCredentials` property of http request can be used for this.
Refer: <https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/withCredentials>
Example:
```
public getSomething(): Observable<object> {
const httpOptions = {
withCredentials: true
};
return this.http.get(`${this.serverUrl}/getSomething`, httpOptions);
}
```
Refer: <https://angular.io/api/common/http/HttpRequest>
`withCredentials` will set the cookies from the server's domain in the requests to the server.
As mentioned before `Set-Cookie: JSESSIONID=xyz` in the response from `server1.server.com` will be set in `server1.server.com`. The Angular app in `client1.client.com` need not read it. `withCredentials` will take care of it.
### cross domain issues:
When the server and client are in different domains, using withCredentials may not work in all browsers, as such cookies are considered third-party cookies.
In my recent testing in May 2020, I found that withCredentials does not work in certain browsers when the client and server are in different domains.
[![enter image description here](https://i.stack.imgur.com/w0oyB.png)](https://i.stack.imgur.com/w0oyB.png)
* In Safari, the issue occurs when "Prevent cross-site tracking" is enabled (by default). The issue is prevented by disabling the same. <https://support.apple.com/en-in/guide/safari/sfri40732/mac>
* In Android apps, the issue can be avoided by using Chrome Custom Tabs instead of Android WebView. <https://github.com/NewtonJoshua/custom-tabs-client> , <https://developer.chrome.com/multidevice/android/customtabs>
### Same domain:
Looks like mainstream browsers are moving to block third-party cookies.
* Safari - [Full Third-Party Cookie Blocking and More](https://webkit.org/blog/10218/full-third-party-cookie-blocking-and-more/)
* Chrome (by 2022) - [Building a more private web: A path towards making third party cookies obsolete](https://blog.chromium.org/2020/01/building-more-private-web-path-towards.html)
The solution is to have both the client and server in the same domain.
* Client: client1.myapp.com
* Server: server1.myapp.com
And in the `Set-Cookie` response include the root domain too.
Example: "JSESSIONID=xyz; Domain=.myapp.com; Path=/"
This will make sure the cookies are set in all cases. |
62,674,248 | The example below appears on both Stroustrup's and cppreference's websites:
```
struct S {
char a; // location #1
int b:5, // location #2
int c:11,
int :0, // note: :0 is "special"
int d:8; // location #3
struct {int ee:8;} e; // location #4
};
```
Considering the definition of *memory location* provided by the C++ standard:
* An object of scalar type;
* The largest contiguous sequence of bit fields of non-zero length.
It is quite clear that:
* 'a' is one location;
* 'b-c' is another location;
* 'd' is the start of a new location.
However, it is not clear to me, considering the definition above, why 'd-ee' (or 'd-e') is not a single location but two.
References:
* <https://www.stroustrup.com/C++11FAQ.html#memory-model>
* <https://en.cppreference.com/w/cpp/language/memory_model> | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674248",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4690542/"
] | The C++ standard ([6.7.1 intro.memory](https://timsong-cpp.github.io/cppwp/intro.memory)) uses extremely similar language and even exactly the same example:
>
> ... or a maximal sequence of adjacent bit-fields all having nonzero width ...
>
>
> [ Example: A class declared as
>
>
>
> ```
> struct {
> char a;
> int b:5,
> c:11,
> :0,
> d:8;
> struct {int ee:8;} e;
> }
>
> ```
>
> contains four separate memory locations: The member `a` and bit-fields `d` and `e.ee` are each separate memory locations ...
>
>
>
One difference is that it uses the word "adjacent" rather than "contiguous".
Are `d` and `ee` *contiguous*? The word "contiguous" literally means without gaps in between, which seems to suggest it's talking about memory layout (do they have padding between?). But this is the part of the standard that defines memory layout, so it would be circular if it defined things in terms of this! This is a very poor choice of word and seems to be the source of the confusion.
Are `d` and `ee` *adjacent*? This is certainly a better word since it's more obviously about the code rather than about memory layout, but I think you could still interpret it so that the answer is either yes or no in this case. But given that there is an example that shows they're not, we have to accept "adjacent" as a shorthand for "fields whose definitions are adjacent to each other directly within the same struct". | First, there is a syntax error: bit-fields are separate declarations.
```
struct S {
char a; // location #1 and storage of subobject a
int b:5; // location #2, beginning of new sequence
int c:11;
int :0; // note: :0 is "special", it ends sequence of fields
int d:8; // location #3, beginning of new sequence
struct { // storage of subobject e, sequence of fields is over
int ee:8; // location #4 within e
} e;
};
```
The order of memory locations for the fields is implementation-defined.
In terms of C++ definitions, `a`, `b`, `c`, `d` and `e` are separate objects. The bit-fields `b` and `c` are allowed to share a memory location (a physical word or byte) because they belong to the same contiguous sequence of non-zero-length bit-fields, whereas `e` is an object of class type and must have its own storage. The non-static member `ee` is a sub-object of `e`, which means it occupies part of `e`'s storage, and that storage is guaranteed to be a separate memory location.
As a result, you can take the address (`operator&`) of `e`, but not of the bit-fields `d` or `ee`. |
62,674,257 | ```
var test: Int! {
didSet {
switch test {
case 1: print("one")
case 2: print("two")
default: print("something else")
}
}
}
var toupleTest: (one: Int, two: Int)! {
didSet {
switch toupleTest! {
case (1, 1): print("one, one")
case (2, 2): print("two, two")
default: print("something else")
}
}
}
test = 2 // prints "two"
toupleTest = (1, 1) // prints "one, one"
```
In the case of an Int value everything is OK.
But in the case of a tuple I have to unwrap the optional twice! Otherwise I get a compile error.
It looks like Swift's logic is inconsistent. Or is it a bug? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674257",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2167345/"
] | It is inconsistent, but the reason is a bit complex.
In order for a type to be used in a `switch` statement, it needs to conform to `Equatable`.
Look at this example using a new `struct` type:
```
struct MyType: Equatable {
let x: Int
static func ==(lhs: MyType, rhs: MyType) -> Bool {
return lhs.x == rhs.x
}
}
var mytypeTest: MyType! {
didSet {
switch mytypeTest {
case MyType(x: 1): print("one")
case MyType(x: 2): print("two")
default: print("something else")
}
}
}
mytypeTest = MyType(x: 1)
```
This works, but if you remove `: Equatable` from `MyType` you will get the error **Operator function '~=' requires that 'MyType' conform to 'Equatable'**.
So there's the first hint. `switch` uses the `~=` operator for comparisons, and the type must be `Equatable`.
So what happens if we try to compare two tuples using `~=`:
```
if (1, 3) ~= (1, 3) {
print("same")
}
```
This gives the error: **Type '(Int, Int)' cannot conform to 'Equatable'; only struct/enum/class types can conform to protocols**.
So, this would imply that tuples can't be used in a `switch` and we know that isn't true.
Well, tuples have a special place in a `switch`, and they're used in *pattern matching* for deconstructing the tuple. For example:
```
let a = (1, 2)
switch a {
case let (x, y):
print("the tuple values are \(x) and \(y)")
}
```
This prints `the tuple values are 1 and 2`.
So, tuples are used in `switch` for matching and deconstructing using *pattern matching*. So you can use a tuple in a `switch` even though it doesn't conform to `Equatable` because it has this special use.
The problem with your example is that the pattern doesn't match the type you are switching on. The type of your value is `(Int, Int)?` and the pattern is `(Int, Int)`.
So, how can you fix this without force unwrapping the tuple value? Change your pattern to match by adding `?` to the pattern:
```
var toupleTest: (one: Int, two: Int)! {
didSet {
switch toupleTest {
case (1, 1)?: print("one, one")
case (2, 2)?: print("two, two")
default: print("something else")
}
}
}
```
Note: Adding `?` works for your `Int` example as well:
```
var test: Int! {
didSet {
switch test {
case 1?: print("one")
case 2?: print("two")
default: print("something else")
}
}
}
```
but it isn't necessary because `Int` is `Equatable` and Swift knows how to compare an `Int?` to an `Int` for equality. | I think that's happened because tuples is value type in swift. And default types also default types as well. Basically your unwrap do nothing, you just mark that this value will be non optional. But to access values inside you have to unwrap your optional tuple and reach values |
62,674,261 | I want to extract the paragraphs after the `AB -` tag, which can appear 9000 times in a text file.
**Minified example:**
```
AB - This is the part I want to match !
CD - This part is useless
AB - I can also match
texts on multiple
lines !
EF - Did you get my problem ?
GH - Ok, i think that's
enough.
```
**Expected output:**
```
This is the part I want to match !
I can also match
texts on multiple
lines !
```
[Here](https://i.stack.imgur.com/FkGID.png) is a screenshot of the real file, if you want to see what it really looks like.
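For illustration, a minimal sketch of one possible approach (assuming Python; the file name is hypothetical and the two-letter tag pattern is inferred from the sample above):
```
import re

# "records.txt" is a hypothetical file in the format shown above
with open("records.txt") as f:
    text = f.read()

# Capture everything after "AB - " up to the next two-letter tag or end of file
pattern = re.compile(r"^AB - (.*?)(?=^\w\w - |\Z)", re.MULTILINE | re.DOTALL)
for abstract in pattern.findall(text):
    print(abstract.strip())
```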
Kindly help me with how I can delete the extra information, or please guide me on how I can extract only the abstracts without any other information. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13846214/"
] | |
62,674,272 | The description of the Kafka topic *cleanup.policy* configuration is
>
> A string that is either "delete" or "compact" or both. [...]
>
>
>
I wonder how to set both values. I am not able to get it to work. An attempt to change the configuration this way does not have the desired effect:
```
c:\Progs\kafka_2.12-2.2.0\bin\windows>kafka-configs.bat --zookeeper <...> --entity-type topics --entity-name MyTopic --alter --add-config cleanup.policy=[delete, compact]
Completed Updating config for entity: topic 'MyTopic'.
c:\Progs\kafka_2.12-2.2.0\bin\windows>kafka-configs.bat --zookeeper <...> --entity-type topics --entity-name MyTopic --describe
Configs for topic 'MyTopic' are cleanup.policy=delete,segment.ms=300000,retention.ms=86400000
```
And this way does not work either:
```
c:\Progs\kafka_2.12-2.2.0\bin\windows>kafka-configs.bat --zookeeper <...> --entity-type topics --entity-name MyTopic --alter --add-config "cleanup.policy=delete compact"
Error while executing config command with args '--zookeeper <...> --entity-type topics --entity-name MyTopic --alter --add-config cleanup.policy=delete compact'
org.apache.kafka.common.config.ConfigException: Invalid value delete compact for configuration cleanup.policy: String must be one of: compact, delete
at org.apache.kafka.common.config.ConfigDef$ValidString.ensureValid(ConfigDef.java:931)
at org.apache.kafka.common.config.ConfigDef$ValidList.ensureValid(ConfigDef.java:907)
at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:480)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:464)
at kafka.log.LogConfig$.validate(LogConfig.scala:305)
at kafka.zk.AdminZkClient.validateTopicConfig(AdminZkClient.scala:319)
at kafka.zk.AdminZkClient.changeTopicConfig(AdminZkClient.scala:331)
at kafka.zk.AdminZkClient.changeConfigs(AdminZkClient.scala:268)
at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:152)
at kafka.admin.ConfigCommand$.processCommandWithZk(ConfigCommand.scala:103)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:80)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674272",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/367285/"
] | |
62,674,275 | I have a blank UWP project packed into a Windows Application Packaging Project.
Both projects have:
* Target version **Windows 10, version 1903 (10.0; Build 18362)**
* Min version **Windows 10, version 1809 (10.0; Build 17763)**
[![enter image description here](https://i.stack.imgur.com/Wg7Ng.png)](https://i.stack.imgur.com/Wg7Ng.png)
I wrote some simple code in **App.xaml.cs** to create a settings value:
```
using Windows.Storage;
...
protected override void OnLaunched(LaunchActivatedEventArgs e)
{
ApplicationDataContainer Sett = ApplicationData.Current.LocalSettings;
if (Sett.Values["test"] == null)
{
Sett.Values["test"] = true;
}
// Sett.Values["test"] = true; // Also causes an error
...
}
```
When I run TestPkg (x86 debug) I get this message in the debug console:
```
onecoreuap\base\appmodel\statemanager\winrt\lib\windows.storage.applicationdatafactory.server.cpp(235)\Windows.Storage.ApplicationData.dll!7B6D1391: (caller: 05769996) ReturnHr(1) tid(546c) 8000000B The operation attempted to access data outside the valid range
Msg:[User XXX]
```
The value is saved correctly; I can read it without problems.
The error only occurs if you pack the UWP app into a Windows Application Packaging Project.
I also tried this with windows.fullTrustProcess (UWP + WPF). In both cases, the behavior is similar.
How can I fix this?
Source: <https://github.com/steam3d/Windows.Storage.Error> | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674275",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12738889/"
] | I also see this error. I’m using VS 2019.
Here are some more data points about the problem. Maybe they will provide a clue.
1. I kept commenting out my app initialization code and eventually found that the one statement
var localSettings = Windows.Storage.ApplicationData.Current.LocalSettings;
would generate the error “The operation attempted to access data outside the valid range”
2. After my app is initialized, I use the statement above before writing to the localSettings. In that case, I get a different error “There are no more endpoints available from the endpoint mapper.”
3. It seems to be real time related. In one use case of my app, I issue the statement above every 5 seconds. In that case, I get the error “There are no more endpoints...” at total seconds of 5, 15, 25, etc. but not at total seconds of 10, 20, 30, etc.
If I put a breakpoint at the statement, and slow things down, then I get the error every time as in 5, 10, 15, 20, 25, etc. | Check the below code; it may solve it:
```
ApplicationDataContainer Sett = ApplicationData.Current.LocalSettings;
var value = Sett.Values["#Test"];
if (value == null)
{
    Sett.Values["#Test"] = 5;
}
```
62,674,275 | I have a blank UWP project packed into a Windows Application Packaging Project.
Both projects have:
* Target version **Windows 10, version 1903 (10.0; Build 18362)**
* Min version **Windows 10, version 1809 (10.0; Build 17763)**
[![enter image description here](https://i.stack.imgur.com/Wg7Ng.png)](https://i.stack.imgur.com/Wg7Ng.png)
I wrote some simple code in **App.xaml.cs** to create a settings value:
```
using Windows.Storage;
...
protected override void OnLaunched(LaunchActivatedEventArgs e)
{
ApplicationDataContainer Sett = ApplicationData.Current.LocalSettings;
if (Sett.Values["test"] == null)
{
Sett.Values["test"] = true;
}
// Sett.Values["test"] = true; // Also causes an error
...
}
```
When I run TestPkg (x86 debug) I get this message in the debug console:
```
onecoreuap\base\appmodel\statemanager\winrt\lib\windows.storage.applicationdatafactory.server.cpp(235)\Windows.Storage.ApplicationData.dll!7B6D1391: (caller: 05769996) ReturnHr(1) tid(546c) 8000000B The operation attempted to access data outside the valid range
Msg:[User XXX]
```
The value is saved correctly; I can read it without problems.
The error only occurs if you pack the UWP app into a Windows Application Packaging Project.
I also tried this with windows.fullTrustProcess (UWP + WPF). In both cases, the behavior is similar.
How can I fix this?
Source: <https://github.com/steam3d/Windows.Storage.Error> | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674275",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12738889/"
] | I had a similar error using WinUI 3 Preview 3 with a desktop app packaged in MSIX.
In my case the exception occurred some time during launch. I do have code in there that accesses Windows.Storage.ApplicationData.
The exception I see in my debugger is copied below with a source code line.
onecoreuap\base\appmodel\statemanager\winrt\lib\windows.storage.applicationdatafactory.server.cpp(235)\Windows.Storage.ApplicationData.dll!16891391: (caller: 0E7F9FB1) ReturnHr(1) tid(33bc) 8000000B The operation attempted to access data outside the valid range
Msg:[User S-1-5-21-3122765350-3099923779-1351685958-1002]
Exception thrown at 0x7679A892 (KernelBase.dll) in Q5U.exe: 0x40080202: WinRT transform error (parameters: 0x8000000B, 0x80070490, 0x00000014, 0x005BE3F4).
OnLaunched (Kind=Launch) | Check the below code; it may solve it:
```
ApplicationDataContainer Sett = ApplicationData.Current.LocalSettings;
var value = Sett.Values["#Test"];
if (value == null)
{
    Sett.Values["#Test"] = 5;
}
```
62,674,295 | I have a big JSON file, about 40GB in size. When I try to convert this file (an array of objects) to a list of Java objects, it crashes. I've tried every maximum heap size (`-Xmx`) but nothing has worked!
```
public Set<Interlocutor> readJsonInterlocutorsToPersist() {
String userHome = System.getProperty(USER_HOME);
log.debug("Read file interlocutors "+userHome);
try {
ObjectMapper mapper = new ObjectMapper();
// JSON file to Java object
Set<Interlocutor> interlocutorDeEntities = mapper.readValue(
new File(userHome + INTERLOCUTORS_TO_PERSIST),
new TypeReference<Set<Interlocutor>>() {
});
return interlocutorDeEntities;
} catch (Exception e) {
log.error("Exception while Reading InterlocutorsToPersist file.",
e.getMessage());
return null;
}
}
```
Is there a way to read this file using `BufferedReader` and then to push object by object? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674295",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1544690/"
] | >
> is there a way to read this file using BufferedReader and then to push
> object by object ?
>
>
>
Of course not. Even if you can open this file, how can you store 40GB as Java objects in memory? I don't think you have that much memory in your computer (and technically, using `ObjectMapper` you would need about twice as much working memory: 40GB to store the JSON plus 40GB to store the results as Java objects = 80GB).
I think you should use one of the approaches from this [question](https://stackoverflow.com/questions/9390368/java-best-approach-to-parse-huge-extra-large-json-file), but store the information in databases or files instead of memory. For example, if you have millions of rows in the JSON, you should parse and save each row to the database without keeping it all in memory. Then you can read this data back from the database step by step (for example, no more than 1GB at a time). | You should definitely have a look at the Jackson Streaming API (<https://www.baeldung.com/jackson-streaming-api>). I used it myself for multi-GB JSON files. The great thing is you can divide your JSON into several smaller JSON objects and then parse them with `mapper.readTree(parser)`. That way you can combine the convenience of normal Jackson with the speed and scalability of the Streaming API.
Related to your problem:
I understand that you have a really large array (which is the reason for the file size) of much smaller, more manageable objects:
e.g.:
```
[ // 40GB
{}, // Only 400 MB
{},
]
```
What you can do now is parse the file with Jackson's Streaming API and walk through the array. Each individual object can then be parsed as a "regular" Jackson object and processed easily.
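As a loose analogue of the same streaming idea in Python (a sketch using the third-party `ijson` package; this is illustrative only and is not the Jackson API described above):
```
import ijson  # third-party: pip install ijson

# "huge.json" is a hypothetical file whose top level is one large array
with open("huge.json", "rb") as f:
    for obj in ijson.items(f, "item"):  # yields one array element at a time
        save_to_database(obj)  # placeholder for per-object processing
```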
You may have a look at this [Use Jackson To Stream Parse an Array of Json Objects](https://stackoverflow.com/questions/24835431/use-jackson-to-stream-parse-an-array-of-json-objects) which actually matches your problem pretty well. |
62,674,306 | I would like to override the method of an instance of class A with a method from class B, but in such a way that all references to the old method of the instance (made before overriding the method) then 'link' to the new one. In code:
```
import types

class A:
    def foo(self):
        print('A')

class B:
    def foo(self):
        print('B')

class C:
    def __init__(self, a):
        self.func = a.foo

    def do_something(self):
        self.func()

a = A()
c = C(a)

method_name = 'foo'  # it has to be dynamic
new_method = getattr(B, method_name)
setattr(a, method_name, types.MethodType(new_method, a))

c.do_something()  # still prints A, I want it to print B now
```
I want `c.func` to hold the new method from class B after the attribute of a has been set (without doing anything with the c object).
Is there a way to set the attribute of the instance a so that all previously made references then refer to the new method?
Sorry if this question is kind of stupid, I am not that much into this. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674306",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6793328/"
] | You could do it like this, for example:
```
...
def retain(foo):
return lambda *args, **kwargs: getattr(foo.__self__, foo.__name__)(*args, **kwargs)
class C:
def __init__(self, a):
self.func = retain(a.foo)
...
``` | Just adding to Alex's answer.
In my case, the described dynamic partly comes from a need for serialization and deserialization. To serialize certain method references, I used to use `func.__name__`.
However, `c.func.__name__` would only return `<lambda>` using Alex's approach. I prevented this by creating a callable class that uses the retain function but stores the method's name separately, which in my case is enough because it's just some specific references I need to serialize.
```
import types

def retain(foo):
    return lambda *args, **kwargs: getattr(foo.__self__, foo.__name__)(*args, **kwargs)

class M:
    def __init__(self, method):
        self.method_name = method.__name__
        self.method = retain(method)

    def __call__(self, *args, **kwargs):
        self.method(*args, **kwargs)

class A:
    def foo(self):
        print('A')

class B:
    def foo(self):
        print('B')

class C:
    def __init__(self, method):
        self.func = M(method)

    def do_something(self):
        self.func()

a = A()
c = C(a.foo)
# 'foo' is used directly here; method_name is only assigned further down
setattr(a, 'foo', types.MethodType(getattr(B, 'foo'), a))
c.do_something()  # now prints B

# when serializing
method_name = c.func.method_name
``` |
62,674,312 | I have the following array:
```
[
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 1 },
{ label: 'Last Name*', results: 'ggg', sequence: 4, entry_id: 1 },
{ label: 'Country', results: 'India', sequence: 3, entry_id: 1 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 1 },
{ label: 'address', results: '', sequence: 5, entry_id: 1 },
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 1 },
{ label: 'Last Name*', results: 'ggg', sequence: 4, entry_id: 1 },
{ label: 'Country', results: 'India', sequence: 3, entry_id: 1 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 1 },
{ label: 'address', results: '', sequence: 5, entry_id: 1 },
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 1 },
{ label: 'Last Name*', results: 'ggg', sequence: 4, entry_id: 1 },
{ label: 'Country', results: 'India', sequence: 3, entry_id: 1 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 1 },
{ label: 'address', results: 'address', sequence: 5, entry_id: 1 },
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 1 },
{ label: 'Last Name*', results: 'bansal', sequence: 4, entry_id: 1 },
{ label: 'Country', results: 'India', sequence: 3, entry_id: 1 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 1 },
{ label: 'address', results: '', sequence: 5, entry_id: 1 },
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 2 },
{ label: 'Last Name*', results: 'ggg', sequence: 4, entry_id: 2 },
{ label: 'Country', results: 'India', sequence: 3, entry_id: 2 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 2 },
{ label: 'address', results: '', sequence: 5, entry_id: 2 },
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 3 },
{
label: 'Last Name*',
results: 'sss.ca',
sequence: 4,
entry_id: 3
},
{ label: 'Country', results: 'India', sequence: 3, entry_id: 3 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 3 },
{ label: 'address', results: '', sequence: 5, entry_id: 3 },
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 4 },
{
label: 'Last Name*',
results: 'ssss.ca',
sequence: 4,
entry_id: 4
},
{ label: 'Country', results: 'India', sequence: 3, entry_id: 4 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 4 },
{ label: 'address', results: 'add', sequence: 5, entry_id: 4 }
]
```
I need to convert it into an HTML table:
```
<table><tr><th>Untitled</th><th>Last Name*</th><th>Category</th><th>address</th></tr>
<tr><td>1234</td><td>ggg</td><td>test</td><td></td></tr>
... and so on
</table>
```
Can anybody tell me how to restructure this array to simplify it so I can create a table? Do we need a multidimensional array? I need the final HTML string. Any trick, please.
Thanks | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674312",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13846339/"
] | Here is an example of a solution using JS:
```js
var html_code = "<table><tr><th>label</th><th>results</th><th>sequence</th><th>entry_id</th></tr>";
var arr= [
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 1 },
{ label: 'Last Name*', results: 'ggg', sequence: 4, entry_id: 1 },
{ label: 'Country', results: 'India', sequence: 3, entry_id: 1 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 1 },
{ label: 'address', results: '', sequence: 5, entry_id: 1 },
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 1 },
{ label: 'Last Name*', results: 'ggg', sequence: 4, entry_id: 1 },
{ label: 'Country', results: 'India', sequence: 3, entry_id: 1 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 1 },
{ label: 'address', results: '', sequence: 5, entry_id: 1 },
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 1 },
{ label: 'Last Name*', results: 'ggg', sequence: 4, entry_id: 1 },
{ label: 'Country', results: 'India', sequence: 3, entry_id: 1 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 1 },
{ label: 'address', results: 'address', sequence: 5, entry_id: 1 },
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 1 },
{ label: 'Last Name*', results: 'bansal', sequence: 4, entry_id: 1 },
{ label: 'Country', results: 'India', sequence: 3, entry_id: 1 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 1 },
{ label: 'address', results: '', sequence: 5, entry_id: 1 },
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 2 },
{ label: 'Last Name*', results: 'ggg', sequence: 4, entry_id: 2 },
{ label: 'Country', results: 'India', sequence: 3, entry_id: 2 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 2 },
{ label: 'address', results: '', sequence: 5, entry_id: 2 },
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 3 },
{
label: 'Last Name*',
results: 'sss.ca',
sequence: 4,
entry_id: 3
},
{ label: 'Country', results: 'India', sequence: 3, entry_id: 3 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 3 },
{ label: 'address', results: '', sequence: 5, entry_id: 3 },
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 4 },
{
label: 'Last Name*',
results: 'ssss.ca',
sequence: 4,
entry_id: 4
},
{ label: 'Country', results: 'India', sequence: 3, entry_id: 4 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 4 },
{ label: 'address', results: 'add', sequence: 5, entry_id: 4 }
];
for(var i=0;i<arr.length;i++)
{
html_code+="<tr><td>"+ arr[i].label +"</td><td>"+ arr[i].results +"</td><td>"+ arr[i].sequence +"</td><td>"+ arr[i].entry_id +"</td></tr>";
}
html_code+="</table>";
document.getElementById("test").innerHTML = html_code;
```
```html
<div id="test"></div>
``` | You can also try this JQuery Code.
```
//HTML code
<div id="here_table"></div>
//Script
var arr= [
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 1 },
{ label: 'Last Name*', results: 'ggg', sequence: 4, entry_id: 1 },
{ label: 'Country', results: 'India', sequence: 3, entry_id: 1 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 1 },
{ label: 'address', results: '', sequence: 5, entry_id: 1 },
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 1 },
{ label: 'Last Name*', results: 'ggg', sequence: 4, entry_id: 1 },
{ label: 'Country', results: 'India', sequence: 3, entry_id: 1 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 1 },
{ label: 'address', results: '', sequence: 5, entry_id: 1 },
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 1 },
{ label: 'Last Name*', results: 'ggg', sequence: 4, entry_id: 1 },
{ label: 'Country', results: 'India', sequence: 3, entry_id: 1 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 1 },
{ label: 'address', results: 'address', sequence: 5, entry_id: 1 },
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 1 },
{ label: 'Last Name*', results: 'bansal', sequence: 4, entry_id: 1 },
{ label: 'Country', results: 'India', sequence: 3, entry_id: 1 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 1 },
{ label: 'address', results: '', sequence: 5, entry_id: 1 },
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 2 },
{ label: 'Last Name*', results: 'ggg', sequence: 4, entry_id: 2 },
{ label: 'Country', results: 'India', sequence: 3, entry_id: 2 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 2 },
{ label: 'address', results: '', sequence: 5, entry_id: 2 },
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 3 },
{
label: 'Last Name*',
results: 'sss.ca',
sequence: 4,
entry_id: 3
},
{ label: 'Country', results: 'India', sequence: 3, entry_id: 3 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 3 },
{ label: 'address', results: '', sequence: 5, entry_id: 3 },
{ label: 'Untitled', results: '1234', sequence: 2, entry_id: 4 },
{
label: 'Last Name*',
results: 'ssss.ca',
sequence: 4,
entry_id: 4
},
{ label: 'Country', results: 'India', sequence: 3, entry_id: 4 },
{ label: 'Category', results: 'test', sequence: 6, entry_id: 4 },
{ label: 'address', results: 'add', sequence: 5, entry_id: 4 }
];
var content = "<table>"
arr.forEach(function(singleRecord) {
content += '<tr><td>' + singleRecord.label + '</td><td>' + singleRecord.results + '</td><td>' + singleRecord.sequence + '</td></tr>';
});
content += "</table>"
$('#here_table').append(content);
``` |
62,674,323 | I have stable repo configured
```
▶ helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
```
and I know I can perform
```
helm install stable/jenkins
```
Then why isn’t the following command retrieving any results?
```
▶ helm search repo stable/jenkins
No results found
~
▶ helm search repo jenkins
No results found
```
Using
```
▶ helm version --tls
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.8", GitCommit:"145206680c1d5c28e3fcf30d6f596f0ba84fcb47", GitTreeState:"clean"}
```
**edit**: even after an update
```
▶ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "flagger" chart repository
...Successfully got an update from the "incubator" chart repository
...Successfully got an update from the "stakater" chart repository
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈ Happy Helming!⎈
~/ 40m
▶ helm search repo stable/jenkins
No results found
```
I even tried to `remove` and `add` back again the `stable` repo; same result. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674323",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2409793/"
] | You are running `helm search repo stable/jenkins` and this is helm 3 syntax.
Have a look at this help for **helm3**:
```
$ helm search --help
Search provides the ability to search for Helm charts in the various places
they can be stored including the Helm Hub and repositories you have added. Use
search subcommands to search different locations for charts.
Usage:
helm search [command]
Available Commands:
hub search for charts in the Helm Hub or an instance of Monocular
repo search repositories for a keyword in charts
```
But in you question you wrote:
>
> helm version --tls
>
> Client: &version.Version{SemVer:"**v2.9.1** ...
>
>
>
This means that you are using **helm 2**. Now let's have a look at the **helm 2** help command:
```
$ helm search --help
...
To look for charts with a particular name (such as stable/mysql), try
searching using vertical tabs (\v). Vertical tabs are used as the delimiter
between search fields. For example:
helm search --regexp '\vstable/mysql\v'
To search for charts using common keywords (such as "database" or
"key-value store"), use
helm search database
or
helm search key-value store
Usage:
helm search [keyword] [flags]
```
**TLDR:** Use:
```
helm search stable/jenkins
```
Let me know if you have any further questions. I'd be happy to help. | Try updating your repositories:
```sh
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
$ helm search repo stable/jenkins
NAME CHART VERSION APP VERSION DESCRIPTION
stable/jenkins 2.1.0 lts Open source continuous integration server. It s...
``` |
62,674,331 | I have created an Elasticsearch service in AWS with `Dev Testing (t2 small)`.
Details are shown below:
```
VPC: vpc-7620c30b
Security Groups
sg-7e9b1759
IAM Role: AWSServiceRoleForAmazonElasticsearchService
AZs and Subnets
us-east-1e: subnet-2f100a11
```
How can I access my VPC endpoint <https://vpc-xxx.us-east-1.es.amazonaws.com> from outside?
Kibana is below: <https://vpc-xx.us-east-1.es.amazonaws.com/_plugin/kibana/>
I am not running on an EC2 instance. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1875458/"
] | From [docs](https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-vpc.html):
>
> To access the default installation of Kibana for a domain that resides within a VPC, users **must have access to the VPC**. This process varies by network configuration, but likely involves connecting to a VPN or managed network or using a **proxy server**.
>
>
>
One way of setting up the proxy server has been explained in detail in the recent **AWS blog** post:
* [How do I use an NGINX proxy to access Kibana from outside a VPC that's using Amazon Cognito authentication?](https://aws.amazon.com/premiumsupport/knowledge-center/kibana-outside-vpc-nginx-elasticsearch/)
The instructions could also be adapted to not using Cognito.
Extra links, with other, probably easier setups using **ssh tunnels**:
* [How to connect to AWS Elasticsearch cluster from outside of the VPC](https://dev.to/murat/how-to-connect-to-aws-elasticsearch-cluster-from-outside-of-the-vpc-5kb)
* [How To: Access Your AWS VPC-based Elasticsearch Cluster Locally](https://www.jeremydaly.com/access-aws-vpc-based-elasticsearch-cluster-locally/)
* [SSH Tunnel Access to AWS ElasticSearch Domain and Kibana | Howto](https://www.devopswiki.co.uk/wiki/elasticsearch/aws-elasticsearch-via-ssh-tunnel)
* [How can I use an SSH tunnel to access Kibana from outside of a VPC with Amazon Cognito authentication?](https://aws.amazon.com/premiumsupport/knowledge-center/kibana-outside-vpc-ssh-elasticsearch/) | VPC endpoints are not accessible directly from outside of the VPC.
If you want to allow this you will need to use a proxy instance in your VPC that can connect to the VPC endpoint, then proxy all requests through the EC2 instance in order to access the endpoint.
More information is available [here](https://d0.awsstatic.com/aws-answers/Accessing_VPC_Endpoints_from_Remote_Networks.pdf). |
62,674,343 | I wanted to increase the height of a div on a button click, so I decided to use a transition. But when I click the button, the height increases without the transition. Here is my code:
```
.view-all-container {
    max-height: 0;
    overflow-y: hidden;
    transition: max-height 0.7s linear;
}

.view-all-container.expanded {
    max-height: 500px;
}
```

```
let btn = document.querySelector('.view-all-button');
let list = document.querySelector('.view-all-container');
btn.addEventListener('click', () => {
    list.classList.toggle("expanded");
    const expanded = list.classList.contains("expanded");
    if (expanded) {
        btn.innerHTML = "Hide All";
        list.style.overflow = 'scroll';
        list.style.overflowX = 'hidden';
    } else {
        btn.innerHTML = "View All";
        list.classList.remove("expanded");
    }
})
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674343",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13647380/"
] | |
62,674,353 | I have **zero** experience with ETL.
Whenever a file (a .csv) is moved into a specific folder, it should be uploaded to Salesforce. I don't know how to build this automated flow.
I hope I was clear enough.
I have to use the open-source version; any helpful links or resources will be appreciated.
Thank you in advance | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674353",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13846227/"
] | You could definitely use Talend Open Studio for ESB: this studio contains 'Routes' functionality. You'll be able to use a cFile component, which will check your folder for new files and raise an event that will propagate throughout the route to a designated endpoint (for example, a Salesforce API). Talend ESB maps to Apache Camel components, which are well documented.
Check out Routes with Talend for ESB; it should do the trick. | We have the tFileExists component; you can use that and configure it to check for the file.
You also have the tFileWait component, where you can define the time frame for the arrival of the files and the number of iterations it has to check for the file.
But I would suggest, if you have any scheduling tool, using a file-watcher concept and then a Talend job to upload the file to a specific location.
Using Talend itself to check for the file arrival is not feasible, as the job has to be in a running state continuously, which consumes more Java resources. |
62,674,354 | I am trying to add a dynamic instead of a hard-coded `get_inspect_url` method to a `PageModel`:
```
class MyModel(Page):
    # ...
    # this is what I have now, which breaks in prod due to a different admin url
    def get_inspect_url(self):
        inspect_url = f"/admin/myapp/mymodel/inspect/{self.id}/"
        return inspect_url
```
I know that you can use the `ModelAdmin.url_helper_class` in `wagtail_hooks.py` to [get admin urls for an object](https://docs.wagtail.io/en/latest/reference/contrib/modeladmin/tips_and_tricks/reversing_urls.html#getting-the-edit-or-delete-or-inspect-url-for-an-object), but I can't figure out a way to do the same at the model level as a class method. Is there a way? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674354",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5968943/"
] | Thanks to @Andrey Maslov's tip, I found a way:
```
from django.urls import reverse

def get_inspect_url(self):
    return reverse(
        f"{self._meta.app_label}_{self._meta.model_name}_modeladmin_inspect",
        args=[self.id],
    )
```
The basic form of a Wagtail admin url is:
```
[app_label]_[model_name]_modeladmin_[action]
```
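For example, under the same naming pattern, the edit URL for an instance could presumably be reversed like this (a sketch reusing the `reverse` call from above; it assumes the pattern holds for the other actions):
```
# sketch: same pattern as above, assuming it also holds for the edit action
edit_url = reverse(
    f"{self._meta.app_label}_{self._meta.model_name}_modeladmin_edit",
    args=[self.id],
)
```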
Just change the [action] to one of the following to generate other instance-level admin urls:
* `edit`
* `delete` | If you have in your urls.py a line like this:
```
path('MyModel/view/<int:id>/', YourViewName, name='mymodel-info'),
```
then you can add these lines to your models.py:
```
from django.urls import reverse
...
class MyModel(Page):
    def get_inspect_url(self):
        inspect_url = reverse('mymodel-info', kwargs={'id': self.id})
        return inspect_url
``` |
62,674,356 | Making an RPS game for a school assignment.
```
import random

# player score
p1 = 0
p2 = 0

# add score
def AddScore(a, c):
    print(a + " +1")
    if c == "hum":
        p1 += 1
    else:
        p2 += 1

# gets player choices
while p1 < 3 or p2 < 3:
    print("Rock, Paper, Scissors, Shoot!")
    a = int(input("Rock, Paper or Scissors? (1, 2, 3):"))
    b = random.randint(1, 3)
    if a == b:
        print("Draw!")
    elif a > b and a != 1:
        AddScore("Human", "hum")
    elif a == 1 and b == 3:
        AddScore("Human", "hum")
    elif b > a and b != 1:
        AddScore("Computer", "comp")
    elif a == b and a == 3:
        AddScore("Computer", "comp")

print(p1)
print(p2)
```
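For reference, a minimal sketch of the usual fix for the error shown below (illustrative only; it assumes the counters should stay module-level): assignments to `p1`/`p2` inside `AddScore` make them local unless they are declared `global`.
```
def AddScore(a, c):
    global p1, p2  # without this, p1 += 1 raises UnboundLocalError
    print(a + " +1")
    if c == "hum":
        p1 += 1
    else:
        p2 += 1
```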
The original code returns an error on lines 9 and 21:
UnboundLocalError: local variable 'p1' referenced before assignment | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674356",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10722739/"
] | |
62,674,357 | I have a `.sql` file from Oracle which contains create table/index statements and a lot of insert statements (around 1M inserts).
I can manually modify the create table/index part (it's not too much), but for the insert part there are some Oracle functions like to\_date.
I know MySQL has a similar function, STR\_TO\_DATE, but the usage of the parameters is different.
I can connect to MySQL, but the .sql file is the only thing I got from Oracle.
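For illustration, a minimal sketch of one possible pre-processing step (assuming Python; the file names are hypothetical and the `to_date` format shown is only an example that would need to match the actual dump):
```
import re

sql = open("oracle_dump.sql").read()  # hypothetical file name

# Rewrite e.g. to_date('2020-07-01','YYYY-MM-DD') as STR_TO_DATE('2020-07-01', '%Y-%m-%d')
sql = re.sub(
    r"to_date\('([^']*)'\s*,\s*'YYYY-MM-DD'\)",
    r"STR_TO_DATE('\1', '%Y-%m-%d')",
    sql,
    flags=re.IGNORECASE,
)

open("mysql_dump.sql", "w").write(sql)
```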
Is there any way I can import this Oracle .sql file into MySQL?
Thanks. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674357",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1416763/"
] | |
62,674,365 | I need to reconstruct a new dictionary, based on the keys present in an array, from an existing dictionary. The array value will change each time.
```
olddict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964,
"createdby": "Mustang",
"orderedby": "Mustang"
}
arr = ['brand','year','createdby']
```
Required output:
```
newdict= {
"brand": "Ford",
"year": 1964,
"createdby": "Mustang"
}
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13840152/"
] | Try to use below code:
```
newdict = {param:olddict.get(param) for param in arr}
``` | ```
olddict = {
    "brand": "Ford",
    "model": "Mustang",
    "year": 1964,
    "createdby": "Mustang",
    "orderedby": "Mustang"
}
arr = ['brand','year','createdby']

newdict = {}
for key in arr:
    # key = 'brand'
    newdict[key] = olddict[key]  # newdict['brand'] = olddict['brand']

# resulting newdict:
# {
#     "brand": "Ford",
#     "year": 1964,
#     "createdby": "Mustang"
# }
``` |
62,674,365 | I need to reconstruct a new dictionary, based on the keys present in an array, from an existing dictionary. The array value will change each time.
```
olddict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964,
"createdby": "Mustang",
"orderedby": "Mustang"
}
arr = ['brand','year','createdby']
```
Required output:
```
newdict= {
"brand": "Ford",
"year": 1964,
"createdby": "Mustang"
}
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13840152/"
] | Try to use below code:
```
newdict = {param:olddict.get(param) for param in arr}
``` | You can add this function:
```
def updated_dict(arr, olddict):
    newdict = {}
    for item in arr:
        newdict[item] = olddict.get(item)
    return newdict
``` |
62,674,365 | I need to reconstruct a new dictionary, based on the keys present in an array, from an existing dictionary. The array value will change each time.
```
olddict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964,
"createdby": "Mustang",
"orderedby": "Mustang"
}
arr = ['brand','year','createdby']
```
Required output:
```
newdict= {
"brand": "Ford",
"year": 1964,
"createdby": "Mustang"
}
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13840152/"
] | you can use **Dictionary Comprehension** technique.
```
newdict = {key:olddict[key] for key in arr}
```
where `key` is everything in `arr` list/array and value of `newdict` is `olddict[key]`.
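A quick check with the values from the question (an illustrative REPL sketch):
```
>>> # olddict as defined in the question
>>> {key: olddict[key] for key in ['brand','year','createdby']}
{'brand': 'Ford', 'year': 1964, 'createdby': 'Mustang'}
```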
Hope you find it easy and useful <3 | ```
olddict = {
    "brand": "Ford",
    "model": "Mustang",
    "year": 1964,
    "createdby": "Mustang",
    "orderedby": "Mustang"
}
arr = ['brand','year','createdby']

newdict = {}
for key in arr:
    # key = 'brand'
    newdict[key] = olddict[key]  # newdict['brand'] = olddict['brand']

# resulting newdict:
# {
#     "brand": "Ford",
#     "year": 1964,
#     "createdby": "Mustang"
# }
``` |
62,674,365 | I need to reconstruct a new dictionary, based on the keys present in an array, from an existing dictionary. The array value will change each time.
```
olddict = {
"brand": "Ford",
"model": "Mustang",
"year": 1964,
"createdby": "Mustang",
"orderedby": "Mustang"
}
arr = ['brand','year','createdby']
```
Required output:
```
newdict= {
"brand": "Ford",
"year": 1964,
"createdby": "Mustang"
}
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13840152/"
] | you can use **Dictionary Comprehension** technique.
```
newdict = {key:olddict[key] for key in arr}
```
where `key` is everything in `arr` list/array and value of `newdict` is `olddict[key]`.
Hope you find it easy and useful <3 | You can add this function:
```
def updated_dict(arr, olddict):
    newdict = {}
    for item in arr:
        newdict[item] = olddict.get(item)
    return newdict
``` |
62,674,368 | ```
array1 = [{id: 1, email: 'test1@test.com', group_ids: ["25"], username: 'test1'},
{id: 2, email: 'test2@test.com', group_ids: ["22"], username: 'test2'},
{id: 3, email: 'test3@test.com', group_ids: ["25", "20"], username: 'test3'},
{id: 4, email: 'test4@test.com', group_ids: ["23"], username: 'test4'}]
array2 = [25, 22];
```
I want to get a list of emails from array1 whose group_ids are in array2. I've tried the approach below but I guess I'm doing it wrong.
```
var user_groupId = [];
var emails = [];
var obj = [];
for (var i=0; i < array2.length; i++) {
obj.push(array1.find(o => o.group_ids.find( b => b == array2[i])));
}
for (var i=0; i< obj.length; i++){
emails.push(obj[i].email);
}
console.log(emails);
```
Here I get output Array ["test1@test.com", "test2@test.com"] but not "test3@test.com". I would appreciate your help. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674368",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5809965/"
] | |
62,674,368 | ```
array1 = [{id: 1, email: 'test1@test.com', group_ids: ["25"], username: 'test1'},
{id: 2, email: 'test2@test.com', group_ids: ["22"], username: 'test2'},
{id: 3, email: 'test3@test.com', group_ids: ["25", "20"], username: 'test3'},
{id: 4, email: 'test4@test.com', group_ids: ["23"], username: 'test4'}]
array2 = [25, 22];
```
I want to get a list of emails from array1 whose group_ids are in array2. I've tried the approach below but I guess I'm doing it wrong.
```
var user_groupId = [];
var emails = [];
var obj = [];
for (var i=0; i < array2.length; i++) {
obj.push(array1.find(o => o.group_ids.find( b => b == array2[i])));
}
for (var i=0; i< obj.length; i++){
emails.push(obj[i].email);
}
console.log(emails);
```
Here I get output Array ["test1@test.com", "test2@test.com"] but not "test3@test.com". I would appreciate your help. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674368",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5809965/"
] | |
62,674,368 | ```
array1 = [{id: 1, email: 'test1@test.com', group_ids: ["25"], username: 'test1'},
{id: 2, email: 'test2@test.com', group_ids: ["22"], username: 'test2'},
{id: 3, email: 'test3@test.com', group_ids: ["25", "20"], username: 'test3'},
{id: 4, email: 'test4@test.com', group_ids: ["23"], username: 'test4'}]
array2 = [25, 22];
```
I want to get a list of emails from array1 whose group_ids are in array2. I've tried the approach below but I guess I'm doing it wrong.
```
var user_groupId = [];
var emails = [];
var obj = [];
for (var i=0; i < array2.length; i++) {
obj.push(array1.find(o => o.group_ids.find( b => b == array2[i])));
}
for (var i=0; i< obj.length; i++){
emails.push(obj[i].email);
}
console.log(emails);
```
Here I get output Array ["test1@test.com", "test2@test.com"] but not "test3@test.com". I would appreciate your help. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674368",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5809965/"
] | |
62,674,368 | ```
array1 = [{id: 1, email: 'test1@test.com', group_ids: ["25"], username: 'test1'},
{id: 2, email: 'test2@test.com', group_ids: ["22"], username: 'test2'},
{id: 3, email: 'test3@test.com', group_ids: ["25", "20"], username: 'test3'},
{id: 4, email: 'test4@test.com', group_ids: ["23"], username: 'test4'}]
array2 = [25, 22];
```
I want to get a list of emails from array1 whose group_ids are in array2. I've tried the approach below but I guess I'm doing it wrong.
```
var user_groupId = [];
var emails = [];
var obj = [];
for (var i=0; i < array2.length; i++) {
obj.push(array1.find(o => o.group_ids.find( b => b == array2[i])));
}
for (var i=0; i< obj.length; i++){
emails.push(obj[i].email);
}
console.log(emails);
```
Here I get output Array ["test1@test.com", "test2@test.com"] but not "test3@test.com". I would appreciate your help. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674368",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5809965/"
] | |
62,674,378 | I have created a wallpaper app where I'm loading images from the Firebase database in a RecyclerView. When I click on a RecyclerView item (image), that item's image URL is sent to the next activity, and then that URL is loaded into an ImageView using Glide.
I want to change this to something like [Image-Slider](https://drive.google.com/file/d/18X8HwRtv54WkAZoRwdku8_o0QN_zndWl/view). By clicking on a RecyclerView item I want to show that image in full screen and slide left or right (next or previous). But I don't know how to do that.
Here is my code.
```
recyclerView.setHasFixedSize(true);
recyclerView.setLayoutManager(new GridLayoutManager(getContext(), 3));
adapter = new FeaturedAdapter(list);
recyclerView.setAdapter(adapter);
databaseReference = FirebaseDatabase.getInstance().getReference().child("Wallpaper All");
Query query = databaseReference.orderByChild("dark").equalTo(true);
query.addListenerForSingleValueEvent(new ValueEventListener() {
@Override
public void onDataChange(@NonNull DataSnapshot dataSnapshot) {
list.clear();
for (DataSnapshot dataSnapshot1 : dataSnapshot.getChildren()) {
FeaturedModel model = dataSnapshot1.getValue(FeaturedModel.class);
list.add(model);
}
adapter.notifyDataSetChanged();
}
@Override
public void onCancelled(@NonNull DatabaseError databaseError) {
Log.e("TAG_DATABASE_ERROR", databaseError.getMessage());
}
});
```
FeaturedAdapter.java
```
public class FeaturedAdapter extends RecyclerView.Adapter<FeaturedAdapter.ViewHolder> {
private List<FeaturedModel> featuredModels;
public FeaturedAdapter(List<FeaturedModel> featuredModels) {
this.featuredModels = featuredModels;
}
@NonNull
@Override
public ViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
View view = LayoutInflater.from(parent.getContext()).inflate(R.layout.custom_image, parent, false);
return new ViewHolder(view);
}
@Override
public void onBindViewHolder(@NonNull ViewHolder holder, int position) {
holder.setData(featuredModels.get(position).getImageLink()
, position,
featuredModels.get(position).isPremium());
}
@Override
public int getItemCount() {
return featuredModels.size();
}
class ViewHolder extends RecyclerView.ViewHolder {
private ImageView imageView;
private ImageView premiumImage;
public ViewHolder(@NonNull View itemView) {
super(itemView);
imageView = itemView.findViewById(R.id.imageview);
premiumImage = itemView.findViewById(R.id.premium);
}
private void setData(final String url, final int position, boolean premium) {
Glide.with(itemView.getContext().getApplicationContext()).load(url).into(imageView);
if (premium) {
premiumImage.setVisibility(View.VISIBLE);
itemView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Intent setIntent = new Intent(itemView.getContext(), PremiumViewActivity.class);
//setIntent.putExtra("title", url);
setIntent.putExtra("images", featuredModels.get(getAdapterPosition()).getImageLink());
setIntent.putExtra("id", featuredModels.get(position).getId());
itemView.getContext().startActivity(setIntent);
}
});
} else {
premiumImage.setVisibility(View.GONE);
itemView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Intent setIntent = new Intent(itemView.getContext(), ViewActivity.class);
//setIntent.putExtra("title", url);
setIntent.putExtra("images", featuredModels.get(getAdapterPosition()).getImageLink());
setIntent.putExtra("id", featuredModels.get(position).getId());
itemView.getContext().startActivity(setIntent);
}
});
}
}
}
}
```
ViewActivity
```
Random rnd = new Random();
int color = Color.argb(255, rnd.nextInt(256), rnd.nextInt(256), rnd.nextInt(256));
relativeLayout.setBackgroundColor(color);
Glide.with(this)
.load(getIntent().getStringExtra("images"))
.timeout(6000)
.listener(new RequestListener<Drawable>() {
@Override
public boolean onLoadFailed(@Nullable GlideException e, Object model, Target<Drawable> target, boolean isFirstResource) {
relativeLayout.setBackgroundColor(Color.TRANSPARENT);
return false;
}
@Override
public boolean onResourceReady(Drawable resource, Object model, Target<Drawable> target, DataSource dataSource, boolean isFirstResource) {
relativeLayout.setBackgroundColor(Color.TRANSPARENT);
return false;
}
})
.into(imageView);
setBackgroundWall.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
setBackgroundImage();
}
});
}
private void setBackgroundImage() {
Bitmap bitmap = ((BitmapDrawable) imageView.getDrawable()).getBitmap();
WallpaperManager manager = WallpaperManager.getInstance(getApplicationContext());
try {
manager.setBitmap(bitmap);
Toasty.success(getApplicationContext(), "Set Wallpaper Successfully", Toast.LENGTH_SHORT, true).show();
} catch (IOException e) {
Toasty.warning(this, "Wallpaper not load yet!", Toast.LENGTH_SHORT, true).show();
}
}
```
activity\_view.xml
```
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".activity.PremiumViewActivity">
<ImageView
android:id="@+id/viewImage"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:adjustViewBounds="true"
android:contentDescription="@null"
android:scaleType="centerCrop"
android:src="#00BCD4"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintHorizontal_bias="0.0"
app:layout_constraintStart_toStartOf="parent" />
<com.airbnb.lottie.LottieAnimationView
android:id="@+id/lottieSuccess"
android:layout_width="180dp"
android:layout_height="180dp"
android:visibility="gone"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent"
app:lottie_fileName="checked.json" />
<RelativeLayout
android:id="@+id/wallpaper_layout"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:padding="16dp"
android:visibility="gone"
app:layout_constraintBottom_toBottomOf="parent">
<ImageButton
android:id="@+id/saveImage"
android:layout_width="50dp"
android:layout_height="50dp"
android:layout_alignParentStart="true"
android:layout_alignParentBottom="true"
android:layout_marginStart="20dp"
android:background="@drawable/set_as_wallpaper_btn"
android:contentDescription="@null"
android:src="@drawable/save" />
<Button
android:id="@+id/setWallpaper"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentEnd="true"
android:layout_alignParentBottom="true"
android:layout_marginStart="4dp"
android:layout_marginEnd="8dp"
android:background="@drawable/set_as_wallpaper_btn"
android:minWidth="230dp"
android:text="Set as wallpaper"
android:textColor="#000"
android:textStyle="bold" />
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentTop="true"
android:layout_alignParentEnd="true"
android:orientation="horizontal">
<CheckBox
android:id="@+id/favoritesBtn_checkbox"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_margin="6dp"
android:button="@drawable/favourite_checkbox_selector" />
<ImageButton
android:id="@+id/shareBtn"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_margin="6dp"
android:background="@drawable/share"
android:contentDescription="@null" />
</LinearLayout>
</RelativeLayout>
<RelativeLayout
android:id="@+id/ads_layout"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical"
android:padding="12dp"
android:visibility="visible"
app:layout_constraintBottom_toBottomOf="parent">
<Button
android:id="@+id/watch_ads"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginStart="20dp"
android:layout_marginEnd="20dp"
android:background="@drawable/set_as_wallpaper_btn"
android:drawableStart="@drawable/advertizing"
android:paddingStart="50dp"
android:paddingEnd="50dp"
android:stateListAnimator="@null"
android:text="Watch Video Ad"
android:textColor="#000"
android:textStyle="bold" />
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_below="@id/watch_ads">
<Button
android:id="@+id/unlock_withCoins"
android:layout_width="match_parent"
android:layout_height="50dp"
android:layout_marginStart="20dp"
android:layout_marginTop="15dp"
android:layout_marginEnd="20dp"
android:background="@drawable/set_as_wallpaper_btn"
android:drawableStart="@drawable/diamond"
android:paddingStart="50dp"
android:paddingEnd="50dp"
android:stateListAnimator="@null"
android:text="Unlock with diamonds"
android:textColor="#000"
android:textStyle="bold" />
<TextView
android:id="@+id/diamonds_tv"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginTop="50dp"
android:gravity="center"
android:text="Total Diamonds: 0"
android:textSize="10sp"
android:textStyle="bold" />
</RelativeLayout>
</RelativeLayout>
</androidx.constraintlayout.widget.ConstraintLayout>
```
custom\_image.xml
```
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:layout_width="match_parent"
android:layout_height="wrap_content">
<androidx.cardview.widget.CardView
android:layout_width="match_parent"
android:layout_height="200dp"
android:layout_centerHorizontal="true"
android:background="#fff"
app:cardCornerRadius="10dp"
app:cardUseCompatPadding="true">
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="match_parent">
<ImageView
android:id="@+id/imageview"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:contentDescription="@null"
android:scaleType="centerCrop"
android:src="@color/colorPrimary" />
<ImageView
android:id="@+id/premium"
android:contentDescription="@null"
android:layout_width="24dp"
android:layout_height="24dp"
android:layout_alignParentTop="true"
android:layout_alignParentEnd="true"
android:layout_margin="10dp"
app:srcCompat="@drawable/diamond" />
</RelativeLayout>
</androidx.cardview.widget.CardView>
</RelativeLayout>
```
```
Structure
MainActivity
|
| //Click button to open
|
FragActivity
|
| //FrameLayout
|
Fragment
|
| //here is the recyclerView
| //Open new Activity to view image
ViewActivity
```
[Screen Recording](https://drive.google.com/file/d/10miP7tHv5tgvFBxIL0uLx6XAydz4QBfZ/view?usp=sharing) | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674378",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12901786/"
] | Since the ViewActivity must show all the images (when swiped), it must hold the adapter with the full set of image URLs.
Instead of using an ImageView, use a RecyclerView in the ViewActivity and attach the adapter. In your code, do the following to make the RecyclerView horizontal and add the slide functionality.
```
recyclerView = view.findViewById(R.id.recycler_view);
recyclerView.setHasFixedSize(true);
recyclerView.setLayoutManager(new LinearLayoutManager(getApplicationContext(),LinearLayoutManager.HORIZONTAL,false));
SnapHelper snapHelper = new PagerSnapHelper();
snapHelper.attachToRecyclerView(recyclerView);
``` | A few days ago, I was looking for a similar requirement: showing a slidable image view.
This situation can be solved using `ViewPager` in Android.
You can use the following tutorials for building such a slide image view using `ViewPager`; a minimal setup sketch is also included after the resource lists below.
Java Resources
--------------
1. [Blog Tutorial](https://www.androhub.com/android-image-slider-using-viewpager/)
2. [Video Tutorial](https://www.youtube.com/watch?v=Q2FPDI99-as)
Kotlin Resources
----------------
1. [Video Tutorial](https://www.youtube.com/watch?v=mvlCKRpKVZs) |
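As mentioned above, here is a minimal `PagerAdapter` sketch for swiping through full-screen images with Glide. It is only an illustration under assumptions: the `ImagePagerAdapter` class, the `view_pager` id, and the `imageUrls`/`clickedPosition` names are placeholders, not part of the original question or tutorials:
```
// Hypothetical adapter: one full-screen ImageView per page, loaded with Glide.
public class ImagePagerAdapter extends PagerAdapter {
    private final Context context;
    private final List<String> imageUrls;

    public ImagePagerAdapter(Context context, List<String> imageUrls) {
        this.context = context;
        this.imageUrls = imageUrls;
    }

    @Override
    public int getCount() { return imageUrls.size(); }

    @Override
    public boolean isViewFromObject(@NonNull View view, @NonNull Object object) {
        return view == object;
    }

    @NonNull
    @Override
    public Object instantiateItem(@NonNull ViewGroup container, int position) {
        ImageView imageView = new ImageView(context);
        imageView.setScaleType(ImageView.ScaleType.CENTER_CROP);
        Glide.with(context).load(imageUrls.get(position)).into(imageView);
        container.addView(imageView);
        return imageView;
    }

    @Override
    public void destroyItem(@NonNull ViewGroup container, int position, @NonNull Object object) {
        container.removeView((View) object);
    }
}
```
Wiring it up in the viewing activity (ids are assumptions as well):
```
ViewPager viewPager = findViewById(R.id.view_pager);
viewPager.setAdapter(new ImagePagerAdapter(this, imageUrls));
viewPager.setCurrentItem(clickedPosition); // open on the tapped image
```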
62,674,378 | I have created a wallpaper app where I'm loading images from the Firebase database in a RecyclerView. When I click on a RecyclerView item (image), that item's image URL is sent to the next activity and then that URL is loaded into an ImageView using Glide.
I want to change this to something like [Image-Slider](https://drive.google.com/file/d/18X8HwRtv54WkAZoRwdku8_o0QN_zndWl/view). By clicking on the RecyclerView item I want to show that image in full screen and slide left or right (next or previous). But I don't know how to do that.
Here is my code.
```
recyclerView.setHasFixedSize(true);
recyclerView.setLayoutManager(new GridLayoutManager(getContext(), 3));
adapter = new FeaturedAdapter(list);
recyclerView.setAdapter(adapter);
databaseReference = FirebaseDatabase.getInstance().getReference().child("Wallpaper All");
Query query = databaseReference.orderByChild("dark").equalTo(true);
query.addListenerForSingleValueEvent(new ValueEventListener() {
@Override
public void onDataChange(@NonNull DataSnapshot dataSnapshot) {
list.clear();
for (DataSnapshot dataSnapshot1 : dataSnapshot.getChildren()) {
FeaturedModel model = dataSnapshot1.getValue(FeaturedModel.class);
list.add(model);
}
adapter.notifyDataSetChanged();
}
@Override
public void onCancelled(@NonNull DatabaseError databaseError) {
Log.e("TAG_DATABASE_ERROR", databaseError.getMessage());
}
});
```
FeaturedAdapter.java
```
public class FeaturedAdapter extends RecyclerView.Adapter<FeaturedAdapter.ViewHolder> {
private List<FeaturedModel> featuredModels;
public FeaturedAdapter(List<FeaturedModel> featuredModels) {
this.featuredModels = featuredModels;
}
@NonNull
@Override
public ViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
View view = LayoutInflater.from(parent.getContext()).inflate(R.layout.custom_image, parent, false);
return new ViewHolder(view);
}
@Override
public void onBindViewHolder(@NonNull ViewHolder holder, int position) {
holder.setData(featuredModels.get(position).getImageLink()
, position,
featuredModels.get(position).isPremium());
}
@Override
public int getItemCount() {
return featuredModels.size();
}
class ViewHolder extends RecyclerView.ViewHolder {
private ImageView imageView;
private ImageView premiumImage;
public ViewHolder(@NonNull View itemView) {
super(itemView);
imageView = itemView.findViewById(R.id.imageview);
premiumImage = itemView.findViewById(R.id.premium);
}
private void setData(final String url, final int position, boolean premium) {
Glide.with(itemView.getContext().getApplicationContext()).load(url).into(imageView);
if (premium) {
premiumImage.setVisibility(View.VISIBLE);
itemView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Intent setIntent = new Intent(itemView.getContext(), PremiumViewActivity.class);
//setIntent.putExtra("title", url);
setIntent.putExtra("images", featuredModels.get(getAdapterPosition()).getImageLink());
setIntent.putExtra("id", featuredModels.get(position).getId());
itemView.getContext().startActivity(setIntent);
}
});
} else {
premiumImage.setVisibility(View.GONE);
itemView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Intent setIntent = new Intent(itemView.getContext(), ViewActivity.class);
//setIntent.putExtra("title", url);
setIntent.putExtra("images", featuredModels.get(getAdapterPosition()).getImageLink());
setIntent.putExtra("id", featuredModels.get(position).getId());
itemView.getContext().startActivity(setIntent);
}
});
}
}
}
}
```
ViewActivity
```
Random rnd = new Random();
int color = Color.argb(255, rnd.nextInt(256), rnd.nextInt(256), rnd.nextInt(256));
relativeLayout.setBackgroundColor(color);
Glide.with(this)
.load(getIntent().getStringExtra("images"))
.timeout(6000)
.listener(new RequestListener<Drawable>() {
@Override
public boolean onLoadFailed(@Nullable GlideException e, Object model, Target<Drawable> target, boolean isFirstResource) {
relativeLayout.setBackgroundColor(Color.TRANSPARENT);
return false;
}
@Override
public boolean onResourceReady(Drawable resource, Object model, Target<Drawable> target, DataSource dataSource, boolean isFirstResource) {
relativeLayout.setBackgroundColor(Color.TRANSPARENT);
return false;
}
})
.into(imageView);
setBackgroundWall.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
setBackgroundImage();
}
});
}
private void setBackgroundImage() {
Bitmap bitmap = ((BitmapDrawable) imageView.getDrawable()).getBitmap();
WallpaperManager manager = WallpaperManager.getInstance(getApplicationContext());
try {
manager.setBitmap(bitmap);
Toasty.success(getApplicationContext(), "Set Wallpaper Successfully", Toast.LENGTH_SHORT, true).show();
} catch (IOException e) {
Toasty.warning(this, "Wallpaper not load yet!", Toast.LENGTH_SHORT, true).show();
}
}
```
activity\_view.xml
```
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".activity.PremiumViewActivity">
<ImageView
android:id="@+id/viewImage"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:adjustViewBounds="true"
android:contentDescription="@null"
android:scaleType="centerCrop"
android:src="#00BCD4"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintHorizontal_bias="0.0"
app:layout_constraintStart_toStartOf="parent" />
<com.airbnb.lottie.LottieAnimationView
android:id="@+id/lottieSuccess"
android:layout_width="180dp"
android:layout_height="180dp"
android:visibility="gone"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent"
app:lottie_fileName="checked.json" />
<RelativeLayout
android:id="@+id/wallpaper_layout"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:padding="16dp"
android:visibility="gone"
app:layout_constraintBottom_toBottomOf="parent">
<ImageButton
android:id="@+id/saveImage"
android:layout_width="50dp"
android:layout_height="50dp"
android:layout_alignParentStart="true"
android:layout_alignParentBottom="true"
android:layout_marginStart="20dp"
android:background="@drawable/set_as_wallpaper_btn"
android:contentDescription="@null"
android:src="@drawable/save" />
<Button
android:id="@+id/setWallpaper"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentEnd="true"
android:layout_alignParentBottom="true"
android:layout_marginStart="4dp"
android:layout_marginEnd="8dp"
android:background="@drawable/set_as_wallpaper_btn"
android:minWidth="230dp"
android:text="Set as wallpaper"
android:textColor="#000"
android:textStyle="bold" />
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentTop="true"
android:layout_alignParentEnd="true"
android:orientation="horizontal">
<CheckBox
android:id="@+id/favoritesBtn_checkbox"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_margin="6dp"
android:button="@drawable/favourite_checkbox_selector" />
<ImageButton
android:id="@+id/shareBtn"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_margin="6dp"
android:background="@drawable/share"
android:contentDescription="@null" />
</LinearLayout>
</RelativeLayout>
<RelativeLayout
android:id="@+id/ads_layout"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical"
android:padding="12dp"
android:visibility="visible"
app:layout_constraintBottom_toBottomOf="parent">
<Button
android:id="@+id/watch_ads"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginStart="20dp"
android:layout_marginEnd="20dp"
android:background="@drawable/set_as_wallpaper_btn"
android:drawableStart="@drawable/advertizing"
android:paddingStart="50dp"
android:paddingEnd="50dp"
android:stateListAnimator="@null"
android:text="Watch Video Ad"
android:textColor="#000"
android:textStyle="bold" />
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_below="@id/watch_ads">
<Button
android:id="@+id/unlock_withCoins"
android:layout_width="match_parent"
android:layout_height="50dp"
android:layout_marginStart="20dp"
android:layout_marginTop="15dp"
android:layout_marginEnd="20dp"
android:background="@drawable/set_as_wallpaper_btn"
android:drawableStart="@drawable/diamond"
android:paddingStart="50dp"
android:paddingEnd="50dp"
android:stateListAnimator="@null"
android:text="Unlock with diamonds"
android:textColor="#000"
android:textStyle="bold" />
<TextView
android:id="@+id/diamonds_tv"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginTop="50dp"
android:gravity="center"
android:text="Total Diamonds: 0"
android:textSize="10sp"
android:textStyle="bold" />
</RelativeLayout>
</RelativeLayout>
</androidx.constraintlayout.widget.ConstraintLayout>
```
custom\_image.xml
```
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:layout_width="match_parent"
android:layout_height="wrap_content">
<androidx.cardview.widget.CardView
android:layout_width="match_parent"
android:layout_height="200dp"
android:layout_centerHorizontal="true"
android:background="#fff"
app:cardCornerRadius="10dp"
app:cardUseCompatPadding="true">
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="match_parent">
<ImageView
android:id="@+id/imageview"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:contentDescription="@null"
android:scaleType="centerCrop"
android:src="@color/colorPrimary" />
<ImageView
android:id="@+id/premium"
android:contentDescription="@null"
android:layout_width="24dp"
android:layout_height="24dp"
android:layout_alignParentTop="true"
android:layout_alignParentEnd="true"
android:layout_margin="10dp"
app:srcCompat="@drawable/diamond" />
</RelativeLayout>
</androidx.cardview.widget.CardView>
</RelativeLayout>
```
```
Structure
MainActivity
|
| //Click button to open
|
FragActivity
|
| //FrameLayout
|
Fragment
|
| //here is the recyclerView
| //Open new Activity to view image
ViewActivity
```
[Screen Recording](https://drive.google.com/file/d/10miP7tHv5tgvFBxIL0uLx6XAydz4QBfZ/view?usp=sharing) | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674378",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12901786/"
] | This can be solved by using `ViewPager` or `ViewPager2` in Android.
First, create an adapter:
ImageSwiperAdapter2.java
```
public class ImageSwiperAdapter2 extends RecyclerView.Adapter<ImageSwiperAdapter2.ImageSwiper> {
private List<FeaturedModel> list;
private Context context;
public ImageSwiperAdapter2(List<FeaturedModel> list, Context context) {
this.list = list;
this.context = context;
}
@NonNull
@Override
public ImageSwiper onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
View view = LayoutInflater.from(parent.getContext()).inflate(R.layout.slidingimages,
parent, false);
return new ImageSwiper(view);
}
@Override
public void onBindViewHolder(@NonNull final ImageSwiper holder, int position) {
Random rnd = new Random();
int color = Color.argb(255, rnd.nextInt(256), rnd.nextInt(256), rnd.nextInt(256));
holder.relativeLayout.setBackgroundColor(color);
Glide.with(context.getApplicationContext())
.load(list.get(position).getImageLink())
.listener(new RequestListener<Drawable>() {
@Override
public boolean onLoadFailed(@Nullable GlideException e, Object model, Target<Drawable> target, boolean isFirstResource) {
holder.relativeLayout.setBackgroundColor(Color.TRANSPARENT);
return false;
}
@Override
public boolean onResourceReady(Drawable resource, Object model, Target<Drawable> target, DataSource dataSource, boolean isFirstResource) {
holder.relativeLayout.setBackgroundColor(Color.TRANSPARENT);
return false;
}
})
.into(holder.imageView);
}
@Override
public int getItemCount() {
return list.size();
}
class ImageSwiper extends RecyclerView.ViewHolder {
private ImageView imageView;
private RelativeLayout relativeLayout; // bound so holder.relativeLayout in onBindViewHolder resolves
public ImageSwiper(@NonNull View itemView) {
super(itemView);
imageView = itemView.findViewById(R.id.imageView);
relativeLayout = itemView.findViewById(R.id.relativeLayout); // id added in slidingimages.xml below
}
}
}
```
In ViewActivity/SwiperActivity.java
```
public class SwiperActivity extends AppCompatActivity {
private ViewPager2 viewPager;
private List<FeaturedModel> list;
private ImageSwiperAdapter2 adapter2;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_swiper);
viewPager = findViewById(R.id.view_pager);
final int pos = getIntent().getIntExtra("pos", 0);
Singleton singleton = Singleton.getInstance();
list = new ArrayList<>();
list = singleton.getListSin();
adapter2 = new ImageSwiperAdapter2(list, this);
viewPager.setAdapter(adapter2);
viewPager.setCurrentItem(pos);
viewPager.registerOnPageChangeCallback(new ViewPager2.OnPageChangeCallback() {
@Override
public void onPageScrolled(int position, float positionOffset, int positionOffsetPixels) {
super.onPageScrolled(position, positionOffset, positionOffsetPixels);
}
@Override
public void onPageSelected(int position) {
super.onPageSelected(position);
Toast.makeText(SwiperActivity.this, "Selected: " + position, Toast.LENGTH_SHORT).show();
}
@Override
public void onPageScrollStateChanged(int state) {
super.onPageScrollStateChanged(state);
}
});
}
}
```
You can pass the `list` and the clicked item's position from `FeaturedAdapter`.
In your `FeaturedAdapter`'s `setData` method:
```
private void setData(final String url, final int position, boolean premium) {
Glide.with(itemView.getContext().getApplicationContext()).load(url).into(imageView);
final Singleton a = Singleton.getInstance();
a.setListSin(featuredModels);
if (premium) {
premiumImage.setVisibility(View.VISIBLE);
itemView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Intent setIntent = new Intent(itemView.getContext(), SwiperActivity.class);
setIntent.putExtra("pos", position);
itemView.getContext().startActivity(setIntent);
CustomIntent.customType(itemView.getContext(), "fadein-to-fadeout");
}
});
} else {
premiumImage.setVisibility(View.GONE);
itemView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Intent setIntent = new Intent(itemView.getContext(), SwiperActivity.class);
setIntent.putExtra("pos", position);
itemView.getContext().startActivity(setIntent);
CustomIntent.customType(itemView.getContext(), "fadein-to-fadeout");
}
});
}
}
```
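The `Singleton` used in `SwiperActivity` and `setData` above is not shown in this answer; a minimal sketch of what it might look like, inferred from the `getInstance()` / `setListSin()` / `getListSin()` calls (an assumption, not the author's actual class):
```
// Hypothetical in-memory holder for the shared wallpaper list.
public class Singleton {
    private static Singleton instance;
    private List<FeaturedModel> listSin;

    private Singleton() { }

    public static synchronized Singleton getInstance() {
        if (instance == null) {
            instance = new Singleton();
        }
        return instance;
    }

    public void setListSin(List<FeaturedModel> listSin) {
        this.listSin = listSin;
    }

    public List<FeaturedModel> getListSin() {
        return listSin;
    }
}
```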
slidingimages.xml
```
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/relativeLayout"
android:layout_width="match_parent"
android:layout_height="match_parent">
<ImageView
android:id="@+id/imageView"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:adjustViewBounds="true"
android:contentDescription="@null"
android:scaleType="centerCrop" />
</RelativeLayout>
```
activity\_swiper.xml
```
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".SwiperActivity">
<androidx.viewpager2.widget.ViewPager2
android:orientation="horizontal"
android:id="@+id/view_pager"
android:layoutDirection="inherit"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</RelativeLayout>
``` | Since the ViewActivity must show all the images (when swiped), it must hold the adapter with the full set of image URLs.
Instead of using an ImageView, use a RecyclerView in the ViewActivity and attach the adapter. In your code, do the following to make the RecyclerView horizontal and add the slide functionality.
```
recyclerView = view.findViewById(R.id.recycler_view);
recyclerView.setHasFixedSize(true);
recyclerView.setLayoutManager(new LinearLayoutManager(getApplicationContext(),LinearLayoutManager.HORIZONTAL,false));
SnapHelper snapHelper = new PagerSnapHelper();
snapHelper.attachToRecyclerView(recyclerView);
``` |
62,674,378 | I have created a wallpaper app where I'm loading images from the Firebase database in a RecyclerView. When I click on a RecyclerView item (image), that item's image URL is sent to the next activity and then that URL is loaded into an ImageView using Glide.
I want to change this to something like [Image-Slider](https://drive.google.com/file/d/18X8HwRtv54WkAZoRwdku8_o0QN_zndWl/view). By clicking on the RecyclerView item I want to show that image in full screen and slide left or right (next or previous). But I don't know how to do that.
Here is my code.
```
recyclerView.setHasFixedSize(true);
recyclerView.setLayoutManager(new GridLayoutManager(getContext(), 3));
adapter = new FeaturedAdapter(list);
recyclerView.setAdapter(adapter);
databaseReference = FirebaseDatabase.getInstance().getReference().child("Wallpaper All");
Query query = databaseReference.orderByChild("dark").equalTo(true);
query.addListenerForSingleValueEvent(new ValueEventListener() {
@Override
public void onDataChange(@NonNull DataSnapshot dataSnapshot) {
list.clear();
for (DataSnapshot dataSnapshot1 : dataSnapshot.getChildren()) {
FeaturedModel model = dataSnapshot1.getValue(FeaturedModel.class);
list.add(model);
}
adapter.notifyDataSetChanged();
}
@Override
public void onCancelled(@NonNull DatabaseError databaseError) {
Log.e("TAG_DATABASE_ERROR", databaseError.getMessage());
}
});
```
FeaturedAdapter.java
```
public class FeaturedAdapter extends RecyclerView.Adapter<FeaturedAdapter.ViewHolder> {
private List<FeaturedModel> featuredModels;
public FeaturedAdapter(List<FeaturedModel> featuredModels) {
this.featuredModels = featuredModels;
}
@NonNull
@Override
public ViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
View view = LayoutInflater.from(parent.getContext()).inflate(R.layout.custom_image, parent, false);
return new ViewHolder(view);
}
@Override
public void onBindViewHolder(@NonNull ViewHolder holder, int position) {
holder.setData(featuredModels.get(position).getImageLink()
, position,
featuredModels.get(position).isPremium());
}
@Override
public int getItemCount() {
return featuredModels.size();
}
class ViewHolder extends RecyclerView.ViewHolder {
private ImageView imageView;
private ImageView premiumImage;
public ViewHolder(@NonNull View itemView) {
super(itemView);
imageView = itemView.findViewById(R.id.imageview);
premiumImage = itemView.findViewById(R.id.premium);
}
private void setData(final String url, final int position, boolean premium) {
Glide.with(itemView.getContext().getApplicationContext()).load(url).into(imageView);
if (premium) {
premiumImage.setVisibility(View.VISIBLE);
itemView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Intent setIntent = new Intent(itemView.getContext(), PremiumViewActivity.class);
//setIntent.putExtra("title", url);
setIntent.putExtra("images", featuredModels.get(getAdapterPosition()).getImageLink());
setIntent.putExtra("id", featuredModels.get(position).getId());
itemView.getContext().startActivity(setIntent);
}
});
} else {
premiumImage.setVisibility(View.GONE);
itemView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Intent setIntent = new Intent(itemView.getContext(), ViewActivity.class);
//setIntent.putExtra("title", url);
setIntent.putExtra("images", featuredModels.get(getAdapterPosition()).getImageLink());
setIntent.putExtra("id", featuredModels.get(position).getId());
itemView.getContext().startActivity(setIntent);
}
});
}
}
}
}
```
ViewActivity
```
Random rnd = new Random();
int color = Color.argb(255, rnd.nextInt(256), rnd.nextInt(256), rnd.nextInt(256));
relativeLayout.setBackgroundColor(color);
Glide.with(this)
.load(getIntent().getStringExtra("images"))
.timeout(6000)
.listener(new RequestListener<Drawable>() {
@Override
public boolean onLoadFailed(@Nullable GlideException e, Object model, Target<Drawable> target, boolean isFirstResource) {
relativeLayout.setBackgroundColor(Color.TRANSPARENT);
return false;
}
@Override
public boolean onResourceReady(Drawable resource, Object model, Target<Drawable> target, DataSource dataSource, boolean isFirstResource) {
relativeLayout.setBackgroundColor(Color.TRANSPARENT);
return false;
}
})
.into(imageView);
setBackgroundWall.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
setBackgroundImage();
}
});
}
private void setBackgroundImage() {
Bitmap bitmap = ((BitmapDrawable) imageView.getDrawable()).getBitmap();
WallpaperManager manager = WallpaperManager.getInstance(getApplicationContext());
try {
manager.setBitmap(bitmap);
Toasty.success(getApplicationContext(), "Set Wallpaper Successfully", Toast.LENGTH_SHORT, true).show();
} catch (IOException e) {
Toasty.warning(this, "Wallpaper not load yet!", Toast.LENGTH_SHORT, true).show();
}
}
```
activity\_view.xml
```
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".activity.PremiumViewActivity">
<ImageView
android:id="@+id/viewImage"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:adjustViewBounds="true"
android:contentDescription="@null"
android:scaleType="centerCrop"
android:src="#00BCD4"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintHorizontal_bias="0.0"
app:layout_constraintStart_toStartOf="parent" />
<com.airbnb.lottie.LottieAnimationView
android:id="@+id/lottieSuccess"
android:layout_width="180dp"
android:layout_height="180dp"
android:visibility="gone"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent"
app:lottie_fileName="checked.json" />
<RelativeLayout
android:id="@+id/wallpaper_layout"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:padding="16dp"
android:visibility="gone"
app:layout_constraintBottom_toBottomOf="parent">
<ImageButton
android:id="@+id/saveImage"
android:layout_width="50dp"
android:layout_height="50dp"
android:layout_alignParentStart="true"
android:layout_alignParentBottom="true"
android:layout_marginStart="20dp"
android:background="@drawable/set_as_wallpaper_btn"
android:contentDescription="@null"
android:src="@drawable/save" />
<Button
android:id="@+id/setWallpaper"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentEnd="true"
android:layout_alignParentBottom="true"
android:layout_marginStart="4dp"
android:layout_marginEnd="8dp"
android:background="@drawable/set_as_wallpaper_btn"
android:minWidth="230dp"
android:text="Set as wallpaper"
android:textColor="#000"
android:textStyle="bold" />
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentTop="true"
android:layout_alignParentEnd="true"
android:orientation="horizontal">
<CheckBox
android:id="@+id/favoritesBtn_checkbox"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_margin="6dp"
android:button="@drawable/favourite_checkbox_selector" />
<ImageButton
android:id="@+id/shareBtn"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_margin="6dp"
android:background="@drawable/share"
android:contentDescription="@null" />
</LinearLayout>
</RelativeLayout>
<RelativeLayout
android:id="@+id/ads_layout"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical"
android:padding="12dp"
android:visibility="visible"
app:layout_constraintBottom_toBottomOf="parent">
<Button
android:id="@+id/watch_ads"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginStart="20dp"
android:layout_marginEnd="20dp"
android:background="@drawable/set_as_wallpaper_btn"
android:drawableStart="@drawable/advertizing"
android:paddingStart="50dp"
android:paddingEnd="50dp"
android:stateListAnimator="@null"
android:text="Watch Video Ad"
android:textColor="#000"
android:textStyle="bold" />
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_below="@id/watch_ads">
<Button
android:id="@+id/unlock_withCoins"
android:layout_width="match_parent"
android:layout_height="50dp"
android:layout_marginStart="20dp"
android:layout_marginTop="15dp"
android:layout_marginEnd="20dp"
android:background="@drawable/set_as_wallpaper_btn"
android:drawableStart="@drawable/diamond"
android:paddingStart="50dp"
android:paddingEnd="50dp"
android:stateListAnimator="@null"
android:text="Unlock with diamonds"
android:textColor="#000"
android:textStyle="bold" />
<TextView
android:id="@+id/diamonds_tv"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginTop="50dp"
android:gravity="center"
android:text="Total Diamonds: 0"
android:textSize="10sp"
android:textStyle="bold" />
</RelativeLayout>
</RelativeLayout>
</androidx.constraintlayout.widget.ConstraintLayout>
```
custom\_image.xml
```
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:layout_width="match_parent"
android:layout_height="wrap_content">
<androidx.cardview.widget.CardView
android:layout_width="match_parent"
android:layout_height="200dp"
android:layout_centerHorizontal="true"
android:background="#fff"
app:cardCornerRadius="10dp"
app:cardUseCompatPadding="true">
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="match_parent">
<ImageView
android:id="@+id/imageview"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:contentDescription="@null"
android:scaleType="centerCrop"
android:src="@color/colorPrimary" />
<ImageView
android:id="@+id/premium"
android:contentDescription="@null"
android:layout_width="24dp"
android:layout_height="24dp"
android:layout_alignParentTop="true"
android:layout_alignParentEnd="true"
android:layout_margin="10dp"
app:srcCompat="@drawable/diamond" />
</RelativeLayout>
</androidx.cardview.widget.CardView>
</RelativeLayout>
```
```
Structure
MainActivity
|
| //Click button to open
|
FragActivity
|
| //FrameLayout
|
Fragment
|
| //here is the recyclerView
| //Open new Activity to view image
ViewActivity
```
[Screen Recording](https://drive.google.com/file/d/10miP7tHv5tgvFBxIL0uLx6XAydz4QBfZ/view?usp=sharing) | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674378",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12901786/"
] | This can be solved by using `ViewPager` or `ViewPager2` in Android.
First, create an adapter:
ImageSwiperAdapter2.java
```
public class ImageSwiperAdapter2 extends RecyclerView.Adapter<ImageSwiperAdapter2.ImageSwiper> {
private List<FeaturedModel> list;
private Context context;
public ImageSwiperAdapter2(List<FeaturedModel> list, Context context) {
this.list = list;
this.context = context;
}
@NonNull
@Override
public ImageSwiper onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
View view = LayoutInflater.from(parent.getContext()).inflate(R.layout.slidingimages,
parent, false);
return new ImageSwiper(view);
}
@Override
public void onBindViewHolder(@NonNull final ImageSwiper holder, int position) {
Random rnd = new Random();
int color = Color.argb(255, rnd.nextInt(256), rnd.nextInt(256), rnd.nextInt(256));
holder.relativeLayout.setBackgroundColor(color);
Glide.with(context.getApplicationContext())
.load(list.get(position).getImageLink())
.listener(new RequestListener<Drawable>() {
@Override
public boolean onLoadFailed(@Nullable GlideException e, Object model, Target<Drawable> target, boolean isFirstResource) {
holder.relativeLayout.setBackgroundColor(Color.TRANSPARENT);
return false;
}
@Override
public boolean onResourceReady(Drawable resource, Object model, Target<Drawable> target, DataSource dataSource, boolean isFirstResource) {
holder.relativeLayout.setBackgroundColor(Color.TRANSPARENT);
return false;
}
})
.into(holder.imageView);
}
@Override
public int getItemCount() {
return list.size();
}
class ImageSwiper extends RecyclerView.ViewHolder {
private ImageView imageView;
private RelativeLayout relativeLayout; // bound so holder.relativeLayout in onBindViewHolder resolves
public ImageSwiper(@NonNull View itemView) {
super(itemView);
imageView = itemView.findViewById(R.id.imageView);
relativeLayout = itemView.findViewById(R.id.relativeLayout); // id added in slidingimages.xml below
}
}
}
```
In ViewActivity/SwiperActivity.java
```
public class SwiperActivity extends AppCompatActivity {
private ViewPager2 viewPager;
private List<FeaturedModel> list;
private ImageSwiperAdapter2 adapter2;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_swiper);
viewPager = findViewById(R.id.view_pager);
final int pos = getIntent().getIntExtra("pos", 0);
Singleton singleton = Singleton.getInstance();
list = new ArrayList<>();
list = singleton.getListSin();
adapter2 = new ImageSwiperAdapter2(list, this);
viewPager.setAdapter(adapter2);
viewPager.setCurrentItem(pos);
viewPager.registerOnPageChangeCallback(new ViewPager2.OnPageChangeCallback() {
@Override
public void onPageScrolled(int position, float positionOffset, int positionOffsetPixels) {
super.onPageScrolled(position, positionOffset, positionOffsetPixels);
}
@Override
public void onPageSelected(int position) {
super.onPageSelected(position);
Toast.makeText(SwiperActivity.this, "Selected: " + position, Toast.LENGTH_SHORT).show();
}
@Override
public void onPageScrollStateChanged(int state) {
super.onPageScrollStateChanged(state);
}
});
}
}
```
You can pass the `list` and the clicked item's position from `FeaturedAdapter`.
In your `FeaturedAdapter`'s `setData` method:
```
private void setData(final String url, final int position, boolean premium) {
Glide.with(itemView.getContext().getApplicationContext()).load(url).into(imageView);
final Singleton a = Singleton.getInstance();
a.setListSin(featuredModels);
if (premium) {
premiumImage.setVisibility(View.VISIBLE);
itemView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Intent setIntent = new Intent(itemView.getContext(), SwiperActivity.class);
setIntent.putExtra("pos", position);
itemView.getContext().startActivity(setIntent);
CustomIntent.customType(itemView.getContext(), "fadein-to-fadeout");
}
});
} else {
premiumImage.setVisibility(View.GONE);
itemView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Intent setIntent = new Intent(itemView.getContext(), SwiperActivity.class);
setIntent.putExtra("pos", position);
itemView.getContext().startActivity(setIntent);
CustomIntent.customType(itemView.getContext(), "fadein-to-fadeout");
}
});
}
}
```
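As in the snippets above, the `Singleton` class is assumed rather than shown in the answer; one possible minimal implementation consistent with the calls used here:
```
// Hypothetical singleton that passes the list between screens without re-querying Firebase.
public class Singleton {
    private static Singleton instance;
    private List<FeaturedModel> listSin;

    private Singleton() { }

    public static synchronized Singleton getInstance() {
        if (instance == null) {
            instance = new Singleton();
        }
        return instance;
    }

    public void setListSin(List<FeaturedModel> listSin) { this.listSin = listSin; }

    public List<FeaturedModel> getListSin() { return listSin; }
}
```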
slidingimages.xml
```
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/relativeLayout"
android:layout_width="match_parent"
android:layout_height="match_parent">
<ImageView
android:id="@+id/imageView"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:adjustViewBounds="true"
android:contentDescription="@null"
android:scaleType="centerCrop" />
</RelativeLayout>
```
activity\_swiper.xml
```
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".SwiperActivity">
<androidx.viewpager2.widget.ViewPager2
android:orientation="horizontal"
android:id="@+id/view_pager"
android:layoutDirection="inherit"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</RelativeLayout>
``` | A few days ago, I was looking for a similar requirement: showing a slidable image view.
This situation can be solved using `ViewPager` in Android.
You can use the following tutorials for building such a slide image view using `ViewPager`; a short wiring sketch follows the resource lists below.
Java Resources
--------------
1. [Blog Tutorial](https://www.androhub.com/android-image-slider-using-viewpager/)
2. [Video Tutorial](https://www.youtube.com/watch?v=Q2FPDI99-as)
Kotlin Resources
----------------
1. [Video Tutorial](https://www.youtube.com/watch?v=mvlCKRpKVZs) |
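As mentioned above, a compact wiring sketch (the `view_pager` id, `ImagePagerAdapter`, `imageUrls`, and `clickedPosition` are placeholder names for an image-paging adapter like the fuller sketch shown earlier in this document):
```
// Hypothetical setup: attach an image adapter and jump to the tapped position.
ViewPager viewPager = findViewById(R.id.view_pager);
viewPager.setAdapter(new ImagePagerAdapter(this, imageUrls));
viewPager.setCurrentItem(clickedPosition);
```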
62,674,385 | I'm trying to make a VBA script for data that is being extracted from a website. Now I am struggling with hiding the rows whose values are larger than a certain number of days.
The reason I struggle is that the format looks like this:
* 0d 8h 2m 29s
* 105d 19h 4m 19s
* 10d 17h 4m 48s
I have been browsing the internet but I've been unable to find any code that works with values without a separator. Usually time has a ":" separator. I tried to leave it out but I didn't get my code to work.
My code looks like:
```
Dim BooBoo As Long, TheEnd As Long
'Tell me where is the end!
TheEnd = Cells(Rows.Count, "A").End(xlUp).Row
'Play it where
With Sheets("Sheet2")
For BooBoo = 2 To TheEnd
'Anything in column E that is older than 3 days, has to be hidden
If .Cells(BooBoo, "E").Value > "3d 0h 0m 0s" Then
.Rows(BooBoo).EntireRow.Hidden = True
If Rows(BooBoo).Hidden = True Then Rows(BooBoo).EntireRow.Delete
End If
Next
End With
```
I feel I'm approaching this the wrong way. Hopefully someone can help me with this.
Regards,
Sebastian | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674385",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11492451/"
] | Try using this function, please:
```
Private Function TotalReference(strTime As String) As Double 'hours
Dim arrD As Variant
arrD = Split(strTime, " ")
TotalReference = Split(arrD(0), "d")(0) * 24 + Split(arrD(1), _
"h")(0) + Split(arrD(2), "m")(0) / 60 + Split(arrD(3), "s")(0) / 3600
End Function
```
You can test it in the following way:
```
Sub testHideBiggerDate()
Dim x As String
x = "3d 2h 20m 50s"
Debug.Print TotalReference(x)
End Sub
```
And in your code you can use it in this way:
```
If TotalReference(.Cells(BooBoo, "E").Value) > TotalReference("3d 0h 0m 0s") Then
```
Besides that, your code first hides the row, then checks whether it is hidden and deletes it. Why hide it first...? Also, `Rows(BooBoo).Hidden = True` checks the row of the active sheet, not the one your code processes, if that sheet is not active. You should use `.Rows(BooBoo).Hidden = True` instead.
Your code would look like this:
```
Sub compareDatevalues()
Dim BooBoo As Long, TheEnd As Long
TheEnd = Cells(Rows.count, "A").End(xlUp).row
With Sheets("Sheet2")
For BooBoo = 2 To TheEnd
'Anything in column E that is older than 3 days, has to be hidden
If TotalReference(.Cells(BooBoo, "E").Value) > TotalReference("3d 0h 0m 0s") Then 'or > 72
.Rows(BooBoo).EntireRow.Hidden = True
If .Rows(BooBoo).Hidden = True Then Rows(BooBoo).EntireRow.Delete 'why to delete it in two phases?
End If
Next
End With
End Sub
``` | You can quite easily extract the number of days from your string and then compare integers. I assume that you do not want to also take hours etc. into account.
```
Dim BooBoo As Long, TheEnd As Long
'Tell me where is the end!
TheEnd = Cells(Rows.Count, "A").End(xlUp).Row
'Play it where
With Sheets("Sheet2")
For BooBoo = 2 To TheEnd
'Anything in column E that is older than 3 days, has to be hidden
If Mid(.Cells(BooBoo, "E"), 1, WorksheetFunction.Find("d", .Cells(BooBoo, "E").Value) - 1) * 1 > 3 Then
.Rows(BooBoo).EntireRow.Hidden = True
If Rows(BooBoo).Hidden = True Then Rows(BooBoo).EntireRow.Delete
End If
Next
End With
``` |
62,674,411 | I'm a little bit confused about https communication with influxdb. I am running a 1.8 Influxdb instance on a virtual machine with a public IP. It is an Apache2 server but for now I am not willing to use it as a webserver to display web pages to clients. I want to use it as a database server for influxdb.
I obtained a valid certificate from Let's Encrypt, indeed the welcome page <https://datavm.bo.cnr.it> works properly over encrypted connection.
Then I followed all the instructions in the docs in order to enable https: I put the `fullchain.pem` file in `/etc/ssl` directory, I set the file permissions (not sure about the meaning of this step though), I edited influxdb.conf with `https-enabled = true` and set the path for `https-certificate` and `https-private.key` (`fullchain.pem` for both, is it right?). Then, `systemctl restart influxdb`. When I run `influxdb -ssl -host datavm.bo.cnr.it` I get the following:
```
Failed to connect to https://datavm.bo.cnr.it:8086: Get https://datavm.bo.cnr.it:8086/ping: http: server gave HTTP response to HTTPS client
Please check your connection settings and ensure 'influxd' is running.
```
Any help in understanding what I am doing wrong is very appreciated! Thank you | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674411",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10144109/"
] | Try using this function, please:
```
Private Function TotalReference(strTime As String) As Double 'hours
Dim arrD As Variant
arrD = Split(strTime, " ")
TotalReference = Split(arrD(0), "d")(0) * 24 + Split(arrD(1), _
"h")(0) + Split(arrD(2), "m")(0) / 60 + Split(arrD(3), "s")(0) / 3600
End Function
```
You can test it in the following way:
```
Sub testHideBiggerDate()
Dim x As String
x = "3d 2h 20m 50s"
Debug.Print TotalReference(x)
End Sub
```
And in your code you can use it in this way:
```
If TotalReference(.Cells(BooBoo, "E").Value) > TotalReference("3d 0h 0m 0s") Then
```
Besides that, your code first hides the row, then checks whether it is hidden and deletes it. Why hide it first...? Also, `Rows(BooBoo).Hidden = True` checks the row of the active sheet, not the one your code processes, if that sheet is not active. You should use `.Rows(BooBoo).Hidden = True` instead.
Your code would look like this:
```
Sub compareDatevalues()
Dim BooBoo As Long, TheEnd As Long
TheEnd = Cells(Rows.count, "A").End(xlUp).row
With Sheets("Sheet2")
For BooBoo = 2 To TheEnd
'Anything in column E that is older than 3 days, has to be hidden
If TotalReference(.Cells(BooBoo, "E").Value) > TotalReference("3d 0h 0m 0s") Then 'or > 72
.Rows(BooBoo).EntireRow.Hidden = True
If .Rows(BooBoo).Hidden = True Then Rows(BooBoo).EntireRow.Delete 'why to delete it in two phases?
End If
Next
End With
End Sub
``` | You can quite easily extract the number of days from your string and then compare integers. I assume that you do not want to also take hours etc. into account.
```
Dim BooBoo As Long, TheEnd As Long
'Tell me where is the end!
TheEnd = Cells(Rows.Count, "A").End(xlUp).Row
'Play it where
With Sheets("Sheet2")
For BooBoo = 2 To TheEnd
'Anything in column E that is older than 3 days, has to be hidden
If Mid(.Cells(BooBoo, "E"), 1, WorksheetFunction.Find("d", .Cells(BooBoo, "E").Value) - 1) * 1 > 3 Then
.Rows(BooBoo).EntireRow.Hidden = True
If Rows(BooBoo).Hidden = True Then Rows(BooBoo).EntireRow.Delete
End If
Next
End With
``` |
62,674,413 | This is the time format I want to convert
```
time = Time.parse('2020-07-02 03:59:59.999 UTC')
#=> 2020-07-02 03:59:59 UTC
```
I want to convert it to a string in this format.
```
"2020-07-02T03:59:59.999Z"
```
I have tried.
```
time.strftime("%Y-%m-%dT%H:%M:%S.999Z")
```
Is this correct? Any better way? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You can just use [`Time#iso8601`](https://ruby-doc.org/stdlib-2.7.1/libdoc/time/rdoc/Time.html#method-i-xmlschema) with the desired number of fraction digits as an argument:
```
time = Time.current.end_of_hour
time.iso8601(3) #=> "2020-07-01T10:59:59.999Z"
``` | If you want to handle the output format explicitly via [`strftime`](https://ruby-doc.org/core-2.7.1/Time.html#strftime-method), there are some things to keep in mind:
Instead of hard-coding `999`, you should use `%L` to get the actual milliseconds:
```
time = Time.parse('2020-07-02 03:59:59.999 UTC')
#=> 2020-07-02 03:59:59 UTC
time.strftime('%Y-%m-%dT%H:%M:%S.%LZ')
#=> "2020-07-02T03:59:59.999Z"
```
Use combinations for common formats, e.g. `%F` for `%Y-%m-%d` and `%T` for `%H:%M:%S`:
```
time.strftime('%FT%T.%LZ')
#=> "2020-07-02T03:59:59.999Z"
```
If you are dealing with time zones other than UTC (maybe your machine's local time zone), make sure to convert your time instance to [`utc`](https://ruby-doc.org/core-2.7.1/Time.html#method-i-utc) first:
```
time = Time.parse('2020-07-02 05:59:59.999+02:00')
#=> 2020-07-02 05:59:59 +0200
time.utc
#=> 2020-07-02 03:59:59 UTC
time.strftime('%FT%T.%LZ')
#=> "2020-07-02T03:59:59.999Z"
```
or to use `%z` / `%:z` to append the actual time zone offset:
```
time = Time.parse('2020-07-02 05:59:59.999+02:00')
time.strftime('%FT%T.%L%:z')
#=> "2020-07-02T05:59:59.999+02:00"
``` |
62,674,413 | This is the time format I want to convert
```
time = Time.parse('2020-07-02 03:59:59.999 UTC')
#=> 2020-07-02 03:59:59 UTC
```
I want to convert it to a string in this format.
```
"2020-07-02T03:59:59.999Z"
```
I have tried.
```
time.strftime("%Y-%m-%dT%H:%M:%S.999Z")
```
Is this correct? Any better way? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You can just use [`Time#iso8601`](https://ruby-doc.org/stdlib-2.7.1/libdoc/time/rdoc/Time.html#method-i-xmlschema) with the desired number of fraction digits as an argument:
```
time = Time.current.end_of_hour
time.iso8601(3) #=> "2020-07-01T10:59:59.999Z"
``` | For APIs you should use `utc.iso8601`:
```rb
> timestamp = Time.now.utc.iso8601
=> "2015-07-04T21:53:23Z"
```
See:
<https://thoughtbot.com/blog/its-about-time-zones#working-with-apis> |
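Applied to the value from the question, a small sketch (assumes `require 'time'`, which provides both `Time.parse` and fractional-digit `iso8601`):
```rb
require 'time'

time = Time.parse('2020-07-02 03:59:59.999 UTC')
puts time.utc.iso8601(3) # => "2020-07-02T03:59:59.999Z"
```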
62,674,413 | This is the time format I want to convert
```
time = Time.parse('2020-07-02 03:59:59.999 UTC')
#=> 2020-07-02 03:59:59 UTC
```
I want to convert it to a string in this format.
```
"2020-07-02T03:59:59.999Z"
```
I have tried.
```
time.strftime("%Y-%m-%dT%H:%M:%S.999Z")
```
Is this correct? Any better way? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | If you want to handle the output format explicitly via [`strftime`](https://ruby-doc.org/core-2.7.1/Time.html#strftime-method), there are some things to keep in mind:
Instead of hard-coding `999`, you should use `%L` to get the actual milliseconds:
```
time = Time.parse('2020-07-02 03:59:59.999 UTC')
#=> 2020-07-02 03:59:59 UTC
time.strftime('%Y-%m-%dT%H:%M:%S.%LZ')
#=> "2020-07-02T03:59:59.999Z"
```
Use combinations for common formats, e.g. `%F` for `%Y-%m-%d` and `%T` for `%H:%M:%S`:
```
time.strftime('%FT%T.%LZ')
#=> "2020-07-02T03:59:59.999Z"
```
If you are dealing with time zones other than UTC (maybe your machine's local time zone), make sure to convert your time instance to [`utc`](https://ruby-doc.org/core-2.7.1/Time.html#method-i-utc) first:
```
time = Time.parse('2020-07-02 05:59:59.999+02:00')
#=> 2020-07-02 05:59:59 +0200
time.utc
#=> 2020-07-02 03:59:59 UTC
time.strftime('%FT%T.%LZ')
#=> "2020-07-02T03:59:59.999Z"
```
or to use `%z` / `%:z` to append the actual time zone offset:
```
time = Time.parse('2020-07-02 05:59:59.999+02:00')
time.strftime('%FT%T.%L%:z')
#=> "2020-07-02T05:59:59.999+02:00"
``` | For APIs you should use `utc.iso8601`:
```rb
> timestamp = Time.now.utc.iso8601
=> "2015-07-04T21:53:23Z"
```
See:
<https://thoughtbot.com/blog/its-about-time-zones#working-with-apis> |
62,674,427 | Consider the following datasets
```
data_00 <- data.table(ID = c(1,1,1,2,2,2,3,3,3,4,5,5,6),
COLOUR = c("blue","green","yellow","yellow","red","blue","green","green","white","green","blue","yellow","white"))
data_01 <- data.table(ID=c(1,2,2,2,3,3,4,4,5,6,6),
COLOUR=c("red","blue","green","white","yellow","blue","white","green","blue","white","pink"))
ID COLOUR
1: 1 blue
2: 1 green
3: 1 yellow
4: 2 yellow
5: 2 red
6: 2 blue
7: 3 green
8: 3 green
9: 3 white
10: 4 green
11: 5 blue
12: 5 yellow
13: 6 white
ID COLOUR
1: 1 red
2: 2 blue
3: 2 green
4: 2 white
5: 3 yellow
6: 3 blue
7: 4 white
8: 4 green
9: 5 blue
10: 6 white
11: 6 pink
```
with ID representing a person's identification and COLOUR the wall colour of the person's room. I use the data.table package as the real data is very big and thus an efficient package is required. I want to compare the colours stated in data_01 with the colours in data_00, investigating whether a specific household also had this wall colour in the year before, and add a third column to data_01 with the logical values for this question.
I tried with
```
data_01 <- data_01[COLOUR00:=(COLOUR %in% data_00$ID[COLOUR]),by=ID]
```
but the logical values returned are wrong.
The expected output is supposed to be:
```
ID COLOUR PREV_YEAR
1: 1 red FALSE
2: 2 blue TRUE
3: 2 green FALSE
4: 2 white FALSE
5: 3 yellow FALSE
6: 3 blue FALSE
7: 4 white FALSE
8: 4 green TRUE
9: 5 blue TRUE
10: 6 white TRUE
11: 6 pink FALSE
```
I want to have an extra column indicating whether the colour was present in the household's home the year before.
Can someone help me tackle this problem? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674427",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13774653/"
] | You can just use [`Time#iso8601`](https://ruby-doc.org/stdlib-2.7.1/libdoc/time/rdoc/Time.html#method-i-xmlschema) with the desired number of fraction digits as an argument:
```
time = Time.current.end_of_hour
time.iso8601(3) #=> "2020-07-01T10:59:59.999Z"
``` | If you want to handle the output format explicitly via [`strftime`](https://ruby-doc.org/core-2.7.1/Time.html#strftime-method), there are some things to keep in mind:
Instead of hard-coding `999`, you should use `%L` to get the actual milliseconds:
```
time = Time.parse('2020-07-02 03:59:59.999 UTC')
#=> 2020-07-02 03:59:59 UTC
time.strftime('%Y-%m-%dT%H:%M:%S.%LZ')
#=> "2020-07-02T03:59:59.999Z"
```
Use combinations for common formats, e.g. `%F` for `%Y-%m-%d` and `%T` for `%H:%M:%S`:
```
time.strftime('%FT%T.%LZ')
#=> "2020-07-02T03:59:59.999Z"
```
If you are dealing with time zones other than UTC (maybe your machine's local time zone), make sure to convert your time instance to [`utc`](https://ruby-doc.org/core-2.7.1/Time.html#method-i-utc) first:
```
time = Time.parse('2020-07-02 05:59:59.999+02:00')
#=> 2020-07-02 05:59:59 +0200
time.utc
#=> 2020-07-02 03:59:59 UTC
time.strftime('%FT%T.%LZ')
#=> "2020-07-02T03:59:59.999Z"
```
or to use `%z` / `%:z` to append the actual time zone offset:
```
time = Time.parse('2020-07-02 05:59:59.999+02:00')
time.strftime('%FT%T.%L%:z')
#=> "2020-07-02T05:59:59.999+02:00"
``` |
62,674,427 | Consider the following datasets
```
data_00 <- data.table(ID = c(1,1,1,2,2,2,3,3,3,4,5,5,6),
COLOUR = c("blue","green","yellow","yellow","red","blue","green","green","white","green","blue","yellow","white"))
data_01 <- data.table(ID=c(1,2,2,2,3,3,4,4,5,6,6),
COLOUR=c("red","blue","green","white","yellow","blue","white","green","blue","white","pink"))
ID COLOUR
1: 1 blue
2: 1 green
3: 1 yellow
4: 2 yellow
5: 2 red
6: 2 blue
7: 3 green
8: 3 green
9: 3 white
10: 4 green
11: 5 blue
12: 5 yellow
13: 6 white
ID COLOUR
1: 1 red
2: 2 blue
3: 2 green
4: 2 white
5: 3 yellow
6: 3 blue
7: 4 white
8: 4 green
9: 5 blue
10: 6 white
11: 6 pink
```
with ID representing a person's identification and COLOUR the wall colour of the person's room. I use the data.table package as the real data is very big and thus an efficient package is required. I want to compare the colours stated in data_01 with the colours in data_00, investigating whether a specific household had this wall colour in the year before as well, and add a third column to data_01 with the logical values for this question.
I tried with
```
data_01 <- data_01[COLOUR00:=(COLOUR %in% data_00$ID[COLOUR]),by=ID]
```
but the logical values returned are wrong.
The expected output is supposed to be:
```
ID COLOUR PREV_YEAR
1: 1 red FALSE
2: 2 blue TRUE
3: 2 green FALSE
4: 2 white FALSE
5: 3 yellow FALSE
6: 3 blue FALSE
7: 4 white FALSE
8: 4 green TRUE
9: 5 blue TRUE
10: 6 white TRUE
11: 6 pink FALSE
```
I want to have an extra column indicating whether the colour was present in the household's home the year before.
Can someone help me tackle this problem? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674427",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13774653/"
] | You can just use [`Time#iso8601`](https://ruby-doc.org/stdlib-2.7.1/libdoc/time/rdoc/Time.html#method-i-xmlschema) with the desired number of fraction digits as an argument:
```
time = Time.current.end_of_hour
time.iso8601(3) #=> "2020-07-01T10:59:59.999Z"
``` | For APIs you should use `utc.iso8601`:
```rb
> timestamp = Time.now.utc.iso8601
=> "2015-07-04T21:53:23Z"
```
See:
<https://thoughtbot.com/blog/its-about-time-zones#working-with-apis> |
62,674,427 | Consider the following datasets
```
data_00 <- data.table(ID = c(1,1,1,2,2,2,3,3,3,4,5,5,6),
COLOUR = c("blue","green","yellow","yellow","red","blue","green","green","white","green","blue","yellow","white"))
data_01 <- data.table(ID=c(1,2,2,2,3,3,4,4,5,6,6),
COLOUR=c("red","blue","green","white","yellow","blue","white","green","blue","white","pink"))
ID COLOUR
1: 1 blue
2: 1 green
3: 1 yellow
4: 2 yellow
5: 2 red
6: 2 blue
7: 3 green
8: 3 green
9: 3 white
10: 4 green
11: 5 blue
12: 5 yellow
13: 6 white
ID COLOUR
1: 1 red
2: 2 blue
3: 2 green
4: 2 white
5: 3 yellow
6: 3 blue
7: 4 white
8: 4 green
9: 5 blue
10: 6 white
11: 6 pink
```
with ID representing a person's identification and COLOUR the wall colour of the person's room. I use the data.table package as the real data is very big and thus an efficient package is required. I want to compare the colours stated in data_01 with the colours in data_00, investigating whether a specific household had this wall colour in the year before as well, and add a third column to data_01 with the logical values for this question.
I tried with
```
data_01 <- data_01[COLOUR00:=(COLOUR %in% data_00$ID[COLOUR]),by=ID]
```
but the logical values returned are wrong.
The expected output is supposed to be:
```
ID COLOUR PREV_YEAR
1: 1 red FALSE
2: 2 blue TRUE
3: 2 green FALSE
4: 2 white FALSE
5: 3 yellow FALSE
6: 3 blue FALSE
7: 4 white FALSE
8: 4 green TRUE
9: 5 blue TRUE
10: 6 white TRUE
11: 6 pink FALSE
```
I want to have an extra column indicating whether the colour was present in the household's home the year before.
Can someone help me tackle this problem? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674427",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13774653/"
] | If you want to handle the output format explicitly via [`strftime`](https://ruby-doc.org/core-2.7.1/Time.html#strftime-method), there are some things to keep in mind:
Instead of hard-coding `999`, you should use `%L` to get the actual milliseconds:
```
time = Time.parse('2020-07-02 03:59:59.999 UTC')
#=> 2020-07-02 03:59:59 UTC
time.strftime('%Y-%m-%dT%H:%M:%S.%LZ')
#=> "2020-07-02T03:59:59.999Z"
```
Use combinations for common formats, e.g. `%F` for `%Y-%m-%d` and `%T` for `%H:%M:%S`:
```
time.strftime('%FT%T.%LZ')
#=> "2020-07-02T03:59:59.999Z"
```
If you are dealing with time zones other than UTC (maybe your machine's local time zone), make sure to convert your time instance to [`utc`](https://ruby-doc.org/core-2.7.1/Time.html#method-i-utc) first:
```
time = Time.parse('2020-07-02 05:59:59.999+02:00')
#=> 2020-07-02 05:59:59 +0200
time.utc
#=> 2020-07-02 03:59:59 UTC
time.strftime('%FT%T.%LZ')
#=> "2020-07-02T03:59:59.999Z"
```
or to use `%z` / `%:z` to append the actual time zone offset:
```
time = Time.parse('2020-07-02 05:59:59.999+02:00')
time.strftime('%FT%T.%L%:z')
#=> "2020-07-02T05:59:59.999+02:00"
``` | For APIs you should use `utc.iso8601`:
```rb
> timestamp = Time.now.utc.iso8601
=> "2015-07-04T21:53:23Z"
```
See:
<https://thoughtbot.com/blog/its-about-time-zones#working-with-apis> |
62,674,431 | When I click the **submit** button, I render with JSON in the controller.
My json is `{notes: array[1], array[2]}`. Now I am rendering the `view.html.erb` file.
I want to show the values in notes like `<%= notes %>` in this file; how do I display it? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | If `@notes` is an array you can loop over it and display the content in your `.html.erb` file as below:
```html
<div>
<% @notes.each do |note| %>
<div class="title">
<%= note[:title] %>
</div>
<% end %>
</div>
``` | Sounds like you are confusing JSON requests with HTML requests.
The incoming request should be an HTML request.
controller
```
# GET /the_controller/view
# not /the_controller/view.json <- wrong
def view
@notes = array
end
```
view.html.erb
```
<%= @notes %>
```
see @rishabh0211's answer for better presentation.
if indeed you intend this to be a JSON request, through some ajax request or similar, you need to specify that in your question. |
62,674,440 | I have a component library package and it's used in a lot of files like this:
```
import { Grid, Heading, Button } from 'component-library'
```
I want to change all of my import to something like this:
```
import { Grid } from 'component-library/lib/Grid'
import { Heading } from 'component-library/lib/Heading'
import { Button } from 'component-library/lib/Button'
```
I found an extension on VSCode for doing this:
<https://marketplace.visualstudio.com/items?itemName=angelomollame.batch-replacer&ssr=false#overview>
And it seems it's accepting regex too.
I tried to write a regex to find a replace old imports to a new one but my regex not working. This is my current regex:
```
(?:import)\s+([\w,{}\s\*]+)\s+(?:from)?\s*(?:["'])?([@\w\s\\\/.-]+)(?:["'])?\s*
```
Can you help me to fix this? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674440",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12417183/"
] | You can do it in two steps (that's as good as my regex-fu can do).
Step 1:
`(import \{ )?((\w+)[, ]+)(?=.*?(\} from '(.*)'))` Search regex
`import { $3 } from '$5/lib/$3'\n` Replace regex
[Regex101](https://regex101.com/r/mP8jjq/4)
---
This leaves one stray `} from 'great-library';` behind for each original `import` line (maybe someone can improve the regex to avoid that), so remove it with:
`^\} from.*$` Search
Replace with nothing.
---
There are a few multi-regex replace extensions out there where you can combine these two steps into one. For example, using [Batch Replacer](https://marketplace.visualstudio.com/items?itemName=angelomollame.batch-replacer&ssr=false#overview) try this "script":
```
replace-regex "(import { )?((\w+)[, ]+)(?=.*?(} from '(.*)'))"
with "import { $3 } from '$5/lib/$3'\n"
replace-regex "(?<!.)} from.*\r?\n?"
with ""
```
---
[![Batch Replacer demo](https://i.stack.imgur.com/k4FjJ.gif)](https://i.stack.imgur.com/k4FjJ.gif) | Without any extensions, just use a regular expression as many times as there is a match:
Find:
```
(import\s*{\s*)(\w+)\s*,([\w\s,*]+?)(\s*}(?:\s*from)?\s*)(["'])?([@\w\s\\/.-]*?)\5
```
Replace:
```
$1$2$4$5$6$5\n$1$3$4$5$6$5
```
See the [regex demo](https://regex101.com/r/140ZOq/1)
The pattern will match and capture parts of the pattern into separate groups. Only the first word in the round brackets will be captured each time and a line with just that word will be output + a newline and a line containing other words inside the brackets.
That is why you need to run search and replace until no matches are found. |
62,674,448 | I have a file1.txt and a file2.txt which have a few hostnames separated by commas ",". I have a working solution where I loop through the values of file2.txt and capture the values which are not in file1.txt. However, when I try to execute the same through the pipeline, it's not working.
Can anyone advise whether Awk or Sed or any one-liner command can achieve the same?
**file1.txt**
host1,host2,host3,host4,host100
**file2.txt**
host2,host4
expected output:
**output.txt**
host1,host3,host100
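For reference, one possible one-liner (a hedged bash sketch using process substitution; it splits both files on commas, keeps the items of file1.txt absent from file2.txt to match the expected output, and re-joins them):
```bash
tr ',' '\n' < file2.txt | grep -Fvxf - <(tr ',' '\n' < file1.txt) | paste -sd, - > output.txt
```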
Thanks | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674448",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2223715/"
] | You can do it in two steps (that's as good as my regex-fu can do).
Step 1:
`(import \{ )?((\w+)[, ]+)(?=.*?(\} from '(.*)'))` Search regex
`import { $3 } from '$5/lib/$3'\n` Replace regex
[Regex101](https://regex101.com/r/mP8jjq/4)
---
This leaves one stray `} from 'great-library';` behind for each original `import` line (maybe someone can improve the regex to avoid that), so remove it with:
`^\} from.*$` Search
Replace with nothing.
---
There are a few multi-regex replace extensions out there where you can combine these two steps into one. For example, using [Batch Replacer](https://marketplace.visualstudio.com/items?itemName=angelomollame.batch-replacer&ssr=false#overview) try this "script":
```
replace-regex "(import { )?((\w+)[, ]+)(?=.*?(} from '(.*)'))"
with "import { $3 } from '$5/lib/$3'\n"
replace-regex "(?<!.)} from.*\r?\n?"
with ""
```
---
[![Batch Replacer demo](https://i.stack.imgur.com/k4FjJ.gif)](https://i.stack.imgur.com/k4FjJ.gif) | Without any extensions, just use a regular expression as many times as there is a match:
Find:
```
(import\s*{\s*)(\w+)\s*,([\w\s,*]+?)(\s*}(?:\s*from)?\s*)(["'])?([@\w\s\\/.-]*?)\5
```
Replace:
```
$1$2$4$5$6$5\n$1$3$4$5$6$5
```
See the [regex demo](https://regex101.com/r/140ZOq/1)
The pattern will match and capture parts of the pattern into separate groups. Only the first word in the round brackets will be captured each time and a line with just that word will be output + a newline and a line containing other words inside the brackets.
That is why you need to run search and replace until no matches are found. |
62,674,456 | I created several routes that are working, but my app.get('/') doesn't send anything I pass inside.
Nothing happens in the backend console, nor in the frontend, when going to 'localhost:3000/'.. My front Home component is displayed fine on the '/' path... Any idea what is going wrong in the back?
app.js (backend)
```
const bodyParser = require('body-parser');
const app = express();
const morgan = require('morgan');
const connection = require('./helpers/db.js');
const session = require('express-session')
// Configure view engine to render EJS templates.
app.set('views', __dirname + '/views');
app.set('view engine', 'ejs');
// Use application-level middleware for common functionality, including
// logging, parsing, and session handling.
app.use(require('morgan')('combined'));
app.use(require('body-parser').urlencoded({ extended: true }));
app.use(require('express-session')({ secret: 'keyboard cat', resave: false, saveUninitialized: false }));
// Initialize Passport and restore authentication state, if any, from the
// session.
app.use(passport.initialize());
app.use(passport.session());
app.use(morgan('dev'));
app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());
app.use(express.static(__dirname + '/public'));
app.get('/', (req, res) => {
console.log('heeeeeeere')
res.send('YOU ARE HERE')
//res.render('YOU ARE HERE')
//res.render('signup', { user: req.user });
});
app.post('/signup', (req, res) => {
var data = {
name: req.body.name,
lastname: req.body.lastname,
email: req.body.email,
password: req.body.password
};
var email = data.email
var passwordconf = req.body.passwordconf
var password = data.password
const checkEmail = /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
checkEmail.test(String(email).toLowerCase())
? password === passwordconf
? connection.query('INSERT INTO users SET ?', data, (error, response, fields) => {
if (error)
res.status(500).json({ flash: error.message });
else
res.status(200).json({ flash: "User has been signed up!" , path: '/profile'})
})
: res.status(500).json({ flash: "Please confirm your password again" })
: res.status(500).json({ flash: "Please give a correct email" });
});
app.post('/signin', (req, res) => {
var data = {
email: req.body.email,
password: req.body.password
};
const checkEmail = /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
checkEmail.test(String(data.email).toLowerCase())
? connection.query(`SELECT * FROM users WHERE email = "${data.email}" AND password = "${data.password}"`, (error, response, fields) => {
if (error)
res.status(500).json({ flash: error.message});
else
response != ''
? res.status(200).json({ flash: "Welcome !", path: '/profile'})
: res.status(200).json({ flash: `The email or the password is incorrect` , path: '/profile'})
})
: res.status(500).json({ flash: "Please give a correct email" });
});
// in case path is not found, return the 'Not Found' 404 code
app.use(function(req, res, next) {
var err = new Error('Not Found');
err.status = 404;
next(err);
});
app.get('/signin',
function(req, res){
res.render('login');
});
app.get('/logout',
function(req, res){
req.logout();
res.redirect('/');
});
app.get('/profile',
require('connect-ensure-login').ensureLoggedIn(),
function(req, res){
res.render('profile', { user: req.user });
});
// launch the node server
let server = app.listen( process.env.PORT || 5000, function(){
console.log('Listening on port ' + server.address().port);
});
```
Routes (frontend)
```
import {MuiThemeProvider} from '@material-ui/core/styles';
import { Grid, Paper } from '@material-ui/core';
import React from 'react';
import './App.css';
import {
BrowserRouter as Router,
Switch,
Route
} from "react-router-dom";
import SignUp from './components/SignUp';
import SignIn from './components/SignIn';
import Profile from './components/Profile';
import Home from './components/Home';
// const bcrypt = require('bcrypt');
function App() {
return (
<MuiThemeProvider >
<Grid container
alignItems='center'
style={{ height: '100%' }}>
<Grid item xs={12}>
<Paper
elevation={4}
style={{ margin: 32 }}
>
<Grid container
alignItems='center'
justify='center'>
<Grid item xs={12}
alignContent='center'
>
<Router>
<Switch>
<Route path='/'>
<Home />
</Route>
<Route path='/signup'>
<SignUp />
</Route>
<Route path='/signin'>
<SignIn />
</Route>
<Route path='/profile'>
<Profile />
</Route>
</Switch>
</Router>
</Grid>
</Grid>
</Paper>
</Grid>
</Grid>
</MuiThemeProvider>
);
}
export default App;
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674456",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11101899/"
] | You can do it in two steps (that's as good as my regex-fu can do).
Step 1:
`(import \{ )?((\w+)[, ]+)(?=.*?(\} from '(.*)'))` Search regex
`import { $3 } from '$5/lib/$3'\n` Replace regex
[Regex101](https://regex101.com/r/mP8jjq/4)
---
This leaves one stray `} from 'great-library';` behind for each original `import` line (maybe someone can improve the regex to avoid that), so remove it with:
`^\} from.*$` Search
Replace with nothing.
---
There are a few multi-regex replace extensions out there where you can combine these two steps into one. For example, using [Batch Replacer](https://marketplace.visualstudio.com/items?itemName=angelomollame.batch-replacer&ssr=false#overview) try this "script":
```
replace-regex "(import { )?((\w+)[, ]+)(?=.*?(} from '(.*)'))"
with "import { $3 } from '$5/lib/$3'\n"
replace-regex "(?<!.)} from.*\r?\n?"
with ""
```
---
[![Batch Replacer demo](https://i.stack.imgur.com/k4FjJ.gif)](https://i.stack.imgur.com/k4FjJ.gif) | Without any extensions, just use a regular expression as many times as there is a match:
Find:
```
(import\s*{\s*)(\w+)\s*,([\w\s,*]+?)(\s*}(?:\s*from)?\s*)(["'])?([@\w\s\\/.-]*?)\5
```
Replace:
```
$1$2$4$5$6$5\n$1$3$4$5$6$5
```
See the [regex demo](https://regex101.com/r/140ZOq/1)
The pattern will match and capture parts of the pattern into separate groups. Only the first word in the round brackets will be captured each time and a line with just that word will be output + a newline and a line containing other words inside the brackets.
That is why you need to run search and replace until no matches are found. |
62,674,471 | On clicking my request, the content 'Are you sure?' is not changing to the required language.
list.component.html
```html
<button type="button" (click)="myrequest(item.Id)">Request View</button>
```
list.component.ts
```js
myrequest(Id: number) {
this.notificationService.smartMessageBox(
{
title: "Warning",
content: "{{ 'Are you sure?' | translate }}",
buttons: "[No][Yes]"
},
ButtonPressed => {
if (ButtonPressed === "Yes") {
this.loading = true;
this.FurtherRequest(projectId).subscribe((data: any) => {
this.loading = false;
if (data.Success) {
this.notificationService.successMessage("request submitted");
} else {
this.notificationService.errorMessage(data.Message);
}
});
}
if (ButtonPressed === "No") {
}
}
);
}
```
list.module.ts
```js
import { TranslateModule, TranslateLoader } from '@ngx-translate/core';
imports: [
TranslateModule],
exports: [
TranslateModule],
```
Unable to translate things inside js | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674471",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13812405/"
] | You need to pass a parameter to the translate pipe inside the js file.
```
{{ 'Are you sure?' | translate :param }}
``` | Pipes do not work like this. You need to use `translateService.get` or `translateService.instant` to get a translation programmatically.
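A minimal sketch (assuming ngx-translate's `TranslateService` is injected as `this.translate`; the names are illustrative):
```ts
// synchronous lookup; translation files must already be loaded
const content = this.translate.instant('Are you sure?');

// or asynchronous, safe while translations are still loading
this.translate.get('Are you sure?').subscribe((text: string) => {
  // use text as the smartMessageBox content
});
``` |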
62,674,500 | How to convert a given RGBA to hex, assuming the value of each channel is given as a separate variable r, g, b, a, where r, g & b are 0-255 and a is in 0-1, in JavaScript.
Please define a function for it.
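For illustration, a minimal sketch of one such function (hedged: channels are clamped to 0-255 and the 0-1 alpha is scaled to a two-digit hex pair appended at the end):
```js
// pads each byte to two hex digits; alpha (0-1) is scaled to 0-255
function rgbaToHex(r, g, b, a) {
  const toHex = (v) => Math.round(Math.min(255, Math.max(0, v))).toString(16).padStart(2, '0');
  return '#' + toHex(r) + toHex(g) + toHex(b) + toHex(a * 255);
}
// rgbaToHex(255, 99, 71, 0.5) => "#ff634780"
``` | 2020/07/01 | [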
"https://Stackoverflow.com/questions/62674500",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13846410/"
] | You need to pass a parameter to the translate pipe inside the js file.
```
{{ 'Are you sure?' | translate :param }}
``` | Pipes do not work like this. You need to use `translateService.get` or `translateService.instant` to get a translation programmatically. |
62,674,520 | I tried to test "Export PDF" in Angular on all web browsers (Chrome, Firefox); it works correctly except on Internet Explorer 11
```
//******************************************".html"******************************************//
<button title="{{'purchase request'| translate}} PDF" mat-button class="header-button"
(click)="downloadPdf(prPurchaseRequest)">
<mat-icon>cloud_download</mat-icon>{{'purchase request'|translate}} PDF
</button>
//******************************************".ts"******************************************//
/**
*
* Download PDF prPurchaseRequest
*/
downloadPdf(prPurchaseRequest) {
this.spinnerstate = true;
this.extraService.getCollection('api/pr-purchase-requests/generate_pdf/' +
prPurchaseRequest.id).subscribe((response) => {
setTimeout(() => { this.spinnerstate = false; }, 2000);
const name = prPurchaseRequest.reference + '_purchase-request.pdf';
const linkSource = 'data:application/pdf;base64,' + response.data;
const downloadLink = document.createElement('a');
downloadLink.href = linkSource;
downloadLink.download = name;
downloadLink.click();
});
}
```
//\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*Error Console in internet explorer \*\*\*\*\*\*\*\*\*\*//
[![enter image description here](https://i.stack.imgur.com/vH7fA.png)](https://i.stack.imgur.com/vH7fA.png) | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674520",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9224746/"
] | You need to pass a parameter to the translate pipe inside the js file.
```
{{ 'Are you sure?' | translate :param }}
``` | Pipes do not work like this. You need to use `translateService.get` or `translateService.instant` to get a translation programmatically. |
62,674,530 | My goal is to collect logs from different servers using Filebeat and aggregate/visualize them using ElasticSearch and Kibana. For the time being, I am excluding Logstash from the scene.
So far I have been able to configure Filebeat to push logs real-time and I am able to confirm through the Kibana interface that the logs are indeed being pushed to ElasticSearch.
**Problem:**
The problem is that the Filebeat (or the ElasticSearch) automatically adds extra empty fields/properties to the index.
Some of the fields I can see on the Kibana interface:
```
aws.cloudtrail.user_identity.session_context.creation_date
azure.auditlogs.properties.activity_datetime
azure.enqueued_time
azure.signinlogs.properties.created_at
cef.extensions.agentReceiptTime
cef.extensions.deviceCustomDate1
cef.extensions.deviceCustomDate2
cef.extensions.deviceReceiptTime
cef.extensions.endTime
cef.extensions.fileCreateTime
cef.extensions.fileModificationTime
cef.extensions.flexDate1
...
```
They are all empty fields.
When I check the mapping for that index using `GET /[index]/_mapping`, I can see ~3000 fields that I didn't really add. I am not sure how these fields were added and how to remove them.
**Reproduction:**
Filebeat and ElasticSearch docker images I use:
```
elasticsearch:7.8.0
elastic/filebeat:7.8.0
```
On top of the base images I put basic configuration files as simple as:
```
# filebeat.yml
filebeat.inputs:
- type: log
paths:
- /path_to/my_log_file/metrics.log
output.elasticsearch:
hosts: ["http://192.168.0.1:9200"]
```
```
# elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
node.name: node-1
discovery.seed_hosts: ["127.0.0.1"]
cluster.initial_master_nodes: ["node-1"]
```
A typical log message would look like this:
```
2020-07-01 08:40:07,432 - CPUUtilization.Percent:50.0|#Level:Host|#hostname:a78f2ab3da65,timestamp:1593592807
2020-07-01 08:40:07,437 - DiskAvailable.Gigabytes:43.607460021972656|#Level:Host|#hostname:a78f2ab3da65,timestamp:1593592807
```
Thank you | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6587143/"
] | Enter the [Elastic Common Schema](https://www.elastic.co/guide/en/ecs/current/index.html) (ECS), a [godsend](https://www.elastic.co/blog/introducing-the-elastic-common-schema)!
When Filebeat starts, it installs an index template with all the ECS fields from the common schema, that's why you see so many fields in your index mapping, but it's not really an issue.
Then, on the Kibana interface you see all those "empty" fields on the Table view (Discover tab). But if you switch to the JSON view, you'll see that those fields are not actually inside the document. Filebeat doesn't add them to your documents. The reason you see them in the Table view is because Kibana is requesting them (using `docvalue_fields`). Just click on Inspect and see the request that Kibana sends to Elasticsearch.
So there's nothing to worry about, really.
Coming to your actual message, if you were to parse `CPUUtilization.Percent:50.0` you could actually store that into a standard ECS field called `"system.cpu.total.pct": 50` and you could see those values evolve over time in the Metrics app in Kibana. Same thing for `DiskAvailable.Gigabytes:43.607460021972656` | All the input is being automatically mapped by filebeat to the 'elastic common schema' aka ECS and exported. Please have a look [here](https://www.elastic.co/guide/en/beats/filebeat/master/exported-fields-ecs.html) for more details. |
62,674,530 | My goal is to collect logs from different servers using Filebeat and aggregate/visualize them using ElasticSearch and Kibana. For the time being, I am excluding Logstash from the scene.
So far I have been able to configure Filebeat to push logs real-time and I am able to confirm through the Kibana interface that the logs are indeed being pushed to ElasticSearch.
**Problem:**
The problem is that the Filebeat (or the ElasticSearch) automatically adds extra empty fields/properties to the index.
Some of the fields I can see on the Kibana interface:
```
aws.cloudtrail.user_identity.session_context.creation_date
azure.auditlogs.properties.activity_datetime
azure.enqueued_time
azure.signinlogs.properties.created_at
cef.extensions.agentReceiptTime
cef.extensions.deviceCustomDate1
cef.extensions.deviceCustomDate2
cef.extensions.deviceReceiptTime
cef.extensions.endTime
cef.extensions.fileCreateTime
cef.extensions.fileModificationTime
cef.extensions.flexDate1
...
```
They are all empty fields.
When I check the mapping for that index using `GET /[index]/_mapping`, I can see ~3000 fields that I didn't really add. I am not sure how these fields were added and how to remove them.
**Reproduction:**
Filebeat and ElasticSearch docker images I use:
```
elasticsearch:7.8.0
elastic/filebeat:7.8.0
```
On top of the base images I put basic configuration files as simple as:
```
# filebeat.yml
filebeat.inputs:
- type: log
paths:
- /path_to/my_log_file/metrics.log
output.elasticsearch:
hosts: ["http://192.168.0.1:9200"]
```
```
# elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
node.name: node-1
discovery.seed_hosts: ["127.0.0.1"]
cluster.initial_master_nodes: ["node-1"]
```
A typical log message would look like this:
```
2020-07-01 08:40:07,432 - CPUUtilization.Percent:50.0|#Level:Host|#hostname:a78f2ab3da65,timestamp:1593592807
2020-07-01 08:40:07,437 - DiskAvailable.Gigabytes:43.607460021972656|#Level:Host|#hostname:a78f2ab3da65,timestamp:1593592807
```
Thank you | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6587143/"
] | Enter the [Elastic Common Schema](https://www.elastic.co/guide/en/ecs/current/index.html) (ECS), a [godsend](https://www.elastic.co/blog/introducing-the-elastic-common-schema)!
When Filebeat starts, it installs an index template with all the ECS fields from the common schema, that's why you see so many fields in your index mapping, but it's not really an issue.
Then, on the Kibana interface you see all those "empty" fields on the Table view (Discover tab). But if you switch to the JSON view, you'll see that those fields are not actually inside the document. Filebeat doesn't add them to your documents. The reason you see them in the Table view is because Kibana is requesting them (using `docvalue_fields`). Just click on Inspect and see the request that Kibana sends to Elasticsearch.
So there's nothing to worry about, really.
Coming to your actual message, if you were to parse `CPUUtilization.Percent:50.0` you could actually store that into a standard ECS field called `"system.cpu.total.pct": 50` and you could see those values evolve over time in the Metrics app in Kibana. Same thing for `DiskAvailable.Gigabytes:43.607460021972656` | If you define elastic index in filebeat.yml
You can go to Stack Management > Index Management and edit the filebeat index.
In the settings default fields:
```
"query": {
"default_field": [
"message",
"tags",
"agent.ephemeral_id",
"agent.id",
"agent.name",
"agent.type",
...
]}
```
You can remove all the unnecessary fields from here. |
62,674,530 | My goal is to collect logs from different servers using Filebeat and aggregate/visualize them using ElasticSearch and Kibana. For the time being, I am excluding Logstash from the scene.
So far I have been able to configure Filebeat to push logs real-time and I am able to confirm through the Kibana interface that the logs are indeed being pushed to ElasticSearch.
**Problem:**
The problem is that the Filebeat (or the ElasticSearch) automatically adds extra empty fields/properties to the index.
Some of the fields I can see on the Kibana interface:
```
aws.cloudtrail.user_identity.session_context.creation_date
azure.auditlogs.properties.activity_datetime
azure.enqueued_time
azure.signinlogs.properties.created_at
cef.extensions.agentReceiptTime
cef.extensions.deviceCustomDate1
cef.extensions.deviceCustomDate2
cef.extensions.deviceReceiptTime
cef.extensions.endTime
cef.extensions.fileCreateTime
cef.extensions.fileModificationTime
cef.extensions.flexDate1
...
```
They are all empty fields.
When I check the mapping for that index using `GET /[index]/_mapping`, I can see ~3000 fields that I didn't really add. I am not sure how these fields were added and how to remove them.
**Reproduction:**
Filebeat and ElasticSearch docker images I use:
```
elasticsearch:7.8.0
elastic/filebeat:7.8.0
```
On top of the base images I put basic configuration files as simple as:
```
# filebeat.yml
filebeat.inputs:
- type: log
paths:
- /path_to/my_log_file/metrics.log
output.elasticsearch:
hosts: ["http://192.168.0.1:9200"]
```
```
# elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
node.name: node-1
discovery.seed_hosts: ["127.0.0.1"]
cluster.initial_master_nodes: ["node-1"]
```
A typical log message would look like this:
```
2020-07-01 08:40:07,432 - CPUUtilization.Percent:50.0|#Level:Host|#hostname:a78f2ab3da65,timestamp:1593592807
2020-07-01 08:40:07,437 - DiskAvailable.Gigabytes:43.607460021972656|#Level:Host|#hostname:a78f2ab3da65,timestamp:1593592807
```
Thank you | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6587143/"
] | Enter the [Elastic Common Schema](https://www.elastic.co/guide/en/ecs/current/index.html) (ECS), a [godsend](https://www.elastic.co/blog/introducing-the-elastic-common-schema)!
When Filebeat starts, it installs an index template with all the ECS fields from the common schema, that's why you see so many fields in your index mapping, but it's not really an issue.
Then, on the Kibana interface you see all those "empty" fields on the Table view (Discover tab). But if you switch to the JSON view, you'll see that those fields are not actually inside the document. Filebeat doesn't add them to your documents. The reason you see them in the Table view is because Kibana is requesting them (using `docvalue_fields`). Just click on Inspect and see the request that Kibana sends to Elasticsearch.
So there's nothing to worry about, really.
Coming to your actual message, if you were to parse `CPUUtilization.Percent:50.0` you could actually store that into a standard ECS field called `"system.cpu.total.pct": 50` and you could see those values evolve over time in the Metrics app in Kibana. Same thing for `DiskAvailable.Gigabytes:43.607460021972656` | You can define your own template like this:
```
curl -XPUT 'localhost:9200/_template/filebeat-7.8.0' -H 'Content-Type: application/json' -d'
{
"order" : 1,
"index_patterns" : [
"filebeat-7.8.0-*"
],
"settings" : {
"index" : {
"lifecycle" : {
"name" : "filebeat",
"rollover_alias" : "filebeat-7.8.0"
},
"mapping" : {
"total_fields" : {
"limit" : "10000"
}
},
"refresh_interval" : "5s"
}
},
"mappings" : {
"_meta" : {
"beat" : "filebeat",
"version" : "7.8.0"
},
"date_detection" : false,
"properties" : {
"@timestamp" : {
"type" : "date"
},
"type": {
"type": "keyword"
},
"namespace": {
"type": "keyword"
},
"app": {
"type": "keyword"
},
"k8s-app": {
"type": "keyword"
},
"node": {
"type": "keyword"
},
"pod": {
"type": "keyword"
},
"stream": {
"type": "keyword"
}
}
},
"aliases" : { }
}'
```
Explanation:
As explained by @Val, Filebeat installs an index template named `filebeat-7.8.0` when it starts. However, if the template exists, Filebeat will use it directly rather than creating a new one. The default value of `setup.template.overwrite` is `false` (<https://www.elastic.co/guide/en/beats/filebeat/current/configuration-template.html>). By doing this you won't have those fields in your mapping, and Kibana won't show these fields either. |
62,674,530 | My goal is to collect logs from different servers using Filebeat and aggregate/visualize them using ElasticSearch and Kibana. For the time being, I am excluding Logstash from the scene.
So far I have been able to configure Filebeat to push logs real-time and I am able to confirm through the Kibana interface that the logs are indeed being pushed to ElasticSearch.
**Problem:**
The problem is that the Filebeat (or the ElasticSearch) automatically adds extra empty fields/properties to the index.
Some of the fields I can see on the Kibana interface:
```
aws.cloudtrail.user_identity.session_context.creation_date
azure.auditlogs.properties.activity_datetime
azure.enqueued_time
azure.signinlogs.properties.created_at
cef.extensions.agentReceiptTime
cef.extensions.deviceCustomDate1
cef.extensions.deviceCustomDate2
cef.extensions.deviceReceiptTime
cef.extensions.endTime
cef.extensions.fileCreateTime
cef.extensions.fileModificationTime
cef.extensions.flexDate1
...
```
They are all empty fields.
When I check the mapping for that index using `GET /[index]/_mapping`, I can see ~3000 fields that I didn't really add. I am not sure how these fields were added and how to remove them.
**Reproduction:**
Filebeat and ElasticSearch docker images I use:
```
elasticsearch:7.8.0
elastic/filebeat:7.8.0
```
On top of the base images I put basic configuration files as simple as:
```
# filebeat.yml
filebeat.inputs:
- type: log
paths:
- /path_to/my_log_file/metrics.log
output.elasticsearch:
hosts: ["http://192.168.0.1:9200"]
```
```
# elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
node.name: node-1
discovery.seed_hosts: ["127.0.0.1"]
cluster.initial_master_nodes: ["node-1"]
```
A typical log message would look like this:
```
2020-07-01 08:40:07,432 - CPUUtilization.Percent:50.0|#Level:Host|#hostname:a78f2ab3da65,timestamp:1593592807
2020-07-01 08:40:07,437 - DiskAvailable.Gigabytes:43.607460021972656|#Level:Host|#hostname:a78f2ab3da65,timestamp:1593592807
```
Thank you | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6587143/"
] | All the input is being automatically mapped by filebeat to the 'elastic common schema' aka ECS and exported. Please have a look [here](https://www.elastic.co/guide/en/beats/filebeat/master/exported-fields-ecs.html) for more details. | If you define elastic index in filebeat.yml
You can go to Stack Management > Index Management and edit the filebeat index.
In the settings default fields:
```
"query": {
"default_field": [
"message",
"tags",
"agent.ephemeral_id",
"agent.id",
"agent.name",
"agent.type",
...
]}
```
You can remove all the unnecessary fields from here. |
62,674,530 | My goal is to collect logs from different servers using Filebeat and aggregate/visualize them using ElasticSearch and Kibana. For the time being, I am excluding Logstash from the scene.
So far I have been able to configure Filebeat to push logs real-time and I am able to confirm through the Kibana interface that the logs are indeed being pushed to ElasticSearch.
**Problem:**
The problem is that the Filebeat (or the ElasticSearch) automatically adds extra empty fields/properties to the index.
Some of the fields I can see on the Kibana interface:
```
aws.cloudtrail.user_identity.session_context.creation_date
azure.auditlogs.properties.activity_datetime
azure.enqueued_time
azure.signinlogs.properties.created_at
cef.extensions.agentReceiptTime
cef.extensions.deviceCustomDate1
cef.extensions.deviceCustomDate2
cef.extensions.deviceReceiptTime
cef.extensions.endTime
cef.extensions.fileCreateTime
cef.extensions.fileModificationTime
cef.extensions.flexDate1
...
```
They are all empty fields.
When I check the mapping for that index using `GET /[index]/_mapping`, I can see ~3000 fields that I didn't really add. I am not sure how these fields were added and how to remove them.
**Reproduction:**
Filebeat and ElasticSearch docker images I use:
```
elasticsearch:7.8.0
elastic/filebeat:7.8.0
```
On top of the base images I put basic configuration files as simple as:
```
# filebeat.yml
filebeat.inputs:
- type: log
paths:
- /path_to/my_log_file/metrics.log
output.elasticsearch:
hosts: ["http://192.168.0.1:9200"]
```
```
# elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
node.name: node-1
discovery.seed_hosts: ["127.0.0.1"]
cluster.initial_master_nodes: ["node-1"]
```
A typical log message would look like this:
```
2020-07-01 08:40:07,432 - CPUUtilization.Percent:50.0|#Level:Host|#hostname:a78f2ab3da65,timestamp:1593592807
2020-07-01 08:40:07,437 - DiskAvailable.Gigabytes:43.607460021972656|#Level:Host|#hostname:a78f2ab3da65,timestamp:1593592807
```
Thank you | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6587143/"
] | All the input is being automatically mapped by filebeat to the 'elastic common schema' aka ECS and exported. Please have a look [here](https://www.elastic.co/guide/en/beats/filebeat/master/exported-fields-ecs.html) for more details. | You can define your own template like this:
```
curl -XPUT 'localhost:9200/_template/filebeat-7.8.0' -H 'Content-Type: application/json' -d'
{
"order" : 1,
"index_patterns" : [
"filebeat-7.8.0-*"
],
"settings" : {
"index" : {
"lifecycle" : {
"name" : "filebeat",
"rollover_alias" : "filebeat-7.8.0"
},
"mapping" : {
"total_fields" : {
"limit" : "10000"
}
},
"refresh_interval" : "5s"
}
},
"mappings" : {
"_meta" : {
"beat" : "filebeat",
"version" : "7.8.0"
},
"date_detection" : false,
"properties" : {
"@timestamp" : {
"type" : "date"
},
"type": {
"type": "keyword"
},
"namespace": {
"type": "keyword"
},
"app": {
"type": "keyword"
},
"k8s-app": {
"type": "keyword"
},
"node": {
"type": "keyword"
},
"pod": {
"type": "keyword"
},
"stream": {
"type": "keyword"
}
}
},
"aliases" : { }
}'
```
Explanation:
As explained by @Val, Filebeat installs an index template named `filebeat-7.8.0` when it starts. However, if the template exists, Filebeat will use it directly rather than creating a new one. The default value of `setup.template.overwrite` is `false` (<https://www.elastic.co/guide/en/beats/filebeat/current/configuration-template.html>). By doing this you won't have those fields in your mapping, and Kibana won't show these fields either. |
62,674,583 | I have a bot that is working in PM; I can talk with it and make it do my tasks without any problems, but I can't find how to make it send a message to a specific other person.
I want it to send private messages to a specific list of users without any interaction from those users. The only interaction is the order coming from me, asking it to send messages to others.
I found a lot of documentation and posts about bots **responding** to messages with a webhook, but nothing about the bot directly sending a PM to someone.
So instead of this :
```
function onMessage(event) {
return {"text": "MSG = " + message };
}
```
I'm looking for something where I could specify **user ID** or **user name** :
```
function sendMessage(ID/name) {
return {"text": "MSG = " + message, "ID": ID}; //not accurate example
}
sendMessage("User_ID");
```
If you have any ideas or informations on the way to do it, this would be very appreciated !
**UPDATE :**
It's not yet possible to INITIATE a DM conversation with someone, but it is possible to send messages to all the bot's contacts by checking the spaces the bot is in (so there is no need for each person to send a message to the bot; only one message is necessary to trigger it).
Here is an example on how I used it :
```
//Configure the chatbot service
function get_chatbot_service() {
return OAuth2.createService(BOT_NAME)
.setTokenUrl('https://accounts.google.com/o/oauth2/token') // Set the endpoint URL.
.setPrivateKey(PRIVATE_KEY) // Set the private key.
.setIssuer(CLIENT_EMAIL) // Set the issuer.
.setPropertyStore(PropertiesService.getScriptProperties()) // Set the property store where authorized tokens should be persisted.
.setScope('https://www.googleapis.com/auth/chat.bot'); // Set the scope.
}
//Return all the spaces (DMs and rooms) the bot belongs to
function get_spaces() {
var service = get_chatbot_service();
var url = 'https://chat.googleapis.com/v1/spaces';
var response = UrlFetchApp.fetch(url, { headers: { Authorization: 'Bearer ' + service.getAccessToken() }});
var rep = JSON.parse(response.getContentText());
return (rep.spaces)
}
//Get the information and send the message to every contact the bot has been added to
function send_message() {
var service = get_chatbot_service();
var spaces = get_spaces();
var msg = "Test message";
for (var i = 0; i < spaces.length; i++) {
var space = spaces[i];
if (space.type == "DM") { //Check if we are in DM
var url = 'https://chat.googleapis.com/v1/'+ space.name +'/messages'; //Specify the URL with space name
var options = {
method : 'POST',
contentType: 'application/json',
headers: { Authorization: 'Bearer ' + service.getAccessToken() },
payload : JSON.stringify({ text: msg }) //Add your message
}
UrlFetchApp.fetch(url, options); //Send the message
}
}
}
```
Maybe this will help someone one day. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674583",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12226862/"
] | Currently there is no possible way to make the bot start a conversation.
I have found this issue on [Google's Public Issue Tracker](https://issuetracker.google.com/issues/111081181). You just have to go there and click on the star next to the title so you get updates on the issue and you give the issue more visibility. | [bot send me message before](https://i.stack.imgur.com/1vHIg.jpg)Curious that how daily bot send me a message. So will have a way to make bot DM message to specific user. |
62,674,596 | I am running dpdk-stable-18.11.8 on Centos 7. My test code transmits udp packets with a simple payload string 'Hello from dpdk' to a remote host.
Immediately before transmission a dump of the mbuf looks like this:
```
dump mbuf at 0x53e4e92c0, iova=73e4e9340, buf_len=2176
pkt_len=57, ol_flags=c0000000000000, nb_segs=2, in_port=65535
segment at 0x53e4e92c0, data=0x53e4e9396, data_len=42
Dump data at [0x53e4e9396], len=42
00000000: 94 40 C9 1F C4 DB 94 40 C9 1E 13 7D 08 00 45 B8 | .@.....@...}..E.
00000010: 00 2B 00 00 00 00 40 11 F5 E8 C0 A8 01 69 C0 A8 | .+....@......i..
00000020: 01 68 0B B9 0B B8 00 17 64 2D | | | | | | | .h......d-
segment at 0x53e146e40, data=0x53e4fbbc0, data_len=15
Dump data at [0x53e4fbbc0], len=15
00000000: 48 65 6C 6C 6F 20 66 72 6F 6D 20 64 70 64 6B | | Hello from dpdk
```
Running tcpdump with command:
```
tcpdump -A -nni eno2 src 192.168.1.105 -vv -X
```
on the remote host yields:
```
10:44:33.022538 IP (tos 0xb8, ttl 64, id 0, offset 0, flags [none], proto UDP (17), length 43)
192.168.1.105.3001 > 192.168.1.104.3000: [udp sum ok] UDP, length 15
0x0000: 45b8 002b 0000 0000 4011 f5e8 c0a8 0169 E..+....@......i
0x0010: c0a8 0168 0bb9 0bb8 0017 642d 0000 0000 ...h......d-....
0x0020: 0000 0000 0000 0000 0000 0000 0000 ..............
```
So the udp packet arrives but the payload is all zeros.
(The transmitter is 192.168.1.105 and the receiver is 192.168.1.104).
Here's a snippet of my code that adds space for the udp and ip headers to the mbuf in the transmitter:
```
p_udp_hdr = (struct udp_hdr*)rte_pktmbuf_prepend(ptMbuf, (uint16_t)sizeof(struct udp_hdr));
p_ip_hdr = (struct ipv4_hdr*)rte_pktmbuf_prepend(ptMbuf, (uint16_t)sizeof(struct ipv4_hdr));
```
I'm not sure where to look, as the code is adapted from working code from another project. It may be relevant that, in the other project, checksums were generated by the NIC but in this project I have calculated them in software; but tcpdump says the checksums are correct.
Any advice would be appreciated. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674596",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4940771/"
] | You need to enable multi segments in the eth\_dev configuration as indicated
below:
```
.txmode = {
.offloads = DEV_TX_OFFLOAD_MULTI_SEGS, },
```
[EDIT-1] also shared in comment 'Hi @DavidA requesting you to check the dev\_configure TX offload flag for multi-segment behaviour. '. | I believe at least part of your problem is that, while you've modified the pkt\_len field in your packet to reflect the payload you've added, data\_len is still just 42 bytes. That's just the length of your L2-4 headers. You want to add space for your payload there as well.
Also, I note that the nb\_segs field in your packet is 2, indicating that you have a chain of packets. Do you? I wouldn't expect having that wrong would cause the problem you're seeing, but it's probably something you'd like to fix up, regardless.
[EDIT-2] Issue because PKT_TX_MSEGS is not updated in the application. Reflected in the above answer and comment `Hi @DavidA requesting you to check the dev_configure TX offload flag for multi-segment behaviour. – Vipin Varghese Jul 2 '20 at 12:13` |
62,674,606 | I have created 3 search fields: 1. Name.... 2. Subject.... 3.Price....
All of them work well when considered separately, e.g. from a list of elements, if I search by name I get all the results for that name; if I search by subject, I get all the results for that subject; and the same for price.
**But the problem is I don't know how to make all the filters work simultaneously, i.e. if I want a person named "BOB" who can teach "English" at price "500-600"**. How to achieve this link between multiple filters?
[![This is the image of my filter](https://i.stack.imgur.com/CV8ji.png)](https://i.stack.imgur.com/CV8ji.png)
**component.ts:
filterSearch is an array that contains all the users' data, and tempFilterSearch is what the html displays**
```
searchSubject($event) {
this.subject = $event.target.value;
if(this.price == null){
this.tempFilterSearch = this.filterSearch.filter((res) => {
return (
res.subject1.toLocaleLowerCase().match(this.subject.toLocaleLowerCase()) ||
res.subject2.toLocaleLowerCase().match(this.subject.toLocaleLowerCase()) ||
res.subject3.toLocaleLowerCase().match(this.subject.toLocaleLowerCase())
);
});
}else if(this.price != null){
this.tempFilterSearch = this.filterSearch.filter((res)=>{
console.log(res);
if((Number(res.price1))>=(Number(this.lowerValue))
&&(Number(res.price1))<=(Number(this.higherValue))
&&(res.subject1.match(this.subject)) || (res.subject2.match(this.subject))|| (res.subject3.match(this.subject))){
return res.subject1 || res.subject2 || res.subject3;
}
})
}
}
searchName() {
this.tempFilterSearch = this.filterSearch.filter((response)=>{
return response.fullName.toLocaleLowerCase().match(this.name.toLocaleLowerCase());
})
}
searchPrice($event) {
this.price = $event.target.value.split("-");
this.lowerValue = this.price[0];
this.higherValue = this.price[1];
if (this.subject == null) {
this.tempFilterSearch = this.filterSearch.filter((res) => {
if((Number(res.price1))>=(Number(this.lowerValue))&&(Number(res.price1))<=(Number(this.higherValue)))
return res.price1.toLocaleLowerCase();
});
}else if(this.subject != null){
this.tempFilterSearch = this.filterSearch.filter((res)=>{
if((Number(res.price1))>=(Number(this.lowerValue))
&&(Number(res.price1))<=(Number(this.higherValue)) ){
return res.price1.toLocaleLowerCase();
}
})
}
}
```
**component.html**
```
<div class="tutorWrapper" *ngFor="let element of tempFilterSearch">
```
**What I want is a solution so that all the filters can work simultaneously.** | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674606",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13573212/"
] | In your sample code all the separate filters are applied to a fresh `this.filterSearch`. That means every time any filtering starts, it resets the previous filtered results and starts from the beginning.
You need to merge the filters into one piece of logic applied to the same array, producing only one output array. Any time a change occurs on the filters, it will start from the original array and apply all filters one by one.
For example:
```
searchSubject($event) {
this.subject = $event.target.value;
this.applyFilters();
}
searchName() {
//this.name is already assigned
this.applyFilters();
}
searchPrice($event) {
this.price = $event.target.value.split("-");
this.lowerValue = this.price[0];
this.higherValue = this.price[1];
this.applyFilters();
}
applyFilters() {
let temp = this.filterSearch;
// SUBJECT
if(this.price == null){
temp = temp.filter((res) => {
return (
res.subject1.toLocaleLowerCase().match(this.subject.toLocaleLowerCase()) ||
res.subject2.toLocaleLowerCase().match(this.subject.toLocaleLowerCase()) ||
res.subject3.toLocaleLowerCase().match(this.subject.toLocaleLowerCase())
);
});
}else if(this.price != null){
temp = temp.filter((res)=>{
if((Number(res.price1))>=(Number(this.lowerValue))
&&(Number(res.price1))<=(Number(this.higherValue))
&&(res.subject1.match(this.subject)) || (res.subject2.match(this.subject))|| (res.subject3.match(this.subject))){
return res.subject1 || res.subject2 || res.subject3;
}
})
}
// NAME
temp = temp.filter((response)=>{
return response.fullName.toLocaleLowerCase().match(this.name.toLocaleLowerCase());
})
// PRICE
if (this.subject == null) {
temp = temp.filter((res) => {
if((Number(res.price1))>=(Number(this.lowerValue))&&(Number(res.price1))<=(Number(this.higherValue)))
return res.price1.toLocaleLowerCase();
});
}else if(this.subject != null){
temp = temp.filter((res)=>{
if((Number(res.price1))>=(Number(this.lowerValue))
&&(Number(res.price1))<=(Number(this.higherValue)) ){
return res.price1.toLocaleLowerCase();
}
})
}
this.tempFilterSearch = temp;
}
```
**Your separate filter functions are not touched and the logic is not tested; I only copy-pasted to re-arrange your code into a merged function.** | I am answering this from the phone, so if this answer isn't good enough I'll edit it later from a PC.
The way I would do it is to create all 3 possible filters as functions; in your case: name, subject, price.
And create a variable to hold an array of the filtered objects.
On the 1st filter (name), my function would take the main list array and return all matching elements into the filtered variable.
Then the subject filter takes the filtered variable and filters it further for subject, and reassigns the output back into the filtered array.
Same goes for price.
Then you *ngFor over the filtered array.
!!EDIT!!
As per your comment on how to solve the already-filtered-array issue, I'll write some pseudo code below:
mainArray = [...arrayItems];
filteredArray: Items[];
onFilterButtonPressed(name, subject, price) {
// here you can use if to see if some value hasn't been provided and skip that filter
nameFilter(name);
subjectFilter(subject);
priceFilter(price);
}
nameFilter(name){
filteredArray = mainArray.filter(item => item.name === name)
}
subjectFilter(subject){
filteredArray = filteredArray.filter(item => item.subject === subject)
}
priceFilter(price){
filteredArray = filteredArray.filter(item => item.price === price)
}
```
```html
<button (click)="onFilterButtonPressed(pass your filter values here)">click to filter</button>
```
The above code is ugly and requires name variable to be there however to go around that we use a piping function that uses reduce() to apply our filters
```js
//here we create a function that uses reduce which takes the current array and applies the next filter function to it and changes that current array and so on
function pipe(...fns) {
return (arg) => fns.reduce((prev, fn) => fn(prev), arg);
}
// here we call the function we made and pass the filter functions we created to it
//our filter functions must filter and return the filtered array, they also check whether the filter exists or not, if it doesnt then return the array unchanged
filteredArray = pipe(nameFilter, subjectFilter, priceFilter)(MainArray)
```
Hope this helps |
62,674,626 | I have a Format File of the above format, say `"test.fmt"`. I have to remove the entire third row and the last row, i.e. the `Id` and `Flag` rows, from the file. Meanwhile, I also have to update the column number on the LHS. E.g. in this case, after removing the two columns, I will have 5 rows remaining. This value '`5`' must also be updated in row 2, which is currently '`7`'. Also notice how the last row in the output file has a "`\r\n`", so that needs to change too.
How do I go about this using Python (it will be very helpful if someone could help with sample code or something), and also parameterize the variable to do the same task for 50 files with different numbers of rows?
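For illustration, a hedged Python sketch (assumptions: a bcp-style format file whose second line holds the column count and whose columns follow one per line; the `Id` row is the third line of the file, the `Flag` row is the last, and the row terminator is the fifth whitespace-separated field):
```python
def strip_columns(path_in, path_out):
    with open(path_in) as f:
        lines = f.read().splitlines()
    del lines[2]                       # the 'Id' row (third line of the file)
    del lines[-1]                      # the 'Flag' row (last line)
    lines[1] = str(len(lines) - 2)     # line 2 holds the column count (7 -> 5 here)
    cols = [line.split() for line in lines[2:]]
    for i, parts in enumerate(cols, start=1):
        parts[0] = str(i)              # renumber the LHS column index
    cols[-1][4] = '"\\r\\n"'           # new last row gets the row terminator
    body = ["   ".join(parts) for parts in cols]
    with open(path_out, "w") as f:
        f.write("\n".join(lines[:2] + body) + "\n")

# parameterized over many files:
# for name in file_list: strip_columns(name, name.replace(".fmt", "_out.fmt"))
``` | 2020/07/01 | [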
"https://Stackoverflow.com/questions/62674626",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13839639/"
] | In your sample code all the seperate filters are applied to fresh `this.filterSearch`. That means, every time any filtering starts, it resets the previous filtered results and starts from beginning.
You need to merge the filters in to one logic to apply all on the same array and have only one output array. Any time a change occurs on filters, it will start from the original array and apply all filters one by one.
For example;
```
searchSubject($event) {
this.subject = $event.target.value;
this.applyFilters();
}
searchName() {
//this.name is already assigned
this.applyFilters();
}
searchPrice($event) {
this.price = $event.target.value.split("-");
this.lowerValue = this.price[0];
this.higherValue = this.price[1];
this.applyFilters();
}
applyFilters() {
let temp = this.filterSearch;
// SUBJECT
if(this.price == null){
temp = temp.filter((res) => {
return (
res.subject1.toLocaleLowerCase().match(this.subject.toLocaleLowerCase()) ||
res.subject2.toLocaleLowerCase().match(this.subject.toLocaleLowerCase()) ||
res.subject3.toLocaleLowerCase().match(this.subject.toLocaleLowerCase())
);
});
}else if(this.price != null){
temp = temp.filter((res)=>{
if((Number(res.price1))>=(Number(this.lowerValue))
&&(Number(res.price1))<=(Number(this.higherValue))
&&(res.subject1.match(this.subject)) || (res.subject2.match(this.subject))|| (res.subject3.match(this.subject))){
return res.subject1 || res.subject2 || res.subject3;
}
})
}
// NAME
temp = temp.filter((response)=>{
return response.fullName.toLocaleLowerCase().match(this.name.toLocaleLowerCase());
})
// PRICE
if (this.subject == null) {
temp = temp.filter((res) => {
if((Number(res.price1))>=(Number(this.lowerValue))&&(Number(res.price1))<=(Number(this.higherValue)))
return res.price1.toLocaleLowerCase();
});
}else if(this.subject != null){
temp = temp.filter((res)=>{
if((Number(res.price1))>=(Number(this.lowerValue))
&&(Number(res.price1))<=(Number(this.higherValue)) ){
return res.price1.toLocaleLowerCase();
}
})
}
this.tempFilterSearch = temp;
}
```
**Your seperate filter functions are not touched, logic is not tested. I only copy pasted to re-arrange your code in to merged function.** | I am answering this from the phone so if this answer isnt good enough ill edit it later from pc.
The way i would do it is to create all 3 possible filters as functions. In your case name, subject, price,
And create a variable to hold an array of the filtered objects.
On 1st filter (name) my function would take the main list array and returns all matching elements into the filtered variable.
Then subject filter takes the filtered variable and filters it further for subject. And re assign output back into filteredarray.
Same goes for price.
Then you ngfor the filtered array.
!!EDIT!!
as per your comment on how to solve the already filtered array issue. ill write some pseudo code below:
```js
mainArray = [...arrayItems];
filteredArray: Items[];
onFilterButtonPressed(name, subject, price) {
// here you can use an if to see if some value hasn't been provided and skip that filter
nameFilter(name);
subjectFilter(subject);
priceFilter(price);
}
nameFilter(name){
filteredArray = mainArray.filter(item => item.name === name)
}
subjectFilter(subject){
filteredArray = filteredArray.filter(item => item.subject === subject)
}
priceFilter(price){
filteredArray = filteredArray.filter(item => item.price === price)
}
```
```html
<button (click)="onFilterButtonPressed(pass your filter values here)">click to filter</button>
```
The above code is ugly and requires the name variable to be there; to get around that we use a piping function that uses reduce() to apply our filters:
```js
//here we create a function that uses reduce which takes the current array and applies the next filter function to it and changes that current array and so on
function pipe(...fns) {
return (arg) => fns.reduce((prev, fn) => fn(prev), arg);
}
// here we call the function we made and pass the filter functions we created to it
//our filter functions must filter and return the filtered array; they also check whether the filter exists or not, and if it doesn't they return the array unchanged
filteredArray = pipe(nameFilter, subjectFilter, priceFilter)(MainArray)
```
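The same piping idea works in any language; for illustration, here is a minimal Python sketch of it (the filter functions are hypothetical, and `functools.reduce` plays the role of `reduce()` above):
```py
from functools import reduce

def pipe(*fns):
    # feed the output of each filter into the next one
    return lambda items: reduce(lambda acc, fn: fn(acc), fns, items)

# hypothetical filters; each returns a narrowed copy of its input list
name_filter = lambda items: [i for i in items if i["name"] == "Ann"]
price_filter = lambda items: [i for i in items if i["price"] <= 100]

data = [{"name": "Ann", "price": 80}, {"name": "Bob", "price": 120}]
print(pipe(name_filter, price_filter)(data))  # [{'name': 'Ann', 'price': 80}]
```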
Hope this helps |
62,674,634 | I have a JSON file subDistricts.json.
```
{
"subDistrictsStrs" : [
{ "addr_id":"001",
"type":"sub_district",
"name":"ตากแดด",
"name_default":"ตากแดด",
"child":"zip_codes",
"child_id":"",
"parent":"districts",
"parent_id":"1234",
"root_id":"1234",
"is_active":true,
"created_date":"2019-09-11 18:05:36",
"updated_date":"2019-09-11 18:05:36"
},
{ "addr_id":"002",
"type":"sub_district",
"name":"เกตรี",
"name_default":"เกตรี",
"child":"zip_codes",
"child_id":"",
"parent":"districts",
"parent_id":"1234",
"root_id":"1234",
"is_active":true,"created_date":"2019-09-11 18:05:36",
"updated_date":"2019-09-11 18:05:36"
},
{
//and more
}
]
}
```
and I have another JSON file name.json.
```
[
{
"name": "กกกุง",
"name_default": "Kok Kung"
},
{
"name": "กกแก้วบูรพา",
"name_default": "Kok Kaeo Burapha"
},
{
"name": "กกโก",
"name_default": "Kok Ko"
},
{
"name": "กกดู่",
"name_default": "Kok Du"
},
{
/// and more
}
]
```
How to change "name\_default" in subDistricts.json based on "name" and "name\_default" in name.json?
Suggest me python, java or other languages,
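A minimal Python sketch of the intended transformation, assuming the two file layouts shown above (the field names are taken from the samples; error handling is omitted):
```py
import json

with open("name.json", encoding="utf-8") as f:
    names = json.load(f)
with open("subDistricts.json", encoding="utf-8") as f:
    data = json.load(f)

# map each Thai name to its romanized default
lookup = {n["name"]: n["name_default"] for n in names}

for sd in data["subDistrictsStrs"]:
    # overwrite name_default when a translation exists for this name
    sd["name_default"] = lookup.get(sd["name"], sd["name_default"])

with open("subDistricts.json", "w", encoding="utf-8") as f:
    json.dump(data, f, ensure_ascii=False, indent=2)
```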
Thank you. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674634",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12057519/"
] | In your sample code all the separate filters are applied to a fresh `this.filterSearch`. That means every time any filtering starts, it resets the previous filtered results and starts from the beginning.
You need to merge the filters into one piece of logic so that all of them apply to the same array and there is only one output array. Any time a change occurs on the filters, it will start from the original array and apply all filters one by one.
For example:
```
searchSubject($event) {
this.subject = $event.target.value;
this.applyFilters();
}
searchName() {
//this.name is already assigned
this.applyFilters();
}
searchPrice($event) {
this.price = $event.target.value.split("-");
this.lowerValue = this.price[0];
this.higherValue = this.price[1];
this.applyFilters();
}
applyFilters() {
let temp = this.filterSearch;
// SUBJECT
if(this.price == null){
temp = temp.filter((res) => {
return (
res.subject1.toLocaleLowerCase().match(this.subject.toLocaleLowerCase()) ||
res.subject2.toLocaleLowerCase().match(this.subject.toLocaleLowerCase()) ||
res.subject3.toLocaleLowerCase().match(this.subject.toLocaleLowerCase())
);
});
}else if(this.price != null){
temp = temp.filter((res)=>{
if((Number(res.price1))>=(Number(this.lowerValue))
&&(Number(res.price1))<=(Number(this.higherValue))
&&(res.subject1.match(this.subject)) || (res.subject2.match(this.subject))|| (res.subject3.match(this.subject))){
return res.subject1 || res.subject2 || res.subject3;
}
})
}
// NAME
temp = temp.filter((response)=>{
return response.fullName.toLocaleLowerCase().match(this.name.toLocaleLowerCase());
})
// PRICE
if (this.subject == null) {
temp = temp.filter((res) => {
if((Number(res.price1))>=(Number(this.lowerValue))&&(Number(res.price1))<=(Number(this.higherValue)))
return res.price1.toLocaleLowerCase();
});
}else if(this.subject != null){
temp = temp.filter((res)=>{
if((Number(res.price1))>=(Number(this.lowerValue))
&&(Number(res.price1))<=(Number(this.higherValue)) ){
return res.price1.toLocaleLowerCase();
}
})
}
this.tempFilterSearch = temp;
}
```
**Your separate filter functions are untouched and the logic is untested; I only copy-pasted to rearrange your code into a merged function.** | I am answering this from my phone, so if this answer isn't good enough I'll edit it later from a PC.
The way I would do it is to create all 3 possible filters as functions; in your case: name, subject, and price.
Also create a variable to hold an array of the filtered objects.
On the 1st filter (name) my function would take the main list array and return all matching elements into the filtered variable.
Then the subject filter takes the filtered variable, filters it further by subject, and reassigns the output back into filteredArray.
Same goes for price.
Then you `*ngFor` over the filtered array.
!!EDIT!!
As per your comment on how to solve the already-filtered-array issue, I'll write some pseudocode below:
```js
mainArray = [...arrayItems];
filteredArray: Items[];
onFilterButtonPressed(name, subject, price) {
// here you can use an if to see if some value hasn't been provided and skip that filter
nameFilter(name);
subjectFilter(subject);
priceFilter(price);
}
nameFilter(name){
filteredArray = mainArray.filter(item => item.name === name)
}
subjectFilter(subject){
filteredArray = filteredArray.filter(item => item.subject === subject)
}
priceFilter(price){
filteredArray = filteredArray.filter(item => item.price === price)
}
```
```html
<button (click)="onFilterButtonPressed(pass your filter values here)">click to filter</button>
```
The above code is ugly and requires the name variable to be there; to get around that we use a piping function that uses reduce() to apply our filters:
```js
//here we create a function that uses reduce which takes the current array and applies the next filter function to it and changes that current array and so on
function pipe(...fns) {
return (arg) => fns.reduce((prev, fn) => fn(prev), arg);
}
// here we call the function we made and pass the filter functions we created to it
//our filter functions must filter and return the filtered array; they also check whether the filter exists or not, and if it doesn't they return the array unchanged
filteredArray = pipe(nameFilter, subjectFilter, priceFilter)(MainArray)
```
Hope this helps |
62,674,710 | I need some help with splitting a text file with Java. I have a text file with around 2000 words; the text file is in the following format -
```
"A", "HELLO", "RANDOM", "WORD"... etc
```
My goal is to put these words into a string array. Using the Scanner class I'm taking the text and creating a string, then trying to split it and add it to an array as follows.
```
Scanner input = new Scanner(file);
while(input.hasNext()){
String temp = input.next();
String[] wordArray = temp.split(",");
}
```
After they are added to the array, when I want to use or print each element, they are stored with the quotes ("") surrounding them. Instead of printing
```
A
HELLO
RANDOM
```
… etc they are printing
```
"A"
"HELLO"
"RANDOM"
```
So my question is: how can I get rid of these quote characters and split the text at each word, so that I'm left with no quotes?
Many Thanks | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674710",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13846353/"
] | Try this
```
import java.io.File;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
List<String> allMatches = new ArrayList<>();
Scanner input = new Scanner(new File(FILE_PATH), StandardCharsets.UTF_8);
while(input.hasNext()){
String temp = input.next();
Matcher m = Pattern.compile("\"(.*)\"").matcher(temp);
while (m.find()) {
allMatches.add(m.group(1));
}
}
allMatches.forEach(System.out::println);
``` | Use this to replace all occurrences of `"` with an empty string (note that `replaceAll` returns a new string, so assign the result back):
```
wordArray[i] = wordArray[i].replaceAll("\"", "");
```
Perform this in a loop. |
62,674,710 | I need some help with splitting a text file with Java. I have a text file with around 2000 words; the text file is in the following format -
```
"A", "HELLO", "RANDOM", "WORD"... etc
```
My goal is to put these words into a string array. Using the Scanner class I'm taking the text and creating a string, then trying to split it and add it to an array as follows.
```
Scanner input = new Scanner(file);
while(input.hasNext()){
String temp = input.next();
String[] wordArray = temp.split(",");
}
```
After they are added to the array, when I want to use or print each element, they are stored with the quotes ("") surrounding them. Instead of printing
```
A
HELLO
RANDOM
```
… etc they are printing
```
"A"
"HELLO"
"RANDOM"
```
So my question is: how can I get rid of these quote characters and split the text at each word, so that I'm left with no quotes?
Many Thanks | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674710",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13846353/"
] | Just in your case, when you have all values like this `"A", "HELLO", "RANDOM", "WORD"... etc`, where every value has the delimiter `", "`, you can use `temp.split("\", \"")` and you don't need a loop. | Use this to replace all occurrences of `"` with an empty string (note that `replaceAll` returns a new string, so assign the result back):
```
wordArray[i] = wordArray[i].replaceAll("\"", "");
```
Perform this in a loop. |
62,674,711 | Dear developers and NN enthusiasts, I have quantized a model (8-bit post-training quantization) and I'm trying to do inference with the resulting model using the tflite interpreter.
In some cases the interpreter runs properly, and I can do inference on the quantized model as expected, with outputs close enough to the original model. Thus, my setup appears to be correct.
However, depending on the concrete quantized model, I frequently stumble across the following RuntimeError.
```
Traceback (most recent call last):
File ".\quantize_model.py", line 328, in <module>
interpreter.allocate_tensors()
File "---path removed---tf-nightly_py37\lib\site-packages\tensorflow\lite\python\interpreter.py", line 243, in allocate_tensors
return self._interpreter.AllocateTensors()
RuntimeError: tensorflow/lite/kernels/kernel_util.cc:154 scale_diff / output_scale <= 0.02 was not true.Node number 26 (FULLY_CONNECTED) failed to prepare.
```
Since the error appears to be related to the scale of the bias, I have retrained the original model using a bias\_regularizer. However, the error persists.
Do you have any suggestions on how to avoid this error? Should I train or design the model in a different way?
Is it possible to suppress this error and continue as usual (even if the accuracy is reduced)?
I have used Netron to extract some details regarding 'node 26' from the quantized tflite model:
```
*Node properties ->
type: FullyConnected, location:26. *Attributes asymmetric_quantization: false, fused_activation: NONE, keep_num_dims: false, weights_format: DEFAULT.
*Inputs ->
input. name: functional_3/tf_op_layer_Reshape/Reshape;StatefulPartitionedCall/functional_3/tf_op_layer_Reshape/Reshape
type: int8[1,34]
quantization: 0 ≤ 0.007448929361999035 * (q - -128) ≤ 1.8994770050048828
location: 98
weights. name: functional_3/tf_op_layer_MatMul_54/MatMul_54;StatefulPartitionedCall/functional_3/tf_op_layer_MatMul_54/MatMul_54
type: int8[34,34]
quantization: -0.3735211491584778 ≤ 0.002941111335530877 * q ≤ 0.1489555984735489
location: 42
[weights omitted to save space]
bias. name: functional_3/tf_op_layer_AddV2_93/AddV2_3/y;StatefulPartitionedCall/functional_3/tf_op_layer_AddV2_93/AddV2_3/y
type: int32[34]
quantization: 0.0002854724007192999 * q
location: 21
[13,-24,-19,-9,4,59,-18,9,14,-15,13,6,12,5,10,-2,-14,16,11,-1,12,7,-4,16,-8,6,-17,-7,9,-15,7,-29,5,3]
*outputs ->
output. name: functional_3/tf_op_layer_AddV2/AddV2;StatefulPartitionedCall/functional_3/tf_op_layer_AddV2/AddV2;functional_3/tf_op_layer_Reshape_99/Reshape_99/shape;StatefulPartitionedCall/functional_3/tf_op_layer_Reshape_99/Reshape_99/shape;functional_3/tf_op_layer_Reshape_1/Reshape_1;StatefulPartitionedCall/functional_3/tf_op_layer_Reshape_1/Reshape_1;functional_3/tf_op_layer_AddV2_93/AddV2_3/y;StatefulPartitionedCall/functional_3/tf_op_layer_AddV2_93/AddV2_3/y
type: int8[1,34]
quantization: -0.46506571769714355 ≤ 0.0031077787280082703 * (q - 22) ≤ 0.32741788029670715
location: 99
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674711",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10610362/"
] | I have found a workaround, which involves manually modifying the quantized tflite model.
This is the file that triggers the RuntimeError in question ([tensorflow/lite/kernels/kernel\_util.cc](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/kernels/kernel_util.cc#L173)):
```
// TODO(ahentz): The following conditions must be guaranteed by the training pipeline.
...
const double scale_diff = std::abs(input_product_scale - bias_scale);
const double output_scale = static_cast<double>(output->params.scale);
TF_LITE_ENSURE(context, scale_diff / output_scale <= 0.02);
```
The comment makes clear that some functionality in model quantization still needs to be completed. The failing condition is related to the scale of the bias. I verified that my quantized model does not fulfill the constraint above. In order to manually fix the quantized model, these steps can be done:
1. Open the quantized model using [Netron](https://github.com/lutzroeder/netron) and find the node causing the trouble (in my case it is node 26)
2. Extract the scale of the bias, input, and weights for this node.
3. Since the bias is represented using dynamic range quantization, the representation is not unique. One can create another representation by scaling the bias scale and bias values (the bias zero point is zero and does not need to be changed). Find a factor k such that abs(input\_scale \* weight\_scale - bias\_scale \* k) / output\_scale <= 0.02, which is the condition checked in kernel\_util.cc (see the sketch after this list).
4. Use a hex editor (e.g., ghex in Ubuntu) to edit the tflite model. Replace the incorrect bias\_scale with bias\_scale \* k. You also need to replace the bias values by bias\_values / k. Bias\_scale is encoded in 32-bit ieee 754 format, little endian [(ieee-754 tool)](https://www.h-schmidt.net/FloatConverter/IEEE754.html), whereas the bias\_values are encoded in int32 format, little endian.
5. Save the edited tflite model; it should now pass the required condition, can be used with the tflite interpreter, and is equivalent to the model before the fix.
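For concreteness, here is a minimal Python sketch of the step-3 computation and of the byte encodings needed in step 4. This is a sketch only: the scale values and the first bias value (13) are the ones read from Netron above, so substitute your own model's numbers.
```py
import struct

# scales read from Netron for the failing node (placeholders for your model)
input_scale = 0.007448929361999035
weight_scale = 0.002941111335530877
bias_scale = 0.0002854724007192999
output_scale = 0.0031077787280082703

# choose k so that the rescaled bias scale matches input_scale * weight_scale
k = input_scale * weight_scale / bias_scale
new_bias_scale = bias_scale * k

# this is the condition from kernel_util.cc
assert abs(input_scale * weight_scale - new_bias_scale) / output_scale <= 0.02

# bytes to write with the hex editor
print(struct.pack("<f", new_bias_scale).hex())  # new bias scale, float32 little endian
print(struct.pack("<i", round(13 / k)).hex())   # first bias value scaled by 1/k, int32 little endian
```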
Of course, this solution is only a temporary workaround, useful until the code in TensorFlow's quantizer is corrected. | Here is another approach that overcame my problem, which I'd like to share. According to the [quantization file](https://github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/contrib/quantize/python/quantize.py#L35-L36),
quantization for activations is only supported with Relu and Identity. It may fail if we miss the BiasAdd before the Relu activation; therefore, we can wrap the layer as an identity via `tf.identity` to bypass this. I have tried it and it works for my case without editing anything in the cpp files. |
62,674,717 | I am trying to run a PowerShell command - Connect-AzureAD - and getting the below error:
'Could not load file or assembly 'Microsoft.IdentityModel.Clients.ActiveDirectory, Version=3.19.7.16602, Culture=neutral,. Could not find or load a specific file.'
This was working earlier with PowerShell 5 but not with PowerShell Core. The versions that I am using are:
Powershell - 7.0.1
Az.Accounts - 1.8.1 (i have tried updating this but no luck)
AzureAd - 2.0.2.104
Is there any workaround for this? We tried Azure.Standard.Preview from the 'Post test Gallery' but it failed the KeyVault PowerShell commands. Any help on this? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2784502/"
] | ```
Install-Module -Name AzureADPreview -RequiredVersion 2.0.2.89
Import-Module AzureADPreview -Version 2.0.2.89 -UseWindowsPowerShell
``` | As Shiva said, this is a known limitation of .NET Core: a newer version of an assembly cannot be loaded if an older version is already loaded. PowerShell is considering module isolation, but so far there is no good solution.
You can upgrade `Microsoft.IdentityModel.Clients.ActiveDirectory` to the [latest version](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory/).
For more details, you could refer to this [issue](https://github.com/Azure/azure-powershell/issues/11446). |
62,674,717 | I am trying to run a PowerShell command - Connect-AzureAD - and getting the below error:
'Could not load file or assembly 'Microsoft.IdentityModel.Clients.ActiveDirectory, Version=3.19.7.16602, Culture=neutral,. Could not find or load a specific file.'
This was working earlier with PowerShell 5 but not with PowerShell Core. The versions that I am using are:
Powershell - 7.0.1
Az.Accounts - 1.8.1 (i have tried updating this but no luck)
AzureAd - 2.0.2.104
Is there any workaround for this? We tried Azure.Standard.Preview from the 'Post test Gallery' but it failed the KeyVault PowerShell commands. Any help on this? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2784502/"
] | ```
Install-Module -Name AzureADPreview -RequiredVersion 2.0.2.89
Import-Module AzureADPreview -Version 2.0.2.89 -UseWindowsPowerShell
``` | You could instead try using `az rest` to invoke the Graph API.
Until az-cli is on par with the AzureAD PowerShell module, you can use the Graph API instead:
```
az login --tenant <tenantname.onmicrosoft.com>
$uri = "https://graph.microsoft.com/v1.0/applications"
$allApplications = az rest `
--method GET `
--uri $uri `
--headers 'Content-Type=application/json' | convertfrom-json
$allApplications.value |% {"{0}-{1}" -f $_.appid, $_.displayname}
```
I have put some samples using az rest here,
<https://github.com/joepaulk/utilities/blob/master/manage-azuread-applicationregistrations.ps1>
You may also refer to <https://damienbod.com/2020/06/22/using-azure-cli-to-create-azure-app-registrations/>, from where I picked up inspiration.
Another reference on how `az rest` uses the access tokens from az cli can be found here:
<https://mikhail.io/2019/07/how-azure-cli-manages-access-tokens/> |
62,674,724 | I would like to add 1 to all the values in a table. I am doing this:
```
public function up()
{
DB::table('mytable')
->update([
'amount' => amount + 1,
]);
}
```
The above code does not work. How should I refer to a column value (amount) in a migration? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674724",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3174311/"
] | To increment or decrement a column value using Laravel's query builder, you can use the methods `increment()` and `decrement()`.
E.g.:
```
DB::table('mytable')->increment('amount');
```
This is documented [here](https://laravel.com/docs/7.x/queries#increment-and-decrement).
Not sure if doing it in a migration is the wisest thing; I would reserve migrations for schema changes, not data changes. | You can try it like this:
```
public function up(){
DB::table('mytable')
->update([
'amount' => DB::raw('amount + 1'), // raw expression so the column references its own value
]);
}
``` |
62,674,744 | I have a `data.frame` (a `data.table` in fact) that I need to sort by multiple columns. The names of columns to sort by are in a vector. How can I do it? E.g.
```
DF <- data.frame(A= 5:1, B= 11:15, C= c(3, 3, 2, 2, 1))
DF
A B C
5 11 3
4 12 3
3 13 2
2 14 2
1 15 1
sortby <- c('C', 'A')
DF[order(sortby),] ## How to do this?
```
The desired output is the following but using the `sortby` vector as input.
```
DF[with(DF, order(C, A)),]
A B C
1 15 1
2 14 2
3 13 2
4 12 3
5 11 3
```
(Solutions for `data.table` are preferable.)
**EDIT**: I'd rather avoid importing additional packages provided that base R or data.table don't require too much coding. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674744",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1114453/"
] | Using `dplyr`, you can use `arrange_at` which accepts string column names :
```
library(dplyr)
DF %>% arrange_at(sortby)
# A B C
#1 1 15 1
#2 2 14 2
#3 3 13 2
#4 4 12 3
#5 5 11 3
```
Or with the new version
```
DF %>% arrange(across(sortby))
```
---
In base R, we can use
```
DF[do.call(order, DF[sortby]), ]
``` | Also possible with `dplyr`:
```
DF %>%
arrange(get(sortby))
```
But Ronak's answer is more elegant. |
62,674,744 | I have a `data.frame` (a `data.table` in fact) that I need to sort by multiple columns. The names of columns to sort by are in a vector. How can I do it? E.g.
```
DF <- data.frame(A= 5:1, B= 11:15, C= c(3, 3, 2, 2, 1))
DF
A B C
5 11 3
4 12 3
3 13 2
2 14 2
1 15 1
sortby <- c('C', 'A')
DF[order(sortby),] ## How to do this?
```
The desired output is the following but using the `sortby` vector as input.
```
DF[with(DF, order(C, A)),]
A B C
1 15 1
2 14 2
3 13 2
4 12 3
5 11 3
```
(Solutions for `data.table` are preferable.)
**EDIT**: I'd rather avoid importing additional packages provided that base R or data.table don't require too much coding. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674744",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1114453/"
] | With [data.table](/questions/tagged/data.table "show questions tagged 'data.table'"):
```
setorderv(DF, sortby)
```
which gives:
>
>
> ```
> > DF
> A B C
> 1: 1 15 1
> 2: 2 14 2
> 3: 3 13 2
> 4: 4 12 3
> 5: 5 11 3
>
> ```
>
>
For completeness, with `setorder`:
```
setorder(DF, C, A)
```
The advantage of using `setorder`/`setorderv` is that the data is reordered by reference and is thus very fast and memory efficient. Both functions work on `data.table`s as well as on `data.frame`s.
---
If you want to combine ascending and descending ordering, you can use the `order`-parameter of `setorderv`:
```
setorderv(DF, sortby, order = c(1L, -1L))
```
which subsequently gives:
>
>
> ```
> > DF
> A B C
> 1: 1 15 1
> 2: 3 13 2
> 3: 2 14 2
> 4: 5 11 3
> 5: 4 12 3
>
> ```
>
>
With `setorder` you can achieve the same with:
```
setorder(DF, C, -A)
``` | Using `dplyr`, you can use `arrange_at` which accepts string column names :
```
library(dplyr)
DF %>% arrange_at(sortby)
# A B C
#1 1 15 1
#2 2 14 2
#3 3 13 2
#4 4 12 3
#5 5 11 3
```
Or with the new version
```
DF %>% arrange(across(sortby))
```
---
In base R, we can use
```
DF[do.call(order, DF[sortby]), ]
``` |
62,674,744 | I have a `data.frame` (a `data.table` in fact) that I need to sort by multiple columns. The names of columns to sort by are in a vector. How can I do it? E.g.
```
DF <- data.frame(A= 5:1, B= 11:15, C= c(3, 3, 2, 2, 1))
DF
A B C
5 11 3
4 12 3
3 13 2
2 14 2
1 15 1
sortby <- c('C', 'A')
DF[order(sortby),] ## How to do this?
```
The desired output is the following but using the `sortby` vector as input.
```
DF[with(DF, order(C, A)),]
A B C
1 15 1
2 14 2
3 13 2
4 12 3
5 11 3
```
(Solutions for `data.table` are preferable.)
**EDIT**: I'd rather avoid importing additional packages provided that base R or data.table don't require too much coding. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674744",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1114453/"
] | With [data.table](/questions/tagged/data.table "show questions tagged 'data.table'"):
```
setorderv(DF, sortby)
```
which gives:
>
>
> ```
> > DF
> A B C
> 1: 1 15 1
> 2: 2 14 2
> 3: 3 13 2
> 4: 4 12 3
> 5: 5 11 3
>
> ```
>
>
For completeness, with `setorder`:
```
setorder(DF, C, A)
```
The advantage of using `setorder`/`setorderv` is that the data is reordered by reference and is thus very fast and memory efficient. Both functions work on `data.table`s as well as on `data.frame`s.
---
If you want to combine ascending and descending ordering, you can use the `order`-parameter of `setorderv`:
```
setorderv(DF, sortby, order = c(1L, -1L))
```
which subsequently gives:
>
>
> ```
> > DF
> A B C
> 1: 1 15 1
> 2: 3 13 2
> 3: 2 14 2
> 4: 5 11 3
> 5: 4 12 3
>
> ```
>
>
With `setorder` you can achieve the same with:
```
setorder(DF, C, -A)
``` | Also possible with `dplyr`:
```
DF %>%
arrange(get(sortby))
```
But Ronak's answer is more elegant. |
62,674,746 | Suppose you have a horizontal flatlist.
When a user clicks a button in an item, you want to present a view which looks different from the flat-list item you had.
Suppose you implement it like the following
```
{showDetail ? (
<DetailView onPress={toggleShowDetail} />
) : (
<FlatList
data={data}
renderItem={() => (
<View>
<Button onPress={toggleShowDetail} />{' '}
</View>
)}
/>
)}
```
Is the scroll position of flatlist maintained when the flatlist is replaced with `DetailView` and replaced back?
If not, what approaches can I take?
I'd like to avoid using a modal if possible.
* edit,
I'm not sure if setting the style `width=0` would maintain the scroll position when later setting `width=prevSavedWidth`, but I can definitely try.
```
import _ from 'lodash'
import React, {useState} from 'react'
import {useDispatch} from 'react-redux'
import {useSelector} from 'react-redux'
import {
Text,
Image,
View,
NativeModules,
NativeEventEmitter,
TouchableOpacity,
FlatList,
} from 'react-native'
const Qnas = props => {
const flatlistRef = React.useRef(null)
const [single, setSingle] = React.useState(false)
let qnas = [
{
title: 'a',
id: 1,
},
{
title: 'b',
id: 2,
},
{
title: 'c',
id: 3,
},
{
title: 'd',
id: 4,
},
{
title: 'e',
},
{
title: 'f',
},
{
title: 'j',
},
]
const toggle = () => {
setSingle(!single)
}
const renderItem = ({item: qna, index}) => {
return (
<View style={{height: 80, width: 200}}>
<Text>{qna.title}</Text>
<TouchableOpacity onPress={toggle}>
<Text>toggle</Text>
</TouchableOpacity>
</View>
)
}
const keyExtractor = (item, index) => {
return `qna-${item.title}-${index}`
}
return (
<View style={{height: 200}}>
{single ? (
<View>
<Text>hello</Text>
<TouchableOpacity onPress={toggle}>
<Text>toggle</Text>
</TouchableOpacity>
</View>
) : (
<FlatList
horizontal
ref={flatlistRef}
data={qnas}
renderItem={renderItem}
keyExtractor={keyExtractor}
contentContainerStyle={{
flexDirection: 'column',
flexWrap: 'wrap',
}}
/>
)}
</View>
)
}
export default Qnas
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/433570/"
] | Using `dplyr`, you can use `arrange_at` which accepts string column names :
```
library(dplyr)
DF %>% arrange_at(sortby)
# A B C
#1 1 15 1
#2 2 14 2
#3 3 13 2
#4 4 12 3
#5 5 11 3
```
Or with the new version
```
DF %>% arrange(across(sortby))
```
---
In base R, we can use
```
DF[do.call(order, DF[sortby]), ]
``` | Also possible with `dplyr`:
```
DF %>%
arrange(get(sortby))
```
But Ronak's answer is more elegant. |
62,674,746 | Suppose you have a horizontal flatlist.
When a user clicks a button in an item, you want to present a view which looks different from the flat-list item you had.
Suppose you implement it like the following
```
{showDetail ? (
<DetailView onPress={toggleShowDetail} />
) : (
<FlatList
data={data}
renderItem={() => (
<View>
<Button onPress={toggleShowDetail} />{' '}
</View>
)}
/>
)}
```
Is the scroll position of flatlist maintained when the flatlist is replaced with `DetailView` and replaced back?
If not, what approaches can I take?
I'd like to avoid using a modal if possible.
* edit,
I'm not sure if setting the style `width=0` would maintain the scroll position when later setting `width=prevSavedWidth`, but I can definitely try.
```
import _ from 'lodash'
import React, {useState} from 'react'
import {useDispatch} from 'react-redux'
import {useSelector} from 'react-redux'
import {
Text,
Image,
View,
NativeModules,
NativeEventEmitter,
TouchableOpacity,
FlatList,
} from 'react-native'
const Qnas = props => {
const flatlistRef = React.useRef(null)
const [single, setSingle] = React.useState(false)
let qnas = [
{
title: 'a',
id: 1,
},
{
title: 'b',
id: 2,
},
{
title: 'c',
id: 3,
},
{
title: 'd',
id: 4,
},
{
title: 'e',
},
{
title: 'f',
},
{
title: 'j',
},
]
const toggle = () => {
setSingle(!single)
}
const renderItem = ({item: qna, index}) => {
return (
<View style={{height: 80, width: 200}}>
<Text>{qna.title}</Text>
<TouchableOpacity onPress={toggle}>
<Text>toggle</Text>
</TouchableOpacity>
</View>
)
}
const keyExtractor = (item, index) => {
return `qna-${item.title}-${index}`
}
return (
<View style={{height: 200}}>
{single ? (
<View>
<Text>hello</Text>
<TouchableOpacity onPress={toggle}>
<Text>toggle</Text>
</TouchableOpacity>
</View>
) : (
<FlatList
horizontal
ref={flatlistRef}
data={qnas}
renderItem={renderItem}
keyExtractor={keyExtractor}
contentContainerStyle={{
flexDirection: 'column',
flexWrap: 'wrap',
}}
/>
)}
</View>
)
}
export default Qnas
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/433570/"
] | With [data.table](/questions/tagged/data.table "show questions tagged 'data.table'"):
```
setorderv(DF, sortby)
```
which gives:
>
>
> ```
> > DF
> A B C
> 1: 1 15 1
> 2: 2 14 2
> 3: 3 13 2
> 4: 4 12 3
> 5: 5 11 3
>
> ```
>
>
For completeness, with `setorder`:
```
setorder(DF, C, A)
```
The advantage of using `setorder`/`setorderv` is that the data is reordered by reference and is thus very fast and memory efficient. Both functions work on `data.table`s as well as on `data.frame`s.
---
If you want to combine ascending and descending ordering, you can use the `order`-parameter of `setorderv`:
```
setorderv(DF, sortby, order = c(1L, -1L))
```
which subsequently gives:
>
>
> ```
> > DF
> A B C
> 1: 1 15 1
> 2: 3 13 2
> 3: 2 14 2
> 4: 5 11 3
> 5: 4 12 3
>
> ```
>
>
With `setorder` you can achieve the same with:
```
setorder(DF, C, -A)
``` | Using `dplyr`, you can use `arrange_at` which accepts string column names :
```
library(dplyr)
DF %>% arrange_at(sortby)
# A B C
#1 1 15 1
#2 2 14 2
#3 3 13 2
#4 4 12 3
#5 5 11 3
```
Or with the new version
```
DF %>% arrange(across(sortby))
```
---
In base R, we can use
```
DF[do.call(order, DF[sortby]), ]
``` |
62,674,746 | Suppose you have a horizontal flatlist.
When a user clicks a button in an item, you want to present a view which looks different from the flat-list item you had.
Suppose you implement it like the following
```
{showDetail ? (
<DetailView onPress={toggleShowDetail} />
) : (
<FlatList
data={data}
renderItem={() => (
<View>
<Button onPress={toggleShowDetail} />{' '}
</View>
)}
/>
)}
```
Is the scroll position of flatlist maintained when the flatlist is replaced with `DetailView` and replaced back?
If not, what approaches can I take?
I'd like to avoid using a modal if possible.
* edit,
I'm not sure if setting the style `width=0` would maintain the scroll position when later setting `width=prevSavedWidth`, but I can definitely try.
```
import _ from 'lodash'
import React, {useState} from 'react'
import {useDispatch} from 'react-redux'
import {useSelector} from 'react-redux'
import {
Text,
Image,
View,
NativeModules,
NativeEventEmitter,
TouchableOpacity,
FlatList,
} from 'react-native'
const Qnas = props => {
const flatlistRef = React.useRef(null)
const [single, setSingle] = React.useState(false)
let qnas = [
{
title: 'a',
id: 1,
},
{
title: 'b',
id: 2,
},
{
title: 'c',
id: 3,
},
{
title: 'd',
id: 4,
},
{
title: 'e',
},
{
title: 'f',
},
{
title: 'j',
},
]
const toggle = () => {
setSingle(!single)
}
const renderItem = ({item: qna, index}) => {
return (
<View style={{height: 80, width: 200}}>
<Text>{qna.title}</Text>
<TouchableOpacity onPress={toggle}>
<Text>toggle</Text>
</TouchableOpacity>
</View>
)
}
const keyExtractor = (item, index) => {
return `qna-${item.title}-${index}`
}
return (
<View style={{height: 200}}>
{single ? (
<View>
<Text>hello</Text>
<TouchableOpacity onPress={toggle}>
<Text>toggle</Text>
</TouchableOpacity>
</View>
) : (
<FlatList
horizontal
ref={flatlistRef}
data={qnas}
renderItem={renderItem}
keyExtractor={keyExtractor}
contentContainerStyle={{
flexDirection: 'column',
flexWrap: 'wrap',
}}
/>
)}
</View>
)
}
export default Qnas
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/433570/"
] | With [data.table](/questions/tagged/data.table "show questions tagged 'data.table'"):
```
setorderv(DF, sortby)
```
which gives:
>
>
> ```
> > DF
> A B C
> 1: 1 15 1
> 2: 2 14 2
> 3: 3 13 2
> 4: 4 12 3
> 5: 5 11 3
>
> ```
>
>
For completeness, with `setorder`:
```
setorder(DF, C, A)
```
The advantage of using `setorder`/`setorderv` is that the data is reordered by reference and is thus very fast and memory efficient. Both functions work on `data.table`s as well as on `data.frame`s.
---
If you want to combine ascending and descending ordering, you can use the `order`-parameter of `setorderv`:
```
setorderv(DF, sortby, order = c(1L, -1L))
```
which subsequently gives:
>
>
> ```
> > DF
> A B C
> 1: 1 15 1
> 2: 3 13 2
> 3: 2 14 2
> 4: 5 11 3
> 5: 4 12 3
>
> ```
>
>
With `setorder` you can achieve the same with:
```
setorder(DF, C, -A)
``` | Also possible with `dplyr`:
```
DF %>%
arrange(get(sortby))
```
But Ronak's answer is more elegant. |
62,674,772 | I've been trying to use a BERT model from tf-hub <https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2>.
```
import tensorflow_hub as hub
bert_layer = hub.KerasLayer('./bert_en_uncased_L-12_H-768_A-12_2', trainable=True)
```
But the problem is that it downloads the data on every run.
So I downloaded the .tar file from tf-hub <https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2>
Now I'm trying to use this downloaded tar file (after untarring it).
I've followed this tutorial <https://medium.com/@xianbao.qian/how-to-run-tf-hub-locally-without-internet-connection-4506b850a915>,
but it didn't work out well, and no further information or script is provided in that blog post.
It would help if someone could provide a complete script to use the downloaded model locally (without internet) or could improve the above blog post (Medium).
I've also tried
```
untarredFilePath = './bert_en_uncased_L-12_H-768_A-12_2'
bert_lyr = hub.load(untarredFilePath)
print(bert_lyr)
```
Output
```
<tensorflow.python.saved_model.load.Loader._recreate_base_user_object.<locals>._UserObject object at 0x7f05c46e6a10>
```
Doesn't seem to work.
or is there any other method to do so..?? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674772",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13179341/"
] | Using `dplyr`, you can use `arrange_at` which accepts string column names :
```
library(dplyr)
DF %>% arrange_at(sortby)
# A B C
#1 1 15 1
#2 2 14 2
#3 3 13 2
#4 4 12 3
#5 5 11 3
```
Or with the new version
```
DF %>% arrange(across(sortby))
```
---
In base R, we can use
```
DF[do.call(order, DF[sortby]), ]
``` | Also possible with `dplyr`:
```
DF %>%
arrange(get(sortby))
```
But Ronak's answer is more elegant. |
62,674,772 | I've been trying to use a BERT model from tf-hub <https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2>.
```
import tensorflow_hub as hub
bert_layer = hub.KerasLayer('./bert_en_uncased_L-12_H-768_A-12_2', trainable=True)
```
But the problem is that it downloads the data on every run.
So I downloaded the .tar file from tf-hub <https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2>
Now I'm trying to use this downloaded tar file (after untarring it).
I've followed this tutorial <https://medium.com/@xianbao.qian/how-to-run-tf-hub-locally-without-internet-connection-4506b850a915>,
but it didn't work out well, and no further information or script is provided in that blog post.
It would help if someone could provide a complete script to use the downloaded model locally (without internet) or could improve the above blog post (Medium).
I've also tried
```
untarredFilePath = './bert_en_uncased_L-12_H-768_A-12_2'
bert_lyr = hub.load(untarredFilePath)
print(bert_lyr)
```
Output
```
<tensorflow.python.saved_model.load.Loader._recreate_base_user_object.<locals>._UserObject object at 0x7f05c46e6a10>
```
Doesn't seem to work.
or is there any other method to do so..?? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674772",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13179341/"
] | With [data.table](/questions/tagged/data.table "show questions tagged 'data.table'"):
```
setorderv(DF, sortby)
```
which gives:
>
>
> ```
> > DF
> A B C
> 1: 1 15 1
> 2: 2 14 2
> 3: 3 13 2
> 4: 4 12 3
> 5: 5 11 3
>
> ```
>
>
For completeness, with `setorder`:
```
setorder(DF, C, A)
```
The advantage of using `setorder`/`setorderv` is that the data is reordered by reference and is thus very fast and memory efficient. Both functions work on `data.table`s as well as on `data.frame`s.
---
If you want to combine ascending and descending ordering, you can use the `order`-parameter of `setorderv`:
```
setorderv(DF, sortby, order = c(1L, -1L))
```
which subsequently gives:
>
>
> ```
> > DF
> A B C
> 1: 1 15 1
> 2: 3 13 2
> 3: 2 14 2
> 4: 5 11 3
> 5: 4 12 3
>
> ```
>
>
With `setorder` you can achieve the same with:
```
setorder(DF, C, -A)
``` | Using `dplyr`, you can use `arrange_at` which accepts string column names :
```
library(dplyr)
DF %>% arrange_at(sortby)
# A B C
#1 1 15 1
#2 2 14 2
#3 3 13 2
#4 4 12 3
#5 5 11 3
```
Or with the new version
```
DF %>% arrange(across(sortby))
```
---
In base R, we can use
```
DF[do.call(order, DF[sortby]), ]
``` |
62,674,772 | I've been trying to use a BERT model from tf-hub <https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2>.
```
import tensorflow_hub as hub
bert_layer = hub.KerasLayer('./bert_en_uncased_L-12_H-768_A-12_2', trainable=True)
```
But the problem is that it downloads the data on every run.
So I downloaded the .tar file from tf-hub <https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2>
Now I'm trying to use this downloaded tar file (after untarring it).
I've followed this tutorial <https://medium.com/@xianbao.qian/how-to-run-tf-hub-locally-without-internet-connection-4506b850a915>,
but it didn't work out well, and no further information or script is provided in that blog post.
It would help if someone could provide a complete script to use the downloaded model locally (without internet) or could improve the above blog post (Medium).
I've also tried
```
untarredFilePath = './bert_en_uncased_L-12_H-768_A-12_2'
bert_lyr = hub.load(untarredFilePath)
print(bert_lyr)
```
Output
```
<tensorflow.python.saved_model.load.Loader._recreate_base_user_object.<locals>._UserObject object at 0x7f05c46e6a10>
```
Doesn't seem to work.
or is there any other method to do so..?? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674772",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13179341/"
] | With [data.table](/questions/tagged/data.table "show questions tagged 'data.table'"):
```
setorderv(DF, sortby)
```
which gives:
>
>
> ```
> > DF
> A B C
> 1: 1 15 1
> 2: 2 14 2
> 3: 3 13 2
> 4: 4 12 3
> 5: 5 11 3
>
> ```
>
>
For completeness, with `setorder`:
```
setorder(DF, C, A)
```
The advantage of using `setorder`/`setorderv` is that the data is reordered by reference and is thus very fast and memory efficient. Both functions work on `data.table`s as well as on `data.frame`s.
---
If you want to combine ascending and descending ordering, you can use the `order`-parameter of `setorderv`:
```
setorderv(DF, sortby, order = c(1L, -1L))
```
which subsequently gives:
>
>
> ```
> > DF
> A B C
> 1: 1 15 1
> 2: 3 13 2
> 3: 2 14 2
> 4: 5 11 3
> 5: 4 12 3
>
> ```
>
>
With `setorder` you can achieve the same with:
```
setorder(DF, C, -A)
``` | Also possible with `dplyr`:
```
DF %>%
arrange(get(sortby))
```
But Ronak's answer is more elegant. |
62,674,782 | I am following the [guide](https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/connect-jdbc-thin-wallet.html#GUID-20656D84-4D79-4EE9-B55F-333053948966) to connect from a Java application, using the IntelliJ IDE, to an Oracle Cloud Database.
I meet the [prerequisites](https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/connect-jdbc-thin-wallet.html#GUID-EBBB5D6B-118D-48BD-8D68-A3954EC92D2B) since:
* I have a Database in Oracle Cloud service
* I downloaded the wallet and I placed the files in the src directory of my workspace.
* I am using the latest JDK 14
* I am using ojdbc8.jar
* And I also downloaded the oraclepki, osdt\_cert, and osdt\_core jars, all of them added as Java libraries in my test project
The recommended DB\_URL never works. I always get: **java.net.UnknownHostException and oracle.net.ns.NetException: The Network Adapter could not establish the connection**
```
DB_URL= "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(HOST=testgerard_high)(PORT=1521)(PROTOCOL=tcp))(CONNECT_DATA=(SERVICE_NAME=testgerard_high)))"
```
Then I found in [Oracle support](https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=249272212021220&id=1457274.1&_afrWindowMode=0&_adf.ctrl-state=gydl1iqzq_259) that the wallet directory could be added to the connection string, but I get the same issue.
```
DB_URL= "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(HOST=testgerard_high)(PORT=1521)(PROTOCOL=tcp))(CONNECT_DATA=(SERVICE_NAME=testgerard_high))(SECURITY = (MY_WALLET_DIRECTORY = src\\Wallet_testGerard)))"
```
If I switch to a connection string using the 18.3 JDBC driver, which should work for my settings, then I get the error: **Invalid connection string format, a valid format is: "host:port:sid"**
```
DB_URL="jdbc:oracle:thin:@testgerard_high?TNS_ADMIN=src\\Wallet_testGerard";
```
Finally, I have seen [here](https://stackoverflow.com/questions/57341480/invalid-connection-string-format-error-when-trying-to-connect-to-oracle-using) a way to specify the wallet folder outside of the DB\_URL so I do not get the invalid format exception:
```
System.setProperty("oracle.net.tns_admin","src\\Wallet_testGerard");
```
Now it is trying to connect, but it fails after 60 seconds with the **exception: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target**
I adapted an Oracle example; here is my code:
```
import java.sql.SQLException;
import java.sql.DatabaseMetaData;
import oracle.jdbc.pool.OracleDataSource;
import oracle.jdbc.OracleConnection;
public class OracleDataSourceSample
{
final static String DB_URL="jdbc:oracle:thin:@testgerard_high";
//final static String DB_URL="jdbc:oracle:thin:@testgerard_high?TNS_ADMIN=src\\Wallet_testGerard";
//final static String DB_URL= "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(HOST=testgerard_high)(PORT=1521)(PROTOCOL=tcp))(CONNECT_DATA=(SERVICE_NAME=testgerard_high))(SECURITY = (MY_WALLET_DIRECTORY = src\\Wallet_testGerard)))";
final static String DB_USER = "hr";
final static String DB_PASSWORD = "hr";
public static void main (String args[]) throws SQLException, ClassNotFoundException {
System.setProperty("oracle.net.tns_admin","src\\Wallet_testGerard");
Class.forName("oracle.jdbc.driver.OracleDriver");
OracleDataSource ods = new OracleDataSource();
ods.setURL(DB_URL);
ods.setUser(DB_USER);
ods.setPassword(DB_PASSWORD);
// With AutoCloseable, the connection is closed automatically.
try (OracleConnection connection = (OracleConnection)
ods.getConnection()) {
// Get the JDBC driver name and version
DatabaseMetaData dbmd = connection.getMetaData();
System.out.println("Driver Name: " + dbmd.getDriverName());
System.out.println("Driver Version: " +
dbmd.getDriverVersion());
System.out.println("Database Username is: " +
connection.getUserName());
}
}
}
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674782",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13846276/"
] | Tested with standalone JDK 8 and JDK 14, as well as IntelliJ Community Edition (on the first run the firewall blocked IntelliJ).
```
import java.io.IOException;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;
import oracle.jdbc.pool.OracleDataSource;
import oracle.jdbc.OracleConnection;
import java.sql.DatabaseMetaData;
public class SalesConnection
{
final static String DB_URL="jdbc:oracle:thin:@oci_adw_high";
final static String DB_USER = "xxxx";
final static String DB_PASSWORD = "xxxxx";
public static void main (String args[]) throws SQLException, ClassNotFoundException {
System.setProperty("oracle.net.tns_admin","C:\\app\\oracle\\product\\19\\dbhome_1\\network\\admin");
System.setProperty("oracle.jdbc.fanEnabled","false");
Class.forName("oracle.jdbc.driver.OracleDriver");
OracleDataSource ods = new OracleDataSource();
ods.setURL(DB_URL);
ods.setUser(DB_USER);
ods.setPassword(DB_PASSWORD);
// With AutoCloseable, the connection is closed automatically.
try (OracleConnection connection = (OracleConnection)
ods.getConnection()) {
// Get the JDBC driver name and version
DatabaseMetaData dbmd = connection.getMetaData();
System.out.println("Driver Name: " + dbmd.getDriverName());
System.out.println("Driver Version: " +
dbmd.getDriverVersion());
System.out.println("Database Username is: " +
connection.getUserName());
printSales(connection);
}
}
public static void printSales(Connection connection) throws SQLException {
// Statement and ResultSet are AutoCloseable and closed automatically.
try (Statement statement = connection.createStatement()) {
try (ResultSet resultSet = statement
.executeQuery("select /* Java Console */PROD_ID, CUST_ID from sales fetch first 10 rows only ")) {
System.out.println("PROD_ID" + " " + "CUST_ID");
System.out.println("---------------------");
while (resultSet.next())
System.out.println(resultSet.getString(1) + " "
+ resultSet.getString(2) + " ");
}
}
}
}
```
Try to run this code, changing the table and column names.
Source of this [code](https://github.com/oracle/oracle-db-examples/blob/master/java/jdbc/ConnectionSamples/DataSourceSample.java)
**Edit:**
Compiling and executing from the command prompt:
```
javac -cp "C:\ojdbc8-full\*;" SalesConnection.java
java -cp "C:\ojdbc8-full\*;" SalesConnection
``` | ojdbc8-full.zip contains oraclepki.jar, osdt\_core.jar and osdt\_cert.jar, which are required for connecting to an Oracle autonomous database. However, you can get the JDBC driver along with the other additional libraries using these Maven coordinates.
```
<dependencies>
<dependency>
<groupId>com.oracle.database.jdbc</groupId>
<artifactId>ojdbc8-production</artifactId>
<version>19.7.0.0</version>
<type>pom</type>
</dependency>
</dependencies>
```
Refer to the [Maven Central Guide](https://www.oracle.com/database/technologies/maven-central-guide.html) for more information |
62,674,805 | Could not determine the dependencies of task ':shared\_preferences:compileDebugAidl'.
>
> Could not resolve all task dependencies for configuration ':shared\_preferences:debugCompileClasspath'.
>
>
>
>
> Could not resolve project :shared\_preferences\_macos.
> Required by:
> project :shared\_preferences
> > Unable to find a matching configuration of project :shared\_preferences\_macos:
> - None of the consumable configurations have attributes.
>
>
>
>
> Could not resolve project :shared\_preferences\_web.
>
>
>
Required by:
project :shared\_preferences
> Unable to find a matching configuration of project :shared\_preferences\_web:
- None of the consumable configurations have attributes.[![enter image description here](https://i.stack.imgur.com/EBwzL.png)](https://i.stack.imgur.com/EBwzL.png) | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674805",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11246887/"
] | Tested with stand alone jdk 8 and jdk 14 as well as Intellij Community edition(first run firewall blocked intellij).
```
import java.io.IOException;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;
import oracle.jdbc.pool.OracleDataSource;
import oracle.jdbc.OracleConnection;
import java.sql.DatabaseMetaData;
public class SalesConnection
{
final static String DB_URL="jdbc:oracle:thin:@oci_adw_high";
final static String DB_USER = "xxxx";
final static String DB_PASSWORD = "xxxxx";
public static void main (String args[]) throws SQLException, ClassNotFoundException {
System.setProperty("oracle.net.tns_admin","C:\\app\\oracle\\product\\19\\dbhome_1\\network\\admin");
System.setProperty("oracle.jdbc.fanEnabled","false");
Class.forName("oracle.jdbc.driver.OracleDriver");
OracleDataSource ods = new OracleDataSource();
ods.setURL(DB_URL);
ods.setUser(DB_USER);
ods.setPassword(DB_PASSWORD);
// With AutoCloseable, the connection is closed automatically.
try (OracleConnection connection = (OracleConnection)
ods.getConnection()) {
// Get the JDBC driver name and version
DatabaseMetaData dbmd = connection.getMetaData();
System.out.println("Driver Name: " + dbmd.getDriverName());
System.out.println("Driver Version: " +
dbmd.getDriverVersion());
System.out.println("Database Username is: " +
connection.getUserName());
printSales(connection);
}
}
public static void printSales(Connection connection) throws SQLException {
// Statement and ResultSet are AutoCloseable and closed automatically.
try (Statement statement = connection.createStatement()) {
try (ResultSet resultSet = statement
.executeQuery("select /* Java Console */PROD_ID, CUST_ID from sales fetch first 10 rows only ")) {
System.out.println("PROD_ID" + " " + "CUST_ID");
System.out.println("---------------------");
while (resultSet.next())
System.out.println(resultSet.getString(1) + " "
+ resultSet.getString(2) + " ");
}
}
}
}
```
Try to run this code, changing the table and column names.
Source of this [code](https://github.com/oracle/oracle-db-examples/blob/master/java/jdbc/ConnectionSamples/DataSourceSample.java)
**Edit:**
Compiling and executing from the command prompt:
```
javac -cp "C:\ojdbc8-full\*;" SalesConnection.java
java -cp "C:\ojdbc8-full\*;" SalesConnection
``` | ojdbc8-full.zip contains oraclepki.jar, osdt\_core.jar and osdt\_cert.jar, which are required for connecting to an Oracle autonomous database. However, you can get the JDBC driver along with the other additional libraries using these Maven coordinates.
```
<dependencies>
<dependency>
<groupId>com.oracle.database.jdbc</groupId>
<artifactId>ojdbc8-production</artifactId>
<version>19.7.0.0</version>
<type>pom</type>
</dependency>
</dependencies>
```
Refer to the [Maven Central Guide](https://www.oracle.com/database/technologies/maven-central-guide.html) for more information |
62,674,810 | I have been trying to write a pandas DataFrame from a Jupyter notebook (using to\_sql) to SQL Developer. I did the following:
```
import pandas as pd
import cx_Oracle
from sqlalchemy import types, create_engine
df = pd.read_csv('C:/Users/TOSHIBA/Downloads/Pandas Hub/Pandas_data//survey_results_public.csv')
engine = create_engine('oracle+cx_oracle://hr:123456@localhost/orcl')
df.to_sql('sample_table',engine)
```
Upon execution, I get this error:
"UnicodeEncodeError: 'charmap' codec can't encode characters in position 3-12: character maps to undefined".
When I checked SQL Developer, sample\_table had been created with all the headings of the pandas DataFrame intact but WITHOUT any rows or row contents.
I went through previous answers in the hope of finding a solution, but none were of much help, as my problem remains the same.
Would anyone help me sort out this issue? I am quite confused about what went wrong here. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674810",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13699631/"
] | Tested with stand alone jdk 8 and jdk 14 as well as Intellij Community edition(first run firewall blocked intellij).
```
import java.io.IOException;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;
import oracle.jdbc.pool.OracleDataSource;
import oracle.jdbc.OracleConnection;
import java.sql.DatabaseMetaData;
public class SalesConnection
{
final static String DB_URL="jdbc:oracle:thin:@oci_adw_high";
final static String DB_USER = "xxxx";
final static String DB_PASSWORD = "xxxxx";
public static void main (String args[]) throws SQLException, ClassNotFoundException {
System.setProperty("oracle.net.tns_admin","C:\\app\\oracle\\product\\19\\dbhome_1\\network\\admin");
System.setProperty("oracle.jdbc.fanEnabled","false");
Class.forName("oracle.jdbc.driver.OracleDriver");
OracleDataSource ods = new OracleDataSource();
ods.setURL(DB_URL);
ods.setUser(DB_USER);
ods.setPassword(DB_PASSWORD);
// With AutoCloseable, the connection is closed automatically.
try (OracleConnection connection = (OracleConnection)
ods.getConnection()) {
// Get the JDBC driver name and version
DatabaseMetaData dbmd = connection.getMetaData();
System.out.println("Driver Name: " + dbmd.getDriverName());
System.out.println("Driver Version: " +
dbmd.getDriverVersion());
System.out.println("Database Username is: " +
connection.getUserName());
printSales(connection);
}
}
public static void printSales(Connection connection) throws SQLException {
// Statement and ResultSet are AutoCloseable and closed automatically.
try (Statement statement = connection.createStatement()) {
try (ResultSet resultSet = statement
.executeQuery("select /* Java Console */PROD_ID, CUST_ID from sales fetch first 10 rows only ")) {
System.out.println("PROD_ID" + " " + "CUST_ID");
System.out.println("---------------------");
while (resultSet.next())
System.out.println(resultSet.getString(1) + " "
+ resultSet.getString(2) + " ");
}
}
}
}
```
Try to run this code, changing the table and column names.
Source of this [code](https://github.com/oracle/oracle-db-examples/blob/master/java/jdbc/ConnectionSamples/DataSourceSample.java)
**Edit:**
Compiling and executing from the command prompt:
```
javac -cp "C:\ojdbc8-full\*;" SalesConnection.java
java -cp "C:\ojdbc8-full\*;" SalesConnection
``` | ojdbc8-full.zip contains oraclepki.jar, osdt\_core.jar and osdt\_cert.jar, which are required for connecting to an Oracle autonomous database. However, you can get the JDBC driver along with the other additional libraries using these Maven coordinates.
```
<dependencies>
<dependency>
<groupId>com.oracle.database.jdbc</groupId>
<artifactId>ojdbc8-production</artifactId>
<version>19.7.0.0</version>
<type>pom</type>
</dependency>
</dependencies>
```
Refer to the [Maven Central Guide](https://www.oracle.com/database/technologies/maven-central-guide.html) for more information |
62,674,821 | I am using SymPy's plotting module, but legends stop working when I start having multiple lines with different ranges (to plot a function f(x) whose definition depends on x, as in the following example).
I actually haven't seen any example of this anywhere:
```
from sympy import *
plt = plot((x**2,(x,-1,0)),(x**3,(x,0,1)),label='$f(x)$',show=False)
plt[0].legend = True
plt.show()
```
Here the legend is ignored. I also tried
```
plt.legend = True
```
instead of specifying plt[0], but Python says
>
> The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
>
>
>
My final goal would be something like
[a plot with lines of multiple colors, representing functions which have a different definition before and after a given value of x](https://i.stack.imgur.com/izxLP.png)
and I append them together; say, if plt1 and plt2 both have 2 parts,
```
plt = plt1
plt.append(plt2[0])
plt.append(plt2[1])
```
Does anyone know how label and legend work in this context?
Thank you. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674821",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13846316/"
] | Tested with stand alone jdk 8 and jdk 14 as well as Intellij Community edition(first run firewall blocked intellij).
```
import java.io.IOException;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;
import oracle.jdbc.pool.OracleDataSource;
import oracle.jdbc.OracleConnection;
import java.sql.DatabaseMetaData;
public class SalesConnection
{
final static String DB_URL="jdbc:oracle:thin:@oci_adw_high";
final static String DB_USER = "xxxx";
final static String DB_PASSWORD = "xxxxx";
public static void main (String args[]) throws SQLException, ClassNotFoundException {
System.setProperty("oracle.net.tns_admin","C:\\app\\oracle\\product\\19\\dbhome_1\\network\\admin");
System.setProperty("oracle.jdbc.fanEnabled","false");
Class.forName("oracle.jdbc.driver.OracleDriver");
OracleDataSource ods = new OracleDataSource();
ods.setURL(DB_URL);
ods.setUser(DB_USER);
ods.setPassword(DB_PASSWORD);
// With AutoCloseable, the connection is closed automatically.
try (OracleConnection connection = (OracleConnection)
ods.getConnection()) {
// Get the JDBC driver name and version
DatabaseMetaData dbmd = connection.getMetaData();
System.out.println("Driver Name: " + dbmd.getDriverName());
System.out.println("Driver Version: " +
dbmd.getDriverVersion());
System.out.println("Database Username is: " +
connection.getUserName());
printSales(connection);
}
}
public static void printSales(Connection connection) throws SQLException {
// Statement and ResultSet are AutoCloseable and closed automatically.
try (Statement statement = connection.createStatement()) {
try (ResultSet resultSet = statement
.executeQuery("select /* Java Console */PROD_ID, CUST_ID from sales fetch first 10 rows only ")) {
System.out.println("PROD_ID" + " " + "CUST_ID");
System.out.println("---------------------");
while (resultSet.next())
System.out.println(resultSet.getString(1) + " "
+ resultSet.getString(2) + " ");
}
}
}
}
```
Try to run this code, changing the table and column names to match your own schema.
Source of this [code](https://github.com/oracle/oracle-db-examples/blob/master/java/jdbc/ConnectionSamples/DataSourceSample.java)
**Edit:**
Compiling and executing from the command prompt:
```
javac -cp "C:\ojdbc8-full\*;" SalesConnection.java
java -cp "C:\ojdbc8-full\*;" SalesConnection
``` | ojdbc8-full.zip contains oraclepki.jar, osdt\_core.jar and osdt\_cert.jar, which are required for connecting to an Oracle autonomous database. However, you can get the JDBC driver along with the other additional libraries using these Maven coordinates.
```
<dependencies>
<dependency>
<groupId>com.oracle.database.jdbc</groupId>
<artifactId>ojdbc8-production</artifactId>
<version>19.7.0.0</version>
<type>pom</type>
</dependency>
</dependencies>
```
Refer to the [Maven Central Guide](https://www.oracle.com/database/technologies/maven-central-guide.html) for more information |
62,674,822 | My objective is to create a square binary grid in a numpy 2D array (containing only `0` and `1` values) with a given length and given number of cells per row/column. E.g.: assigning 8 cells per axis would give us a chess board (with a given length):
[![chess_board](https://i.stack.imgur.com/H8Zmp.png)](https://i.stack.imgur.com/H8Zmp.png)
Translated into code, an example would look like this (using length=8 for simplicity):
```py
res = np_binary_grid(length=8, cells_per_row=4)
array([[1, 1, 0, 0, 1, 1, 0, 0],
[1, 1, 0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0, 1, 1],
[0, 0, 1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1, 0, 0],
[1, 1, 0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0, 1, 1],
[0, 0, 1, 1, 0, 0, 1, 1]])
```
So far I've been toying with [`np.meshgrid`](https://numpy.org/doc/stable/reference/generated/numpy.meshgrid.html), [`np.dot`](https://numpy.org/doc/stable/reference/generated/numpy.dot.html) and [`np.roll`](https://numpy.org/doc/1.18/reference/generated/numpy.roll.html), but all my attempts were unsuccessful (no colour on some columns, just stripes, etc...)
### Extra
To help visualize a given matrix `M`, I am using a small routine:
```py
from PIL import Image
import numpy as np
display(Image.fromarray(M.astype(np.uint8)*255))
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674822",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9670056/"
] | * Create a C-element-long vector of alternating 0/1 values.
* Use a Kronecker product to expand each element to the desired cell size.
* Finally, use broadcasting and XOR to create the 2D chessboard.
Code:
```
import numpy as np
def np_binary_grid(L, C):
assert L % C == 0
r = np.arange(C, dtype=np.uint8) & 1
r = np.kron(r, np.ones(L // C, dtype=np.uint8))
    # `^ 1` will force the upper-left cell to consist of 1s
return (r ^ 1) ^ r[:, None]
print(np_binary_grid(8, 4))
```
displays:
```
[[1 1 0 0 1 1 0 0]
[1 1 0 0 1 1 0 0]
[0 0 1 1 0 0 1 1]
[0 0 1 1 0 0 1 1]
[1 1 0 0 1 1 0 0]
[1 1 0 0 1 1 0 0]
[0 0 1 1 0 0 1 1]
[0 0 1 1 0 0 1 1]]
``` | I got my own version working:
```py
import numpy as np

def np_binary_grid(length, cells_per_row):
    '''Generates a binary grid in a numpy array.
    cells_per_row must be a power of 2'''
    aux = np.full(length, False)
    n = 2
    aux[:length//n] = True  # one period: first half True, second half False
    while n < cells_per_row:
        n *= 2
        # XOR with a shifted copy halves the period of the pattern
        aux = aux != np.roll(aux, length//n)
    a, b = np.meshgrid(aux, np.roll(aux, length//n))
    return (a != b).astype(bool)  # np.bool is deprecated; plain bool works
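
# Quick sanity check (my own, hypothetical):
# print(np_binary_grid(8, 4).astype(int))   # should match the 8x8 board above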
``` |
62,674,839 | I am trying to represent an abstract syntax tree in C++.
I have multiple classes which inherit base class 'TreeNode'.
Some of them contain pointers to other inherited classes of TreeNode within them.
With the method I am trying to use, I can't seem to access the members within the child TreeNodes. Am I doing something wrong? Is there a better way to achieve this?
```
class TreeNode {};
class UnaryOperation : virtual public TreeNode {
public:
TreeNodeType Type;
UnaryOperationType Operation;
TreeNode* Operand;
UnaryOperation(UnaryOperationType operation, TreeNode* operand);
};
class BinaryOperation : virtual public TreeNode {
public:
TreeNodeType Type;
TreeNode* Left;
BinaryOperationType Operation;
TreeNode* Right;
BinaryOperation(TreeNode* left, BinaryOperationType operation, TreeNode* right);
};
```
. . .
```
TreeNode* unaryop_node = new UnaryOperation(op_type, operand_node);
return unaryop_node;
```
...
```
std::cout << (int)*unaryop->Type << std::endl;
```
class TreeNode has no member Type | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674839",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10109881/"
] | Thanks for all your answers.
It turns out I was trying to solve this problem in the wrong way. I completely removed the `TreeNodeType` enum, and instead used a virtual function to traverse the tree.
```
class TreeNode {
public:
TreeNode();
virtual void print() = 0;
};
class UnaryOperation : public TreeNode {
public:
UnaryOperationType Operation;
TreeNode* Operand;
UnaryOperation(UnaryOperationType operation, TreeNode* operand);
void print() override {
print_unaryop(Operation);
Operand->print();
}
};
class BinaryOperation : public TreeNode {
public:
TreeNode* Left;
BinaryOperationType Operation;
TreeNode* Right;
BinaryOperation(TreeNode* left, BinaryOperationType operation, TreeNode* right);
void print() override {
Left->print();
print_binaryop(Operation);
Right->print();
}
};
```
`unaryop_node->print();` | As a general note, in C++, when you have a pointer to a certain type, you can only access the members/functions that this specific type has; you don't have direct access to any of the members/functions of derived classes; that works only via casting or virtual functions, for example.
From the way you're trying to access the created `UnaryOperation` via a `TreeNode` pointer, it looks like `Type` should be part of the base class `TreeNode`, something like:
```
class TreeNode {
public:
TreeNodeType Type;
};
class UnaryOperation : virtual public TreeNode {
public:
UnaryOperationType Operation;
TreeNode* Operand;
UnaryOperation(UnaryOperationType operation, TreeNode* operand);
};
class BinaryOperation : virtual public TreeNode {
public:
TreeNode* Left;
BinaryOperationType Operation;
TreeNode* Right;
BinaryOperation(TreeNode* left, BinaryOperationType operation, TreeNode* right);
};
```
This assumes that `TreeNodeType` indicates the type of node and will, for example, be the same for all instances of `UnaryOperation`; it could be defined something like this:
```
enum TreeNodeType { UnaryOp, BinaryOp };
```
Note that requiring such a "type indicator" in a base class is typically a *code smell* in C++: it suggests you are not using inheritance the way it is meant to be used. Typically you either should not need to differentiate between the derived types behind a TreeNode pointer at all (because calling virtual functions automatically does what the specific subtype requires), or you should hold a pointer to the specific class. |
62,674,845 | When I try to output my code, the text gets broken across some random lines and only the last line comes out fine. I'm a beginner; I just started 20 minutes ago.
```
f = open('gamelist.txt')
text = f.readlines()
for line in text:
print('<div><a href="genericweb'+line+'.html">'+line+'</a></div>')
```
***THIS IS MY OUTPUT***
```
<div>
<a
href="genericwebabc
.html"
>abc
</a>
</div>
<div>
<a
href="genericwebdef
.html"
>def
</a>
</div>
<div><a href="genericwebghi.html">ghi</a></div>
```
As you can see, the first two don't work, but the last one does. The text document is: "abc def ghi". ***New lines in between each group of letters btw.*** Any help is appreciated | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674845",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13846504/"
] | You may use:
```
print('<div><a href="genericweb'+line+'.html">'+line+'</a></div>', end=' ')
```
Or, since the stray line breaks come from the trailing newline that `readlines()` keeps on each line, try stripping it instead:
```
for line in text:
line = line.rstrip('\n')
print('<div><a href="genericweb'+line+'.html">'+line+'</a></div>')
```
Hope it helps | You can set the [end](https://docs.python.org/3/library/functions.html#print) keyword parameter to an empty string, and `print` will not append any line ending (the newline). The parameter defaults to "\n", the escape for a newline; you can also set it to any other string to write a different terminator.
```py
print("hello", end="")
```
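For the loop in the question specifically, here is a minimal sketch that combines this with stripping the trailing newline that `readlines()` keeps on each entry (that newline, rather than `print` itself, is what breaks the markup):
```py
with open('gamelist.txt') as f:
    for line in f:
        name = line.rstrip('\n')  # drop the newline kept on each line
        print('<div><a href="genericweb' + name + '.html">' + name + '</a></div>')
```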
You should probably take a look at [Jinja2](https://jinja.palletsprojects.com/en/2.11.x/) for HTML templating in python. |
62,674,846 | I'm comparing different tracing backends using OpenCensus.
I already have the simple OpenCensus.io python samples running fine using Zipkin and Azure Monitor.
Now I'm trying to test using GCP's Stackdriver...
I have set up the test code from OpenCensus
<https://opencensus.io/exporters/supported-exporters/python/stackdriver/> as follows:
```
#!/usr/bin/env python
import os
from opencensus.common.transports.async_ import AsyncTransport
from opencensus.ext.stackdriver.trace_exporter import StackdriverExporter
from opencensus.trace.tracer import Tracer
def main():
sde = StackdriverExporter(
project_id=os.environ.get("GCP_PROJECT_ID"),
transport=AsyncTransport)
tracer = Tracer(exporter=sde)
with tracer.span(name="doingWork") as span:
for i in range(10):
pass
if __name__ == "__main__":
main()
```
I have set the environment variable **GCP\_PROJECT\_ID** and also have the path to my service-account JSON key file set in **GOOGLE\_APPLICATION\_CREDENTIALS**.
The service account has the ***"Cloud trace agent"*** role.
My code runs through with no errors, but I can't see any traces appearing in the GCP console, either under Traces or in the monitoring dashboard.
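One thing I'm starting to suspect (unverified) is that the async transport batches spans on a background thread and my short-lived script exits before anything is flushed. A sketch of the variation I'm considering, where the only change is keeping the process alive briefly:
```
import time

if __name__ == "__main__":
    main()
    # Hypothetical workaround: give the background exporter thread
    # time to flush its batched spans before the process exits.
    time.sleep(10)
```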
Am I missing something?
Environment notes:
I'm testing this from my local Windows machine using Python 3.7.2 | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674846",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/44281/"
] | You may use:
```
print('<div><a href="genericweb'+line+'.html">'+line+'</a></div>', end=' ')
```
Or try this to get rid of the newline:
```
for line in text:
line = line.rstrip('\n')
print('<div><a href="genericweb'+line+'.html">'+line+'</a></div>')
```
Hope it helps | You can set the [end](https://docs.python.org/3/library/functions.html#print) keyword parameter to an empty string, and `print` will not append any line ending (the newline). The parameter defaults to "\n", the escape for a newline; you can also set it to any other string to write a different terminator.
```py
print("hello", end="")
```
You should probably take a look at [Jinja2](https://jinja.palletsprojects.com/en/2.11.x/) for HTML templating in python. |
62,674,853 | I have a problem with my Flutter app. I call some widgets in my ListView, and the widgets fetch API data (http, FutureBuilder). The widgets re-render and re-fetch whenever I scroll up or down, even though I call all the API methods in initState().
I don't understand what the problem is.
[SCREEN1](https://i.stack.imgur.com/ILcyU.png)
[SCREEN 2 is my problem.](https://i.stack.imgur.com/Oklle.png)
HomeScreen.dart:
```
...
ListView(
children: <Widget>[
MainHeadlineMod(),
SizedBox(
height: 30.0,
),
CategoryMod(
categoryId: 19,
),
SizedBox(
height: 30.0,
),
CategoryMod(
categoryId: 15,
),
SizedBox(
height: 30.0,
),
CategoryMod(
categoryId: 20,
),
SizedBox(
height: 30.0,
),
CategoryMod(
categoryId: 14,
),
SizedBox(
height: 30.0,
),
CategoryMod(
categoryId: 16,
),
],
),
...
```
CategoryMod.dart:
```
...
class _CategoryModState extends State<CategoryMod> {
final WebService ws = WebService();
Future<List<ShortNewsModel>> _futureCategoryNews;
@override
void initState() {
_futureCategoryNews = ws.fetchCategoryNews(widget.categoryId);
super.initState();
}
@override
Widget build(BuildContext context) {
return Container(
child: FutureBuilder<List<ShortNewsModel>>(
future: _futureCategoryNews,
builder: (context, snapshot) {
if (snapshot.hasData) {
...
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62674853",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13846322/"
] | You may use:
```
print('<div><a href="genericweb'+line+'.html">'+line+'</a></div>', end=' ')
```
Or try this to get rid of the newline:
```
for line in text:
line = line.rstrip('\n')
print('<div><a href="genericweb'+line+'.html">'+line+'</a></div>')
```
Hope it helps | You can set the [end](https://docs.python.org/3/library/functions.html#print) keyword parameter to an empty string, and `print` will not append any line ending (the newline). The parameter defaults to "\n", the escape for a newline; you can also set it to any other string to write a different terminator.
```py
print("hello", end="")
```
You should probably take a look at [Jinja2](https://jinja.palletsprojects.com/en/2.11.x/) for HTML templating in python. |