Columns: id (string, 3-6 chars), prompt (string, 100-55.1k chars), response_j (string, 30-18.4k chars)
290862
I have an Excel workbook that is using Power Query to get data from an API. What I would like to do is have this data update every day without having to open the workbook myself, so I enabled the setting within Excel to `Refresh data when opening the file`. I am now trying to create a PowerShell script which opens the workbook, waits for the query to refresh, and then saves the file. However, I can't get it to wait until the query has refreshed before saving and closing. **code:**

```
$Excel = New-Object -COM "Excel.Application"
$Excel.Visible = $true
$Workbook = $Excel.Workbooks.Open("G:\...\jmp-main-2020-07-17.xlsx")

While (($Workbook.Sheets | ForEach-Object {$_.QueryTables | ForEach-Object {if($_.QueryTable.Refreshing){$true}}})) {
    Start-Sleep -Seconds 1
}

$Excel.Save()
$Excel.Close()
```
I think your while loop is wrong. You should probably loop over the worksheets in the workbook and, for each of them, loop over the QueryTables. Then enter a while loop to wait until the `Refreshing` property turns `$false`:

```
foreach ($sheet in $Workbook.Sheets) {
    $sheet.QueryTables | ForEach-Object {
        while ($_.Refreshing) { Start-Sleep -Seconds 1 }
    }
}
```

As an aside: you should release the COM objects you have created after finishing with them to free memory:

```
$null = [System.Runtime.Interopservices.Marshal]::ReleaseComObject($Workbook)
$null = [System.Runtime.Interopservices.Marshal]::ReleaseComObject($Excel)
[System.GC]::Collect()
[System.GC]::WaitForPendingFinalizers()
```
291301
The context is that I am using the caret library with a data set to train and predict using different models. What is the difference between setting the seed at the beginning of an R script and setting it in each of the training and prediction processes? Thanks, Manel
Setting the seed makes the following RNG outputs repeatable. So if something strange happens and you want to debug it, setting the seed lets you see it happen again. Doing it once at the beginning means you'll have to repeat the whole sequence, doing it several times means you need to repeat less. So during debugging, it may make sense to set the seed just before the part you want to examine. On the other hand, many statistical methods assume independence of results. If you want to generate 1000 random numbers, you only want to set the seed once at the beginning, and the RNG will approximate independence after that. Setting the seed to different values each time is probably okay, but most RNGs are tested assuming the seed is only set once, so you may discover a pattern of seeds that makes results invalid if you set it more than once. So for final results, you should only set the seed at the start. As @RobertLong said, that also makes it more convenient to change the seed, to see if your results repeat with a different seed.
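For illustration only (a minimal sketch in Python/NumPy rather than R, since the point is language-agnostic): seeding once gives one reproducible stream of approximately independent draws, while re-seeding before every draw is the extreme case of what can go wrong when the stream keeps being restarted.

```python
import numpy as np

# Seed once: one reproducible stream of (approximately) independent draws.
rng = np.random.default_rng(42)
stream = rng.normal(size=5)

# Re-seed with the same value before every draw: each value is identical,
# so the sample is useless for anything that assumes independence.
repeats = [np.random.default_rng(42).normal() for _ in range(5)]

print(stream)   # five different values, reproducible from run to run
print(repeats)  # the same value five times
```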
291419
I am applying a function to a xarray.DataArray using [xarray.apply\_ufunc()](http://xarray.pydata.org/en/stable/generated/xarray.apply_ufunc.html). It works well with some NetCDFs and fails with others that appear to be comparable in terms of dimensions, coordinates, etc. However there must be something different between the NetCDFs that the code works for and the ones where the code fails, and hopefully someone can comment as to what the problem is after seeing the code and some metadata about the files listed below. The code I'm running to perform the computation is this: ``` # open the precipitation NetCDF as an xarray DataSet object dataset = xr.open_dataset(kwrgs['netcdf_precip']) # get the precipitation array, over which we'll compute the SPI da_precip = dataset[kwrgs['var_name_precip']] # stack the lat and lon dimensions into a new dimension named point, so at each lat/lon # we'll have a time series for the geospatial point, and group by these points da_precip_groupby = da_precip.stack(point=('lat', 'lon')).groupby('point') # apply the SPI function to the data array da_spi = xr.apply_ufunc(indices.spi, da_precip_groupby) # unstack the array back into original dimensions da_spi = da_spi.unstack('point') ``` The NetCDF that works looks like this: ``` >>> import xarray as xr >>> ds_good = xr.open_dataset("good.nc") >>> ds_good <xarray.Dataset> Dimensions: (lat: 38, lon: 87, time: 1466) Coordinates: * lat (lat) float32 24.5625 25.229166 25.895834 ... 48.5625 49.229168 * lon (lon) float32 -124.6875 -124.020836 ... -68.020836 -67.354164 * time (time) datetime64[ns] 1895-01-01 1895-02-01 ... 2017-02-01 Data variables: prcp (lat, lon, time) float32 ... Attributes: Conventions: CF-1.6, ACDD-1.3 ncei_template_version: NCEI_NetCDF_Grid_Template_v2.0 title: nClimGrid naming_authority: gov.noaa.ncei standard_name_vocabulary: Standard Name Table v35 institution: National Centers for Environmental Information... geospatial_lat_min: 24.5625 geospatial_lat_max: 49.354168 geospatial_lon_min: -124.6875 geospatial_lon_max: -67.020836 geospatial_lat_units: degrees_north geospatial_lon_units: degrees_east NCO: 4.7.1 nco_openmp_thread_number: 1 >>> ds_good.prcp <xarray.DataArray 'prcp' (lat: 38, lon: 87, time: 1466)> [4846596 values with dtype=float32] Coordinates: * lat (lat) float32 24.5625 25.229166 25.895834 ... 48.5625 49.229168 * lon (lon) float32 -124.6875 -124.020836 ... -68.020836 -67.354164 * time (time) datetime64[ns] 1895-01-01 1895-02-01 ... 2017-02-01 Attributes: valid_min: 0.0 units: millimeter valid_max: 2000.0 standard_name: precipitation_amount long_name: Precipitation, monthly total ``` The NetCDF that fails looks like this: ``` >>> ds_bad = xr.open_dataset("bad.nc") >>> ds_bad <xarray.Dataset> Dimensions: (lat: 38, lon: 87, time: 1483) Coordinates: * lat (lat) float32 49.3542 48.687534 48.020866 ... 25.3542 24.687532 * lon (lon) float32 -124.6875 -124.020836 ... -68.020836 -67.354164 * time (time) datetime64[ns] 1895-01-01 1895-02-01 ... 2018-07-01 Data variables: prcp (lat, lon, time) float32 ... Attributes: date_created: 2018-02-15 10:29:25.485927 date_modified: 2018-02-15 10:29:25.486042 Conventions: CF-1.6, ACDD-1.3 ncei_template_version: NCEI_NetCDF_Grid_Template_v2.0 title: nClimGrid naming_authority: gov.noaa.ncei standard_name_vocabulary: Standard Name Table v35 institution: National Centers for Environmental Information... 
geospatial_lat_min: 24.562532 geospatial_lat_max: 49.3542 geospatial_lon_min: -124.6875 geospatial_lon_max: -67.020836 geospatial_lat_units: degrees_north geospatial_lon_units: degrees_east >>> ds_bad.prcp <xarray.DataArray 'prcp' (lat: 38, lon: 87, time: 1483)> [4902798 values with dtype=float32] Coordinates: * lat (lat) float32 49.3542 48.687534 48.020866 ... 25.3542 24.687532 * lon (lon) float32 -124.6875 -124.020836 ... -68.020836 -67.354164 * time (time) datetime64[ns] 1895-01-01 1895-02-01 ... 2018-07-01 Attributes: valid_min: 0.0 long_name: Precipitation, monthly total standard_name: precipitation_amount units: millimeter valid_max: 2000.0 ``` When I run the code against the first file above it works without error. When using the second file I get errors like this: ``` multiprocessing.pool.RemoteTraceback: """ Traceback (most recent call last): File "/home/paperspace/anaconda3/envs/climate/lib/python3.6/multiprocessing/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/paperspace/anaconda3/envs/climate/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar return list(map(*args)) File "/home/paperspace/git/climate_indices/scripts/process_grid_ufunc.py", line 278, in compute_write_spi kwargs=args_dict) File "/home/paperspace/anaconda3/envs/climate/lib/python3.6/site-packages/xarray/core/computation.py", line 974, in apply_ufunc return apply_groupby_ufunc(this_apply, *args) File "/home/paperspace/anaconda3/envs/climate/lib/python3.6/site-packages/xarray/core/computation.py", line 432, in apply_groupby_ufunc applied_example, applied = peek_at(applied) File "/home/paperspace/anaconda3/envs/climate/lib/python3.6/site-packages/xarray/core/utils.py", line 133, in peek_at peek = next(gen) File "/home/paperspace/anaconda3/envs/climate/lib/python3.6/site-packages/xarray/core/computation.py", line 431, in <genexpr> applied = (func(*zipped_args) for zipped_args in zip(*iterators)) File "/home/paperspace/anaconda3/envs/climate/lib/python3.6/site-packages/xarray/core/computation.py", line 987, in apply_ufunc exclude_dims=exclude_dims) File "/home/paperspace/anaconda3/envs/climate/lib/python3.6/site-packages/xarray/core/computation.py", line 211, in apply_dataarray_ufunc result_var = func(*data_vars) File "/home/paperspace/anaconda3/envs/climate/lib/python3.6/site-packages/xarray/core/computation.py", line 579, in apply_variable_ufunc .format(data.ndim, len(dims), dims)) ValueError: applied function returned data with unexpected number of dimensions: 1 vs 2, for dimensions ('time', 'point') ``` Can anyone comment as to what may be the issue?
It turned out that in the NetCDF files that were problematic as inputs, the latitude coordinate values were in descending order. `xarray.apply_ufunc()` appears to require that coordinate values be in ascending order, at least to avoid this particular issue. This is easily remedied by reversing the offending dimension's coordinate values with NCO's [ncpdq](https://linux.die.net/man/1/ncpdq) command before using the NetCDF file as input to xarray.
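If you would rather fix it inside Python instead of with NCO, a minimal sketch (assuming the latitude coordinate is named `lat`, as in the metadata above) is to sort the dataset on that coordinate before stacking and applying the function:

```python
import xarray as xr

ds = xr.open_dataset("bad.nc")

# Reorder the data so the lat coordinate is ascending; sortby reorders the
# underlying array along that dimension as well, not just the labels.
ds = ds.sortby("lat")

# The precipitation variable can then be stacked and processed as before.
da_precip = ds["prcp"]
```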
291622
How can I get the position of an item from a List that I selected from an AutoCompleteTextview? ``` autoSearch = (AutoCompleteTextView) findViewById(R.id.autoSearch); autoSearch.setOnItemClickListener(new OnItemClickListener() { @Override public void onItemClick(AdapterView<?> arg0, View arg1, int arg2, long arg3) { } }); adapter = new ArrayAdapter<String>(this, android.R.layout.simple_dropdown_item_1line, mLIST); autoSearch.setAdapter(adapter); ``` Using the variable arg2 only gets me the position of the AutoComplete list and not the actual position of the item that is on my mLIST. Is there a way to get the actual position and not the AutoComplete position?
I have no idea if it's possible to get Jekyll to somehow "find" the MP3 files (without YAML header) and convert the filenames like in your example, so I'd suggest an alternative approach: * Put the MP3s into a different folder (not in `_posts`). * Write a small command-line utility in the language of your choice, that loops through all files in the MP3 folder, gets the data from the MP3 filenames and generates a Markdown file with the desired name and content in the `_posts` folder. * Ignore the `_posts` folder in source control * To build your site with a single command, write a shell script that first calls your command-line utility and then `jekyll build` Of course, you can't use GitHub Pages if you do it this way *(except when you run the command-line utility on your machine and commit the `_posts` folder to source control)*. **But IMO it's a question of using the right tool for the right job: Maybe it *is* somehow possible to do all this in Jekyll, but that would be probably much, much more effort, compared to doing it in the language that you're most familiar with.**
291657
I am using **docker-compose.yaml** and have configured it as below.

**Step 1:** In **docker-compose.yaml** I use the following:

```
services:
  postgres:
    image: postgres
    container_name: postgres
    hostname: postgres
    volumes:
      - db-data:/var/lib/postgresql/data
    restart: always
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: accountdb
  account-opening:
    image: ehaque95/pocv1-account-opening:0.0.1-SNAPSHOT
    mem_limit: 700m
    ports:
      - "8081:8081"
    networks:
      - account-network
    depends_on:
      - postgres
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/accountdb
volumes:
  db-data:
```

**Step 2:** I have configured **application.yml** in Spring Boot like this:

```
spring:
  datasource:
    url: jdbc:postgresql://postgres:5432/accountdb
    username: postgres
    password: postgres
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
    database: postgresql
    hibernate:
      ddl-auto: update
    properties:
      dialect: org.hibernate.dialect.PostgreSQLDialect
```

Note: the PostgreSQL server IP changes frequently. When I run **docker inspect** it sometimes shows **"IPAddress": "172.19.0.2"** and sometimes **"IPAddress": "172.19.0.3"** after I run **docker-compose up** again, and I get a connection error. What is wrong with my code connecting to PostgreSQL? Please help me.
For two containers to communicate with each other, they must be on the same Docker network. [By default](https://docs.docker.com/compose/networking/), Compose creates a network named `default` and attaches containers to it; if you specify other `networks:` for a container, then they are not attached to the `default` network. In your `docker-compose.yml`, the `account-opening` container has a `networks:` block, but `postgres` doesn't: ```yaml version: '3.8' services: postgres: ... # no networks:, so implicitly # networks: [default] account-opening: ... networks: [account-network] # and not default # The postgres container is on the default network # This container is on account-network # And so this host name doesn't resolve environment: SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/accountdb ``` There's nothing wrong with using the `default` network. For most typical applications you can delete all of the `networks:` blocks in the entire file. Then the `default` network will get created with default settings, and all of the containers will attach to that network, and be able to address each other by their Compose service name.
291736
I am trying to get my head around programming using tidyeval. I want to write a function to run logistic regression models for selected outcome variables: ``` library(tidyverse) set.seed(1234) df <- tibble(id = 1:1000, group = sample(c("Group 1", "Group 2", "Group 3"), 1000, replace = TRUE), died = sample(c(0,1), 1000, replace = TRUE)) myfunc <- function(data, outcome){ enquo_var <- enquo(outcome) fit <- tidy(glm(!!enquo_var ~ group, data=data, family = binomial(link = "logit")), exponentiate = TRUE, conf.int=TRUE) fit } myfunc(df, died) ``` But get: > > Error in !enquo\_outcome : invalid argument type > > > (note real scenario involves more complex function). Is this possible?
We need to create a formula for `glm` to pick it up. One option is `paste` ``` myfunc <- function(data, outcome){ enquo_var <- enquo(outcome) fit <- tidy(glm(paste(quo_name(enquo_var), "group", sep="~"), data=data, family = binomial(link = "logit")), exponentiate = TRUE, conf.int=TRUE) fit } myfunc(df, died) # term estimate std.error statistic p.value conf.low conf.high #1 (Intercept) 0.8715084 0.1095300 -1.2556359 0.20924801 0.7026185 1.079852 #2 groupGroup 2 0.9253515 0.1550473 -0.5003736 0.61681204 0.6826512 1.253959 #3 groupGroup 3 1.3692735 0.1557241 2.0181864 0.04357185 1.0095739 1.859403 ``` --- If we also need to use the tidyverse functions ``` myfunc <- function(data, outcome){ quo_var <- quo_name(enquo(outcome)) fit <- tidy(glm(rlang::expr(!! rlang::sym(quo_var) ~ group), data=data, family = binomial(link = "logit")), exponentiate = TRUE, conf.int=TRUE) fit } myfunc(df, died) # term estimate std.error statistic p.value conf.low conf.high #1 (Intercept) 0.8715084 0.1095300 -1.2556359 0.20924801 0.7026185 1.079852 #2 groupGroup 2 0.9253515 0.1550473 -0.5003736 0.61681204 0.6826512 1.253959 #3 groupGroup 3 1.3692735 0.1557241 2.0181864 0.04357185 1.0095739 1.859403 ``` --- Or as @lionel mentioned in the comments `get_expr` can be used ``` myfunc <- function(data, outcome){ quo_var <- enquo(outcome) fit <- tidy(glm(rlang::expr(!! rlang::get_expr(quo_var) ~ group), data=data, family = binomial(link = "logit")), exponentiate = TRUE, conf.int=TRUE) fit } myfunc(df, died) # term estimate std.error statistic p.value conf.low conf.high #1 (Intercept) 0.8715084 0.1095300 -1.2556359 0.20924801 0.7026185 1.079852 #2 groupGroup 2 0.9253515 0.1550473 -0.5003736 0.61681204 0.6826512 1.253959 #3 groupGroup 3 1.3692735 0.1557241 2.0181864 0.04357185 1.0095739 1.859403 ``` --- Or a more compact approach suggested by @lionel which avoids the `enquo/quo_name/sym` conversion instead directly takes the argument in `enexpr` ```html myfunc <- function(data, outcome){ fit <- tidy(glm(rlang::expr(!! rlang::enexpr(outcome) ~ group), data=data, family = binomial(link = "logit")), exponentiate = TRUE, conf.int=TRUE) fit } myfunc(df, died) # term estimate std.error statistic p.value conf.low conf.high #1 (Intercept) 0.8715084 0.1095300 -1.2556359 0.20924801 0.7026185 1.079852 #2 groupGroup 2 0.9253515 0.1550473 -0.5003736 0.61681204 0.6826512 1.253959 #3 groupGroup 3 1.3692735 0.1557241 2.0181864 0.04357185 1.0095739 1.859403 ```
291828
Since regular expressions are not supported in Cocoa, I find [**RegexKitLite**](http://regexkit.sourceforge.net/RegexKitLite/index.html#RegexKitLiteOverview) very useful. But all the examples extract matching strings. I just want to test whether a string matches a regular expression and get a yes or no. How can I do that?
I've used NSPredicate for that purpose: ``` NSString *someRegexp = ...; NSPredicate *myTest = [NSPredicate predicateWithFormat:@"SELF MATCHES %@", someRegexp]; if ([myTest evaluateWithObject: testString]){ //Matches } ```
291876
In my React application I have the following situation. On each page I have to create this logic:

```
const user = useUser();
const message = user.auth ? 'Welcome' : 'You are not allowed'

<Button>{message}</Button>
```

I have to add this logic on 10 pages. I know that my question is more theoretical, but what is better in my case:

1. To add this logic on each page; or
2. To create a custom hook like `useButtonMessage()` which will look like this:

```
export const useButtonMessage = () => {
  const user = useUser();
  const message = user.auth ? 'Welcome' : 'You are not allowed'

  return message;
}
```

Won't the hook be redundant if it contains only 3 lines of code?
Personally, I like to create hooks that gather all the related things for a particular topic (instead of just one constant like `message`). For example:

```
export const useAuth = () => {
  const user = useUser()
  const isLoggedIn = user.auth
  const welcomeMessage = isLoggedIn ? 'Welcome' : 'You are not allowed'

  /* add selectors, functions to dispatch actions,
     or other util constants such as welcomeMessage */

  return {
    isLoggedIn,      // use this constant across your app to check if the user is logged in
    user,            // use this across your app to obtain the user
    welcomeMessage,
  };
}
```
291935
We are having a table like this to save login tokens per user sessions. This table was not partitioned earlier but now we decided to partition it to improve performance as it contains over a few millions rows. ``` CREATE TABLE `tokens` ( `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT, `uid` int(10) unsigned DEFAULT NULL, `session` int(10) unsigned DEFAULT '0', `token` varchar(128) NOT NULL DEFAULT '', PRIMARY KEY (`id`), UNIQUE KEY `usersession` (`uid`,`session`), KEY `uid` (`uid`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 PARTITION BY HASH(id) PARTITIONS 101; ``` We plan to partition based on '***id***' as it's primarily used for "***select***" queries and hence can effectively perform pruning. However the problem is we maintain unique index of ***(uid, session)*** and partition requires participating column to be part of unique index. Now in this case unique index of ***(id, uid, session)*** doesn't make sense (will always be unique). is there anyway to get around this issue without manually checking (uid, session).
* Don't use partitioning. It won't speed up this kind of table.
* I have yet to see a case of `BY HASH` that speeds up a system.
* It is almost never useful to partition on the `PRIMARY KEY`.
* In general, don't have an `AUTO_INCREMENT` id when you have a perfectly good "natural" PK -- `(uid, session)`. Or should it be `(token)`?
* Don't have one index being the first part of another: `(uid)` is redundant, given `(uid, session)`.
* Consider using utf8mb4 if you expect to have Emoji or Chinese. On the other hand, if `token` is, say, base64, then make it `ascii` or something.

So, I think this will work significantly better (smaller, faster, etc.):

```
CREATE TABLE `tokens` (
  `uid` int(10) unsigned DEFAULT NULL,
  `session` int(10) unsigned DEFAULT '0',
  `token` VARBINARY(128) NOT NULL DEFAULT '',
  PRIMARY KEY (token)
) ENGINE=InnoDB
```

Which of these do you search by?

```
WHERE token = ...
WHERE uid = ... AND session = ...
```

One drawback is that I got rid of `id`; if `id` is needed by other tables, then a change is needed there.
292327
I just spent the better part of a day finding a bug caused by a stray comma at the end of an assignment statement. The difficulty in finding my bug was exacerbated by a third-party callback library that was trapping exceptions but it made me wonder why Python (2.x) doesn't raise a syntax error instead of creating a tuple. Consider the following ``` >>> a = 1, >>> a (1,) ``` As you can see, the trailing comma creates a singleton tuple. It's not a violation of the Python grammar (see <http://docs.python.org/reference/expressions.html#grammar-token-expression_list>) but it can certainly lead to some unexpected results, e.g. ``` >>> a == 1, (False,) ``` vs ``` >>> (1,) == a True ``` Although I now understand what's going on, I'm puzzled about why Python permits this syntax instead of requiring explicit parentheses to create a tuple. **Is there some case where this behavior is necessary, or at least advantageous?** I've been programming almost exclusively in Python for the past 7 years and I've never needed to create a singleton this way. Python is, in most respects, a wonderfully readable and explicit language. This particular "feature" seems, well, un-Pythonic.
Here's Guido van Rossum, creator of Python, [explaining](http://python-history.blogspot.com/2009_02_01_archive.html) how this bit of syntax came to be:

> One consequence of adding an array-like interface to tuples is that I had to figure out some way to resolve the edge cases of tuples with length 0 or 1. One of the rules I took from ABC was that every data type, when printed or converted to a string, should be represented by an expression that was a valid input to the language’s parser. So, it followed that I needed to have notations for 0- and 1-length tuples. At the same time I didn’t want to lose the distinction between a one-tuple and a bare parenthesized expression, so I settled for an ugly but pragmatic approach where a trailing comma would turn an expression into a one-tuple and "()" would represent a zero-tuple. It's worth noting that parentheses aren’t normally required by Python’s tuple syntax, except here--I felt representing the empty tuple by “nothing” could too easily mask genuine typos.

See also this PythonInfo Wiki on [TupleSyntax](http://wiki.python.org/moin/TupleSyntax), especially "... it is the commas, not the parentheses, that define the tuple".

While this syntax isn't beautiful (Guido says ugly, but pragmatic), I don't think it is much of a gotcha. The real gotcha was that your third-party library was "trapping exceptions", thus hiding important information regarding your error.
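As an added illustration of that wiki quote ("it is the commas, not the parentheses, that define the tuple"): the same rule is what makes everyday tuple packing and unpacking work without parentheses, and it is also exactly what bites in the `a == 1,` example from the question.

```python
# Tuples are built by commas in ordinary code all the time, without parentheses:
point = 3, 4                 # packing
x, y = point                 # unpacking
def min_max(values):
    return min(values), max(values)   # a 2-tuple, no parentheses needed

# The singleton case follows the same rule: the trailing comma, not the
# parentheses, is what creates the one-element tuple.
a = 1,
print(a == (1,))             # True

# And because the comma binds more loosely than ==, `a == 1,` wraps the
# comparison result in a tuple instead of comparing a to (1,).
print(a == 1)                # False: (1,) compared to the int 1
result = a == 1,
print(result)                # (False,)
```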
293245
We have some product teams for improving/maintaining the current line of products; but there are also "innovation teams" which try to plan/shape/design the next-generation products. The former teams usually look at short-term goals (small improvements, time frames between a few days and a few months) while the latter teams do long-term research and development (years?). My fear is that the short-term improvements will be superseded by the long-term overhauls made by the innovation team; i.e. the manpower put into short-term improvements must actually pay off before the next-gen product is out... I think this heavily limits what tasks the short-term teams can start, and I'd like to change that. So is it possible to direct both teams in a way that short-term improvements are not lost? Or is that generally impossible, and should we try harder to keep the short-term tasks really short?

Related questions:

* is there existing research on this topic? Under which terms would I find this research?
* should short-term teams and innovation teams work closely together (to improve idea flow in both directions, or to improve idea flow in a specific direction), or should they be separated (to avoid disrupting the teams with useless information)?
I don't think the two are mutually exclusive. Why would the long-term innovations limit the short-term improvement? If anything, I would think that the short-term improvements would be influencing the longer term. Think of it as an operating system - MS or Apple release a new OS (innovation), and then continually release updates (small improvements) until the next version. That next version will not only (hopefully) innovate, but it will include all of those updates that were seen as valuable and necessary. To answer your second question - the teams should definitely be communicating and working together. For an example (warning?) of what happens when they don't, look at Microsoft and how they sometimes end up with redundant or competing offerings, primarily because one side didn't know what the other was doing until release.
293294
I'm having trouble with my SQL statements. I don't know what I'm doing wrong, but it keeps adding rows to the database rather than updating them.

```
$result = mysql_query("SELECT id FROM users where fbID=$userID");

if (mysql_num_rows($result) > 0) {
    mysql_query("UPDATE users SET firstName='$firstName' , lastName='$lastName' , facebookURL='$link' , birthday='$birthday' , update='$today' , accessToken='$accessToken' , parentEmailOne='$parentEmailOne' , WHERE fbID='$userID'");
} else {
    mysql_query("INSERT INTO users (fbID, firstName, lastName, facebookURL, birthday , updated, accessToken, parentEmailOne ) VALUES ('$userId', '$firstName', '$lastName', '$link', '$birthday' , '$today', '$accessToken', '$parentEmailOne')");
}
```
You need quotes in the first query: `fbID='$userID'`. Also, you don't need the `,` before `WHERE` in the second SQL statement (the `UPDATE`). And last, you use `$userID` in the first references but `$userId` in the `INSERT`, so the variable names don't match.
293680
I have a dataset in R which I am trying to aggregate by column level and year, and which looks like this:

```
City State Year Status Year_repealed PolicyNo
Pitt PA 2001 InForce 6
Phil. PA 2001 Repealed 2004 9
Pitt PA 2002 InForce 7
Pitt PA 2005 InForce 2
```

What I would like to create is, for each Year, an aggregate of PolicyNo across states, taking into account the date the policy was repealed. The results I would then get are:

```
Year State PolicyNo
2001 PA 15
2002 PA 22
2003 PA 22
2004 PA 12
2005 PA 14
```

I am not sure how to go about splitting and aggregating the data conditional on the repeal data, and was wondering if there is a way to achieve this in R easily.
It may help you to break this up into two distinct problems. 1. Get a table that shows the change in PolicyNo in every city-state-year. 2. Summarize that table to show the PolicyNo in each state-year. To accomplish (1) we add the missing years with `NA` PolicyNo, and add repeals as negative `PolicyNo` observations. ``` library(dplyr) df = structure(list(City = c("Pitt", "Phil.", "Pitt", "Pitt"), State = c("PA", "PA", "PA", "PA"), Year = c(2001L, 2001L, 2002L, 2005L), Status = c("InForce", "Repealed", "InForce", "InForce"), Year_repealed = c(NA, 2004L, NA, NA), PolicyNo = c(6L, 9L, 7L, 2L)), .Names = c("City", "State", "Year", "Status", "Year_repealed", "PolicyNo"), class = "data.frame", row.names = c(NA, -4L)) repeals = df %>% filter(!is.na(Year_repealed)) %>% mutate(Year = Year_repealed, PolicyNo = -1 * PolicyNo) repeals # City State Year Status Year_repealed PolicyNo # 1 Phil. PA 2004 Repealed 2004 -9 all_years = expand.grid(City = unique(df$City), State = unique(df$State), Year = 2001:2005) df = bind_rows(df, repeals, all_years) # City State Year Status Year_repealed PolicyNo # 1 Pitt PA 2001 InForce NA 6 # 2 Phil. PA 2001 Repealed 2004 9 # 3 Pitt PA 2002 InForce NA 7 # 4 Pitt PA 2005 InForce NA 2 # 5 Phil. PA 2004 Repealed 2004 -9 # 6 Pitt PA 2001 <NA> NA NA # 7 Phil. PA 2001 <NA> NA NA # 8 Pitt PA 2002 <NA> NA NA # 9 Phil. PA 2002 <NA> NA NA # 10 Pitt PA 2003 <NA> NA NA # 11 Phil. PA 2003 <NA> NA NA # 12 Pitt PA 2004 <NA> NA NA # 13 Phil. PA 2004 <NA> NA NA # 14 Pitt PA 2005 <NA> NA NA # 15 Phil. PA 2005 <NA> NA NA ``` Now the table shows every city-state-year and incorporates repeals. This is a table we can summarize. ``` df = df %>% group_by(Year, State) %>% summarize(annual_change = sum(PolicyNo, na.rm = TRUE)) df # Source: local data frame [5 x 3] # Groups: Year [?] # # Year State annual_change # <int> <chr> <dbl> # 1 2001 PA 15 # 2 2002 PA 7 # 3 2003 PA 0 # 4 2004 PA -9 # 5 2005 PA 2 ``` That gets us PolicyNo change in each state-year. A cumulative sum over the changes gets us levels. ``` df = df %>% ungroup() %>% mutate(PolicyNo = cumsum(annual_change)) df # # A tibble: 5 × 4 # Year State annual_change PolicyNo # <int> <chr> <dbl> <dbl> # 1 2001 PA 15 15 # 2 2002 PA 7 22 # 3 2003 PA 0 22 # 4 2004 PA -9 13 # 5 2005 PA 2 15 ```
293773
When I try to use this sql statement: ``` LOAD DATA INFILE 'url/file.txt' IGNORE INTO TABLE myTbl FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' (SPEC, PERSON, BLAH, BLUH) ``` I get this error: **Invalid SQL statement; expected 'DELETE', 'INSERT', 'PROCEDURE', 'SELECT', or 'UPDATE'.** I am trying to insert into an access database. Any clues as to how I can get this LOAD DATA INFILE to work? the same error also occurs with my BULK INSERT attempts
`LOAD DATA INFILE` is a MySQL-only SQL command; for an MS Access database you can use VBA to import a text file into a table. Take a look here: <http://support.microsoft.com/kb/113905>
294169
I have to develop a layer to retrieve data from a database (it can be SQL Server, Oracle or IBM DB2). Queries (which are generic) are written by developers, but I can modify them in my layer. The tables can be huge (say > 1 000 000 rows), and there are a lot of joins (for example, I have a query with 35 joins - no way to reduce). So, I have to develop a pagination system to retrieve a "page" (say, 50 rows). The layer (which is in a DLL) is for desktop applications. Important fact: queries are never ordered by ID. The only way I found is to generate a unique row number (with the MSSQL ROW_NUMBER() function), but that won't work with Oracle because there are too many joins. Does anyone know another way?
There are only two ways to write pagination code.

The first is database specific. Each of those databases has very different best practices with regard to paging through result sets, which means that your layer is going to have to know what the underlying database is.

The second is to execute the query as-is and then just send the relevant records up the stream. This has obvious performance issues in that it would require your data layer to essentially grab all the records all of the time.

---

This is, IMHO, the primary reason why people shouldn't try to write database-agnostic code. At the end of the day there are enough differences between RDBMSs that it makes sense to have a pluggable data layer architecture which can take advantage of the specific RDBMS it works with.

In short, there is no ANSI standard for this. For example:

**MySQL** uses the LIMIT keyword for paging.

**Oracle** has ROWNUM, which has to be combined with subqueries (not sure when it was introduced).

**SQL Server 2008** has ROW_NUMBER, which should be used with a CTE.

**SQL Server 2005** had an entirely different (and very complicated) way of paging in a query, which required several different procs and a function.

**IBM DB2** has rownumber(), which also must be implemented as a subquery.
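To make the "pluggable data layer" idea concrete, here is a minimal sketch in Python; the dialect names, SQL templates and function signature are all illustrative assumptions, not a complete or injection-safe implementation.

```python
# Hypothetical sketch: each dialect supplies its own way of wrapping a query
# so that only one page of rows is returned. Real code would need bind
# parameters, explicit ordering rules, and per-version handling.
PAGINATORS = {
    "mysql": lambda sql, offset, limit:
        f"{sql} LIMIT {limit} OFFSET {offset}",
    "oracle": lambda sql, offset, limit:
        f"SELECT * FROM (SELECT q.*, ROWNUM rn FROM ({sql}) q "
        f"WHERE ROWNUM <= {offset + limit}) WHERE rn > {offset}",
    "sqlserver": lambda sql, offset, limit:
        f"WITH q AS (SELECT *, ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS rn "
        f"FROM ({sql}) t) SELECT * FROM q WHERE rn BETWEEN {offset + 1} AND {offset + limit}",
}

def paginate(dialect: str, sql: str, page: int, page_size: int = 50) -> str:
    offset = (page - 1) * page_size
    return PAGINATORS[dialect](sql, offset, page_size)

print(paginate("mysql", "SELECT * FROM huge_table", page=3))
```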
294580
This seems like rather too much nesting:

```
using (XmlReader reader = XmlReader.Create(filename))
{
    while (reader.Read())
    {
        if (reader.IsStartElement())
        {
            switch (reader.Name)
            {
                case "Width": map.Width = ParseXMLValue(reader); break;
                case "Height": map.Height = ParseXMLValue(reader); break;
                case "TileSize": map.TileSize = ParseXMLValue(reader); break;
                case "Layers": map.LayerCount = ParseXMLValue(reader); break;
                case "Layout": ParseLayout(reader); break;
                case "Layer": currentLayerIndex = ParseLayer(reader); break;
                case "CollisionLayer": currentLayerIndex = ParseCollisionLayer(); break;
                case "Row": ParseRow(reader); break;
            }
        }
    }
}
```

This is `ParseXMLValue(reader)`:

```
private int ParseXMLValue(XmlReader reader)
{
    reader.Read();
    return int.Parse(reader.Value);
}
```

I'm new to reading XML in C#. Surely there is a better way?
Yes, much. `XDocument` is the easier way, though it does not perform as well if your documents are of significant size or performance is absolutely imperative: <http://www.nearinfinity.com/blogs/joe_ferner/performance_linq_to_sql_vs.html>

It works like:

```
XDocument someXmlDoc = XDocument.Load(fileName);
IEnumerable<XElement> widthElements = someXmlDoc.Descendants("Width");
int[] widthValues = widthElements.Select(xelement => int.Parse(xelement.Value)).ToArray();
```

Though I don't understand the logic in your snippet, as you're setting the same member repeatedly, so I'll refrain from giving a more elaborate use case that might match your functionality better; I would suggest reading the MSDN articles for IEnumerable.Select() and IEnumerable.Where() to get an idea of how to use them for your particular purposes.

Select: <http://msdn.microsoft.com/en-us/library/bb548891.aspx>

Where: <http://msdn.microsoft.com/en-us/library/bb534803.aspx>
294765
I'm trying to get the text from a Dialog, but it doesn't work. All the code works except `username = txtDialog.getText().toString();` I get a NullpointerException Here is the complete code : ``` btn_next.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { dialog = new Dialog(ResActivity.this); dialog.setContentView(R.layout.custom); dialog.show(); Button declineButton = (Button) dialog.findViewById(R.id.declineButton); declineButton.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { // TODO Auto-generated method stub txtDialog = (EditText)findViewById(R.id.textDialog); username = txtDialog.getText().toString(); Intent intent = new Intent(getApplicationContext(), EndActivity.class); intent.putExtra(MESSAGE_NAME, username); startActivity(intent); dialog.dismiss(); } }); } }); ``` Can someone help ?
I think your `EditText` is inside the `Dialog`, so you need to look it up on the dialog. Change this line

```
txtDialog = (EditText)findViewById(R.id.textDialog);
```

into

```
txtDialog = (EditText)dialog.findViewById(R.id.textDialog);
```
294868
I have a client who wishes to use our website as a plugin on his website. His website is being developed in Ruby on Rails. While our website is a PHP website. I was considering using iframes to load my website inside theirs. However I am unsure if this is possible as I have no clue about the Ruby framework. Please help. Thanks in advance.
Yes, you can. The HTML that is served to web browsers is independent of the web framework used on the server side. A browser will not be able to distinguish HTML generated by a PHP/Python/Rails web application if the generated HTML is same. For browsers, it's just HTML which it'll parse and display content accordingly. In your client's Ruby on Rails site, you can have an `iframe` embedded like this ``` <iframe src="http://www.yourphpsite.com/php_page.php"></iframe> ```
294871
I have a file named upload\_file.php which uploads an image file to my web server. The problem I am having is that I want to replace all the spaces in the file’s name with an underscore during the upload process. I know this could be done using str\_replace(), but I have no idea where I should use str\_replace() within my code. Any help will be greatly appreciated. Here is the full code contained within my upload\_file.php file: ``` <?php require_once '../../config.php'; include '../secure.php'; if ($login_status != 1) exit(); if (isset($_GET['done'])) { if ($_FILES["file"]["error"] > 0) { echo "Return Code: " . $_FILES["file"]["error"] . "<br />"; } else { if ($_POST['type'] == 'wallpaper') { $directory = "../../files/"; } else { $directory = "../../files/images/"; } $filename = basename($_FILES['file']['name']); $ext = substr($filename, strrpos($filename, '.') + 1); if (file_exists($directory . $_FILES["file"]["name"])) { echo htmlspecialchars($_FILES["file"]["name"]) . " already exists. "; } elseif (((strpos($ext, "php") !== false) || $ext == 'aspx' || $ext == 'py' || $ext == 'htaccess') && !isset($allow_php_uploads)) { echo 'Uploading PHP files disabled (<a target="_blank" href=""></a>)'; } else { move_uploaded_file($_FILES["file"]["tmp_name"], $directory . $_FILES["file"]["name"]); header("Location: upload-complete.php?type=".htmlspecialchars($_POST['type'])."&newfile=". htmlspecialchars($_FILES["file"]["name"])."&id=".intval($_POST['id']).""); } } } else { ?> <head> <style type="text/css"> body { margin:0px; padding:0px; } </style> </head> <form action="upload_file.php?done=1" method="post" enctype="multipart/form-data"> <input type="file" name="file" id="file" style="max-width:230px;" /> <input name="id" type="hidden" value="<?php echo intval($_GET['id']);?>" /> <input name="type" type="hidden" value="<?php echo htmlspecialchars($_GET['type']);?>" /> <input type="submit" name="submit" value="Upload" /> </form> <?php } ?> ```
You could read the file one line at a time using `readline`:

```
// this creates a read stream
var reader = require('readline').createInterface({
  input: require('fs').createReadStream('file.in')
});

// then here you would be able to manipulate each line:
reader.on('line', function (line) {
  console.log('The current line is: ', line);
});
```
295531
I've created 2 different applications. One of them sends a text message(SMS), not much to it but it works. The second application is where my problem occurs, this application was created to implement broadcastReceiver and listen to incoming text messages and when a message is recieved it's supposed to set a TextView to the message received. It works in the sense that I can get the app to show a Toast message when I recieved a message from my other app. However, when I go in to the second app the TextView is not edited. I've tried implementing both onResume(), onPause() and onStop(), onStart() to react when the app starts etc but nothing happens which makes sense, since the SMSReceiver class listens to broadcasts and it acts as soon as something comes up. It then creates an intent and since my app isn't up and running, nothing happens to the TextView. ``` // I have the permissions, I removed them so code is more clear public class MainActivity extends AppCompatActivity { private static final int REQUEST_RECEIVE_SMS = 1; public TextView msgDisplay; IntentFilter intentFilter; // vi skapar en private BroadcastReceiver intentReceiver = new BroadcastReceiver() { @Override public void onReceive(Context context, Intent intent) { msgDisplay.setText(intent.getExtras().getString("sms")); } }; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); msgDisplay = (TextView) findViewById(R.id.msgDisplay); checkForSmsPermission(); intentFilter = new IntentFilter(); intentFilter.addAction("SMS_RECEIVED_ACTION"); } } public class SMSReceiver extends BroadcastReceiver { private static final String TAG = SMSReceiver.class.getSimpleName(); public static final String pdu_type = "pdus"; @Override public void onReceive(Context context, Intent intent) { // Get the SMS message. Bundle bundle = intent.getExtras(); SmsMessage[] msgs = null; String strMessage = ""; String format = bundle.getString("format"); // Retrieve the SMS message received. if(bundle != null) { Object[] pdus = (Object[]) bundle.get(pdu_type); msgs = new SmsMessage[pdus.length]; for (int i=0; i<msgs.length;i++) { msgs[i] = SmsMessage.createFromPdu((byte[]) pdus[i], format); strMessage = msgs[i].getMessageBody(); Toast.makeText(context, strMessage, Toast.LENGTH_SHORT).show(); Intent broadcastIntent = new Intent(); Intent messageIntent = new Intent(); messageIntent.setAction("SET_MESSAGE_ACTION"); broadcastIntent.setAction("SMS_RECEIVED_ACTION"); broadcastIntent.putExtra("sms", strMessage); context.sendBroadcast(broadcastIntent); } } } ```
I found the solution:

```
fld.Document.Blocks.Add(p1);
fld.Measure(new System.Windows.Size(Double.PositiveInfinity, Double.PositiveInfinity));
fld.Arrange(new Rect(new System.Windows.Point(0, 0), fld.DesiredSize));
fld.UpdateLayout();
```

One has to call exactly these methods. Calling Measure with Double.PositiveInfinity is important, and afterwards Arrange with DesiredSize. Then ActualHeight is updated and has the correct value.
295743
I am trying to create a Javascript Regex that captures the filename without the file extension. I have read the other posts here and *'goto this page:* <http://gunblad3.blogspot.com/2008/05/uri-url-parsing.html>' seems to be the default answer. This doesn't seem to do the job for me. So here is how I'm trying to get the regex to work: 1. Find the last forward slash '/' in the subject string. 2. Capture everything between that slash and the next period. The closest I could get was : **/([^/]*).\w*$** Which on the string **'<http://example.com/index.htm>'** exec() would capture **/index.htm** and **index**. I need this to only capture *index*.
``` var url = "http://example.com/index.htm"; var filename = url.match(/([^\/]+)(?=\.\w+$)/)[0]; ``` Let's go through the regular expression: ``` [^\/]+ # one or more character that isn't a slash (?= # open a positive lookahead assertion \. # a literal dot character \w+ # one or more word characters $ # end of string boundary ) # end of the lookahead ``` This expression will collect all characters that aren't a slash that are immediately followed (thanks to the [lookahead](http://www.regular-expressions.info/lookaround.html)) by an extension and the end of the string -- or, in other words, everything after the last slash and until the extension. Alternately, you can do this without regular expressions altogether, by finding the position of the last `/` and the last `.` using [`lastIndexOf`](http://www.w3schools.com/jsref/jsref_lastIndexOf.asp) and getting a [`substring`](http://www.w3schools.com/jsref/jsref_substring.asp) between those points: ``` var url = "http://example.com/index.htm"; var filename = url.substring(url.lastIndexOf("/") + 1, url.lastIndexOf(".")); ```
295918
I would like to create a application that would respond to receiving SMS messages and display a dialog. How can I register the receiver in the manifest without defining within an activity? I tried to keep the receiver/intent-filter tags in the manifest out of the activity tag but the emulator will not install the apk since there is no launch activity. Keeping the BroadcastReceiver as the main activity results in an "Unable to instantiate activity" error in the Logcat. Any help? Thanks, Sunny **Receiver class** ``` public class SMSReceiver extends BroadcastReceiver { // onCreat is invoked when an sms message is received. // Message is attached to Intent via Bundle, stored in an Object // array in the PDU format. public void onReceive(Context context, Intent intent) { // get the SMS message passed in from Bundle Bundle bundle = intent.getExtras(); String bodyText = ""; String from = ""; if (bundle != null) { //Retrieve sms message within Object array Object[] pdus = (Object[]) bundle.get("pdus"); SmsMessage[] msgs = new SmsMessage[pdus.length]; for (int i=0; i < msgs.length; i++) msgs[i] = SmsMessage.createFromPdu((byte[]) pdus[i]); for (SmsMessage message: msgs) { bodyText = message.getMessageBody(); from = "Message from " + message.getOriginatingAddress() + ": "; } // Display message in pop up Toast.makeText(context, from + bodyText, Toast.LENGTH_SHORT).show(); } } } ``` **Manifest** ``` <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="home.splttingatms.SMSReceiver" android:versionCode="1" android:versionName="1.0"> <application android:icon="@drawable/icon" android:label="@string/app_name"> <activity android:name=".SMSReceiver" android:label="@string/app_name"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <receiver android:name=".SMSReceiver"> <intent-filter> <action android:name="android.provider.Telephony.SMS_RECEIVED" /> </intent-filter> </receiver> </application> <uses-sdk android:minSdkVersion="7" /> <uses-permission android:name="android.permission.RECEIVE_SMS"></uses-permission> </manifest> ```
What you are trying to do is wrong for at least the following reasons... 1. MAIN/LAUNCHER only apply to activities and as you don't have a class which extends Activity in your code, this is what is causing the error. 2. Although there's nothing wrong with an 'app' which implements just a BroadcastReceiver or Service, it is good practice to let the user know that things have been initialized correctly. Even if you could list a receiver/service in 'All apps', if they select it and see nothing happens they're not going to be happy - users like to see some feedback. In short, create a simple Activity which will appear in 'All apps' and has the MAIN/LAUNCHER intent settings and when it starts just have it create a dialog saying something like "Hi, welcome to ..." or some other feedback to the user letting them know that things have started correctly. Have an 'OK' button on the dialog which when pressed, calls the Activity's finish() method leaving the receiver in place.
296287
I'm trying to send the contents of UITextView or UITextField as parameters to a php file ``` NSString *urlstr = [[NSString alloc] initWithFormat:@"http://server.com/file.php?name=%@&tags=%@&entry=%@",nameField.text, tagsField.text, dreamEntry.text]; ``` When i log urlstr, the url format is ok just as long as the UITextView or UITextField don't contain spaces. How would i go about converting the spaces to %20 ? edit here is the code at present, which not only crashes but isn't encoding the url properly. name=John Doe&tags=recurring nightmare&entry=Testing testing testing is converted to name=John -1844684964oe&tags=recurringightmare&entry=Testing 4.214929e-307sting -1.992836e+00sting ``` - (IBAction)sendButtonPressed:(id)sender { NSString *urlString = [[NSString alloc] initWithFormat:@"http://server.com/file.php?name=%@&tags=%@&entry=%@", nameField.text, tagsField.text, dreamEntry.text]; NSString *encodedString = [urlString stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]; NSURL *url = [[NSURL alloc] initWithString:encodedString]; NSLog(encodedString); NSLog(urlString); [urlString release]; [url release]; [encodedString release]; } ```
You're not supposed to URL-escape the entire string, you're supposed to URL-escape the dynamic components. Try ``` NSString *urlStr = [NSString stringWithFormat:@"http://server.com/file.php?name=%@&tags=%@&entry=%@", [nameField.text stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding], [tagsField.text stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding], [dreamEntry.text stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding], nil]; NSURL *url = [NSURL URLWithString:urlStr]; ``` The second issue with your code (and undoubtedly the reason for the odd printing) is you're passing the string directly to NSLog, so it's being treated as a format string. You need to use ``` NSLog(@"%@", encodedString); ``` instead. That will make it print as expected. Edit: A third issue with your code is you're mixing autoreleased and owned objects, then releasing them all at the end. Go look at the 3 objects you create, and which you subsequently release later. One of them shouldn't be released later because it was produced by a method that did not start with the words alloc, copy, or new. Identifying the object in question is an exercise left to the reader.
296396
In the following code Mix\_Card\_Reader inherits from Mix\_IO\_Device, the latter being an abstract tagged record. Previously it contained one `Positive` and two `Stream_Access` members. I'd like to alter the code so that it uses `File_Type` members instead. The reason for that is that I want each instance of this type to be able to open and close its files as and when required, or not at all if need be. The problem is that I cannot initialise this inheriting type because `File_Type` is a limited type. How can I write my `Create_Mix_Card_Reader` function to allow this? > > .ads... > > > ``` type Mix_IO_Device is abstract tagged limited record Block_Size : Positive; Input_File : File_Type; Output_File : File_Type; end record; type Mix_Card_Reader is new Mix_IO_Device with null record; ``` > > .adb... > > > ``` function Create_Mix_Card_Reader return Mix_IO_Device_Access is Ret : Mix_IO_Device_Access := new Mix_Card_Reader'(16, null, null); begin return Ret; end Create_Mix_Card_Reader; ``` GNAT is complaining that I cannot pass `null, null` into the pair of `File_Type` members because they are not compatible of course, the nulls are a left-over from when this used to have `Stream_Access` members. It seems that I have to pass something in here but I don't want to have to prematurely open the files simply to placate the compiler. What to do? Edit: I have a couple of obvious options: * use `access File_Type` instead (but I still have to maintain the opening/closing of the files elsewhere). * store all the File\_Type objects in an array separately and just refer to them using Streams as before but this seems messy.
This should do the trick:

```
function Create_Mix_Card_Reader return Mix_IO_Device_Access is
   Ret : Mix_IO_Device_Access := new Mix_Card_Reader'(
      16, Input_File => <>, Output_File => <>);
begin
   return Ret;
end Create_Mix_Card_Reader;
```

The box notation is a placeholder for the default value. You need at least Ada 2005 to use it in aggregates, and you must not use positional notation for it; details are explained in the [Ada 2005 Rationale](https://www.adaic.org/resources/add_content/standards/05rat/html/Rat-4-4.html). You can shorten the two assignments to `others => <>` if you want.
296591
I have 2 aws accounts having their own RDS instances(not publicly accessible) with db engine as postgresql 12.5. I downloaded RDS certificate from "https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem". I am using JDBC(postgresql driver) with properties ssl=true and sslrootcert="path to above certificate" to establish secure connections. My questions: 1. This certificate is same for both aws accounts which have different names, so how does it work , Does ssl hand shake verifies that client(jdbc connection) is talking to rds.amazonaws.com or the actual RDS instance which has separate name ? 2. RDS certificates are replaced every 5 years, i.e. applications also have to update the certificate every 5 years or sooner than that once new certificate is available from RDS, is this correct ?
Q1. Yes, it's the same for all accounts. You can download it from the docs [here](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html). It's about the instances, as explained in the docs:

> Using a server certificate provides an extra layer of security by **validating** that the connection is being made to an **Amazon RDS DB instance**.

Q2. You can update it a few months before the actual expiration. Last year it happened as explained [here](https://aws.amazon.com/blogs/database/amazon-rds-customers-update-your-ssl-tls-certificates-by-february-5-2020/):

[![enter image description here](https://i.stack.imgur.com/xAvfi.png)](https://i.stack.imgur.com/xAvfi.png)
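The question is about JDBC, but the same verification idea can be sketched with Python and psycopg2 for illustration (the hostname and file path below are placeholders): with `verify-full`, the client checks both that the server certificate chains up to the shared RDS CA bundle and that the certificate's hostname matches the instance's own endpoint, which is why one common bundle still ties the connection to your specific DB instance.

```python
import psycopg2

# Hypothetical endpoint and paths; the CA bundle is the shared global-bundle.pem
# downloaded from AWS, while the server certificate it verifies is specific to
# this instance's endpoint name.
conn = psycopg2.connect(
    host="mydb.abc123xyz.eu-west-1.rds.amazonaws.com",
    dbname="mydb",
    user="postgres",
    password="postgres",
    sslmode="verify-full",          # verify the CA chain *and* the hostname
    sslrootcert="/path/to/global-bundle.pem",
)
```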
296861
related to : [Remove duplicates from an array based on object property?](https://stackoverflow.com/questions/10505760/remove-duplicates-from-an-array-based-on-object-property/40941631#40941631) is there a way to do this exact same thing but consider the order? for example if instead of keeping [2] I wanted to keep [1] ``` [0]=> object(stdClass)#337 (9) { ["term_id"]=> string(2) "23" ["name"]=> string(12) "Assasination" ["slug"]=> string(12) "assasination" } [1]=> object(stdClass)#44 (9) { ["term_id"]=> string(2) "14" ["name"]=> string(16) "Campaign Finance" ["slug"]=> string(16) "campaign-finance" } [2]=> object(stdClass)#298 (9) { ["term_id"]=> string(2) "15" ["name"]=> string(16) "Campaign Finance" ["slug"]=> string(49) "campaign-finance-good-government-political-reform" } ``` I tried posting a comment but because of my reputation it won't let me so I decided to start a new thread
It is trivial to reverse and array. So before processing the array, call `array_reverse()` on it: ``` /** flip it to keep the last one instead of the first one **/ $array = array_reverse($array); ``` Then at the end you can reverse it again if ordering is an issue: ``` /** Answer Code ends here **/ /** flip it back now to get the original order **/ $array = array_reverse($array); ``` So put all together looks like this: ``` class my_obj { public $term_id; public $name; public $slug; public function __construct($i, $n, $s) { $this->term_id = $i; $this->name = $n; $this->slug = $s; } } $objA = new my_obj(23, "Assasination", "assasination"); $objB = new my_obj(14, "Campaign Finance", "campaign-finance"); $objC = new my_obj(15, "Campaign Finance", "campaign-finance-good-government-political-reform"); $array = array($objA, $objB, $objC); echo "Original array:\n"; print_r($array); /** flip it to keep the last one instead of the first one **/ $array = array_reverse($array); /** Answer Code begins here **/ // Build temporary array for array_unique $tmp = array(); foreach($array as $k => $v) $tmp[$k] = $v->name; // Find duplicates in temporary array $tmp = array_unique($tmp); // Remove the duplicates from original array foreach($array as $k => $v) { if (!array_key_exists($k, $tmp)) unset($array[$k]); } /** Answer Code ends here **/ /** flip it back now to get the original order **/ $array = array_reverse($array); echo "After removing duplicates\n"; echo "<pre>".print_r($array, 1); ``` produces the following output ``` Array ( [0] => my_obj Object ( [term_id] => 23 [name] => Assasination [slug] => assasination ) [1] => my_obj Object ( [term_id] => 15 [name] => Campaign Finance [slug] => campaign-finance-good-government-political-reform ) ) ```
297155
Do you know any "best practice" to design a REST method to alter the order of a small collection? I have a collection exposed at "GET /api/v1/items". This endpoint returns a JSON array and every item has a unique id. I was thinking on create "PATCH /api/v1/items" and send an array of ids with the new order. But I wonder if there is any alternative or design pattern to accomplish this task properly.
Following the REST Uniform Interface constraint, the HTTP `PUT` and `PATCH` methods have to stick to the standard semantics, so you can do that with either one in the following way:

With `PUT`, clients can upload a whole new representation with the order they want. They will request `GET /api/v1/items`, change the order as they need, and submit it back with `PUT /api/v1/items`.

With `PATCH`, clients can send a diff document which performs the order change as they need. You can use a format like [json-patch](http://jsonpatch.com/) and clients perform the change with the `move` operation and array paths.

Be aware that neither of these are design patterns or best practices. They are simply how the `PUT` and `PATCH` methods are supposed to work. Ideally, this should work on any RESTful application implementing the `GET`, `PUT` and `PATCH` methods correctly for the resource at that URI, and that's the beauty of REST. If you do it the right way, you only have to do it once and clients can generalize for everyone. For instance, a client can choose to do it the `PUT` way with small collections, and the `PATCH` way for larger ones.

Both your idea to use `PATCH` with an id array, and the answer from @dit suggesting to do it with `PUT`, aren't really RESTful, because they break with the standard semantics: yours for not using a delta format, his for doing partial updates with `PUT`. However, both those options can be RESTful if done with `POST`. `POST` is the method to go for any action that isn't standardized by the HTTP protocol, so you can do anything you want with it, but you have to document how exactly to do it.

So, it's up to you. If you are concerned at all with being RESTful and your application has long term goals -- I'm talking years or even decades -- I'd say to go for a uniform implementation of the `PUT` and `PATCH` methods as suggested first. If you prefer a simple approach, use yours or dit's idea with `POST`.
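To make the `PATCH` option concrete, here is a minimal sketch (the endpoint and item positions are made up for illustration) of a client sending a JSON Patch `move` operation that pulls the item at index 3 to the front of the collection:

```python
import requests

# JSON Patch document: move the element at array index 3 to index 0.
patch = [
    {"op": "move", "from": "/3", "path": "/0"},
]

resp = requests.patch(
    "https://example.com/api/v1/items",
    json=patch,
    headers={"Content-Type": "application/json-patch+json"},
)
resp.raise_for_status()
```

The server would apply the operation against its stored representation of the array and persist the new order.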
297300
When were the major aspects of the Apollo program decided? Multi-stage rocket, CM pulling the LM out of the rocket through a docking maneuver, LOR, and a parachute landing in the ocean. I've seen answers to a few different pieces of this question, like the work of John C. Houbolt and the extensive coverage of the Rogallo wing in the excellent book [Coming Home](https://www.nasa.gov/connect/ebooks/coming_home_detail.html). I haven't seen an answer to when every piece was finally in place.
> > Multi-stage rocket > > > This was already certain in the mid-to-late 1940s, if not sooner. There's no practical way to reach the moon on chemical propellants without a multistage rocket; the only alternative in that era would have been a [Project Orion](https://en.wikipedia.org/wiki/Project_Orion_(nuclear_propulsion))-style nuclear detonation rocket, which wasn't ever considered for Apollo. > > CM pulling the LM out of the rocket through a docking maneuver, LOR > > > These two decisions go hand in hand; for the direct ascent strategy assumed prior to the LOR decision, the command/service module and the (much larger than the Apollo LM) lunar descent/landing module would have remained docked the way they were stacked on the launch pad all the way to lunar touchdown; there would have been no need for a transposition-docking-extraction maneuver. [LOR was a contender from 1960;](https://en.wikipedia.org/wiki/Lunar_orbit_rendezvous#Apollo_Mission_mode_selection) Houbolt's "voice in the wilderness" letter was sent in November 1961. It was debated for several months; NASA Administrator James Webb approved LOR in early July 1962, and as Organic Marble notes, it was announced on July 11. > > Parachute landing in the ocean > > > According to [Chariots For Apollo](https://www.hq.nasa.gov/office/pao/History/SP-4205/ch5-3.html), land touchdown was in consideration for a while: > > Another decision that would influence both spacecraft was on whether to set the vehicle down on land or water, a question that had been under discussion since mid-1962. During a meeting in early 1964, a North American engineer reported that "land impact problems are so severe that they require abandoning this mode as a primary landing mode." That was all Shea needed to settle that debate. Apollo spacecraft would land in the ocean and be recovered by naval ships as Mercury had been. > > >
297540
Someone recently asked me this question and I thought I'd post it on Stack Overflow to get some input. Now obviously both of the following scenarios are supposed to fail. #1: ``` DECLARE @x BIGINT SET @x = 100 SELECT CAST(@x AS VARCHAR(2)) ``` Obvious error: > > Msg 8115, Level 16, State 2, Line 3 > > Arithmetic overflow error converting expression to data type varchar. > > > #2: ``` DECLARE @x INT SET @x = 100 SELECT CAST(@x AS VARCHAR(2)) ``` Not obvious, it returns a \* (One would expect this to be an arithmetic overflow as well???) --- Now my real question is, why??? Is this merely by design or is there history or something sinister behind this? I looked at a few sites and couldn't get a satisfactory answer. e.g. <http://beyondrelational.com/quiz/sqlserver/tsql/2011/questions/Why-does-CAST-function-return-an-asterik--star.aspx> <http://msdn.microsoft.com/en-us/library/aa226054(v=sql.80).aspx> **Please note I know/understand that when an integer is too large to be converted to a specific sized string that it will be "converted" to an asterisk, this is the obvious answer and I wish I could downvote everyone that keeps on giving this answer. I want to know why an asterisk is used and not an exception thrown, e.g. historical reasons etc??**
For even more fun, try this one: ``` DECLARE @i INT SET @i = 100 SELECT CAST(@i AS VARCHAR(2)) -- result: '*' go DECLARE @i INT SET @i = 100 SELECT CAST(@i AS NVARCHAR(2)) -- result: Arithmetic overflow error ``` :) --- The answer to your query is: "Historical reasons" The datatypes INT and VARCHAR are older than BIGINT and NVARCHAR. ***Much*** older. In fact they're in the *original* SQL specs. Also older is the exception-suppressing approach of replacing the output with asterisks. Later on, the SQL folks decided that throwing an error was better/more consistent, etc. than substituting bogus (and usually confusing) output strings. However, for consistency's sake, they retained the prior behavior for the pre-existing combinations of data-types (so as not to break existing code). So (much) later when BIGINT and NVARCHAR datatypes were added, they got the new(er) behavior because they were not covered by the grandfathering mentioned above.
297996
I am having issues running rails test from within vim. When I issue the `:Rails test` from vim it returns > > /usr/local/lib/ruby/gems/2.2.0/gems/bundler-1.10.6/lib/bundler/lockfile\_parser.rb|72| in `warn\_for\_outdated\_bundler\_version': You must use Bundler 2 or greater with this lockfile. (Bundler::LockfileError) > > > Some terminal command outputs that might help answering 1. `which -a bundle` > > /home/my\_user\_name/.rbenv/shims/bundle > > > 2. `bundle env` ``` Bundler 2.0.1 Platforms ruby, x86_64-linux Ruby 2.4.1p111 (2017-03-22 revision 58053) [x86_64-linux] Full Path /home/username/.rbenv/versions/2.4.1/bin/ruby Config Dir /home/username/.rbenv/versions/2.4.1/etc RubyGems 3.0.2 Gem Home /home/username/.rbenv/versions/2.4.1/lib/ruby/gems/2.4.0 Gem Path /home/username/.gem/ruby/2.4.0:/home/username/.rbenv/versions/2.4.1/lib/ruby/gems/2.4.0 User Path /home/username/.gem/ruby/2.4.0 Bin Dir /home/username/.rbenv/versions/2.4.1/bin Tools Git 2.17.1 RVM not installed rbenv rbenv 1.0.0-21-g9fdce5d chruby not installed ```
Create a file named `User.php` with the below content in `/src/Model/Entity` folder. This will automatically encrypt your password while saving. ``` use Cake\Auth\DefaultPasswordHasher; use Cake\ORM\Entity; class User extends Entity { protected function _setPassword($value) { if (strlen($value)) { $hasher = new DefaultPasswordHasher(); return $hasher->hash($value); } } } ```
298120
I created a JAR file with my Java program. This piece of code will open a few files inside a dir "Test", which is in the same dir as the JAR file. Like this: ``` / -- program.jar -- /Test -- * ``` If I run via terminal with: java -jar program.jar, it runs perfectly. But if I run graphically (right clicking on the jar file and Open with OpenJDK...), it doesn't work properly. Just like if I ran from another directory. Is it possible that when I run the JAR file graphically it's running from another directory? By the way, I'm running on Ubuntu.
Yes, you will get another current working directory... There would be two solutions: **1)** Find the cwd by doing this hack: ``` public class Test { public static void main(String... args) { ClassLoader cl = Test.class.getClassLoader(); String f = cl.getResource("").getFile(); File cwd = new File(f); if (cwd.toString().endsWith("!")) cwd = cwd.getParentFile(); JOptionPane.showMessageDialog(null, cwd); } } ``` **2)** If the files under `Test` are static (does not change to often) the solution would be to package them inside the jar.
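For the second option, here is a minimal sketch of reading a file that has been packaged inside the jar; the resource path `/Test/data.txt` is only an assumed example, not something from the question:

```
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ResourceDemo {
    public static void main(String[] args) throws Exception {
        // Resources bundled in the jar are read from the classpath,
        // so the current working directory no longer matters.
        // (Add a null check in real code: getResourceAsStream returns null if missing.)
        try (InputStream in = ResourceDemo.class.getResourceAsStream("/Test/data.txt");
             BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```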
298130
I have a LAN with a Linux server running BIND for addressing of local computers. When a workstation is connected to the local network (where there is no Internet access), I can successfully address devices using hostnames without any problems: ``` $ host server1.local $ server1.local has address 192.168.2.2 $ host 192.168.2.2 $ 2.2.168.192.in-addr.arpa domain name pointer server1.local. ``` When that same workstation enables WiFi (or any secondary interface) and connects to the greater Internet, the machine can no longer address local devices by hostname. Presumably this is because it is using the wrong network interface's DNS server to address my devices. My BIND configuration is as follows: ``` $ORIGIN local. $TTL 604800 @ IN SOA server1 admin ( 2008080101 ;serial 04800 ;refresh 86400 ;retry 2419200 ;expire 604800 ;negative cache TTL ) @ IN NS server1 @ IN A 192.168.2.2 server1 IN A 192.168.2.2 workstation1 IN A 192.168.2.44 workstation2 IN A 192.168.2.45 ``` and the reverse DNS: ``` $ORIGIN 2.168.192.in-addr.arpa. $TTL 604800 @ IN SOA server1.local. admin.local. ( 2008080101 ;serial 604800 ;refresh 86400 ;retry 2419200 ;expire 604800 ;negative cache TTL ) NS server1.local. 2 IN PTR server1.local. 44 IN PTR workstation1.local. 45 IN PTR workstation2.local. ``` How can I force clients to look at the correct network interface to find hosts in the ".local" namespace? Is it possible to do this from the BIND-configuration end, since I may not have complete control over the individual clients?
Certain versions of OS X assign preferences to DNS servers. This may cause your internal DNS server to be pushed down the preference order. Try running this command to find out which server is being used: ``` scutil --dns | grep nameserver\[[0-9]*\] ``` Sources: * <https://discussions.apple.com/thread/1660439> * <http://support.apple.com/kb/ht4030> * <https://superuser.com/questions/258151/how-do-i-check-what-dns-server-im-using-on-mac-os-x>
298977
In the art of electronics book, page 49, 2nd edition, they show a signal rectifier circuit (a circuit that tosses away the negative part of a signal). Here it is: ![schematic](https://i.stack.imgur.com/u0c60.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fu0c60.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) I think the capacitor is there to block out DC. Is that right? What are the uses of the two resistors? Couldn't the same goal be accomplished by taking them out? I think R2 between GND and the diode is needed for the diode to work correctly, as a way for the circuit to drop the needed voltage so the voltage drop across the diode stays at the diode forward voltage drop. What about R1? Thank you very much.
Assuming ideal components, when the diode is "on", the equivalent circuit is: ![schematic](https://i.stack.imgur.com/6pWoW.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2f6pWoW.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) The charging current clockwise through C1 produces a voltage across the parallel combination of resistors. When the diode is "off", the equivalent circuit is: ![schematic](https://i.stack.imgur.com/cs6JT.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fcs6JT.png) Now, you can see that R1 provides a path for a counter-clockwise current to discharge the capacitor. Also, see that R2 is required to pull the output voltage to zero while the diode is off. --- > > Why is the path that R1 provides needed? if it's not there, then there > is an open-circuit, no current flows. Theoretically the end result > should be the same: the negative part of the signal does not appear at > the output. > > > If there is no path for a counter-clockwise current through the capacitor, the voltage across the capacitor will only increase and this voltage will *oppose* the positive signal voltage. Eventually, the capacitor voltage will be large enough that the diode will not turn on and the output voltage will be zero. Below is a simulation I ran *without* R1. The top trace is the voltage across the capacitor; the bottom trace is the output voltage. The input source is a 1kHz sine wave. ![enter image description here](https://i.stack.imgur.com/MC05N.png) Note that the voltage across the capacitor increases while the diode is *on* but doesn't decrease while the diode is off.
298999
I have write a code to calculate average of an area temperature in the map. At start, the `initial()`will be invoked then `loadtempdata()` then `averagetemp()` and finally `cleanup()`. And I have use a global pointer to point to a heap dynamic array in function so that it can be used in other funcion. Finally, I use the `delete[] table` to recycle the memory but the result indicate that I still have memory leak and I have found it long time and cannot find the reason. Why is there still memory leak? Thx in advance. ``` #include<iostream> namespace TEST { double *table = 0; const double *mtx = 0; int n = 0, m = 0; void LoadTempData(const double* matrix_row_major, int M, int N){ table = new double [(M + 1) * (N + 1)]; mtx = matrix_row_major; m = M; n = N; for(int i = 1; i < M + 1; i ++){ for(int j = 1; j < N + 1; j ++){ table[j + i * (N + 1)] = matrix_row_major[(j - 1) + (i - 1) * N] + table[j - 1 + i * (N + 1)] + table[j + (i - 1) * (N + 1)] - table[j - 1 + (i - 1) * (N + 1)]; } } } double RegionAvgTemp(int y1, int x1, int y2, int x2){ if(y1 == y2){ return (table[x2 + 1 + (y2 + 1) * (n + 1)] - table[x2 + 1 + (y2 - 1 + 1) * (n + 1)] - table[x1 - 1 + 1 + (y1 + 1) * (n + 1)] + table[x1 - 1 + 1 + (y1 - 1 + 1) * (n + 1)]) / (x2 - x1 + 1); } else if(x1 == x2){ return (table[x2 + 1 + (y2 + 1) * (n + 1)] - table[x1 + 1 + (y1 - 1 + 1) * (n + 1)] - table[x2 - 1 + 1 + (y2 + 1) * (n + 1)] + table[x1 - 1 + 1 + (y1 - 1 + 1) * (n + 1)]) / (y2 - y1 + 1); } else if ((y1 == y2) && (x1 == x2)){ return mtx[x1 + y1 * n]; } else{ return (table[x2 + 1 + (y2 + 1) * (n + 1)] + table[x1 - 1 + 1 + (y1 - 1 + 1) * (n + 1)] - table[x2 + 1 + (y1 - 1 + 1) * (n + 1)] - table[x1 - 1 + 1 + (y2 + 1) * (n + 1)]) / ((y2 - y1 + 1) * (x2 - x1 + 1)); } return 0; } void Init() { } void Cleanup() { delete[] table; } } ```
I am a Program Manager in the Service Bus team. There is no plan to support WebSockets on HTML as an output pipe for Notification Hubs. At the moment your best bet is to use SignalR, which can be scaled out using Service Bus. What are the characteristics of Notification Hubs that make you say that it would be preferable to SignalR?
299136
The Economist isn't a fan of Donald Trump as a presidential candidate. In an analysis of his ascendence to the Republican nomination they try to discern some of the factors. As part of their list of factors driving Trump support [they argue](http://www.economist.com/news/briefing/21698252-donald-trump-going-be-republican-candidate-presidency-terrible-news) that terrorism has become a national bogeyman while quoting an odd statistic (my emphasis): > > Terrorism—*though it claimed fewer American lives last year than toddlers with guns*—has become a national bogeyman. > > > This isn't an appropriate place to discuss the relative merits of Trump, but that is one unexpected statistic. Is it true? Were there really more American deaths from gun-toting toddlers than from terrorism in 2015? NB This isn't a question about *politics*. This is merely the context in which the statistic was quoted. The question is whether the statistic is true. Please keep the politics out of any answers.
It depends on your definition of terrorism and your interpretation of the quote: is it about American lives anywhere in the world, or about death in America including non-Americans. For the purpose of this answer I'll assume the second interpretation. If terrorism only includes Islamic terrorism, it would be true, although quite close. See for example the [Snopes article on this question](http://www.snopes.com/toddlers-killed-americans-terrorists/): > > In 19 instances a toddler shot and killed themselves, and in two > others, the toddler shot and killed another individual. That brings > the **total of toddler-involved shooting deaths in the United States in > 2015 to 21**. > > > By contrast, if we count both the Chattanooga shootings and San > Bernardino as strictly terrorism, 14 Americans were killed in San > Bernardino and five in Chattanooga. As such, **19 Americans were killed [...] in instances of Islamic terrorism in 2015.** > > > Ultimately, even the broadest leeway led to the same mathematical > conclusion. The meme was basically correct; more Americans were shot > and killed by toddlers in 2015 than were killed by Islamic terrorists. > > > But your question isn't really about Islamic terrorism, but *terrorism in general*. Inclusion of only two other events provides insight: the Charleston church shooting (white supremacist terrorism, 9 dead) and the Colorado Springs Planned Parenthood shooting (Christian terrorism, 3 dead) may be included, leading to **31 victims of terror attacks in the US in 2015**, compared to **21 victims of toddlers.** **This would make the claim as stated false.**
299173
I am trying to send an email. When I send it without an attachment, it sends the email correctly. If I try to attach something, then it doesn't work. The Class: ``` import com.sun.mail.smtp.SMTPTransport; import java.io.File; import java.security.Security; import java.util.Date; import java.util.Properties; import javax.activation.DataHandler; import javax.activation.FileDataSource; import javax.mail.BodyPart; import javax.mail.Message; import javax.mail.MessagingException; import javax.mail.Multipart; import javax.mail.Part; import javax.mail.Session; import javax.mail.Transport; import javax.mail.internet.AddressException; import javax.mail.internet.InternetAddress; import javax.mail.internet.MimeBodyPart; import javax.mail.internet.MimeMessage; import javax.mail.internet.MimeMultipart; import javax.swing.JOptionPane; /** * * @author doraemon */ public class GoogleMail { public static void Send(String from, String pass, String[] to, String assunto, String mensagem, File[] anexos) { String host = "smtp.risantaisabel.com.br"; Properties props = System.getProperties(); props.put("mail.smtp.starttls.enable", "true"); // added this line props.put("mail.smtp.host", host); props.put("mail.smtp.user", from); props.put("mail.smtp.password", pass); props.put("mail.smtp.port", "587"); props.put("mail.smtp.auth", "true"); //String[] to = {"tarcisioambrosio@gmail.com"}; // added this line Session session = Session.getDefaultInstance(props, null); MimeMessage message = new MimeMessage(session); try { message.setFrom(new InternetAddress(from)); } catch (AddressException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (MessagingException e) { // TODO Auto-generated catch block e.printStackTrace(); } InternetAddress[] toAddress = new InternetAddress[to.length]; boolean enviado = true; // To get the array of addresses for( int i=0; i < to.length; i++ ) { // changed from a while loop try { toAddress[i] = new InternetAddress(to[i]); } catch (AddressException e) { // TODO Auto-generated catch block e.printStackTrace(); } } // System.out.println(Message.RecipientType.TO); for( int i=0; i < toAddress.length; i++) { // changed from a while loop try { message.addRecipient(Message.RecipientType.TO, toAddress[i]); } catch (MessagingException e) { // TODO Auto-generated catch block e.printStackTrace(); } } try { message.setSubject(assunto); //message.setContent(mensagem, "text/plain"); message.setText(mensagem); Transport transport = session.getTransport("smtp"); transport.connect(host, from, pass); if(anexos.length == 0){ } else { Multipart mp = new MimeMultipart(); BodyPart messageBodyPart = new MimeBodyPart(); for(int i = 0; i < anexos.length;i++) { MimeBodyPart mbp2 = new MimeBodyPart(); FileDataSource fds = new FileDataSource(anexos[i].getPath()); mbp2.setDataHandler(new DataHandler(fds)); mbp2.setFileName(fds.getName()); mp.addBodyPart(mbp2); } messageBodyPart.setContent(message, "multipart/mixed"); mp.addBodyPart(messageBodyPart); message.setContent(mp); } transport.sendMessage(message, message.getAllRecipients()); transport.close(); } catch (MessagingException e) { // TODO Auto-generated catch block JOptionPane.showMessageDialog(null, "<html><b>Email não enviado, tente novamente confirmando seus dados"); enviado = false; e.printStackTrace(); } if(enviado) { JOptionPane.showMessageDialog(null, "<html><b>Email enviado.</html></b> \n\n Caso tenha digitado errado o email, somente pela sua caixa de entrada poderá confirmar se chegou. 
\n\n<html> Email da parte digitado:<b><font size = 3 COLOR=#ff0000> " + to[0]); } } } ``` If I change the the setContent to text/plain, I get: ``` javax.mail.MessagingException: IOException while sending message; nested exception is: java.io.IOException: "text/html" DataContentHandler requires String object, was given object of type class javax.mail.internet.MimeMessage at com.sun.mail.smtp.SMTPTransport.sendMessage(SMTPTransport.java:1167) ``` If I change the setContent to multipart/mixed, I get: ``` javax.mail.MessagingException: MIME part of type "multipart/mixed" contains object of type javax.mail.internet.MimeMessage instead of MimeMultipart ``` Do you have any idea how I can fix this? Thanks.
Remove ``` message.setText(mensagem); ``` Change ``` messageBodyPart.setContent(message, "multipart/mixed"); ``` to ``` messageBodyPart.setText(mensagem); ``` and move it and the following line above the "for" loop. Also, see [this JavaMail FAQ entry of common mistakes](http://www.oracle.com/technetwork/java/javamail/faq/index.html#commonmistakes).
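Put together, the attachment-building section of the question's method might then look roughly like this (same variable names as in the question; an untested sketch, not a drop-in replacement):

```
Multipart mp = new MimeMultipart();

// the body text gets its own part, added before the attachments
BodyPart messageBodyPart = new MimeBodyPart();
messageBodyPart.setText(mensagem);
mp.addBodyPart(messageBodyPart);

// one part per attachment
for (int i = 0; i < anexos.length; i++) {
    MimeBodyPart mbp2 = new MimeBodyPart();
    FileDataSource fds = new FileDataSource(anexos[i].getPath());
    mbp2.setDataHandler(new DataHandler(fds));
    mbp2.setFileName(fds.getName());
    mp.addBodyPart(mbp2);
}

// the multipart becomes the content of the message itself
message.setContent(mp);
transport.sendMessage(message, message.getAllRecipients());
```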
299237
We can prove the fundamental theorem of calculus without any reference to empirical data. We can't "prove" coulomb's law, we make experiments and describe the phenomena using mathematical tools. Where does Probability fall on that spectrum? if Probability suppose to predict natural phenomenon wouldn't you need to verify it works via experiments? If so, why is it considered a part of Math and not Physics? If empirical observations aren't needed, that means we can prove that the limit of(a(n)\n), a- number of times a coin lands on heads, n- number of throws, converges to 0.5 as n goes to infinity. Same as we can prove any other Theorem using only a pen and paper, But to me, that doesn't make any sense. I don't see how pure Math can be used to PROVE physical phenomenons.
Use Euler's identity to split the solution into its real and imaginary parts: $$e^{(2+3i)t}=e^{2t+i3t}=e^{2t}\*e^{i3t}$$ $$e^{i3t} = \text{cos}(3t)+i\text{sin}(3t)$$ Therefore: $$e^{\lambda\_1t}\vec v\_1 = e^{2t}\*e^{i3t}\begin{bmatrix} 2+3i\\ 13 \end{bmatrix} $$ $$= e^{2t}\*[\text{cos}(3t)+i\text{sin}(3t)]\*\left(\begin{bmatrix} 2\\ 13 \end{bmatrix}+i\begin{bmatrix} 3\\ 0 \end{bmatrix}\right) $$ Foiling: $$e^{2t}\left(\begin{bmatrix} 2\text{cos}(3t)\\ 13\text{cos}(3t) \end{bmatrix}+i\begin{bmatrix} 3\text{cos}(3t)\\ 0 \end{bmatrix}+i\begin{bmatrix} 2\text{sin}(3t)\\ 13\text{sin(3t)} \end{bmatrix}+i^2\begin{bmatrix} 3\text{sin}(3t)\\ 0 \end{bmatrix}\right)$$ $$\underbrace{e^{2t}\begin{bmatrix} 2\text{cos}(3t)-3\text{sin}(3t)\\ 13\text{cos}(3t) \end{bmatrix}}\_{{\text{Real Part}}}+i\underbrace{e^{2t}\begin{bmatrix} 2\text{sin}(3t)+3\text{cos}(3t)\\ 13\text{sin}(3t) \end{bmatrix}}\_{{\text{Imaginary Part}}}$$ The solution takes the form: $$\vec x(t)=C\_1(\text{Real Part}) + C\_2(\text{Imaginary Part})$$ $$\vec x(t)=C\_1e^{2t}\begin{bmatrix} 2\text{cos}(3t)-3\text{sin}(3t)\\ 13\text{cos}(3t) \end{bmatrix}+C\_2e^{2t}\begin{bmatrix} 2\text{sin}(3t)+3\text{cos}(3t)\\ 13\text{sin}(3t) \end{bmatrix}$$
299498
I'm new to PHP. I'm trying to display stdClass Object data via a foreach loop inside a table, but it's not working. ``` include("../config.php"); $get_data = $conn->query("SELECT * FROM `prd_rgistration`"); $prd_data = $get_data->fetchObject(); print_r($prd_data); ``` Data Print ``` stdClass Object ( [id] => 24 [password_db] => kignkgsnis [country_db] => United States [porder_db] => 56313241654321324 [email_db] => nisa@gmail.com ) ``` Foreach Loop ``` foreach($prd_data as $eprd_data){ echo $eprd_data->id; } ``` It's giving this error > > Trying to get property of non-object > > > Kindly tell me how I can display the data. What am I doing wrong?
You're only getting one object, not an array, so wrap the while loop around fetch ``` while ($prd_data = $get_data->fetchObject()) echo $prd_data->id; ```
299582
Facing a problem that I can't use my function where I try to update my ArrayList element. Receiving error: > > com.wep.Darbuotojas cannot be cast to com.wep.Programuotojas > > > I guess that I try somewhere to cast from String to int or from int to String, but just can't see right now where. Darbuotojas class: ``` package com.wep; public class Darbuotojas { protected String vardas; protected String pavarde; public String getVardas() { return vardas; } public void setVardas(String vardas) { this.vardas = vardas; } public String getPavarde() { return pavarde; } public void setPavarde(String pavarde) { this.pavarde = pavarde; } public int getAmzius() { return amzius; } public void setAmzius(int amzius) { this.amzius = amzius; } protected int amzius; Darbuotojas() {} public Darbuotojas(String vardas, String pavarde, int amzius) { this.vardas = vardas; this.pavarde = pavarde; this.amzius = amzius; } ``` } Programuotojas class: ``` package com.wep; public class Programuotojas extends Darbuotojas { protected String programavimoKalba; @Override public String toString() { return "Programuotojas: " + vardas + " " + pavarde + " " + amzius + " " + programavimoKalba; } public void setProgramavimoKalba(String programavimoKalba) { this.programavimoKalba = programavimoKalba; } public Programuotojas(String vardas, String pavarde, int amzius, String programavimoKalba) { super(vardas, pavarde, amzius); this.programavimoKalba = programavimoKalba; } } ``` And here's my function where I try to update everything ``` private ArrayList<Darbuotojas> darbuotojuArray = new ArrayList<Darbuotojas>(); private Darbuotojas darbuotojas = new Darbuotojas(); private void atnaujintiDarbuotoja() { if (darbuotojuArray.size() == 0) { System.out.println("Darbuotoju sarasas tuscias. Pridekite nauju darbuotoju"); } else { for (int i = 0; i < darbuotojuArray.size(); i++) { System.out.println("ID: " + i + " " + darbuotojuArray.get(i)); } System.out.println("Pasirinkite kuri darbuotoja norite atnaujinti"); Scanner SI = new Scanner(System.in); int userSelectsEmployeeID = Integer.parseInt(SI.nextLine()); if (darbuotojuArray.get(userSelectsEmployeeID) instanceof Programuotojas) { System.out.println("Iveskite varda"); String naujasVardas = SI.nextLine(); System.out.println("Iveskite pavarde"); String naujaPavarde = SI.nextLine(); System.out.println("Iveskite amziu"); int naujasAmzius = Integer.parseInt(SI.nextLine()); System.out.println("Iveskite nauja programavimo kalba"); String naujaKalba = SI.nextLine(); Programuotojas darbProgramuotojas = (Programuotojas) darbuotojas; darbProgramuotojas.setVardas(naujasVardas); darbProgramuotojas.setPavarde(naujaPavarde); darbProgramuotojas.setAmzius(naujasAmzius); darbProgramuotojas.setProgramavimoKalba(naujaKalba); }... ```
`Darbuotojas cannot be cast to Programuotojas` is pretty clear, and does not deal with `int to String` or `String to int`. Because you have `Programuotojas extends Darbuotojas` you can store a `Programuotojas` instance in a `Darbuotojas` variable, but not the other way around. Imagine another class `Foo extends Darbuotojas`: you can't do ``` Darbuotojas foo = new Foo(); Programuotojas bar = (Programuotojas) foo; ``` Which is what you're trying to do here ``` Programuotojas darbProgramuotojas = (Programuotojas) darbuotojas; ``` --- ***Solution :*** It just seems that you used the wrong element (after checking with `instanceof`), and you want: ``` if (darbuotojuArray.get(userSelectsEmployeeID) instanceof Programuotojas) { //... Programuotojas darbProgramuotojas = (Programuotojas) darbuotojuArray.get(userSelectsEmployeeID); } ```
299956
I've very much a beginner in Dart and Flutter, and my project is to learn while building an app. I've learned the basics with an online class and I have basis in Java. Now, in my homepage, I'm trying to have 4 buttons take the whole screen. My current layout is to have them all take an equal amount of space and be stacked vertically. Problem is, I can't find a way to make the RaisedButton fill automatically. I've tried retrieving the height of the screen and dividing that between the buttons but it's not working. Here's how I do it : First, the homescreen layout: ``` class Homescreen extends StatelessWidget { Widget build(context) { double height = MediaQuery.of(context).size.height - 60; return Scaffold( body: SafeArea( child: ListView( children: [ PlanningButton(height), Container(margin: EdgeInsets.only(top: 20.0)), BookingButton(height), Container(margin: EdgeInsets.only(top: 20.0)), ModifyButton(height), Container(margin: EdgeInsets.only(top: 20.0)), CancelButton(height), ], ), ), ); } ``` As you can see, I pass the height argument to my buttons all built around this model: ``` Widget PlanningButton(double pixel_y) { return RaisedButton( padding: EdgeInsets.all(pixel_y / 8.0), onPressed: () {}, //goes to the planning screen color: Colors.blue[200], child: Text("Planning"), ); } ``` Yet, it's never quite right. I don't think it's taking into account the notification bar or the navigation bar. I suspect there is a workaround that involves wrapping the buttons into a container or using a different kind of widget that have button-like properties but I can't find what suits my needs. Thanks for any help ! edit: I'm basically trying to do [this](https://imgur.com/a/iHxKcJO)
Run `ls -la /home/dotta/Documents/Udemy\ Codes/surf_shop` and you will see that it's owned by `root`, because you ran `sudo npm install` at some point. To fix: 1. Delete node\_modules: `sudo rm -rf /home/dotta/Documents/Udemy\ Codes/surf_shop/node_modules` 2. Then fix ownership permissions: `sudo chown dotta:dotta -R /home/dotta/Documents/Udemy\ Codes` Then you will be able to `npm install`. Don't use sudo to install or make directories, else when you want to do stuff as a normal user you won't own the directory or file. Also don't start vs-code as root (i.e. `sudo code .`), else at any point within vs-code it will be making folders/files as root. As a side note, avoid spaces in folder or file names.
300002
Why is the infinite set from the axiom of infinity the natural numbers? Is there any reason such set was chosen? Couldn't the axiom yield a set that looks like $\Bbb R$ for example?
No reason other than canonicity. It is not hard to see in the presence of the other axioms of set theory that if there is an infinite set, then there is a countably infinite one. (And you do not need the axiom of choice for this to be the case, see [here](https://math.stackexchange.com/a/815199/462) for a short sketch.)
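For reference, the axiom is usually stated in the "inductive set" form below; the smallest set satisfying it is exactly the set of von Neumann naturals $\emptyset, \{\emptyset\}, \{\emptyset,\{\emptyset\}\}, \dots$, which is why that is the canonical choice rather than anything resembling $\Bbb R$:

$$\exists I\;\bigl(\emptyset \in I \;\wedge\; \forall x\,(x \in I \rightarrow x \cup \{x\} \in I)\bigr)$$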
300133
First of all, thank you in advance for your support. My problem; First I am successfully getting my specific parameters in Employer. However, I also have a constantly changing parameter list in request. I want to get them with Map too. My dto: ``` public class Employee { private String name; private MultipartFile document; } ``` ``` @RequestMapping(path = "/employee", method = POST, consumes = { MediaType.MULTIPART_FORM_DATA_VALUE }) public Mono<Employee> saveEmployee(@ModelAttribute Employee employee, Map<String,Object> otherValues) { System.out.println(otherValues.get("key1").toString()); return employeeService.save(employee); } ``` I attached a request example aslo. NOTE: I used @RequestParam, @RequestPart before Map<String,Object> otherValues like this; ``` @RequestParam Map<String,Object> otherValues @RequestPart Map<String,Object> otherValues ``` But I still couldn't get the rest of the data. [![enter image description here](https://i.stack.imgur.com/kbFyQ.png)](https://i.stack.imgur.com/kbFyQ.png)
When defining a function with parameters, make sure that these parameters don't come inside the definition of the function. The code also has some indentation and logical mistakes. This is a corrected version. ``` print("I can tell you the maximum of 3 numbers") num1 = input("Enter the first number:") num2 = input("Enter the second number:") num3 = input("Enter the third number:") def max_num(num1, num2, num3): if not num1.isdigit() or not num2.isdigit() or not num3.isdigit(): return "Wrong Input" else: num1,num2,num3=int(num1),int(num2),int(num3) if num1 >= num2 and num1 >= num3: return num1 elif num2 >= num1 and num2 >= num3: return num2 else: return num3 print(max_num(num1,num2,num3)) ```
300302
I am creating sample Spring MVC application.In this application i have one form when i submit the form i perform some action. My problem is after the form submit url is changed for example i have url as `http://localhost:8080/SampleWeb/sample/user` this is for my form display when i submit the form the url redirected to `http://localhost:8080/sample/user-by-name` in my jsp ``` <form:form method="POST" action="/sample/user"> <table> <tr> ``` In my controller ``` @Controller @RequestMapping("/sample") public class SampleController { @RequestMapping(value = "/user", method = RequestMethod.GET) return "redirect:" + "SampleWeb/sample/user-by-name"; ``` when i change the redirect url to "/SampleWeb/sample/user-by-name" it works in firefox but in chrome `http://localhost:8080/SampleWeb/SampleWeb/sample/user-by-name` it adds two times. if i give `return "redirect:" + "/sample/user-by-name";` means url will be `http://localhost:8080/sample/user-by-name` I am new to the Spring mvc. Please anyone can help me
There are a few things to say. 1. The model you are using to predict the most likely correction is a simple, cascaded probability model: There is a probability for `W` to be entered by the user, and a *conditional probability* for the misspelling `X` to appear when `W` was meant. The correct terminology for P(X|W) is *conditional probability*, not *likelihood*. (A *likelihood* is used when estimating how well a candidate probability model matches given data. So it plays a role when you machine-learn a model, not when you apply a model to predict a correction.) 2. If you were to use Levenshtein distance for P(X|W), you would get integers between 0 and the sum of the lengths of `W` and `X`. This would *not* be suitable, because you are supposed to use a *probability*, which has to be between 0 and 1. Even worse, the value you get would be the larger the more different the candidate is from the input. That's the opposite of what you want. 3. However, fortunately, `SequenceMatcher.ratio()` is not actually an implementation of Levenshtein distance. It's an implementation of a *similarity measure* and returns values between 0 and 1. The closer to 1, the more *similar* the two strings are. So this makes sense. 4. Strictly speaking, you would have to verify that `SequenceMatcher.ratio()` is actually suitable as a probability measure. For this, you'd have to check if the sum of all ratios you get for all possible misspellings of `W` is a total of 1. This is certainly not the case with `SequenceMatcher.ratio()`, so it is not in fact a mathematically valid choice. 5. However, it will still give you reasonable results, and I'd say it can be used for a practical and prototypical implementation of a spell-checker. There is a perfomance concern, though: Since `SequenceMatcher.ratio()` is applied to a pair of strings (a candidate `W` and the user input `X`), you might have to apply this to a huge number of possible candidates coming from the dictionary to select the best match. That will be very slow when your dictionary is large. To improve this, you'll need to implement your dictionary using a data structure that has *approximate string search* built into it. You may want to look at [this existing post](https://stackoverflow.com/questions/9139423/quickly-compare-a-string-against-a-collection-in-java) for inspiration (it's for Java, but the answers include suggestions of general algorithms).
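As a rough illustration of using `SequenceMatcher.ratio()` as the stand-in for P(X|W), here is a brute-force sketch over a tiny made-up word list (fine for a prototype, but as noted above this will be slow against a large dictionary):

```
import difflib

def best_correction(misspelling, dictionary):
    """Return the dictionary word most similar to the input, plus its ratio."""
    # A fuller spell-checker would also weight each candidate by its prior P(W);
    # this only uses the similarity ratio as the P(X|W) stand-in.
    best_word, best_ratio = None, 0.0
    for candidate in dictionary:
        ratio = difflib.SequenceMatcher(None, misspelling, candidate).ratio()
        if ratio > best_ratio:
            best_word, best_ratio = candidate, ratio
    return best_word, best_ratio

print(best_correction("recieve", ["receive", "recipe", "believe"]))
# -> ('receive', 0.857...), the highest-scoring candidate in this made-up list
```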
301267
I was wondering how does .NET's string.Remove() method operates regarding memory. If I have the following piece of code: ``` string sample = "abc"; sample = sample.Remove(0); ``` What will actually happen in memory? If I understand correctly, We've allocated a string consisting of 3 chars, and then we removed all of them on a new copy of the string, assigned the copy to the old reference, by that overriding it, and then what? What happens to those 3 characters? If we're not pointing to them anymore, and they're not freed up (at least not that I'm aware of), they will remain in memory as garbage. However, I'm sure the CLR has some way of detecting it and freeing them up eventually. So any of you guys know what happens here? Thanks in advance!
First, `Remove` is going to create a new string that has no characters in it (an empty string). This will involve the allocation of a char array and a `string` object to wrap it. Then you'll assign a reference to that string to your local variable. Since the string `"abc"` is a literal string, it'll still exist in the intern pool, unless you've disabled interning of compile time literal strings, so it won't be garbage collected. So in summary, you've created two new objects and changed the reference of the variable `sample` from the old object to the new one.
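A tiny sketch that makes both effects visible (the comments describe the usual behaviour; the exact allocations are an implementation detail of the runtime):

```
using System;

class Program
{
    static void Main()
    {
        string sample = "abc";
        string original = sample;      // second reference to the interned literal

        sample = sample.Remove(0);     // 'sample' now refers to an empty string

        Console.WriteLine(sample.Length);                            // 0
        Console.WriteLine(object.ReferenceEquals(original, "abc"));  // True - the literal "abc" is still interned
        Console.WriteLine(string.IsInterned("abc") != null);         // True
    }
}
```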
301344
I have a view pager that returns same fragment with different data from adapter every time i swap but despite of the fact that different data is received it displaying the same data over and over again, how can i make fragment update its data the structure is, there is a fragment that has a listview and when i click on the listview item it replaces a fragment that has viewpager and setting its adapter, the adapter is FragmentPagerAdapter and returns a fragment with argument now fragment uses its argument and display data, every time getview called of adapter returning the same fragment with different data here is the code: when list item clicked: ``` replaceFragment(ReportPaginationFragment.newInstance(i, isFromList), true, true); ``` ReportPaginationFragment: ``` viewPager.setAdapter(new ReportPaginationAdapter(getFragmentManager(), reportID, appointmentObjectArrayList, isFromList, listPosition )); ``` Adapter: ``` reportId = appointmentObjectArrayList.get(item++).getId(); return NewDrReportFragment.newInstance(reportId); ``` appointmentObjectArrayList is arraylist that has data Thanks in advance
Instead of `addFragment`, `replace` is a better choice. ``` FragmentManager fragmentManager = getSupportFragmentManager(); FragmentTransaction fragmentTransaction = fragmentManager.beginTransaction(); fragmentTransaction.replace(R.id.container_body, fragment); fragmentTransaction.commit(); ```
301626
I want to access the engine from inside my eventReceiver object. They are fellow members of the game class, but how do I reach it? ``` // game.h class game { public: game(); 3dEngine* engine; eventReceiver* receiver; }; // eventReceiver.h class eventReceiver { public: eventReceiver () {} virtual bool OnEvent (const SEvent& event) { ... case QUIT_BUTTON_PRESSED: >>> engine->quit(); // how to access engine from here?? return true; ... } }; ``` Should I use 'this' ? I don't understand why receiver can't see the engine.
Implement the class as a [Singleton](http://en.wikipedia.org/wiki/Singleton_pattern) and write a getter for the `engine` property. Accessing code could then look like: ``` game::getInstance()->getEngine()->quit(); ``` I would recommend, though, that you create a `quit()` method in the game class itself, hiding implementation details and allowing you to handle overall application shutdown and not just that of the `3dEngine`: ``` game::getInstance()->quit(); ``` If you don't want to implement the `game` class as a singleton, you could also pass a reference/pointer of a `game` object to the constructor of your event handler: ``` class CloseButtonHandler : public eventHandler { game& game; public: CloseButtonHandler(game& game) : game(game) { } virtual bool OnEvent(const SEvent& event){ ... game.getEngine()->quit(); } }; ```
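A minimal sketch of the singleton variant (the question's `3dEngine` is renamed `Engine3d` here because C++ identifiers can't start with a digit, and construction of the engine is left out):

```
class Engine3d {                       // stand-in for the question's 3dEngine
public:
    void quit() { /* shut the engine down */ }
};

class eventReceiver;                   // forward declaration, as in the question

class game {
public:
    static game* getInstance() {
        static game instance;          // created on first use, destroyed at program exit
        return &instance;
    }

    Engine3d* getEngine() { return engine; }
    void quit() { if (engine) engine->quit(); }   // hides the engine behind the game API

private:
    game() : engine(nullptr), receiver(nullptr) {}   // real code would create the engine here
    game(const game&) = delete;
    game& operator=(const game&) = delete;

    Engine3d*      engine;
    eventReceiver* receiver;
};

// usage from anywhere, e.g. inside OnEvent:
//   game::getInstance()->quit();
```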
301628
I have a string(variable) having value of database cell. Actually that value is html code. I have to print that code into html page. I found that how to convert HTML string into DOM string. But I don't know that How to convert string into HTML string. If anyone know answer then please explain or suggest me link from where I can understand from beginning. Thank You. Here is my code. ``` <?php $dbstr = mysql_escape_string("&lt;p&gt;Just install and use this app.&amp;nbsp;&lt;strong&gt;MENU -&amp;gt; SETTINGA&lt;/strong&gt;&lt;/p&gt; "); ?> <!DOCTYPE html> <html> <body> <script type="text/javascript"> function fun(str) { var div = document.getElementById('htmldiv'); div.innerHTML = str; var elements = div.childNodes; } </script> <div id="htmldiv"> <script type="text/javascript"> fun('<?php echo $dbstr; ?>'); </script> </div> </body> ``` I get out put like this. [![enter image description here](https://i.stack.imgur.com/aCbRb.png)](https://i.stack.imgur.com/aCbRb.png) But I want like this. [![enter image description here](https://i.stack.imgur.com/PU8xt.png)](https://i.stack.imgur.com/PU8xt.png)
try this ``` $dbstr = html_entity_decode("&lt;p&gt;Just install and use this app.&amp;nbsp;&lt;strong&gt;MENU -&amp;gt; SETTINGA&lt;/strong&gt;&lt;/p&gt;"); ```
301805
[LATER EDIT: As I found, the issue is related to Android version, not device type. So my code was perfect for Android till 4.0, not above. The fix is in the answer.] I have wasted at least 2 days with this problem. I have few webpages packed as an Android application. And working perfectly on browser, and on my Android devices, including Galaxy Tab 2. But not on Nexus. I don't have it, so I kept making APK and a friend tested. The error was at AJAX. The same code work for me, do not work for him (and few others, I don't know their devices). Below is the small test I use. As you can see, it's error free (this is my guess). Why is not working on all Android devices? I mention that I've compiled this code (the other refered files are here <http://jumpshare.com/b/57O6tH>) with Eclipse and also with Build.PhoneGap.com. Yet, the same result: the APK I get is working on some devices, not on others. Using \*file:///android\_asset/www/import.html\* did not help me. The error is 404, as the file is not there. But it is! Where is the mistake? It drives me crazy :). Why this code works fine in browser and as APK on my Galaxy Tab 2 (and on Samsung Gio), but not on Nexus (and other devices)? ``` <!DOCTYPE html> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <title>Test</title> <meta name="viewport" content="width=device-width, initial-scale=1"> <link href="jquery.mobile-1.2.0.min.css" rel="stylesheet"/> <script src="jquery-1.8.3.min.js" type='text/javascript'></script> <script src="jquery.mobile-1.2.0.min.js" type='text/javascript'></script> <script type='text/javascript'> //$(document).ready(function() { $(document).bind("pageinit", function(){ $("#buton").bind('click',function(){ $.mobile.showPageLoadingMsg(); $.ajax({ url:'import.html', datatype:'html', type: 'GET', success:function(html){ $.mobile.hidePageLoadingMsg(); $("#result").html(html); }, error: function(jqXHR, textStatus, errorThrown) { $("#result").html("ERRORS:"+errorThrown+"<hr>"+textStatus+"<hr>"+JSON.stringify(jqXHR)) $.mobile.hidePageLoadingMsg(); alert('Not working!!!'); } }) }); }); </script> </head> <body> <!-- Pagina de start --> <div data-role="page" id="start"> <div data-role="header" data-theme="b"> <h1>Test</h1> </div> <div data-role="content"> <button id="buton">AJAX!</button> <div id="result"></div> </div> </div> </body> </html> ```
I found what I needed. Android 4.1 and 4.2 introduce this new method: `getAllowUniversalAccessFromFileURLs`. Since it is not available on API levels below 16, the solution needs a few more lines to ensure that this non-existent method does not cause errors on earlier API levels. ``` public class MainActivity extends Activity { /** Called when the activity is first created. */ WebView webView; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); webView = (WebView) findViewById(R.id.webView); webView.setScrollBarStyle(WebView.SCROLLBARS_OUTSIDE_OVERLAY); webView.getSettings().setJavaScriptEnabled(true); int currentapiVersion = android.os.Build.VERSION.SDK_INT; if (currentapiVersion >= android.os.Build.VERSION_CODES.JELLY_BEAN){ fixNewAndroid(webView); } webView.setWebChromeClient(new WebChromeClient()); webView.loadUrl("file:///android_asset/www/index.html"); } @TargetApi(16) protected void fixNewAndroid(WebView webView) { try { webView.getSettings().setAllowUniversalAccessFromFileURLs(true); } catch(NullPointerException e) { } } } ```
301860
I have a Shapefile (.shp, .dbf, .shx, .prj) that's supposed to represent street data for San Francisco, which I'm writing a parser for. The problem is, when I parse the bounding box according to the ESRI format for Shapefile [here](https://www.esri.com/library/whitepapers/pdfs/shapefile.pdf), I get back around (5979762, 2085921) for the min coordinate of the bounding box (note that I actually get floats, but I rounded them down here for clarity). These are not UTM coordinates for SF, as I originally thought, and they're definitely not lat/lons. I'm not sure what units these are or how to interpret them. The content of the .prj file is as follows: ``` PROJCS[ "NAD_1983_StatePlane_California_III_FIPS_0403_Feet", GEOGCS[ "GCS_North_American_1983", DATUM[ "D_North_American_1983", SPHEROID["GRS_1980",6378137.0,298.257 222101] ], PRIMEM["Greenwich",0.0], UNIT["Degree",0.0174532925199433] ], PROJECTION["Lambert_Conformal_Conic"], PARAMETER["False_Easting",6561666.666666666], PARAMETER["False_Northing",1640416.666666667], PARAMETER["Central_Meridian",-120.5], PARAMETER["Standard_Parallel_1",37.06666666666667], PARAMETER["Standard_Parallel_2",38.43333333333333], PARAMETER["Latitude_Of_Origin",36.5], UNIT["Foot_US",0.3048006096012192] ] ``` My problem is that I really don't have any experience with .prj files, so I don't know how to interpret this format. Can anyone explain what this format is and how I should interpret it, or where I can find any more resources on how to do so?
Welcome to GIS SE! No worries here, you've got a shapefile whose coordinates subscribe to a [State Plane Coordinate System](https://en.wikipedia.org/wiki/State_Plane_Coordinate_System), recorded in feet. The minimum bounding geometry coordinate you've retrieved can be thought of as the bottom-left bounding geometry corner relative to sort of an arbitrarily placed origin point on an XY plane. So, instead of `0,0`, you start counting at `6561666, 1640416`, in feet, as the map units. Usage of State Plane Coordinate Systems is common and these coordinate systems are generally most accurate for the region called out in their [name](http://spatialreference.org/ref/esri/nad-1983-stateplane-california-iii-fips-0403-feet/). California appears to have [7 State Plane Zones](http://www.conservation.ca.gov/cgs/information/geologic_mapping/state_plane) available. For visualizations at the county or regional level, State Plane systems are often a solid choice (of course, this is not meant to start an argument about the *"best"* projection for any particular application). Following @DLY's advice, you are free to project these coordinates into an alternative coordinate system should you decide to. Best Luck.
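If you want a quick sanity check of the numbers while writing your parser, something along these lines should put the minimum bounding coordinate back near San Francisco; this assumes the `pyproj` library and that EPSG:2227 (NAD83 / California zone 3, US survey feet) matches the .prj shown, so verify that against your file:

```
from pyproj import Transformer

# EPSG:2227 = NAD83 / California zone 3 (US survey feet), EPSG:4326 = WGS84
to_wgs84 = Transformer.from_crs("EPSG:2227", "EPSG:4326", always_xy=True)

lon, lat = to_wgs84.transform(5979762, 2085921)   # min corner from the question
print(lat, lon)   # should land roughly at San Francisco (about 37.7 N, -122.5 W)
```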
302046
I need assistance in retrieving the element country and Its value 1 in the first file and element Name and Its value XYZ in the second file. I am taking Input from app.config and i have stored both the files is C:\Test\ and the file are. Sample.xml ``` <?xml version="1.0" encoding="utf-8"?> <env:Contentname xmlns:env="http://data.schemas" xmlns="http://2013-02-01/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <env:Body> <env:Content action="Hello"> <env:Data xsi:type="Yellow"> </env:Data> </env:Content> <env:Content action="Hello"> <env:Data xsi:type="Red"> <Status xmlns="http://2010-10-10/"> <Id >39681</Id> <Name>Published</Name> </Status> </env:Data> </env:Content> <env:Content action="Hello"> <env:Data xsi:type="green"> <Document> <country>1</country> </Document> </env:Data> </env:Content> </env:Body> </env:Contentname> ``` and My 2nd file is Sample1.xml ``` <?xml version="1.0" encoding="utf-8"?> <env:Contentname xmlns:env="http://data.schemas" xmlns="http://2013-02-01/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <env:Body> <env:Content action="Hello"> <env:Data xsi:type="Yellow"> </env:Data> </env:Content> <env:Content action="Hello"> <env:Data xsi:type="Red"> <Status xmlns="http://2010-10-10/"> <Id >39681</Id> <Name>Published</Name> </Status> </env:Data> </env:Content> <env:Content action="Hello"> <env:Data xsi:type="green"> <Document> <Name>XYZ</Name> </Document> </env:Data> </env:Content> </env:Body> </env:Contentname> ``` I Tried This, ``` using System; using System.Collections.Generic; using System.Xml.Linq; using System.Linq; using System.Configuration; using System.IO; namespace LinEx { class Program { static void Main(string[] args) { String ReadFile = System.Configuration.ConfigurationManager.AppSettings["Check"]; DirectoryInfo di = new DirectoryInfo(ReadFile); FileInfo[] rgFiles = di.GetFiles("*.xml"); foreach (FileInfo fi in rgFiles) { XElement root = XElement.Load(fi.FullName); XNamespace aw = "http://data.schemas"; XNamespace kw = "http://www.w3.org/2001/XMLSchema-instance"; XNamespace ns = "http://2013-02-01/"; var result = from x in root.Descendants(aw + "Content") where (string)x.Attribute("action") == "Hello" from y in x.Descendants(aw + "Data") where (string)y.Attribute(kw + "type") == "green" select new { Firstel = y.Element(ns + "Document").Element(ns + "country").Value, Secondel=y.Element(ns+"Document").Element(ns+"Name").Value, }; foreach (var item in result) { Console.WriteLine("Country value={0}", item.Firstel); Console.WriteLine("Name value={0}", item.Secondel); } } Console.ReadLine(); } } } ``` But I am getting error(Object reference not set to an instance of an object).Kindly help me with this and Thanks in advance.
`gets` is deprecated, use `fgets` and replace the trailing newline with a white space: ``` void addDataToTextFile(FILE *fileptr) { char text[100]; char *ptr; fileptr = fopen("textfile.txt", "a"); printf("\n\nPlease enter some text: "); fflush(stdin); /* You want fflush(stdout); */ fgets(text, sizeof text, stdin); if ((ptr = strchr(text, '\n')) != NULL) *ptr = ' '; fprintf(fileptr, text); /* Wrong, use fprintf(fileptr, "%s", text); */ fclose(fileptr); printf("\n\nText has been added"); getch(); return; } ```
302100
I am tearing my hair out trying to center the text right in the middle of the li container. It seems like it centers left to right. But top to bottom it is not. I have tried to add a span to the text and move it via the margin, and padding. I have been unable to move the text at all. I expect the text to be centered horizontally and vertically, right in the center of the blue box. Right now, the text is centered left to right, but not centered top to bottom. ``` <html> <style> .follow-button { height: 40px; width: 100px; background: #0081F2; border: #0081F2; color: #fff; font-weight: bold; text-align: center; margin: 0; padding: 0; } .follow-button:hover { cursor: pointer; background: #0059D0; border-color: #0059D0; } #follow-button-nav { list-style: none; margin-left: 0; padding-left: 0; } </style> <ul id="follow-button-nav"> <li class="follow-button">Follow</li> </ul> </html> ```
Answering the latter requirement, please try: ``` awk ' { split($0, a, "] +") gsub("[[,]", "", a[1]) printf("%s %.21f\n", a[1], a[2]) }' file ```
302436
I'm using a VF page as a "dispatcher" to display different pages based on an object recordtype. ``` <!-- OverrideCaseView If Case is a XXXX, show in the XXXXX user interface Otherwise, show the normal Case page. --> <apex:page standardController="Case" action="{!IF(case.RecordType.DeveloperName='XXXXXXXXX', URLFOR($Page.XXXXXXOverview, null, [id=case.Id], true), URLFOR($Action.Case.View, case.Id, null, true))}"> <apex:outputPanel style="display:none;" > {!case.RecordType.DeveloperName} </apex:outputPanel> </apex:page> ``` Up until now, I've just been doing `View` overrides, but I'm adding some `edit` overrides as well. When they're complex, it makes sense to have a separate dispatcher page, and I'm trying to figure out if I could re-use the same page for simple cases. **When writing a VF page to be used as a button/action override, is there any way to tell which action you're overriding? Is this information included in any parameters or global variables?**
**Visualforce Pages and Action Overrides.** Currently there is no context other than that passed in the standard controller given to you by the platform. Which really just gives you the SObject and not the action invoked apone it. However this allows your controller code to be generic at least. Allowing you to reuse a controller between different pages... You can only associate pages to Action Functions that use standard controllers and hence the object needs to be explicitly stated on the page, preventing the pages from being reused between other objects. Context wise, you can determine your current page however. So it then becomes a question of how light you can make your Action Override pages.... **Dispatch Suggestion "Using very small pages for mulitple overrides"**. Create an extension controller, **MyDispatchController**, with a '**dispatch**' method on it (returning PageReference). Then create a VF page for each action you want to override and configure the Action Override to each. As a minimum it would look like this for a Test object... **Dispatch Action Pages and Controller:** Create for example, **testdispatchedit.page**, **testdispatchnew.page** and **testdispatchview.page** each looking like the following... ``` <apex:page standardController="Test__c" extensions="MyDispatchController" action="{!dispatch}"/> ``` Then in the '**dispatch**' method of the controller put your object and/or action dispatch logic... ``` public with sharing class MyDispatchController { public MyDispatchController(ApexPages.StandardController stdController) { } public PageReference dispatch() { if(ApexPages.currentPage().getUrl().contains('view')) // Some disaptching logic resulting in a PageReference // (could also reference standardController.getRecord() for object type)... return Page.testviewa; else if(ApexPages.currentPage().getUrl().contains('edit')) // Some disaptching logic resulting in a PageReference // (could also reference standardController.getRecord() for object type)... return Page.testedita; else if(ApexPages.currentPage().getUrl().contains('new')) // Some disaptching logic resulting in a PageReference // (could also reference standardController.getRecord() for object type)... return Page.testnewa; return null; } } ``` For the pages that you dispatch to, define these as follows... **testedita.page:** ``` <apex:page standardController="Test__c" extensions="MyDispatchController,TestEditAController"> Edit Page A </apex:page> ``` **testnewa.page:** ``` <apex:page standardController="Test__c" extensions="MyDispatchController,TestNewAController"> New Page A </apex:page> ``` **testviewa.page:** ``` <apex:page standardController="Test__c" extensions="MyDispatchController,TestViewAController"> View Page A </apex:page> ``` **VF Bindings and Standard Controller Record:** Note that the standard controller record is initialised with the fields as per those referenced on the disapatch page. Which of course does not reference any fields. This means that standardController.getRecord() will really only have the Id field populated. You could resolve this by adding some bindings to the dispatch page (these won't be displayed), say for some common fields all your dispatch to pages need. So be aware that the final controllers will likely have to query themselves for the information they need and expose this to the page themselves. **Summary:** This solution leverages a feature of Visualforce that allows you to share controller instances between pages. So long as they share the same controllers. 
This allows your final pages to see the same instance of MyDispatchController, should you want to store anything in it during the dispatch and reference it later. Hope this helps!
302483
I'm trying to do very basic form validation with Bootstrap 4. For some reason, when adding class 'is-valid' to my 2nd input in my example, because the 1st input has the 'is-invalid' class, the 2nd input will have green borders (as it should since it's valid!), **BUT** it'll also have the invalid-feedback div showing (the "this field is required" message!). See: [![enter image description here](https://i.stack.imgur.com/B38LW.png)](https://i.stack.imgur.com/B38LW.png) I'm not sure what I'm doing wrong here... Here's the code: ```js /** * AJAX Post script */ const ERROR_TYPE_FATALERROR = 1; const ERROR_TYPE_INPUTERROR = 2; const ERROR_TYPE_GLOBALMESSAGE = 3; // Run through the validate() function ONLY once the form as been submitted once! // Otherwise, user will get validation right away as he types! Just a visual thing ... var formSubmittedOnce = false; /** * submitFormData() * Serialize and post form data with an AJAX call * * Example: onClick="submitFormData('frm1',['username','email'])" * * @param string formid essentially the 'id' of the container holding all form elements (i.e. <tr id="rowfrm_1">, <form id='frm1'>, etc.) * @param array fields list of field names that may produce input errors (i.e. ['username','email'] ) */ function submitFormData(formid, fields) { // flag form was submitted once! formSubmittedOnce = true; // ---------------------------------- // first rehide all error containers // ---------------------------------- $('#fatalError').removeClass('d-block'); $('#fatalErrorID').removeClass('d-block'); $('#fatalErrorTrace').removeClass('d-block'); $('#fatalErrorGoBack').removeClass('d-block'); $('#globalMessage').removeClass('d-block'); $('#globalMessageID').removeClass('d-block'); $('#globalMessageTrace').removeClass('d-block'); $('#globalMessageFooter').removeClass('d-block'); $('#globalMessageMailLink').removeClass('d-block'); $('#globalMessageGoBackLink').removeClass('d-block'); // rehide error containers of all inputs that might produce errors if (fields != null){ for (const f of fields) { $('#' + f + '_inputError').removeClass('d-block'); } } // ---------------------------------- // loop form elements and validate required fields // ---------------------------------- var formNode = $("#"+formid); var formInputs = formNode.find("select, textarea, input"); var submit = true; for(var i = 0; i < formInputs.length; ++i) { var input = formInputs[i]; // validate fields if( validate(input) === false ){ submit = false; } } if(submit === true) { // ---------------------------------- // get form data and serialize it! // ---------------------------------- // formid comes from a <form>. just serialize it if( formNode.prop("tagName") === "FORM" ){ var formData = formNode.serialize(); } // formid doesn't come from a <form> else { // get all form control var myInputs = formNode.clone(); // bug with clone() and SELECT controls: it'll only get the value of the option having the 'selected' attribute (the default value) // this hack will change the value clone() got to the actual user selected value, and not the default set value! formNode.find('select').each(function(i) { myInputs.find('select').eq(i).val($(this).val()); }) // create a dummy form, append all inputs to it and serialize it. var formData = $('<form>').append(myInputs).serialize(); } // ---------------------------------- // POST ! 
// ---------------------------------- $.ajax({ type: 'POST', url: $(location).attr('href'), data: formData, dataType : "json", }).done(function(response) { // get response if(response) { // if we got success, redirect if we got a redirect url! if ( response.success != null ) { if (typeof response.success === "string") { window.location.replace(response.success); } } // if anything else, PHP returned some errors or a message to display (i.e. 'data saved!') else { showMessages(response); } } // Successful post, but no response came back !? // assume success, since no 'success' response came back, thus keeping same page as is else { console.warn("Post sent, but no response came back!? Assuming successful post..."); } }).fail(function(xhr, status, error) { // we get here if we don't have a proper response/json sent! console.error("Ajax failed: " + xhr.statusText); console.error(status); console.error(error); var ajaxError = { 'type' : ERROR_TYPE_FATALERROR, 'message' : '<strong>Ajax failure!</strong><br/><br/>' + status + '<br/><br/>' + error, 'trace' : null, 'id' : null, 'goback' : null, 'adminMailtoLnk' : 'mailto:' + 'ravenlost2@gmail.com' }; showMessages(ajaxError); }); } } /** * showMessages() * show error messages in page based on JSON response * @param response JSON object holding response with (error) messages to display */ function showMessages(response){ // error type switch (response.type) { // ---------------------------- // GLOBAL MESSAGE // ---------------------------- case ERROR_TYPE_GLOBALMESSAGE: $('#globalMessage').addClass('d-block'); // set global message header message type $('#globalMessage').removeClass("error warning info"); $('#globalMessage').addClass(response.gmType); $('#globalMessageIcon').removeClass("fa-exclamation-triangle fa-info-circle"); $('#globalMessageIcon').addClass(response.gmIcon); $('#globalMessageTitle').empty(); $('#globalMessageTitle').append(response.gmTitle); // set message $('#globalMessagePH').empty(); $('#globalMessagePH').append(response.message); // set uniq error id if (response.id != null) { $('#globalMessageID').addClass('d-block'); $('#globalMessageIDPH').empty(); $('#globalMessageIDPH').append(response.id); } // set stacktrace if(response.trace != null) { $('#globalMessageTrace').addClass('d-block'); $('#globalMessageTracePH').empty(); $('#globalMessageTracePH').append(response.trace); } // set footer if( (response.showContactAdmin == true) || (response.goback != null) ) { $('#globalMessageFooter').addClass('d-block'); // contact admin if(response.showContactAdmin == true) { $('#globalMessageMailLink').addClass('d-block'); $('#globalMessageMailLinkPH').attr('href', response.adminMailtoLnk); } // go back if(response.goback != null){ $('#globalMessageGoBackLink').addClass('d-block'); $('#globalMessageGoBackLinkPH').attr('href', response.goback); } } break; // ---------------------------- // FATAL ERROR // ---------------------------- case ERROR_TYPE_FATALERROR: // hide content if we got a fatal as to prevent user from fiddling around and not reading the message! 
$('#content').addClass('d-none'); $('#fatalError').addClass('d-block'); // set message $('#fatalErrorMessagePH').empty(); $('#fatalErrorMessagePH').append(response.message); // reset mailto link $('#fatalErrorMailLink').attr('href', response.adminMailtoLnk); // set stacktrace if (response.trace != null) { $('#fatalErrorTrace').addClass('d-block'); $('#fatalErrorTracePH').empty(); $('#fatalErrorTracePH').append(response.trace); } // set uniq error id if (response.id != null) { $('#fatalErrorID').addClass('d-block'); $('#fatalErrorIDPH').empty(); $('#fatalErrorIDPH').append(response.id); } // set 'go back' url if(response.goback != null) { $('#fatalErrorGoBack').addClass('d-block'); $('#fatalErrorGoBackLink').attr('href', response.goback); } break; // ---------------------------- // INPUT ERROR // ---------------------------- case ERROR_TYPE_INPUTERROR: for (var field in response.fields) { var msg = eval('response.fields.' + field); $('#' + field + '_inputError').addClass('d-block') $('#' + field + '_inputError_message').empty(); $('#' + field + '_inputError_message').append(msg); } break; default: console.error('Got an invalid error type from the response!'); } } /** * validate() * Validate if field is empty or not * @param input form element * @return boolean */ function validate(input) { if(formSubmittedOnce === true) { if( input.hasAttribute('required') ) { if(input.value.trim() === '') { input.classList.remove('is-valid'); input.classList.add('is-invalid'); return false; } else { input.classList.remove('is-invalid'); input.classList.add('is-valid'); return true; } } else { // if we get here, then any other inputs not marked as 'required' are valid input.classList.add('is-valid'); } } } ``` ```html <html> <head> <link href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" rel="stylesheet" type="text/css"> <script src="https://code.jquery.com/jquery-3.5.1.min.js"></script> </script> </head> <body> <form id="testfrm" class="form-group"> Username: <input type="text" name="username" aria-describedby="username_required username_inputError" class="form-control is-invalid" oninput="validate(this)" required/><br> <div id="username_required" class="pl-1 invalid-feedback"> This field is required! </div> <!-- if bad username format or already taken, print form input error --> <div id="username_inputError" class="col alert alert-danger alert-dismissible fade show mt-2 py-2 pl-3 pr-5 text-left d-none"> <small> <strong>Error!</strong> <span id="username_inputError_message"></span> <button type="button" aria-label="Close" class="close pt-1 pr-2" onclick="$('#username_inputError').removeClass('d-block').addClass('d-none');">×</button> </small> </div> Email: <input type="text" name="email" aria-describedby="email_required email_inputError" class="form-control is-valid" oninput="validate(this)" required/><br> <div id="email_required" class="pl-1 invalid-feedback"> This field is required! 
</div> <!-- if bad email format or already taken, print form input error --> <div id="email_inputError" class="col alert alert-danger alert-dismissible fade show mt-2 py-2 pl-3 pr-5 text-left d-none"> <small> <strong>Error!</strong> <span id="email_inputError_message"></span> <button type="button" aria-label="Close" class="close pt-1 pr-2" onclick="$('#email_inputError').removeClass('d-block').addClass('d-none');">×</button> </small> </div> Comment: <input type="text" name="comment" class="form-control is-valid"><br> <input type="button" value="Submit" onclick="submitFormData('testfrm',['username','email'])" class="is-valid"> </form> <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/js/bootstrap.bundle.min.js" integrity="sha384-6khuMg9gaYr5AxOqhkVIODVIvm9ynTT5J4V1cfthmT+emCG6yVmEZsRHdxlotUnm" crossorigin="anonymous"></script> </body> </html> ``` If anyone can shed a light on this.. Cheers! Pat
The reason its not working is that you are not wrapping your label and input in `form-group` div Validation has changed the way it used to be for bootstrap - Which means the `is-valid` and i`s invalid` does not know where to look so when its not in the form-group div wrapped it applies the `is-invalid` message to all the matching `divs`. I have added `label` to make it nice instead of using just Email: `<input>` If you submit the form now without any values it will display the errors \_ the error will disappear as soon as you type into the input. Run Snippet below to see it woking. ```js /** * AJAX Post script */ const ERROR_TYPE_FATALERROR = 1; const ERROR_TYPE_INPUTERROR = 2; const ERROR_TYPE_GLOBALMESSAGE = 3; // Run through the validate() function ONLY once the form as been submitted once! // Otherwise, user will get validation right away as he types! Just a visual thing ... var formSubmittedOnce = false; /** * submitFormData() * Serialize and post form data with an AJAX call * * Example: onClick="submitFormData('frm1',['username','email'])" * * @param string formid essentially the 'id' of the container holding all form elements (i.e. <tr id="rowfrm_1">, <form id='frm1'>, etc.) * @param array fields list of field names that may produce input errors (i.e. ['username','email'] ) */ function submitFormData(formid, fields) { // flag form was submitted once! formSubmittedOnce = true; // ---------------------------------- // first rehide all error containers // ---------------------------------- $('#fatalError').removeClass('d-block'); $('#fatalErrorID').removeClass('d-block'); $('#fatalErrorTrace').removeClass('d-block'); $('#fatalErrorGoBack').removeClass('d-block'); $('#globalMessage').removeClass('d-block'); $('#globalMessageID').removeClass('d-block'); $('#globalMessageTrace').removeClass('d-block'); $('#globalMessageFooter').removeClass('d-block'); $('#globalMessageMailLink').removeClass('d-block'); $('#globalMessageGoBackLink').removeClass('d-block'); // rehide error containers of all inputs that might produce errors if (fields != null) { for (const f of fields) { $('#' + f + '_inputError').removeClass('d-block'); } } // ---------------------------------- // loop form elements and validate required fields // ---------------------------------- var formNode = $("#" + formid); var formInputs = formNode.find("select, textarea, input"); var submit = true; for (var i = 0; i < formInputs.length; ++i) { var input = formInputs[i]; // validate fields if (validate(input) === false) { submit = false; } } if (submit === true) { // ---------------------------------- // get form data and serialize it! // ---------------------------------- // formid comes from a <form>. just serialize it if (formNode.prop("tagName") === "FORM") { var formData = formNode.serialize(); } // formid doesn't come from a <form> else { // get all form control var myInputs = formNode.clone(); // bug with clone() and SELECT controls: it'll only get the value of the option having the 'selected' attribute (the default value) // this hack will change the value clone() got to the actual user selected value, and not the default set value! formNode.find('select').each(function(i) { myInputs.find('select').eq(i).val($(this).val()); }) // create a dummy form, append all inputs to it and serialize it. var formData = $('<form>').append(myInputs).serialize(); } // ---------------------------------- // POST ! 
// ---------------------------------- $.ajax({ type: 'POST', url: $(location).attr('href'), data: formData, dataType: "json", }).done(function(response) { // get response if (response) { // if we got success, redirect if we got a redirect url! if (response.success != null) { if (typeof response.success === "string") { window.location.replace(response.success); } } // if anything else, PHP returned some errors or a message to display (i.e. 'data saved!') else { showMessages(response); } } // Successful post, but no response came back !? // assume success, since no 'success' response came back, thus keeping same page as is else { console.warn("Post sent, but no response came back!? Assuming successful post..."); } }).fail(function(xhr, status, error) { // we get here if we don't have a proper response/json sent! console.error("Ajax failed: " + xhr.statusText); console.error(status); console.error(error); var ajaxError = { 'type': ERROR_TYPE_FATALERROR, 'message': '<strong>Ajax failure!</strong><br/><br/>' + status + '<br/><br/>' + error, 'trace': null, 'id': null, 'goback': null, 'adminMailtoLnk': 'mailto:' + 'ravenlost2@gmail.com' }; showMessages(ajaxError); }); } } /** * showMessages() * show error messages in page based on JSON response * @param response JSON object holding response with (error) messages to display */ function showMessages(response) { // error type switch (response.type) { // ---------------------------- // GLOBAL MESSAGE // ---------------------------- case ERROR_TYPE_GLOBALMESSAGE: $('#globalMessage').addClass('d-block'); // set global message header message type $('#globalMessage').removeClass("error warning info"); $('#globalMessage').addClass(response.gmType); $('#globalMessageIcon').removeClass("fa-exclamation-triangle fa-info-circle"); $('#globalMessageIcon').addClass(response.gmIcon); $('#globalMessageTitle').empty(); $('#globalMessageTitle').append(response.gmTitle); // set message $('#globalMessagePH').empty(); $('#globalMessagePH').append(response.message); // set uniq error id if (response.id != null) { $('#globalMessageID').addClass('d-block'); $('#globalMessageIDPH').empty(); $('#globalMessageIDPH').append(response.id); } // set stacktrace if (response.trace != null) { $('#globalMessageTrace').addClass('d-block'); $('#globalMessageTracePH').empty(); $('#globalMessageTracePH').append(response.trace); } // set footer if ((response.showContactAdmin == true) || (response.goback != null)) { $('#globalMessageFooter').addClass('d-block'); // contact admin if (response.showContactAdmin == true) { $('#globalMessageMailLink').addClass('d-block'); $('#globalMessageMailLinkPH').attr('href', response.adminMailtoLnk); } // go back if (response.goback != null) { $('#globalMessageGoBackLink').addClass('d-block'); $('#globalMessageGoBackLinkPH').attr('href', response.goback); } } break; // ---------------------------- // FATAL ERROR // ---------------------------- case ERROR_TYPE_FATALERROR: // hide content if we got a fatal as to prevent user from fiddling around and not reading the message! 
$('#content').addClass('d-none'); $('#fatalError').addClass('d-block'); // set message $('#fatalErrorMessagePH').empty(); $('#fatalErrorMessagePH').append(response.message); // reset mailto link $('#fatalErrorMailLink').attr('href', response.adminMailtoLnk); // set stacktrace if (response.trace != null) { $('#fatalErrorTrace').addClass('d-block'); $('#fatalErrorTracePH').empty(); $('#fatalErrorTracePH').append(response.trace); } // set uniq error id if (response.id != null) { $('#fatalErrorID').addClass('d-block'); $('#fatalErrorIDPH').empty(); $('#fatalErrorIDPH').append(response.id); } // set 'go back' url if (response.goback != null) { $('#fatalErrorGoBack').addClass('d-block'); $('#fatalErrorGoBackLink').attr('href', response.goback); } break; // ---------------------------- // INPUT ERROR // ---------------------------- case ERROR_TYPE_INPUTERROR: for (var field in response.fields) { var msg = eval('response.fields.' + field); $('#' + field + '_inputError').addClass('d-block') $('#' + field + '_inputError_message').empty(); $('#' + field + '_inputError_message').append(msg); } break; default: console.error('Got an invalid error type from the response!'); } } /** * validate() * Validate if field is empty or not * @param input form element * @return boolean */ function validate(input) { if (formSubmittedOnce === true) { if (input.hasAttribute('required')) { if (input.value.trim() == '') { input.classList.remove('is-valid'); input.classList.add('is-invalid'); return false; } else { input.classList.remove('is-invalid'); input.classList.add('is-valid'); return true; } } else { // if we get here, then any other inputs not marked as 'required' are valid input.classList.add('is-valid'); } } } ``` ```html <html> <head> <!-- Latest compiled and minified CSS --> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.5.0/css/bootstrap.min.css"> <!-- jQuery library --> <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script> <!-- Popper JS --> <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.16.0/umd/popper.min.js"></script> <!-- Latest compiled JavaScript --> <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.5.0/js/bootstrap.min.js"></script> </head> <body> <form id="testfrm" class="form-group"> <div class="form-group"> <label class="form-control-label" for="username_required">Username</label> <input type="text" name="username" aria-describedby="username_required username_inputError" class="form-control is-invalid" oninput="validate(this)" required/><br> <div id="username_required" class="pl-1 invalid-feedback"> This field is required! </div> <!-- if bad username format or already taken, print form input error --> <div id="username_inputError" class="col alert alert-danger alert-dismissible fade show mt-2 py-2 pl-3 pr-5 text-left d-none"> <small> <strong>Error!</strong> <span id="username_inputError_message"></span> <button type="button" aria-label="Close" class="close pt-1 pr-2" onclick="$('#username_inputError').removeClass('d-block').addClass('d-none');">×</button> </small> </div> <div class="form-group"> <label class="form-control-label" for="email_required">Email</label> <input type="text" name="email" aria-describedby="email_required email_inputError" class="form-control is-valid" oninput="validate(this)" required/><br> <div id="email_required" class="pl-1 invalid-feedback"> This field is required! 
</div> </div> <!-- if bad email format or already taken, print form input error --> <div id="email_inputError" class="col alert alert-danger alert-dismissible fade show mt-2 py-2 pl-3 pr-5 text-left d-none"> <small> <strong>Error!</strong> <span id="email_inputError_message"></span> <button type="button" aria-label="Close" class="close pt-1 pr-2" onclick="$('#email_inputError').removeClass('d-block').addClass('d-none');">×</button> </small> </div> Comment: <input type="text" name="comment" class="form-control is-valid"><br> <input type="button" value="Submit" onclick="submitFormData('testfrm',['username','email'])" class="is-valid"> </form> </body> </html> ```
302821
I have a combo-box that contains lots of entries like this small extract

```
1R09ST75057
1R11ST75070
1R15ST75086
1R23ST75090
2R05HS75063
2R05ST75063
3R05ST75086
2R07HS75086
```

The user now enters some information in the form that results in a string being produced that has a wildcard (unknown) character in it at the second character position

```
3?05ST75086
```

I now want to take this string and search/filter through the combo-box list and be left with this item as selected, or a small set of strings. If I know the string without the wildcard I can use the following to select it in the Combo-box.

```
cmbobx_axrs75.SelectedIndex = cmbobx_axrs75.Items.IndexOf("2R05HS75063");
```

I thought I could first create a small subset that all have the same first char, then make a substring of each minus the first two chars and check this, but I can have a large number of entries and this will take too much time. There must be an easier way? Any ideas how I can do this with the wildcard in the string, please?

Added info: I want to end up with the selected item in the Combobox matching my string. I choose from items on the form and end up with the string 3?05ST75086. I now want to take this and search to find which one it is and select it. So from the list below

```
1R05ST75086
2R05ST75086
3R05ST75086
6R05ST75086
3R05GT75086
3R05ST75186
```

I would end up with the selected item in the Combo-box as

```
3R05ST75086
```
You could use regular expressions. Something like this (note that in a regular expression the single-character wildcard is `.`, not `*`, and the pattern should be anchored with `^`/`$` so it has to match the whole entry):

```
string[] data = new string[] { "1R09ST75057", "1R11ST75070", "1R15ST75086", "1R23ST75090",
                               "2R05HS75063", "2R05ST75063", "3R05ST75086", "2R07HS75086" };

// '.' matches exactly one unknown character in the second position.
string pattern = "^3.05ST75086$";

// Requires 'using System.Linq;' for Where()/ToArray().
string[] results = data
    .Where(x => System.Text.RegularExpressions.Regex.IsMatch(x, pattern))
    .ToArray();
```
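Since your goal is to end up with the matched entry selected in the combo-box, you can feed the result straight back into `Items.IndexOf` — a small sketch, reusing the `cmbobx_axrs75` control and the `results` array from above:

```
// Select the match in the ComboBox when the pattern narrowed it down to one entry;
// otherwise clear the selection so the ambiguity stays visible.
if (results.Length == 1)
{
    cmbobx_axrs75.SelectedIndex = cmbobx_axrs75.Items.IndexOf(results[0]);
}
else
{
    cmbobx_axrs75.SelectedIndex = -1;
}
```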
303112
This is basically a follow-on to the question [Newtonsoft Object → Get JSON string](https://stackoverflow.com/questions/6690395/newtonsoft-object-%E2%86%92-get-json-string) . I have an object that looks like this: ``` [JsonConverter(typeof(MessageConverter))] public class Message { public Message(string original) { this.Original = original; } public string Type { get; set; } public string Original { get; set; } } ``` My requirement is to store the original JSON string as part of the object when I initialise it. I have been able to (partially) successfully achieve this using a custom `JsonConverter` and then basically doing something like this in the `JsonConverter`: ``` public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer) { if (reader.TokenType == Newtonsoft.Json.JsonToken.Null) return null; JObject obj = JObject.Load(reader); return new Message(obj.ToString(Formatting.None)) { Type = obj["type"].ToString() }; } ``` However the problem I run into is when I try to inherit from `Message` with something like ``` public class CustomMessage : Message { public string Prop1 { get; set; } } ``` For obvious reasons my code fails because it tries to return a new `Message()` not a new `CustomMessage()`. So short of implementing a big if statement with all my sub-types and manually binding using something like `JObject["prop"].ToObject<T>()` how can I successfully populate the `Original` property while still binding all my sub-type values using the default deserialisation? ***NOTE: The reason this is being done is because the original message might contain data that isn't actually being bound so I can't just add a property which serialises the object as it stands.***
You want `fillna` with `backfill` First, you need to make sure your empty values are actually null values recognized by pandas, so ``` import pandas as pd import numpy as np df = df.replace('', np.NaN).fillna(method='bfill', limit=3).replace(np.NaN, '') trial 0 1 2 3 4 a 5 a 6 a 7 a 8 9 10 11 12 a 13 a 14 a 15 a 16 17 18 19 20 a 21 a 22 a 23 a ```
303409
Rather annoying issue, but it seems as though there's some default behavior on a scroll view, that when you change orientation, it auto-scrolls to top. `ShouldScrollToTop` is a property available on a `UIScrollView` but doesn't seem to be available in a XF ScrollView. But it does it anyway? Is there a way to stop?
I just did a quick test on this and only found it to be an issue on iOS. On Android the ScrollView did not scroll to the top after rotation. I did not test UWP/WinPhone. If it is iOS only, a bug should be filed against Xamarin.Forms. In the meantime, here is a workaround for iOS. You will need to create a custom renderer [1] for your ScrollView. And the renderer code would look like this: ``` public class MyScrollViewRenderer : ScrollViewRenderer { CGPoint offset; protected override void OnElementChanged(VisualElementChangedEventArgs e) { base.OnElementChanged(e); UIScrollView sv = NativeView as UIScrollView; sv.Scrolled += (sender, evt) => { // Checking if sv.ContentOffset is not 0,0 // because the ScrollView resets the ContentOffset to 0,0 when rotation starts // even if the ScrollView had been scrolled (I believe this is likely the cause for the bug). // so you only want to set offset variable if the ScrollView is scrolled away from 0,0 // and I do not want to reset offset to 0,0 when the rotation starts, as it would overwrite my saved offset. if (sv.ContentOffset.X != 0 || sv.ContentOffset.Y != 0) offset = sv.ContentOffset; }; // Subscribe to the oreintation changed event. NSNotificationCenter.DefaultCenter.AddObserver(this, new Selector("handleRotation"), new NSString("UIDeviceOrientationDidChangeNotification"), null ); } [Export("handleRotation")] void HandleRotation() { if (offset.X != 0 || offset.Y != 0) { UIScrollView sv = NativeView as UIScrollView; // Reset the ScrollView offset from the last saved offset. sv.ContentOffset = offset; } } } ``` [1] <https://developer.xamarin.com/guides/xamarin-forms/custom-renderer/>
303567
I have been working on playing videos using VideoView in Android. OnPause of fragment i am pausing the video and storing the seektime ``` @Override public void onPause() { super.onPause(); if (videoview != null && videoview.isPlaying()) { videoview.pause(); seekTime = videoview.getCurrentPosition(); } } ``` OnResume i am trying to resume the video using - ``` @Override public void onResume() { super.onResume(); videoview.seekTo(seekTime); //vvOnboardingVideos.resume();// this is not working for now - need to investigate videoview.start(); } ``` The video plays from the beginning. Is there anything i am doing wrong. Please suggest.
Try reversing the order of the operations:

```
@Override
public void onResume() {
    super.onResume();
    videoview.start();
    videoview.seekTo(seekTime);
}
```
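If reversing the calls alone doesn't help — for instance when the video source is set again as the fragment comes back — a pattern that is often used (not part of the original answer, and assuming the same `videoview` and `seekTime` fields from your code) is to wait until the player reports it is prepared before seeking:

```
@Override
public void onResume() {
    super.onResume();
    // Seek only once the player is prepared; seeking earlier can be ignored,
    // in which case playback restarts from the beginning.
    videoview.setOnPreparedListener(new android.media.MediaPlayer.OnPreparedListener() {
        @Override
        public void onPrepared(android.media.MediaPlayer mp) {
            mp.seekTo(seekTime);
            mp.start();
        }
    });
    videoview.start();
}
```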
303570
I migrated from Visual Studio 2017 to Visual Studio 2019 and now, when I run my AspNet Core application with IIS Express, i receive an infinite waiting for site page. Browser is opened but page is loading forever. Can anyone help me understanding the reason? With VS2017 that problem never happened. Thanks in advance for your suggestions. Bye Update: As suggested by JayMee I checked my logs. I've no breakpoints and, if I pause application, it doesn't show me what's doing...seems that is blocked before to run completely the site. In My output windows, into Asp-NetCore output, it shows the following: Now listening on: <http://localhost:42848> Application started. Press Ctrl+C to shut down. info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1] Request starting HTTP/1.1 GET <http://localhost:44359/> info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1] Request starting HTTP/1.1 DEBUG <http://localhost:44360/> 0 info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2] Request finished in 795.63ms 200 Finally I've no significant Event Viewer logs information regarding IIS Express.
The reason why you are receiving that error is because your return statement is inside the for loops. More specifically, it can happen that neither of the loops execute (e.g. mapChunkSize == 0. Then 0 isn't less than 0 and the program doesn't enter the for loops) and that thus nothing gets returned. So either move the return outside the loops or add another return outside the loops. ``` MapData GenerateMapData() { float[,] noiseMap = Noise.GenerateNoiseMap(mapChunkSize, mapChunkSize, seed, noiseScale, octaves, lacunarity, persistance, offset); Color[] colourMap = new Color[mapChunkSize * mapChunkSize]; for (int y = 0; y < mapChunkSize; y++) { for (int x = 0; x < mapChunkSize; x++) { float currentHeight = noiseMap[x, y]; for (int i = 0; i < regions.Length; i++) { if (currentHeight <= regions[i].height) { colourMap[y * mapChunkSize + x] = regions[i].colour; break; } } } /*return new MapData(noiseMap, colourMap); //uncommenting this might still leave you with the functionality you want */ } return new MapData(noiseMap, colourMap); } ```
303612
I am generating the contents of a div's content on a page by calling getJSON on a razor/cshtml file. 90% of the code works - from 2013 down through 1960 - as you can see at <http://www.awardwinnersonly.com/> by selecting "Hugos (Science Fiction)" from the book dropdown, but something in the last few "records" I added (from 1959 to 1946) is apparently causing the call to getJSON('getHugos.cshtml') to fail, as with those additional records the page does not display at all. (The problem "records" are commented out for now) Note: The "records" with funky vals such as "blankThis" and "blankThat" and where category is set to a year are not a problem; they are just rather kludgy - if category is four chars in length, the "record" is a year, and it is processed differently. Also, the elements with "--" as their value are not a problem - in those cases, no corresponding button is created (for the Kindle, Hardcopy, or Paperback edition). Here is a subset of the code in the cshtml file. The ellipsis dots represent lots of elided "records"; the bulk of it, apparently where the problem lies, is after the second set of ellipsis dots and the comment: ``` @{ var books = new List<BookClass> { new BookClass{Year=2013, YearDisplay="blankYearDisplay", Category="2013", Title="blankTitle", Author="blankAuthor", KindleASIN="blankKindleASIN", HardboundASIN="blankHardboundASIN", PaperbackASIN="blankPaperbackASIN", ImgSrc="blankImgSrc"}, . . . new BookClass{Year=2001, YearDisplay="2001", Category="Best Novella", Title="The Ultimate Earth", Author="Jack Williamson", KindleASIN="B00DV8TSHO", HardboundASIN="--", PaperbackASIN="1612421547", ImgSrc="http:images.amazon.com/images/P/B00DV8TSHO.01.MZZZZZZZ"} . . . // the above works, from 2013 down to 1960, but something in the last few "records" is apparently causing it to fail new BookClass{Year=1959, YearDisplay="blankYearDisplay", Category="1959", Title="blankTitle", Author="blankAuthor", KindleASIN="blankKindleASIN", HardboundASIN="blankHardboundASIN", PaperbackASIN="blankPaperbackASIN", ImgSrc="blankImgSrc"}, new BookClass{Year=1959, YearDisplay="1959", Category="Best Novel", Title="A Case of Conscience", Author="James Blish", KindleASIN="--", HardboundASIN="B000J52BAI", PaperbackASIN="0345438353", ImgSrc="http:images.amazon.com/images/P/0345438353.01.MZZZZZZZ"}, new BookClass{Year=1958, YearDisplay="blankYearDisplay", Category="1958", Title="blankTitle", Author="blankAuthor", KindleASIN="blankKindleASIN", HardboundASIN="blankHardboundASIN", PaperbackASIN="blankPaperbackASIN", ImgSrc="blankImgSrc"}, new BookClass{Year=1958, YearDisplay="1958", Category="Best Novel", Title="The Big Time", Author="Fritz Leiber", KindleASIN="B004UJHII4", HardboundASIN="0899685374", PaperbackASIN="B003YMNGGG", ImgSrc="http:images.amazon.com/images/P/B004UJHII4.01.MZZZZZZZ"}, new BookClass{Year=1956, YearDisplay="blankYearDisplay", Category="1956", Title="blankTitle", Author="blankAuthor", KindleASIN="blankKindleASIN", HardboundASIN="blankHardboundASIN", PaperbackASIN="blankPaperbackASIN", ImgSrc="blankImgSrc"}, new BookClass{Year=1956, YearDisplay="1956", Category="Best Novel", Title="Double Star", Author="Robert A. 
Heinlein", KindleASIN="B0050OVMWG", HardboundASIN="0839824467", PaperbackASIN="0345330137", ImgSrc="http:images.amazon.com/images/P/B0050OVMWG.01.MZZZZZZZ"}, new BookClass{Year=1955, YearDisplay="blankYearDisplay", Category="1955", Title="blankTitle", Author="blankAuthor", KindleASIN="blankKindleASIN", HardboundASIN="blankHardboundASIN", PaperbackASIN="blankPaperbackASIN", ImgSrc="blankImgSrc"}, new BookClass{Year=1955, YearDisplay="1955", Category="Best Novel", Title="They'd Rather Be Right (also known as The Forever Machine)", Author="Mark Clifton and Frank Riley", KindleASIN="--", HardboundASIN="--", PaperbackASIN="0881848425", ImgSrc="http:images.amazon.com/images/P/0881848425.01.MZZZZZZZ"}, new BookClass{Year=1954, YearDisplay="blankYearDisplay", Category="1954", Title="blankTitle", Author="blankAuthor", KindleASIN="blankKindleASIN", HardboundASIN="blankHardboundASIN", PaperbackASIN="blankPaperbackASIN", ImgSrc="blankImgSrc"}, new BookClass{Year=1954, YearDisplay="1954", Category="Best Novella", Title="A Case of Conscience", Author="James Blish", KindleASIN="--", HardboundASIN="B000M0BM5A", PaperbackASIN="B005KEM8TW", ImgSrc="http:images.amazon.com/images/P/B005KEM8TW.01.MZZZZZZZ"}, new BookClass{Year=1953, YearDisplay="blankYearDisplay", Category="1953", Title="blankTitle", Author="blankAuthor", KindleASIN="blankKindleASIN", HardboundASIN="blankHardboundASIN", PaperbackASIN="blankPaperbackASIN", ImgSrc="blankImgSrc"}, new BookClass{Year=1953, YearDisplay="1953", Category="Best Novel", Title="The Demolished Man", Author="Alfred Bester", KindleASIN="B00D2ITJLS", HardboundASIN="B000UF0KTQ", PaperbackASIN="0679767819", ImgSrc="http:images.amazon.com/images/P/B00D2ITJLS.01.MZZZZZZZ"}, new BookClass{Year=1951, YearDisplay="blankYearDisplay", Category="1951", Title="blankTitle", Author="blankAuthor", KindleASIN="blankKindleASIN", HardboundASIN="blankHardboundASIN", PaperbackASIN="blankPaperbackASIN", ImgSrc="blankImgSrc"}, new BookClass{Year=1951, YearDisplay="1951", Category="Best Novella", Title="The Man Who Sold the Moon", Author="Robert A. Heinlein", KindleASIN="B00ELJZZ24", HardboundASIN="--", PaperbackASIN="1451639228", ImgSrc="http:images.amazon.com/images/P/B00ELJZZ24.01.MZZZZZZZ"}, new BookClass{Year=1946, YearDisplay="blankYearDisplay", Category="1946", Title="blankTitle", Author="blankAuthor", KindleASIN="blankKindleASIN", HardboundASIN="blankHardboundASIN", PaperbackASIN="blankPaperbackASIN", ImgSrc="blankImgSrc"}, new BookClass{Year=1946, YearDisplay="1946", Category="Best Novella", Title="Animal Farm", Author="George Orwell", KindleASIN="B003ZX868W", HardboundASIN="0151010269", PaperbackASIN="184046254X", ImgSrc="http:images.amazon.com/images/P/B003ZX868W.01.MZZZZZZZ"} }; Response.ContentType = "application/json"; Json.Write(books, Response.Output); } ``` Surely strings such as "They'd Rather Be Right (also known as The Forever Machine)" are not a problem, are they? Obviously, it compiles and runs... When possible, I will try stepping through getHugos.cshtml; also, looking in the browser console to see if there are any err msgs, but does anybody know anything about the vagaries of getJSON related to cshtml files that could shed some light on this conundrum? 
UPDATE ------ Rearranging and reformatting it this way: ``` var books = new List<BookClass> { new BookClass{Year=1959, YearDisplay="blankYearDisplay", Category="1959", Title="blankTitle", Author="blankAuthor", KindleASIN="blankKindleASIN", HardboundASIN="blankHardboundASIN", PaperbackASIN="blankPaperbackASIN", ImgSrc="blankImgSrc"}, new BookClass{Year=1958, YearDisplay="blankYearDisplay", Category="1958", Title="blankTitle", Author="blankAuthor", KindleASIN="blankKindleASIN", HardboundASIN="blankHardboundASIN", PaperbackASIN="blankPaperbackASIN", ImgSrc="blankImgSrc"}, new BookClass{Year=1956, YearDisplay="blankYearDisplay", Category="1956", Title="blankTitle", Author="blankAuthor", KindleASIN="blankKindleASIN", HardboundASIN="blankHardboundASIN", PaperbackASIN="blankPaperbackASIN", ImgSrc="blankImgSrc"}, new BookClass{Year=1955, YearDisplay="blankYearDisplay", Category="1955", Title="blankTitle", Author="blankAuthor", KindleASIN="blankKindleASIN", HardboundASIN="blankHardboundASIN", PaperbackASIN="blankPaperbackASIN", ImgSrc="blankImgSrc"}, new BookClass{Year=1954, YearDisplay="blankYearDisplay", Category="1954", Title="blankTitle", Author="blankAuthor", KindleASIN="blankKindleASIN", HardboundASIN="blankHardboundASIN", PaperbackASIN="blankPaperbackASIN", ImgSrc="blankImgSrc"}, new BookClass{Year=1953, YearDisplay="blankYearDisplay", Category="1953", Title="blankTitle", Author="blankAuthor", KindleASIN="blankKindleASIN", HardboundASIN="blankHardboundASIN", PaperbackASIN="blankPaperbackASIN", ImgSrc="blankImgSrc"}, new BookClass{Year=1951, YearDisplay="blankYearDisplay", Category="1951", Title="blankTitle", Author="blankAuthor", KindleASIN="blankKindleASIN", HardboundASIN="blankHardboundASIN", PaperbackASIN="blankPaperbackASIN", ImgSrc="blankImgSrc"}, new BookClass{Year=1946, YearDisplay="blankYearDisplay", Category="1946", Title="blankTitle", Author="blankAuthor", KindleASIN="blankKindleASIN", HardboundASIN="blankHardboundASIN", PaperbackASIN="blankPaperbackASIN", ImgSrc="blankImgSrc"}, new BookClass{Year=1959, YearDisplay="1959", Category="Best Novel", Title="A Case of Conscience", Author="James Blish", KindleASIN="--", HardboundASIN="B000J52BAI", PaperbackASIN="0345438353", ImgSrc="http:images.amazon.com/images/P/0345438353.01.MZZZZZZZ"}, new BookClass{Year=1958, YearDisplay="1958", Category="Best Novel", Title="The Big Time", Author="Fritz Leiber", KindleASIN="B004UJHII4", HardboundASIN="0899685374", PaperbackASIN="B003YMNGGG", ImgSrc="http:images.amazon.com/images/P/B004UJHII4.01.MZZZZZZZ"}, new BookClass{Year=1956, YearDisplay="1956", Category="Best Novel", Title="Double Star", Author="Robert A. 
Heinlein", KindleASIN="B0050OVMWG", HardboundASIN="0839824467", PaperbackASIN="0345330137", ImgSrc="http:images.amazon.com/images/P/B0050OVMWG.01.MZZZZZZZ"}, new BookClass{Year=1955, YearDisplay="1955", Category="Best Novel", Title="They'd Rather Be Right (also known as The Forever Machine)", Author="Mark Clifton and Frank Riley", KindleASIN="--", HardboundASIN="--", PaperbackASIN="0881848425", ImgSrc="http:images.amazon.com/images/P/0881848425.01.MZZZZZZZ"}, new BookClass{Year=1954, YearDisplay="1954", Category="Best Novella", Title="A Case of Conscience", Author="James Blish", KindleASIN="--", HardboundASIN="B000M0BM5A", PaperbackASIN="B005KEM8TW", ImgSrc="http:images.amazon.com/images/P/B005KEM8TW.01.MZZZZZZZ"}, new BookClass{Year=1953, YearDisplay="1953", Category="Best Novel", Title="The Demolished Man", Author="Alfred Bester", KindleASIN="B00D2ITJLS", HardboundASIN="B000UF0KTQ", PaperbackASIN="0679767819", ImgSrc="http:images.amazon.com/images/P/B00D2ITJLS.01.MZZZZZZZ"}, new BookClass{Year=1951, YearDisplay="1951", Category="Best Novella", Title="The Man Who Sold the Moon", Author="Robert A. Heinlein", KindleASIN="B00ELJZZ24", HardboundASIN="--", PaperbackASIN="1451639228", ImgSrc="http:images.amazon.com/images/P/B00ELJZZ24.01.MZZZZZZZ"}, new BookClass{Year=1946, YearDisplay="1946", Category="Best Novella", Title="Animal Farm", Author="George Orwell", KindleASIN="B003ZX868W", HardboundASIN="0151010269", PaperbackASIN="184046254X", ImgSrc="http:images.amazon.com/images/P/B003ZX868W.01.MZZZZZZZ"} }; ``` ...indicates it's not a problem with the data, UNLESS the following is, indeed, problematic for some reason: Title="They'd Rather Be Right (also known as The Forever Machine)" UPDATE 2 -------- Somehow this ended up at the top of my getHugos.cshtml file - beats me how - : ``` /P/ ``` ...so that the first line was: ``` /P/@{ ``` ...and that's why it was failing. It's bizarre and/or macrabre to me that it would even compile - in fact, it would be nice if it wouldn't, and point me to that line! But thanks for your tip on the unmatched alt tag, JayC - it wasn't causing the html to fail, but I'm sure it was messing up my alts.
``` public static void main(String[] args) throws JSONException { String jsonString = "{" + " \"MyResponse\": {" + " \"count\": 3," + " \"listTsm\": [{" + " \"id\": \"b90c6218-73c8-30bd-b532-5ccf435da766\"," + " \"simpleid\": 1," + " \"name\": \"vignesh1\"" + " }," + " {" + " \"id\": \"b90c6218-73c8-30bd-b532-5ccf435da766\"," + " \"simpleid\": 2," + " \"name\": \"vignesh2\"" + " }," + " {" + " \"id\": \"b90c6218-73c8-30bd-b532-5ccf435da766\"," + " \"simpleid\": 3," + " \"name\": \"vignesh3\"" + " }]" + " }" + "}"; JSONObject jsonObject = new JSONObject(jsonString); JSONObject myResponse = jsonObject.getJSONObject("MyResponse"); JSONArray tsmresponse = (JSONArray) myResponse.get("listTsm"); ArrayList<String> list = new ArrayList<String>(); for(int i=0; i<tsmresponse.length(); i++){ list.add(tsmresponse.getJSONObject(i).getString("name")); } System.out.println(list); } } ``` Output: ``` [vignesh1, vignesh2, vignesh3] ``` *Comment:* I didn't add validation **[EDIT]** other way to load json String ``` JSONObject obj= new JSONObject(); JSONObject jsonObject = obj.fromObject(jsonString); .... ```
303708
I'm not understanding something. I am trying to build a server on a Raspberry Pi with Apache. I can access it from my own network, but not outside. I already got the port forwarding and all that setup. I think I may have the answer to my problem but I am not understanding it. So, I gave a friend my Internal IP Address to see if he could load my webpage, he could not. Then I realized that the Internal IP is for use within a personal network only. For someone outside to gain access to your server, they need an External IP, right? But what I don't understand is that my Raspberry Pi's External IP is the same as my Mac's External IP! That doesn't make sense! Now I theorize that maybe the External IP applies to your own network. A request bound for the Pi Server is addressed to the External IP. The router receives the request and forwards the request to the appropriate IP and port #. Am I getting my facts straight? Is that how it works or no? Can somebody explain this simply and clearly to me?
That's pretty much it. HTTP requests, for example, will normally come in on port 80, so if you want the Pi to serve web pages externally you'd forward port 80 on the router to port 80 on the Pi. The Pi also needs to be configured to accept external requests on that port (usually by adding a UFW firewall rule).
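For example, if the Pi is running the UFW firewall (an assumption — it isn't enabled on every install), the rule for plain HTTP would look like this:

```
sudo ufw allow 80/tcp
sudo ufw status
```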
303811
I want to create a functional test for an action that receives a POST request with data in JSON format. This is what I have:

```
info('set car')->
post('/user/'.$user->getId().'/set-car/'.$car->getId())->
with('request')->ifFormat('json')->begin()->
  isParameter('module', 'myModule')->
  isParameter('action', 'myAction')->
end()->
```

But... where should I set the incoming JSON data?

sf 1.4

Regards

Javi
Did you check out the [page](http://www.symfony-project.org/jobeet/1_4/Doctrine/en/09) from the Jobeet tutorial? The first example is

```
$browser = new sfBrowser();

$browser->
  get('/')->
  click('Design')->
  get('/category/programming?page=2')->
  get('/category/programming', array('page' => 2))->

  post('search', array('keywords' => 'php'))
;
```

It may be that I didn't understand your question at all. Correct me if I'm wrong.
303929
I'm currently writing a program in CLI/C++ using an imported C# package. I need to use one of the functions in one of the C# objects which takes in an array. Unfortunately it isn't allowing me to use a CLI array, defined like so: ``` array<float>^ knots = gcnew array<float>(nurbs->GetNumKnots()); ``` (and then populated in a loop). The object and the function are: ``` TRH::NURBSCurveKit^ nurbsKit = gcnew TRH::NURBSCurveKit(); nurbsKit->SetKnots(nurbs->GetNumKnots(), knots); ``` This returns an error basically saying that the cli::array type isn't compatible. Does anyone know of a way where I can cast the array, or possibly define it differently? I'm quite new to CLI so I'm a little vague on the way it handles things at times. Thanks (I do something similar later on using an array of TRH::Points, but they're not defined as references or pointers, so I'm not sure if they'd work with any solutions or not).
I'm not sure if it's the same `NURBSCurveKit`, but according to an [online API reference](http://docs.techsoft3d.com/hps/core_graphics/ref_manual/cs/class_h_p_s_1_1_n_u_r_b_s_curve_kit.html#a005e69001a746259bd8197eeda388955) I found, the `SetKnots` method takes one parameter, not two. Since a managed array knows how long it is, you generally don't have to pass in a length with an array. If this matches your API, just switch to pass a single parameter to `SetKnots`. (The reference I found uses a different namespace, so it may not be what you're using.) ``` array<float>^ knots = gcnew array<float>(nurbs->GetNumKnots()); TRH::NURBSCurveKit^ nurbsKit = gcnew TRH::NURBSCurveKit(); nurbsKit->SetKnots(knots); ```
304219
I'm running ubuntu 10.04. I have a newly purchased TATA Photon+ Internet connection which supports Windows and Mac. On the Internet I found a article saying that it could be configured on Linux. I followed the steps to install it on Ubuntu from this [link](http://digitizor.com/2010/06/28/how-to-use-tata-photon-plus-in-ubuntu-10-04-lucid-lynx/). I am still not able to get online, and need some help. Also, it is very slow, but I was told that I would see speeds up to 3.1MB. I dont have wvdial installed and cannot install it from apt as I'm not connected to internet Booting from windows I dowloaded "wvdial" .deb package and tried to install on ubuntu but it's ended with dependency problem. Automatically, don't know how, I got connected to internet only for once. Immediately I installed wvdial package after this I followed the tutorials(I could not browse and upload the files here) . From then it's showing that the device is connected in the network connections but no internet connection. Once I disable the device, it won't show as connected again and I'll have to restart my system. Sometimes the device itself not detected(wondering if there is any command to re-read the all devices). output of **wvdialconf /etc/wvdial.cof**: ``` #wvdialconf /etc/wvdial.conf Editing `/etc/wvdial.conf'. Scanning your serial ports for a modem. ttyS0<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud ttyS0<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 115200 baud ttyS0<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up. Modem Port Scan<*1>: S1 S2 S3 WvModem<*1>: Cannot get information for serial port. ttyUSB0<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud ttyUSB0<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 9600 baud ttyUSB0<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up. WvModem<*1>: Cannot get information for serial port. ttyUSB1<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud ttyUSB1<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 9600 baud ttyUSB1<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up. WvModem<*1>: Cannot get information for serial port. ttyUSB2<*1>: ATQ0 V1 E1 -- OK ttyUSB2<*1>: ATQ0 V1 E1 Z -- OK ttyUSB2<*1>: ATQ0 V1 E1 S0=0 -- OK ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 -- OK ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 -- OK ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK ttyUSB2<*1>: Modem Identifier: ATI -- Manufacturer: +GMI: HUAWEI TECHNOLOGIES CO., LTD ttyUSB2<*1>: Speed 9600: AT -- OK ttyUSB2<*1>: Max speed is 9600; that should be safe. ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK Found a modem on /dev/ttyUSB2. Modem configuration written to /etc/wvdial.conf. ttyUSB2<Info>: Speed 9600; init "ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0" ``` output of **wvdial**: ``` #wvdial --> WvDial: Internet dialer version 1.60 --> Cannot get information for serial port. --> Initializing modem. --> Sending: ATZ ATZ OK --> Sending: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 OK --> Sending: AT+CRM=1 AT+CRM=1 OK --> Modem initialized. --> Sending: ATDT#777 --> Waiting for carrier. ATDT#777 CONNECT --> Carrier detected. Starting PPP immediately. 
--> Starting pppd at Sat Oct 16 15:30:47 2010 --> Pid of pppd: 5681 --> Using interface ppp0 --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> local IP address 14.96.147.104 --> pppd: (u;[08]@s;[08]`{;[08] --> remote IP address 172.29.161.223 --> pppd: (u;[08]@s;[08]`{;[08] --> primary DNS address 121.40.152.90 --> pppd: (u;[08]@s;[08]`{;[08] --> secondary DNS address 121.40.152.100 --> pppd: (u;[08]@s;[08]`{;[08] ``` Output of log message **/var/log/messages**: ``` Oct 16 15:29:44 avyakta-desktop pppd[5119]: secondary DNS address 121.242.190.180 Oct 16 15:29:58 desktop pppd[5119]: Terminating on signal 15 Oct 16 15:29:58 desktop pppd[5119]: Connect time 0.3 minutes. Oct 16 15:29:58 desktop pppd[5119]: Sent 0 bytes, received 177 bytes. Oct 16 15:29:58 desktop pppd[5119]: Connection terminated. Oct 16 15:30:47 desktop pppd[5681]: pppd 2.4.5 started by root, uid 0 Oct 16 15:30:47 desktop pppd[5681]: Using interface ppp0 Oct 16 15:30:47 desktop pppd[5681]: Connect: ppp0 <--> /dev/ttyUSB2 Oct 16 15:30:47 desktop pppd[5681]: CHAP authentication succeeded Oct 16 15:30:47 desktop pppd[5681]: CHAP authentication succeeded Oct 16 15:30:48 desktop pppd[5681]: local IP address 14.96.147.104 Oct 16 15:30:48 desktop pppd[5681]: remote IP address 172.29.161.223 Oct 16 15:30:48 desktop pppd[5681]: primary DNS address 121.40.152.90 Oct 16 15:30:48 desktop pppd[5681]: secondary DNS address 121.40.152.100 ``` --- **EDIT 1** : I tried the following ``` sudo stop network-manager sudo killall modem-manager sudo /usr/sbin/modem-manager --debug > ~/mm.log 2>&1 & sudo /usr/sbin/NetworkManager --no-daemon > ~/nm.log 2>&1 & ``` Output of **mm.log**: ``` #vim ~/mm.log: ** Message: Loaded plugin Option High-Speed ** Message: Loaded plugin Option ** Message: Loaded plugin Huawei ** Message: Loaded plugin Longcheer ** Message: Loaded plugin AnyData ** Message: Loaded plugin ZTE ** Message: Loaded plugin Ericsson MBM ** Message: Loaded plugin Sierra ** Message: Loaded plugin Generic ** Message: Loaded plugin Gobi ** Message: Loaded plugin Novatel ** Message: Loaded plugin Nokia ** Message: Loaded plugin MotoC ``` Output of **nm.log**: ``` #vim ~/nm.log: NetworkManager: <info> starting... NetworkManager: <info> modem-manager is now available NetworkManager: SCPlugin-Ifupdown: init! NetworkManager: SCPlugin-Ifupdown: update_system_hostname NetworkManager: SCPluginIfupdown: guessed connection type (eth0) = 802-3-ethernet NetworkManager: SCPlugin-Ifupdown: update_connection_setting_from_if_block: name:eth0, type:802-3-ethernet, id:Ifupdown (eth0), uuid: 681b428f-beaf-8932-dce4-678ed5bae28e NetworkManager: SCPlugin-Ifupdown: addresses count: 1 NetworkManager: SCPlugin-Ifupdown: No dns-nameserver configured in /etc/network/interfaces NetworkManager: nm-ifupdown-connection.c.119 - invalid connection read from /etc/network/interfaces: (1) addresses NetworkManager: SCPluginIfupdown: management mode: unmanaged NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:14.4/0000:02:02.0/net/eth1, iface: eth1) NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:14.4/0000:02:02.0/net/eth1, iface: eth1): no ifupdown configuration found. NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/lo, iface: lo) @ ```
From [How to configure TATA PHOTON+ EC1261 on UBUNTU 10.04 Lucid Lynx](http://www.botskool.com/geeks/how-configure-tata-photon-ec1261-ubuntu-1004-lucid-lynx) by [Aamir Aarfi](http://www.botskool.com/author/aamir-aarfi): > > 1.- follow this guide : > > > How to install Tata Photon+ on Ubuntu 10.04 > > > ### STEP #1: > > > Plug-in your device and download two .deb files on your Ubuntu desktop. > > > [usb-modeswitch-data\_20100418-1\_all.deb](http://www.botskool.com/downloads/geeks/ubuntu/usb-modeswitch-data_20100418-1_all.deb) > > > [usb-modeswitch\_1.1.2-3\_i386.deb](http://www.botskool.com/downloads/geeks/ubuntu/usb-modeswitch-data_20100418-1_all.deb) > > > Download the above links or from the latest Debian repo listed below. > > > > ``` > http://packages.debian.org/squeeze/i386/usb-modeswitch/download > http://packages.debian.org/squeeze/all/usb-modeswitch-data/download > http://packages.debian.org/squeeze/usb-modeswitch-data > http://packages.debian.org/squeeze/usb-modeswitch > > ``` > > ### STEP #2: > > > Installing required packages 1. RIGHT CLICK on the package usb-modeswitch-data, SELECT "Open with GDebi Package Installer". 2. If there is no error message then CLICK on "Install Package" button. 3. RIGHT CLICK on the package usb-modeswitch, SELECT "Open with GDebi Package Installer". 4. If there is an error regarding architecture then DOWNLOAD different package and repeat above step again. 5. If there is no error message then CLICK on "Install Package" button. 6. After this verify installation of these packages by following PART-I > Step1 with another debian version. > > REBOOT your system [recommended] > > > ### STEP #3 [Important] > > > Modifying usb-modeswitch file 1. OPEN Terminal again ("Applications" > "Accessories" > "Terminal") 2. KEEP TATA Photon device connected, TYPE "lsusb" and you will get results like below. Bus 003 Device 002: ID 12d1:140b Huawei Technologies Co., Ltd. 3. If your device shows no. different than 140b then use that in below steps. 4. GO TO "/etc/usb\_modeswitch.d/12d1:1446" and edit it. (GO TO parent most folder and then TYPE "CD /etc/usb\_modeswitch.d/", 5. TYPE "gedit 12d1:1446", the file will get opened in text editor.) 6. ADD "140b" to the "targetList=" and SAVE the file. Configuring Tata Photon+ as "Mobile Brodband" from Network Manager. Step1 : GO TO "System" > "Preferences" > "Network Connections". Step2 : CLICK on "Mobile Broadband" tab. Step3 : CLICK on "ADD" button. Step4 : Observe the device name below "Create a connection for this mobile broadband device". "HUAyWEI TECHNOLOGIES HUAWEI Mobile" should already be selected in drop down. If not SELECT it and CLICK on "FORWARD". NOTE:- If this device does not shows here that means your Ubuntu is still not treating this device as USB MODEM. Step5 : SELECT country (e.g. India) from the list CLICK on "FORWARD". Step6 : SELECT PROVIDER (e.g. Tata Indicom (Photon+)) from the list CLICK on "FORWARD". Step7 : You will see "Your Device" and "Your Provider", CLICK "Apply" button. Step8 : FILL following information under "Mobile Broadband" tab. ``` Number: #777 Username: internet Password: internet ``` Optionally you can CHECK "Connect Automatically" option. CLICK on "Apply" button Step9 : TYPE first user password and CLICK on "Authenticate". You will see "Tata Indicom (Photon+)" in Mobile Broadband connection list. ### PART-III :- Starting "Mobile Broadband" Connection Step1 : RIGHT CLICK on "Network Manager" icon on top-left corner of the screen. Step2 : CHECK "Enable Mobile Broadband" option. 
Step3 : Now CLICK on the same icon. Step4 : CLICK on "Tata Indicom (photon+) connection1" And THATS IT !!!!!! 2.- another option is upgrade from 10.04 to 10.10 because THE DRIVER FOR TATA PHOTON+ IS PRE INSTALLED, so no need to install .deb file. Plug in your photon device and select the appropriate options from the Ubuntu network manager.
304356
I have a single task to produce a jar like this: ``` task contentJar(type: Jar, overwrite: true) { ... } artifacts { archives contentJar } ``` How do I include the jar produced by the contentJar task as a testRuntime dependency? I was able to define the dependency like this: `testRuntime fileTree(dir: 'build/libs', include: '*content.jar')` This works but is there a cleaner solution to this?
In your library project, Create a new configuration: ``` configurations { foo } ``` Add your artifact to this configuration: ``` artifacts { foo contentJar } ``` Use this project as a dependency in a different project, and use your configuration to limit artifacts to contentJar: ``` testCompile project(path: ':Library', configuration: 'foo') ```
304644
We have a legacy embedded system which uses SDL to read images and fonts from an NFS share. If there's a network problem, TTF\_OpenFont() and IMG\_Load() hang essentially forever. A test application reveals that open() behaves in the same way. It occurred to us that a quick fix would be to call alarm() before the calls which open files on the NFS share. The man pages weren't entirely clear whether open() would fail with EINTR when interrupted by SIGALRM, so we put together a test app to verify this approach. We set up a signal handler with sigaction::sa\_flags set to zero to ensure that SA\_RESTART was not set. The signal handler was called, but open() was not interrupted. (We observed the same behaviour with SIGINT and SIGTERM.) I suppose the system treats open() as a "fast" operation even on "slow" infrastructure such as NFS. Is there any way to change this behaviour and allow open() to be interrupted by a signal?
I've had exactly the same error CS0012: ...TaskAwaiter<>... This is a known issue in VS2015. <https://support.microsoft.com/en-us/kb/3025133> To solve it go to Debug - Options - General - Use the legacy C# and VB expression evaluator (tick)
304745
I have a controls.Image that I put in my Flex4 AIR App. I set it to x=0, y=0. Width=100%, Height=100%. The problem with this, is that The source image I am using does not fit the whole app screen. The actual controls.Image covers the entire screen, but the source image contained in the controls.Image does not fill the entire controls.Image height/width. Is there any way to "fill" the entirety of the controls.Image with the source? I am not worried about warping the source image, I would just like to fill the entire controls.Image no matter what the height/width of it are. I cannot find the tool to do so.. anyone recommend tips? I cannot find anything related to "FillMode" or similar and have been searching for awhile now. Thanks
If by 'controls.Image', you mean `mx.controls.Image`, which is technically a Flex 3 component, but is applicable in Flex 4.1 and earlier, you want to set `maintainAspectRatio` to `false`, and make sure that `scaleContent` is `true` (which is the default value). On the other hand, if you are using Flex 'Hero' SDK (Flex 4.5), which introduces a native spark image component (`spark.components.Image`), you'll need to set `scaleMode` to `BitmapScaleMode.STRETCH`, and make sure `fillMode` is set to `BitmapFillMode.SCALE` (which is the default value). **AS3 Documentation:** [mx.controls.Image (Flex 4.1 and earlier)](http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/mx/controls/Image.html#propertySummary) [spark.components.Image (Flex 4.5 "Hero")](http://help.adobe.com/en_US/FlashPlatform/beta/reference/actionscript/3/spark/components/Image.html#scaleMode)
304790
I'm trying to a simple insert list of rows from a `DataGridView` to a database. I have made a `checkedbox` that upon checked, the item will be added to the `DataGridView`. Now i'm attempting to do the INSERT part. This what I have come up so far: ``` try { string strAppointment = "SELECT appointmentID FROM APPOINTMENT WHERE appointmentID=@searchappointmentID"; SqlCommand cmdAppointment = new SqlCommand(strAppointment, connection); cmdAppointment.Parameters.AddWithValue("@searchappointmentID", txtAppointmentID.Text); connection.Open(); for (int i = 0; i < dataPrescription.Rows.Count; i++) { string firstColumn = dataPrescription[0, dataPrescription.CurrentCell.RowIndex].Value.ToString(); string strMedications = "SELECT medicationID FROM MEDICATION WHERE medicationName= ('" + firstColumn + "')"; SqlCommand cmdMedications = new SqlCommand(strMedications, connection); SqlDataReader readMedications = cmdMedications.ExecuteReader(); if (readMedications.Read()) { string getDrugID = readMedications["medicationID"].ToString(); string strPrescriptions = "INSERT INTO PRESCRIPTION (appointmentID, medicationID, quantity) " + "VALUES (@insertAppointment, " + getDrugID + ", " + dataPrescription.Rows[i].Cells["columnQuantity"].Value + ");"; SqlCommand cmdPrescriptions = new SqlCommand(strPrescriptions, connection); cmdPrescriptions.Parameters.AddWithValue("@insertAppointment", txtAppointmentID.Text); prescriptionsResult = cmdAppointment.ExecuteNonQuery(); } readMedications.Close(); } } catch (Exception ex) { MessageBox.Show(ex.Message, "Error"); } finally { connection.Close(); } ``` Right now it giving me this error: "There is already an open DataReader associated with the command which must be closed first". I don't know what I've done wrong
The error means that `readMedications` is still open when you execute another command on the same connection (also note that a `SqlDataReader` cannot be created with `new`; you only ever get one from `ExecuteReader()`). Either add `MultipleActiveResultSets=True` to your connection string, or read the value you need and close the reader before running the INSERT, and execute `cmdPrescriptions` rather than `cmdAppointment`:

```
string getDrugID = null;
if (readMedications.Read())
{
    getDrugID = readMedications["medicationID"].ToString();
}
readMedications.Close(); // close the reader before running any other command on this connection

if (getDrugID != null)
{
    string strPrescriptions = "INSERT INTO PRESCRIPTION (appointmentID, medicationID, quantity) " +
        "VALUES (@insertAppointment, " + getDrugID + ", " + dataPrescription.Rows[i].Cells["columnQuantity"].Value + ");";
    SqlCommand cmdPrescriptions = new SqlCommand(strPrescriptions, connection);
    cmdPrescriptions.Parameters.AddWithValue("@insertAppointment", txtAppointmentID.Text);
    prescriptionsResult = cmdPrescriptions.ExecuteNonQuery(); // note: cmdPrescriptions, not cmdAppointment
}
```
305121
I've been using GnuCash to track only my money and securities. Now I want to use it to also track my fixed assets so that I have a more accurate estimation of my net worth. One fixed asset I have is a motorcycle, a Honda CG 150 Titan. It was bought in 2006 for 4,100 BRL at market price, but today it's worth 6,500 BRL. Although the motorcycle has depreciated, the BRL also inflated in the meantime, which resulted in this increase in price. I consulted my government's website to see what the inflation adjustment from 2006 to 2022 was, and it shows inflation by a ratio of 2.54401930 over the period. That means the motorcycle would be bought nowadays for around 10,430 BRL (4,100 \* 2.54401930). Now the depreciation is visible, because the motorcycle went from what would now be 10,430 BRL to 6,500 BRL in 2022. My question is how to account for that in GnuCash. The [GnuCash Guide for depreciation](https://codesmythe.gitbooks.io/gnucash-guide/content/en/ch_dep.html) suggests creating two Asset sub-accounts, one for cost and another for depreciation, which in my specific use case is:

```
- Assets
  - Fixed Assets
    - Honda CG 150 Titan
      - Cost 4,100 BRL
      - Depreciation ? BRL
```

But if I create depreciation transactions from 2006 to 2022 in the account "Depreciation", I will end up with an asset that's worth nothing, which defeats the purpose of estimating the value of my fixed assets. One idea I had was creating a custom security for the motorcycle model, as you do with stocks. This way, you would treat the motorcycle like you treat companies on a stock exchange, but I don't know if this is good accounting practice. How should I account for this?
The capital gains subject to exclusion are not per calendar days you live in the property, since no-one can actually attribute your gain to a specific date. Instead, it is being prorated. For example, you lived in the property 2 years and rented it out for 8 years before moving into it yourself. So 20% of your capital gain can be excluded under Sec. 121, and 80% cannot. [See 26 USC Sec. 121(b)(5)(B)](https://www.law.cornell.edu/uscode/text/26/121) and [the relevant IRS guidance](https://www.irs.gov/publications/p523#en_US_2021_publink100077085). To your question, the period where it was **your** primary residence is what matters. You would still have depreciation for the rental portion, and depreciation recapture is not excluded under Sec. 121 at all.
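Written out, the proration in that example is simply
$$\text{excludable fraction} \;=\; \frac{\text{years of use as a primary residence}}{\text{total years of ownership}} \;=\; \frac{2}{2+8} \;=\; 20\%,$$
so 80% of the gain cannot be excluded, and, as noted above, depreciation recapture is not excluded at all.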
305185
I have a small set of data that I need to fit a curve to (see image and data below). [![enter image description here](https://i.stack.imgur.com/w18Wt.jpg)](https://i.stack.imgur.com/w18Wt.jpg) ``` algaeConc adultDens 2.934961 6665.145469 2.921015 6665.144453 2.910769 6665.142377 2.908331 6665.138925 2.907658 6665.134515 2.907278 6665.128736 2.907130 6665.121034 2.907160 6665.117992 2.907326 6665.124547 2.907594 6665.131266 ``` The data came out of a simulation model for which I was stressing a parameter (filtration rate) then recording what happened to my variables (algae concentration and adult density). Now, I need to fit a curve to this, and I have to admit I never saw this kind of decline before. So, I would like some suggestions about how to fit a curve to this. It seems to me that a power law might be needed, but I have never worked with those before and am not sure how to go about it. I've spent all day reading about power laws and trying to fit it on R but only managed to confuse myself, so any help will be highly appreciated. Edit: The X axis is inverted to match the stress on the filtration parameter (increasing stress, caused a decrease in algae concentration).
Here's the unflipped version of your plot:

[![plot of data from question](https://i.stack.imgur.com/J0m4P.png)](https://i.stack.imgur.com/J0m4P.png)

No choices of $a$ and $b$ in $y=ax^{-b}$ (where $y$ is adult density and $x$ is algae concentration) will fit that shape of relationship; they don't increase and flatten off up to a positive asymptote:

[![plot of various power laws for positive a](https://i.stack.imgur.com/wLnmR.png)](https://i.stack.imgur.com/wLnmR.png)

(with negative $a$ you can get it to increase to an asymptote, but the fit will be all negative)

Indeed, the range of your $x$-variable is so narrow (relative to its mean, say) that a typical power curve will look effectively like a straight line.

---

**A quick power-law diagnostic plot**: If you don't have intuition for power laws, then to see whether a power law is likely to be a reasonable description, plot log y against log x (it doesn't matter which base of logs). It should look close to linear on that scale. In your case we can see immediately that won't happen (indeed the coefficients of variation are so small that the log-scale plot looks a lot like the first plot above aside from the axis scales).

There are a number of nonlinear functions used for this kind of "saturation" relationship we see in the plot of the data (dozens of them, really). It's best to proceed from theoretical notions of what is happening (which might lead to a differential equation, say, and then to a functional form) rather than to guess at functional forms; indeed, that sort of theoretical understanding is where most of the common increasing-to-an-asymptote type models arise.

---

Here are a few examples of nonlinear models that increase to an asymptote (at least for some parameter values); most of them will fit your data poorly:

\begin{eqnarray}
E(y) &=& ax/(b+x) \\
E(y) &=& a(1-\exp(-bx)) \\
E(y) &=& a (1 - \exp(-b(x - c))) \\
E(y) &=& a+(c-a)\exp(-bx) \\
E(y) &=& a - c(x-d)^{-b} \\
E(y) &=& a \exp(-b/x)
\end{eqnarray}

There are many others besides. You should be looking at subject-area knowledge to develop a model, though -- trawling through such a list of models to find one that fits effectively ruins any statistical inference on the same data, so unless you want to go collect another set of data (or indeed possibly several more), don't do that. Strictly speaking, you should have a model in place before you even collect data.

---

Indeed there are further consequences to using data-generated models / data-generated hypotheses that I haven't touched on there; there are a number of posts on site that discuss the effect of this kind of approach to choosing models. In the situation where there's simply no subject-area knowledge to work from, and where the experimenter can collect any amount of data easily, such curve fitting strategies (trawling through a list) may have some value in helping to come up with plausible models -- as long as we take great care to make sure that the choice isn't just an artifact of our one set of data, and we take similarly great care not to let the model-selection process affect our ultimate estimation and inference. For example if we take enough data that we can do out of sample testing (i.e.
we can see if the model performs well on data we didn't use to pick the model) and other people repeat the experiment (so we can see whether the same curve tends to be picked up across multiple independently-run experiments - that it's robust to the small changes in the setup that naturally occur when one goes to a different lab with their own pieces of equipment and different people running the experiment). Once that's all done and the model actually can be seen to "work" in some sense, we can then sample an entirely new data set on which to perform estimation and inference (again, where possible, using out-of-sample assessment to avoid over-reliance on asymptotic approximations and on assuming the model is actually the data-generating process when computing things like intervals and standard errors). If we can't do something like that, such list trawling can be quite dangerous, leading to models that tell us something about the particulars of our data and our modelling process but little of value about the thing we want to model.
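To make the quick log-log diagnostic above concrete, here is a minimal sketch of that check using the data posted in the question (written in Python with numpy and matplotlib purely for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

# Data from the question: algae concentration (x) and adult density (y)
x = np.array([2.934961, 2.921015, 2.910769, 2.908331, 2.907658,
              2.907278, 2.907130, 2.907160, 2.907326, 2.907594])
y = np.array([6665.145469, 6665.144453, 6665.142377, 6665.138925, 6665.134515,
              6665.128736, 6665.121034, 6665.117992, 6665.124547, 6665.131266])

# Power-law diagnostic: y = a * x^(-b) is a straight line in log-log space
logx, logy = np.log(x), np.log(y)
slope, intercept = np.polyfit(logx, logy, 1)   # least-squares line through the logged data

plt.scatter(logx, logy)
plt.plot(logx, intercept + slope * logx)
plt.xlabel("log(algae concentration)")
plt.ylabel("log(adult density)")
plt.title(f"log-log plot; fitted slope = {slope:.2f}")
plt.show()
```

If the points hugged the fitted line, a power law with exponent $-b$ equal to that slope would be plausible; as noted above, for this data they won't.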
305186
Consider a sequence $a\_n \in {\mathbb R}\_+$ ($n \in {\mathbb Z}\_+$) with
\begin{equation} \lim\_{n\rightarrow \infty} a\_n = 0. \end{equation}
It is easy to find an example such that
\begin{equation} \lim\_{n\rightarrow \infty} \sum\_{i=1}^{n}a\_i = \infty. \end{equation}
For example $a\_n = 1/n$ or $a\_n = 1/\log(n)$. However, does there exist an $a\_n$ with $\lim\_{n\rightarrow \infty} a\_n = 0$ and
\begin{equation} \lim\_{n\rightarrow \infty} \frac {1}{n}\sum\_{i=1}^{n}a\_i = c, \end{equation}
where $c > 0$?

PS: For $a\_n = 1/n$ or $a\_n = 1/\log(n)$, it can be verified that $c=0$.
If a sequence $(a\_n)$ is convergent and $a\_n\to L$, it can be shown that
$$ \lim\_{n\rightarrow \infty} \frac {1}{n}\sum\_{i=1}^{n}a\_i = L $$
also. So basically, you **cannot** find a sequence $(a\_n)$ such that $\lim\_{n\to\infty} a\_n=0$ while $\lim\_{n\rightarrow \infty} \frac {1}{n}\sum\_{i=1}^{n}a\_i > 0$.

For more information, see [Cesàro summation](https://en.wikipedia.org/wiki/Ces%C3%A0ro_summation) on Wikipedia.
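A sketch of the standard argument behind that claim (the usual $\varepsilon$-split): suppose $a\_n \to L$ and fix $\varepsilon > 0$. Choose $N$ such that $|a\_n - L| < \varepsilon/2$ for all $n > N$. Then, for $n > N$,
$$ \left| \frac{1}{n}\sum\_{i=1}^{n} a\_i - L \right| \le \frac{1}{n}\sum\_{i=1}^{N} |a\_i - L| + \frac{1}{n}\sum\_{i=N+1}^{n} |a\_i - L| < \frac{C}{n} + \frac{\varepsilon}{2}, $$
where $C = \sum\_{i=1}^{N} |a\_i - L|$ is a fixed constant. Taking $n$ large enough that $C/n < \varepsilon/2$ makes the whole expression smaller than $\varepsilon$, so the Cesàro means converge to $L$ as well.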
305192
Towards the end of *Avengers: Age of Ultron* JARVIS uploads himself into a lifeless Virbranium android. This process creates Vision. Then before the final battle we see Tony Stark uploading a program called FRIDAY into his suit. During the battle we hear a feminine voice in The Iron Man suit instead of JARVIS'. So did JARVIS cease to exist after uploading himself into Vision? If so did JARVIS know that this would happen, or was it because of the Mindstone?
JARVIS was gone, Tony thought... then he found him in pieces, in the internet, protecting the access to the launch codes from Ultron's attempted hack. Tony re-collected JARVIS and took him back to the tower. Tony wanted to put JARVIS into the new fancy body to fight against Ultron. I'm guessing he figured that JARVIS wouldn't be quite the same as when he was in the computer alone, due to the stone, but he figured that it would be better than Ultron having it (or destroying it). Then Quicksilver and Scarlet Witch come in, Quicksilver disconnects everything before the upload is complete and Thor blasts the body with lightning. When the new entity, Vision, awakes, he says that he's not JARVIS, despite speaking with JARVIS' voice, but that he's not Ultron, either. I'm guessing that JARVIS' change into Vision is a combination of the Stone, the partially uploaded data (but not personality) from Ultron, and the lightning bolt of Thor. Now, from a practicality standpoint, I'm guessing that JARVIS, as a standalone entity is "dead" and that FRIDAY (reminiscent of "*[His Girl Friday](http://en.wikipedia.org/wiki/His_Girl_Friday)*") is now going to be the voice of Tony's computer interface. I'm pretty sure Marvel doesn't want to confuse viewers (or listeners, anyway) by having one person ([Paul Bettany](http://en.wikipedia.org/wiki/Paul_Bettany)) play two characters in the same film. According to the [Marvel Wikia](http://marvel.wikia.com/Friday_%28Earth-616%29), Friday appears as a computer A.I. entity in the Earth-616 series.
305539
I have a list of tens of thousands of file names. I want to find which of those files actually exist on disk (in a particular directory). I'm not sure how to start. I could try it with either Python or bash. The list of file names is an ascii file with one file name per line and no other content.
Let's assume your file names are absolute paths, all the files are in one directory, you want to find only regular files (not directories, special files, etc.), and you don't have spaces or special characters in your file names.

```
sort < yourlist > yourlist_sorted
find <absolute path to dir> -type f | sort | comm -1 -3 - yourlist_sorted
```

This will print the lines of your list which can't be found by `find`, i.e. the files which aren't on your disk. Which files are displayed is controlled by the first two options to the `comm` command: `comm` sees two files, stdin (the list from `find`) and your list, and the options control which sets are filtered (removed). `-1` suppresses lines only in file 1, `-2` lines only in file 2, and `-3` lines in both files. So:

* `-2 -3` prints files that are found on disk and not in your list
* `-1 -2` prints files that are found on disk and in your list  <== what you want
* `-1 -3` prints files that are only in your list and not on disk
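Since the question mentions Python as an option, here is a rough equivalent sketch in Python; the list-file name and target directory are placeholders, and like the shell version it assumes one file name per line:

```python
import os

list_file = "filelist.txt"       # placeholder: your list, one file name per line
directory = "/path/to/dir"       # placeholder: the directory to check against

with open(list_file) as fh:
    names = [line.strip() for line in fh if line.strip()]

found = [n for n in names if os.path.isfile(os.path.join(directory, n))]
found_set = set(found)
missing = [n for n in names if n not in found_set]

print("On disk:")
print("\n".join(found))
print("Not on disk:")
print("\n".join(missing))
```

`os.path.isfile` mirrors `find -type f` in only counting regular files; if your list already contains absolute paths, `os.path.join` simply returns them unchanged.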
305808
I need to create highly scalable solution - where field devices in thousands of sites are delivering data in real time to a back end system, and SQL Azure seems to fit the bill nicely in terms of adding sql databases and application servers. Each field device is effectively sending 400 sensor values every second - for about two hours a day, and those 400 sensor values every 5 minutes for all other times forever. Additionally, when an error occurs on this field device, it will send up the last minute's data for all 400 sensors as well (400 \* 60 readings) - causing a mass flood of data when anything goes wrong. I really want to design the system so that the independent field devices and the data in which they store can not affect other devices. Allowing each field device to not affect the performance of other field devices. I started the design with thinking a single database holding all the device's data - but have started to get deadlocks occurring when simulating multiple site devices. Hence, am in the process of moving to a multiple database solution. Where a master database holds a lookup table for all the devices - returning a connection string to the real database At this stage of the project, it is most important that I am able to pass that data back to User Interfaces running in web browsers in real time - updating their screens every second. In future stages of the project it will be necessary to start aggregating data across multiple devices showing statistics such as sum of sensor X in region Y. I can see this will be hard to do with the multiple database approach. So would value any advice e.g. Do you think it is sensible to use Sql Azure to host potentially 1000's of databases and to use this master database to indirectly point to the real ones? Will I have a problem with Connections to the databases from the applications- with issues with connection pooling for example? How will I be able to aggregate data from all these different databases in Sql Azure. Would be interested in all your comments. Regards, Chris.
Since no one else has answered, I'll share some opinions and do some hand-waving.

As long as you aren't locking common resources, or are locking resources in the same order, you shouldn't have problems with deadlocks.

I'd look at separate tables before separate databases. Each additional database will definitely cost you more, but additional tables won't necessarily cost you more. You might need to use more than one database because of the sheer volume of data you will store or because of the rate at which you need to store your burst traffic. If you can manage it, I think that a table-level granularity will be more flexible and possibly a good deal cheaper than starting with a database-level granularity. The problem with putting each device's data into its own tables is that it makes the reporting hard, since all of the table names will be different.

I presume that you have some way of detecting when you get a "failure resend" of data. You don't want to put the same value in a table twice, and I'm sure that the devices can fail (local power failure?) in ways that have nothing to do with whether or not earlier values were properly stored.

WAG: Assuming each "value" is 4 bytes, I calculated about 11.5 MB of collected data per device, per day. (This ignores all kinds of stuff, like device identifiers and timestamps, but I think it is OK as a rough estimate.) So, with "thousands" of sites, we are looking at tens of GB per day. You don't mention any kind of lifetime on that data. The largest Azure database currently maxes out at 150 GB. You could fill those up pretty quickly.

Getting anything to happen in a web browser in a short period of time is iffy. When you are reading from (possibly multiple) databases with GBs of data, continuously inserting lots of new data into the tables you are reading from, and interacting with web servers across the open internet, "real time" is wishful thinking, IMO. "Fast enough" is the usual goal.

If you can't keep all of the data you need in a single report in one SQL Azure database, it's a problem. There are no linked servers or distributed views (at this point). There is no simple way to aggregate across many Azure databases. You'd have to pull all of the data to a central location and report from there. I'd guess that the aggregated data would be too large to store in a single SQL Azure database, so you'd have to go on-premise or maybe to EC2. A data mart or warehouse with a star-schema structure would be the classic answer there, but that takes significant processing time, and that means no "real time". Also, that's potentially a lot more data transfer from Azure to wherever it goes, and that will cost you.

I wouldn't commit to this strategy without a pilot program first. The first thing to do would be to build a single instance and see how it handles insertions at your expected rates and how the reporting might work. (Can it handle 400 sensor values a second? Is that a series of rows, a big denormalized row, an XML document or something else? The format of the incoming data will affect how fast the data can be stored. Can you do bulk inserts, or does it have to be row-by-row? How about 4,000 sensor values a second? It's possible that a single SQL Azure instance might not be able to store that much that quickly.)

And I'd talk to Microsoft too. Just dealing with the billing for hundreds or thousands of separate databases might be quirky.
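Roughly reproducing that back-of-the-envelope figure (same 4-bytes-per-value assumption, ignoring identifiers and timestamps):
$$400 \times 3600 \times 2 = 2{,}880{,}000 \text{ values in the two busy hours}, \qquad 400 \times \frac{22 \times 60}{5} = 105{,}600 \text{ values in the remaining 22 hours},$$
$$(2{,}880{,}000 + 105{,}600) \times 4 \text{ bytes} \approx 11.9 \text{ MB} \;(\approx 11.4 \text{ MiB}) \text{ per device per day},$$
which is in line with the ~11.5 MB quoted; a few thousand devices then lands in the tens of GB per day mentioned above.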
I don't know if this is applicable to you, but have you looked at [Microsoft's "Stream Insight" product](https://web.archive.org/web/20221021185242/https://www.microsoft.com/en-us/sql-server/sql-server-2019)? It seems to be aimed at situations like yours. Caveat: I've never used it. The marketing blurb:

> Effectively analyze large amounts of event data streaming in from multiple sources. Derive insights from critical information in near real time by using Microsoft StreamInsight. Monitor, analyze, and act on data in motion and make informed decisions almost instantaneously.

While doing some quick googling, I noticed a [blog posting](https://web.archive.org/web/20160331180856/http://blogs.msdn.com/b/zaneadam/archive/2011/05/16/continuing-our-windows-azure-platform-innovation.aspx) which states that StreamInsight was available on SQL Azure as a CTP last year. It might be ready for prime time by now.

Good luck, it sounds like an interesting project.
306056
Within my custom `UIButton` class called "ComingHomeButton", I'm not able to change the background color. I would like to change it without creating images, but if that is the only way, I will have to. I also want to change the background color every time the button is tapped. Here is what I have that doesn't work:

```
UIColor *color = [UIColor colorWithRed:1 green:1 blue:1 alpha:0.5];
[self setBackgroundColor:color];
```

`self` refers to my "ComingHomeButton". Thank you so much :-)
You can create a counter, and then loop through colors when a button is tapped:

```
// assumes an NSInteger instance variable named counter in the button class
-(IBAction)buttonTapped:(id)sender {
    if (!counter) {
        counter = 0;
    }
    if (counter%3 == 0) {
        self.backgroundColor = [UIColor redColor]; //or whatever custom color you want to use
    }
    else if (counter%3 == 1) {
        self.backgroundColor = [UIColor blueColor];
    }
    else if (counter%3 == 2) {
        self.backgroundColor = [UIColor greenColor];
    }
    counter++;
}
```

and you can add as many colors as you want to loop through. The % denotes the mod function, making sure that when counter is bigger than 2, it will still return a number between 0 and 2. The number after the % is the number of colors to loop through.
306330
I have a form with some text fields and a textarea (CKEditor). When the button's onclick handler runs, the art\_title field value is sent to the art\_save.php page, but the value from the textarea is not sent.

```
<script src="ckeditor/ckeditor.js"></script>

function saveArt() {
    var title = document.getElementById('art_title'),
        art_title = title.value;
    var desc = document.getElementById('art_body'),
        art_body = desc.value;
    jQuery.ajax({
        type: 'POST',
        url: 'art_save.php',
        data: {
            title: art_title,
            aut: art_author,
            tag: art_tag,
            desc: art_body
        }
    });
    return false;
}
```

HTML part:

```
<form method="post" name="art" id="art">
    <input type="text" id="art_title" name="art_title" placeholder="Here goes your title"/>
    <textarea class="ckeditor" name="art_body" id="art_body"></textarea>
    <input type="submit" name="savedraft" id="savedraft" onclick="saveArt();return false;" value="Need more Research ! Save as Draft" class="button"/>
</form>
```
You can force CKeditor to update the textarea value using: ``` for (instance in CKEDITOR.instances) { CKEDITOR.instances[instance].updateElement(); } ``` Also, you can use `.serialize` for the data - then you won't have to maintain the AJAX code if parameters change: ``` <script src="ckeditor/ckeditor.js"></script> function saveArt() { for (instance in CKEDITOR.instances) { CKEDITOR.instances[instance].updateElement(); } jQuery.ajax({ type: 'POST', url: 'art_save.php', data: $("#art").serialize() }); return false; } ```
306601
I have a table say `Configs` in which I have the column names ``` id | Name ------------- 1 | Fruits 2 | Vegetables 3 | Pulses ``` I have another table say `Details` ``` id |Field1 | Field2| Field3 | Active | Mandatory ------------------------------------------------- 1 |Apple |Potato |Red gram | 1 |0 2 |Mango |Peas |Chick Peas| 0 |0 ``` I need field1, field2, field3 to be selected as the name of 1st table eg. ``` select id, Field1 as Fruits, Field2 as Vegetables, Field3 as pulses, Active, Mandatory From Details ``` How do I do it?
```
declare @sql nvarchar(max)

-- Build the "FieldN as [Name]" column list from the lookup table
select @sql = isnull(@sql + ',' ,'') + N'Field' + convert(varchar(10), id) + ' as ' + quotename(Name)
from Configs

-- Form the dynamic SQL
select @sql = 'SELECT id,' + @sql + ',Active, Mandatory ' + 'FROM Details'

-- Print to verify
print @sql

-- Execute it
exec sp_executesql @sql
```
306738
Here is my config of YAML (all PV, Statefulset, and Service get created fine no issues in that). Tried a bunch of solution for connection string of Kubernetes mongo but didn't work any. Kubernetes version (minikube): 1.20.1 Storage for config: NFS (working fine, tested) OS: Linux Mint 20 YAML CONFIG: ``` apiVersion: v1 kind: PersistentVolume metadata: name: auth-pv spec: capacity: storage: 250Mi accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain storageClassName: manual nfs: path: /nfs/auth server: 192.168.10.104 --- apiVersion: v1 kind: Service metadata: name: mongo labels: name: mongo spec: ports: - port: 27017 targetPort: 27017 selector: role: mongo --- apiVersion: apps/v1 kind: StatefulSet metadata: name: mongo spec: selector: matchLabels: app: mongo serviceName: mongo replicas: 1 template: metadata: labels: app: mongo spec: terminationGracePeriodSeconds: 10 containers: - name: mongo image: mongo ports: - containerPort: 27017 volumeMounts: - name: mongo-persistent-storage mountPath: /data/db volumeClaimTemplates: - metadata: name: mongo-persistent-storage spec: storageClassName: manual accessModes: ["ReadWriteMany"] resources: requests: storage: 250Mi ```
I have found a few issues in your configuration file. In your service manifest you use

```
selector:
  role: mongo
```

but in your StatefulSet pod template you are using

```
labels:
  app: mongo
```

so the Service selector never matches the pods. One more thing: you should set `clusterIP: None` to make it a headless service, which is recommended for a StatefulSet if you want to access the db using its DNS name.
306855
I'm trying to answer this:

> In $\Bbb {R^3} $ consider the ellipsoid $2x^2+3y^2+4z^2=1$. There exists a subspace of dimension 2 whose intersection with the ellipsoid is a circle. Justify any answer.

I know that I must search for a plane, since these are the only linear subspaces of $\Bbb {R^3} $ of dimension 2, but I don't know what else to do to check whether such a plane exists. Any hint would be much appreciated. Thanks!
The problem becomes easier if you go to parametric equations:
$$x=a\cos(u)\cos(v)\\ y=b\cos(u)\sin(v)\\ z=c\sin(u)$$
A curve at constant distance $r$ from the origin satisfies $x^2+y^2+z^2=r^2$, where $r$ is the radius of the circle. You only have to find a $v$, for example, such that this holds along the curve:
$$r^2=(a^2\cos(v)^2+b^2\sin(v)^2)\cos(u)^2+c^2\sin(u)^2$$
If you want to obtain a constant independent of $u$, you need $(a^2\cos(v)^2+b^2\sin(v)^2)=c^2$. Provided $c$ lies between $a$ and $b$ (if this is not the case, you can change the parametrization of the ellipsoid, i.e. relabel the axes, and follow a similar reasoning), there will always be a solution to the equation, and the radius of the circle will be $r=c$. Now, once you have the $v$, you can go back to your parametrized ellipsoid and rebuild the plane that leads to such a circle (a plane is defined by three points). This is only one of infinitely many solutions (see the parametrization chapter of <https://en.wikipedia.org/wiki/Ellipsoid>). You can obtain the rest of the solutions by taking planes parallel to the one you have computed.
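Applying this to the ellipsoid in the question as a quick worked check (relabelling the axes so that the middle semi-axis plays the role of $c$, as described above): for $2x^2+3y^2+4z^2=1$ the semi-axes are $1/\sqrt{2}$, $1/\sqrt{3}$ and $1/2$, so take $a^2=1/2$, $b^2=1/4$, $c^2=1/3$. Solving $a^2\cos^2(v)+b^2\sin^2(v)=c^2$ gives
$$\cos^2(v)=\frac{c^2-b^2}{a^2-b^2}=\frac{1/3-1/4}{1/2-1/4}=\frac{1/12}{1/4}=\frac13,$$
so $v=\arccos(1/\sqrt{3})$ works and the circular section has radius $r=c=1/\sqrt{3}$.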
307738
Currently I am receiving the following error when using Java to decrypt a Base64 encoded RSA encrypted string that was made in C#: > > javax.crypto.BadPaddingException: Not PKCS#1 block type 2 or Zero padding > > > The setup process between the exchange from .NET and Java is done by creating a private key in the .NET key store then from the PEM file extracted, created use keytool to create a JKS version with the private key. Java loads the already created JKS and decodes the Base64 string into a byte array and then uses the private key to decrypt. Here is the code that I have in C# that creates the encrypted string: ``` public string Encrypt(string value) { byte[] baIn = null; byte[] baRet = null; string keyContainerName = "test"; CspParameters cp = new CspParameters(); cp.Flags = CspProviderFlags.UseMachineKeyStore; cp.KeyContainerName = keyContainerName; RSACryptoServiceProvider rsa = new RSACryptoServiceProvider(cp); // Convert the input string to a byte array baIn = UnicodeEncoding.Unicode.GetBytes(value); // Encrypt baRet = rsa.Encrypt(baIn, false); // Convert the encrypted byte array to a base64 string return Convert.ToBase64String(baRet); } ``` Here is the code that I have in Java that decrypts the inputted string: ``` public void decrypt(String base64String) { String keyStorePath = "C:\Key.keystore"; String storepass = "1234"; String keypass = "abcd"; byte[] data = Base64.decode(base64String); byte[] cipherData = null; keystore = KeyStore.getInstance("JKS"); keystore.load(new FileInputStream(keyStorePath), storepass.toCharArray()); RSAPrivateKey privateRSAKey = (RSAPrivateKey) keystore.getKey(alias, keypass.toCharArray()); Cipher cipher = Cipher.getInstance("RSA/ECB/PKCS1Padding"); cipher.init(Cipher.DECRYPT_MODE, privateRSAKey); cipherData = cipher.doFinal(data); System.out.println(new String(cipherData)); } ``` Does anyone see a step missing or where the padding or item needs to be changed? I have done hours of reading on this site and others but haven't really found a concrete solution. You're help is vastly appreciated. Thanks. -Matt
Check that you have correctly exchanged the key. Trying to decrypt with an incorrect key is indistinguishable from decrypting badly padded data.
308029
I have a `TestMethod` that will loop through all pages that contain a certain user control on them. The issue I'm running into is that when/if my assertion fails I'm not able to see the page it failed on in the error message or stack trace. Is there a way to customize or add in additional parameters to be shown in the test results details? Not that it's really needed, but here's my code... ``` [TestMethod] public void uiTestCourseListingPages() { UiBrowserWindow uiBrowserWindow = new UiBrowserWindow(); string controlType = "~/_control/course/courseList.ascx"; var request = WebRequest.Create(Utility.GET_PAGES_WITH_CONTROL_URL + controlType); request.ContentType = "application/json; charset=utf-8"; using(var response = request.GetResponse()) { using(var streamReader = new StreamReader(response.GetResponseStream())) { JavaScriptSerializer serializer = new JavaScriptSerializer(); List<PagesWithControl> pagesWithControl = serializer.Deserialize<List<PagesWithControl>>(streamReader.ReadToEnd()); pagesWithControl.ForEach(x => { // launch browser uiBrowserWindow.launchUrl(x.key); // setup assertions Assert.AreEqual( uiBrowserWindow.uiHtmlDocument.searchHtmlElementByAttributeValues<HtmlDiv>(new Dictionary<string, string> { {HtmlDiv.PropertyNames.Class, "footer"} }).Class, "footer" ); }); } } } ```
Sunshirond idea, works perfect for me, and I made some changes in <http://www.android10.org/index.php/articleslibraries/291-twitter-integration-in-your-android-application> source code. The main activity to login and twitt will be something like that: ``` public class PrepareRequestTokenActivity extends Activity { final String TAG = getClass().getName(); private OAuthConsumer consumer; private OAuthProvider provider; private WebView webView; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.my_webview); try { this.consumer = new CommonsHttpOAuthConsumer(Constants.CONSUMER_KEY, Constants.CONSUMER_SECRET); this.provider = new CommonsHttpOAuthProvider(Constants.REQUEST_URL,Constants.ACCESS_URL,Constants.AUTHORIZE_URL); } catch (Exception e) { Log.e(TAG, "Error creating consumer / provider",e); } Log.i(TAG, "Starting task to retrieve request token."); webView = (WebView) findViewById(R.id.mywebview); webView.setWebViewClient(new MyWebViewClient(getApplicationContext(), PreferenceManager.getDefaultSharedPreferences(this))); webView.setScrollBarStyle(View.SCROLLBARS_INSIDE_OVERLAY); WebSettings webSettings = webView.getSettings(); webSettings.setSavePassword(false); webSettings.setSupportZoom(false); webSettings.setSaveFormData(false); webSettings.setJavaScriptEnabled(true); webSettings .setCacheMode(WebSettings.LOAD_NO_CACHE); try { final String url = provider.retrieveRequestToken(consumer, Constants.OAUTH_CALLBACK_URL); webView.loadUrl(url); } catch (Exception e) { webView.clearHistory(); webView.clearFormData(); webView.clearCache(true); finish(); } } private class MyWebViewClient extends WebViewClient { private Context context; private SharedPreferences prefs; public MyWebViewClient(Context context, SharedPreferences prefs) { this.context = context; this.prefs = prefs; } @Override public boolean shouldOverrideUrlLoading(WebView view, String url) { if (url.contains(Constants.OAUTH_CALLBACK_SCHEME)) { Uri uri = Uri.parse(url); new RetrieveAccessTokenTask(context,consumer,provider,prefs).execute(uri); finish(); return true; } return false; } } public class RetrieveAccessTokenTask extends AsyncTask<Uri, Void, Void> { private Context context; private OAuthProvider provider; private OAuthConsumer consumer; private SharedPreferences prefs; public RetrieveAccessTokenTask(Context context, OAuthConsumer consumer,OAuthProvider provider, SharedPreferences prefs) { Log.d("hi there", "im gonna make the twitt"); this.context = context; this.consumer = consumer; this.provider = provider; this.prefs=prefs; } /** * Retrieve the oauth_verifier, and store the oauth and oauth_token_secret * for future API calls. 
*/ @Override protected Void doInBackground(Uri...params) { final Uri uri = params[0]; final String oauth_verifier = uri.getQueryParameter(OAuth.OAUTH_VERIFIER); try { provider.retrieveAccessToken(consumer, oauth_verifier); final Editor edit = prefs.edit(); edit.putString(OAuth.OAUTH_TOKEN, consumer.getToken()); edit.putString(OAuth.OAUTH_TOKEN_SECRET, consumer.getTokenSecret()); edit.commit(); String token = prefs.getString(OAuth.OAUTH_TOKEN, ""); String secret = prefs.getString(OAuth.OAUTH_TOKEN_SECRET, ""); consumer.setTokenWithSecret(token, secret); //context.startActivity(new Intent(context,AndroidTwitterSample.class)); executeAfterAccessTokenRetrieval(); Log.i(TAG, "OAuth - Access Token Retrieved"); } catch (Exception e) { Log.e(TAG, "OAuth - Access Token Retrieval Error", e); } return null; } private void executeAfterAccessTokenRetrieval() { String msg = getIntent().getExtras().getString("tweet_msg"); try { TwitterUtils.sendTweet(prefs, msg); } catch (Exception e) { Log.e(TAG, "OAuth - Error sending to Twitter", e); } } } ``` } If you have some problems make it work, mail me... may I upload the projecto to google code. Note- here is the XML for the webview: ``` <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:orientation="vertical" > <WebView android:id="@+id/mywebview" android:layout_width="fill_parent" android:layout_height="fill_parent" /> ```
308408
I have a unique problem which I'm hoping someone can assist with. I have one big text file, our **Production** file. The data in the file is delimited in the following format:

```
Reference|Cost Centre|Analytics Base Value|....
UMBY_2288|023437|2883484|...
NOT_REAL|1343534|283434|...
```

The average size of this file is about 30MB, with about 120000 rows. I then have about 20 **Regional** files. These files are similar to the big file in structure, except that they are smaller, with an average size of 50000 rows.

Now I have to loop through each line in the big Prod file. For each **Reference** code, I have to search through each of the 'Regional' files to see which ones contain that specific reference code, and then copy some of the data from that line into a report. There is no way of predetermining which files to look into, and each reference can be in multiple **Regional** files. As you can imagine, looping through each row in each file multiple times is a very time-consuming process. Due to memory constraints, I can't load the files into memory.

Does anyone have any smart ideas on how I can do this? I don't need code samples, just pointers on ways I could solve this problem. I'm developing the tool in C#.
The solution is to read each file once, storing the data in memory. Keep an associative array or similar data structure where the key is the reference number. Then, as you process the master file, looking up each reference should take just microseconds. If the data is too big to fit in memory, you can create a temporary sqlite database.
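A rough sketch of that lookup idea (shown in Python for brevity, though the tool is in C#, where a `Dictionary<string, List<string>>` plays the same role; the file names and the assumption that `Reference` is the first `|`-delimited column are illustrative):

```python
import glob

# Build the lookup once: reference code -> where it occurs in the regional files.
lookup = {}
for path in glob.glob("regional_*.txt"):            # placeholder pattern for the ~20 regional files
    with open(path) as fh:
        next(fh, None)                               # skip the header row
        for line in fh:
            ref = line.split("|", 1)[0]
            lookup.setdefault(ref, []).append((path, line.rstrip("\n")))

# Single pass over the big production file; each lookup is a hash probe, not a file scan.
with open("production.txt") as src, open("report.txt", "w") as report:
    next(src, None)                                  # skip the header row
    for line in src:
        ref = line.split("|", 1)[0]
        for regional_path, regional_line in lookup.get(ref, []):
            report.write(f"{ref}\t{regional_path}\t{regional_line}\n")
```

Only the regional files (a few million short rows at most) are held in memory; the production file is streamed line by line.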
308832
I'm learning the [Hibernate Search Query DSL](http://docs.jboss.org/hibernate/search/4.1/reference/en-US/html/search-query.html#search-query-querydsl), and I'm not sure how to construct queries using boolean arguments such as AND or OR. For example, let's say that I want to return all person records that have a `firstName` value of "bill" or "bob". Following the hibernate docs, one example uses the bool() method w/ two subqueries, such as: ``` QueryBuilder b = fts.getSearchFactory().buildQueryBuilder().forEntity(Person.class).get(); Query luceneQuery = b.bool() .should(b.keyword().onField("firstName").matching("bill").createQuery()) .should(b.keyword().onField("firstName").matching("bob").createQuery()) .createQuery(); logger.debug("query 1:{}", luceneQuery.toString()); ``` This ultimately produces the lucene query that I want, but is this the proper way to use boolean logic with hibernate search? Is "should()" the equivalent of "OR" (similarly, does "must()" correspond to "AND")?. Also, writing a query this way feels cumbersome. For example, what if I had a collection of firstNames to match against? Is this type of query a good match for the DSL in the first place?
Yes, your example is correct. The boolean operators are called *should* instead of *OR* because of the names they have in the Lucene API and documentation, and because it is more appropriate: it does not only influence a boolean decision, it also affects scoring of the result. For example, if you search for cars "of brand Fiat" OR "blue", the cars branded Fiat AND blue will also be returned and will have a higher score than those which are blue but not Fiat. It might feel cumbersome because it's programmatic and provides many detailed options. A simpler alternative is to use a simple string for your query and use the *QueryParser* to create the query. Generally the parser is useful for parsing user input, while the programmatic API is easier for dealing with well-defined fields; for example, if you have the collection you mentioned, it's easy to build the query in a *for* loop.
309629
Suppose $P$ is a polyhedron whose representation as a system of linear inequalities is given to us:
$$ P = \{ x ~|~ Ax \leq b\}$$
Define $P'$ to be the image of $P$ under the linear transformation which maps $x$ to $Cx$:
$$ P' = \{ Cx ~|~ x \in P \}.$$
Given $A,b,C$, our goal is to compute a representation of $P'$ as a system of linear inequalities, i.e., to compute $D,e$ satisfying
$$ P' = \{ x ~|~ D x \leq e \}.$$
What is the complexity of this problem (i.e., how many arithmetic operations or bit operations are required)? Let us adopt the convention that $A \in R^{m \times n}$ while $C \in R^{k \times n}$, so the answer should be in terms of $m,n,k$. Further, one may suppose that the entries of $A$, $b$, and $C$ are rational numbers whose numerators and denominators take $B$ bits to specify, so that in the bit model answers should be in terms of $B$ as well.
I would say the following should be not too controversial: * $\emptyset$ and $\varnothing$ are typographical variants of the same mathematical symbol designating the empty set * The symbol was introduced by Bourbaki, was inspired by the Norwegian character Ø, but is a distinct character from it * The intention was most probably to create a symbol related to $0$ (zero), not to O (Oh), distinguished from it by striking it through. After all the empty set has all kinds of relations with the number $0$, but none with the letter O. (By contrast big-Oh and little-o symbols derive from the word "order".) * The symbol has absolutely no relation (apart from appearance) with the lower-case Greek letter phi, with typographical variants $\phi$ and $\varphi$.
309748
I have an Angular reactive form with works most of the time. But sometimes, randomly, I get an error ``` ERROR Error: Cannot find control with name: 'username' at _throwError (forms.js:2337) at setUpControl (forms.js:2245) at FormGroupDirective.push../node_modules/@angular/forms/fesm5/forms.js.FormGroupDirective.addControl (forms.js:5471) at FormControlName.push../node_modules/@angular/forms/fesm5/forms.js.FormControlName._setUpControl (forms.js:6072) at FormControlName.push../node_modules/@angular/forms/fesm5/forms.js.FormControlName.ngOnChanges (forms.js:5993) at checkAndUpdateDirectiveInline (core.js:21093) at checkAndUpdateNodeInline (core.js:29495) at checkAndUpdateNode (core.js:29457) at debugCheckAndUpdateNode (core.js:30091) at debugCheckDirectivesFn (core.js:30051) ``` The component is a simple login page, HTML is ``` <div id="loginSection" class="middle-dashboard-box"> <div class="login-title content-inner-title" i18n="logintitle|@@loginheader">Login</div> <form class="login-form" [formGroup]="loginFormGroup" (ngSubmit)="login($event)"> <mymat-form-error></mymat-form-error> <div> <mat-form-field class="login-input input-control-width"> <input matInput formControlName="username" i18n-placeholder="Username|@@username" placeholder="Username"> </mat-form-field> </div> <div> <mat-form-field class="login-input input-control-width"> <input matInput formControlName="password" type="password" i18n-placeholder="Password|@@password" placeholder="Password"> </mat-form-field> </div> <div class="forgot-pass"> <a href="#public/forgotpassword" i18n="forgot password|@@forgotPasswordLink">Forgot password?</a> </div> <button mat-raised-button color="primary" type="submit" class="main-btn login-btn" i18n="Login|Login button action@@login">Login</button> </form> </div> ``` and TS is ``` @Component( { templateUrl: './login.component.html', styleUrls: ['./login.component.scss'] } ) export class LoginComponent implements OnInit { loginFormGroup: FormGroup; constructor( private router: Router, private formBuilder: FormBuilder, private loginService: LoginService ) { } ngOnInit() { this.loginFormGroup = this.formBuilder.group( { username: ['', Validators.required], password: ['', Validators.required] } ); this.loginFormGroup.controls['username'].setValue( this.loginService.latestUsername ); } login( event ) { // do login stuff, not relevant here } } ``` `this.loginService.latestUsername` points to a getter in the LoginService: ``` get latestUsername(): string { return localStorage.getItem( LoginService.LATESTUSERNAME_KEY ) || '' } ``` The `FormControlName.ngOnChanges` in the stacktrace makes me think ng is running a change detection before the control is properly initialized, but as there is no subscription or similar around, I don't see what could be triggering it. I'm observing the error during Selenium tests (there reliably on some tests, intermittently on others), but have had it once or twice while clicking around manually. The effect only occurs since upgrading to Angular 8.2.14 (from 7)
Looking at the [source code](https://github.com/angular/angular/blob/master/packages/forms/src/directives/reactive_directives/form_control_name.ts) of `FormControlName` and thinking about [when ngOnChanges() can run](https://angular.io/guide/lifecycle-hooks), I reckon `ngOnChanges()` is running before `ngOnInit()` has finished building the form. To diagnose this, I would litter both `ngOnChanges()` and `ngOnInit()` with `console.log()` to trace what's going on. To get around the problem (if this is the cause), it should be as simple as guarding the form with `*ngIf` to make sure the directives aren't prematurely triggering change detection. Ideally you could also use `OnPush` change detection strategy.
310445
I have this form validation i copied from a youtube video but it is coded using cakephp 1.3. so i'm having a hard time migrating it to 2.1 CommentsController.php - function validate\_form() ``` function validate_form() { if($this->RequestHandler->isAjax()) { $this->request->data['Comment'][$this->request['form']['field']] = $this->request['form']['value']; $this->Comment->set($this->request->data); if($this->Comment->validates()) { $this->autorender = FALSE; } else { $error = $this->validateErrors($this->Comment); $this->set('error', $error[$this->request->params['data']['field']]); } } } ``` validate\_form.ctp ``` <?php echo $error; ?> ``` error: ``` Undefined variable: error [APP\View\Comments\validate_form.ctp, line 1] ``` how come it is undefined? please help me my boss gonna fire me if I didn't do this. I'm just a probationary employee a 5 days old junior web dev
With your current code, `$error` is only set if the submitted form does not validate, but it is not set when the form does validate properly. You'll want to add a check to your view, prior to spitting it out, like: ``` if(isset($error)) { echo $error; } else { echo "Form is valid"; // Optionally echo something else if everything went OK. } ``` But this can't possibly be your entire view? This bit will only show the validation errors if any are present, but nothing else.
310808
I have this query interface: ``` public interface IMyDocDao extends MongoRepository<MyDoc, String> { @Query(value = "{ 'stuff.in.nested.doc' : ?0, 'another.stuff.in.nested.doc' : ?1 }") List<MyDoc> findByFoo(String foo, String bar); } ``` If a call `findByFoo("a", "b")` it works correctly. If a call `findByFoo("", "b")` it returns the documents for which the first parameter is blank and the second one is "b". If a call `findByFoo(null, "b")` it returns the documents for which there is no property `stuff.in.nested.doc` and the second one is "b". Is there a way to tell the query to ignore the parameter `foo` in case it is blank/empty ? In other words, I would like `findByFoo("", "b")` to simply return all the documents for which the parameter `bar` is "b", ignoring the first parameter. As the number of parameters may increase, writing multiple methods is not a scalable option.
No, you won't need an Apache server, because Node itself will serve as the server, especially if you are working with frameworks like Express. You don't need Nginx or Apache at all, but you can use them if you want. It is quite common for people to use Nginx to do the load balancing, or even other things like handling HTTPS or serving static content. It's your choice in the end. For best performance, though, you'll combine Node.js with Nginx, depending on the needs of your application. Nginx does a better job of serving static files, though the highest performance for static files will come from using a CDN. Most often, you'll use Nginx as a reverse proxy: the web request will be received by Nginx, which acts as a load balancer in front of several identical or subdivided servers. If it needs to serve static files as well, it will just answer those requests directly.
311024
I'm wanting to setup Bootstrap alerts for successful and unsuccessful inserts. I've had a look around and couldn't find any examples of people having 2 redirects one for a successful insert and one for a failed insert. ``` if(isset($_POST['CountryID'])) { $CountryID = $_POST['CountryID']; if(isset($_POST['CountryName'])) { $CountryName = $_POST['CountryName']; } if(isset($_POST['GPD'])){ $GDP = $_POST['GPD']; } $stmt = oci_parse($conn, "INSERT INTO COUNTRY (COUNTRYNAME, GDP) VALUES (:CountryName, :GDP) WHERE COUNTRYID = :CountryID"); ocibindbyname($stmt, ":CountryID", $CountryID); oci_bind_by_name($stmt, ":CountryName", $CountryName); oci_bind_by_name($stmt, ":GDP", $GDP); oci_execute($stmt); oci_commit($conn); oci_free_statement($stmt); //insert if successful here header("Location: ../index.php?Success=Country has been updated!"); //insert else here header("Location: ../index.php?Danger=Country update failed!"); } ```
It's very easy with Flex box. Check [this awesome guide here](https://css-tricks.com/snippets/css/a-guide-to-flexbox/) ```css /* Normalize */ html,body { height: 100%; margin: 0; padding: 0; } /* Flex power */ .flex-container { display: flex; flex-direction: column; height: 100%; } .flex-container > * { width: 100%; /* Not mandatory, but to be sure */ } .flex-25 { height: 25%; background: red; } .flex-50 { height: 50%; background: blue; } ``` ```html <div class="flex-container"> <div class="flex-25"> </div> <div class="flex-50"> </div> <div class="flex-25"> </div> </div> ``` --- EDIT ---- ```css /* Normalize */ html,body { height: 100%; margin: 0; padding: 0; line-height: 2em; } /* Flex power */ .flex-container { display: flex; align-items: stretch; min-height: 100%; } .flex-25 { width: 25%; background: #333; color: #EEE; } .flex-50 { width: 50%; background: #EEE; } ``` ```html <div class="flex-container"> <div class="flex-25"> <p>Not much here</p> </div> <div class="flex-50"> <p>To infinity<br/>and infinity<br/>and infinity<br/>and infinity<br/>and infinity<br/>and infinity<br/>and infinity<br/>and infinity<br/>and infinity<br/>and infinity<br/>and infinity<br/>and infinity<br/>..<br/>..<br/>..<br/>..</p> </div> <div class="flex-25"> </div> </div> ```
311198
This is my test: ``` @Test public void shouldProcessRegistration() throws Exception { Spitter unsaved = new Spitter("Gustavo", "Diaz", "gdiaz", "gd123"); Spitter saved = new Spitter(24L, "Gustavo", "Diaz", "gdiaz", "gd123"); SpitterRepository spittlerRepository = Mockito.mock(SpitterRepository.class); Mockito.when(spittlerRepository.save(unsaved)).thenReturn(saved); SpitterController spittleController = new SpitterController(spittlerRepository); MockMvc mockSpittleController = MockMvcBuilders.standaloneSetup(spittleController).build(); mockSpittleController.perform(MockMvcRequestBuilders.post("/spitter/register") .param("firstName", "Gustavo") .param("lastName", "Diaz") .param("userName", "gdiaz") .param("password", "gd123")) .andExpect(MockMvcResultMatchers.redirectedUrl("/spitter/" + saved.getUserName())); Mockito.verify(spittlerRepository, Mockito.atLeastOnce()).save(unsaved); } ``` This is my controller: ``` @Controller @RequestMapping(value = "spitter") public class SpitterController { SpitterRepository spitterRepository; @Autowired public SpitterController(SpitterRepository spittlerRepository) { this.spitterRepository = spittlerRepository; } @RequestMapping(value = "/register", method = RequestMethod.POST) public String processRegistration(Spitter spitter){ spitterRepository.save(spitter); return "redirect:/spitter/" + spitter.getUserName(); } } ``` I want to verify that `spitterRepository.save` was called passing the same `unsaved` object I defined in the test. But i'm getting this exception: ``` Argument(s) are different! Wanted: spitterRepository.save( spittr.Spitter@3bd82cf5 ); -> at spitter.controllers.test.SpitterControllerTest.shouldProcessRegistration(SpitterControllerTest.java:48) Actual invocation has different arguments: spitterRepository.save( spittr.Spitter@544fa968 ); ```
Use an `ArgumentCaptor` to capture the value passed to save, and then assert on it.

```
ArgumentCaptor<Spitter> spitterArgument = ArgumentCaptor.forClass(Spitter.class);
verify(spittlerRepository, atLeastOnce()).save(spitterArgument.capture());
assertEquals("Gustavo", spitterArgument.getValue().getFirstName()); // assuming the usual getter for the first constructor argument
```

For asserting that the bean is the same, I would recommend using Hamcrest's `samePropertyValuesAs` (<http://hamcrest.org/JavaHamcrest/javadoc/1.3/org/hamcrest/beans/SamePropertyValuesAs.html>)
311498
I have a linked list which takes several input files, then it put them into a linked-list to print them later on. I implemented a print function but it does not work well and gives access violation error. I tried to debug, unfortunetly I could not find the root of problem. Error Line in the function: ``` cout << ptr2->command + " "; ``` Run-Time Error : > > First-chance exception at 0x00DAC616 in the file.exe: 0xC0000005: Access violation reading location 0xCDCDCDE1. > > > Here is the code: ``` #include <iostream> #include <fstream> #include <string> #include "strutils.h" using namespace std; struct Commands; struct Functions { string fname; Functions *right; Commands *down; }; struct Commands { string command; Commands *next; }; Functions *head; Functions *temp; Commands *temp2; void StreamToLinkedList(ifstream &inputfile) { string s; getline(inputfile, s); temp = new Functions(); temp->fname = s.substr(0, s.length()); temp2 = temp->down; while (!inputfile.eof()) { getline(inputfile, s); temp2 = new Commands(); temp2->command = s.substr(0, s.length()-1) + ","; temp2 = temp2->next; } inputfile.clear(); inputfile.seekg(0); } void printLinkedList() { Functions *ptr = head; Commands *ptr2; while (ptr != nullptr) { cout << ptr->fname << endl; ptr2 = ptr->down; while (ptr2 != nullptr) { cout << ptr2->command + " "; ptr2 = ptr2->next; } cout << endl; ptr = ptr->right; } } int main() { string file, key, s; ifstream input; cout <<"If you want to open a service (function) defining the file," << endl <<"then press (Y/y) for 'yes', otherwise press any single key" << endl; cin >> key; ToLower(key); if (key == "y") { cout << "Enter file the input file name: "; cin >> file; input.open(file.c_str()); if (input.fail()) { cout << "Cannot open the file." << endl << "Program terminated." << endl; cin.get(); cin.ignore(); return 0; } else { StreamToLinkedList(input); head = temp; temp = temp->right; } } else { cout << "Cannot found any input file to process" <<endl << "Program terminated."<< endl; cin.get(); cin.ignore(); return 0; } do { cout<< "Do you want to open another service defining file?"<<endl << "Press (Y/y) for 'yes', otherwise press any key" <<endl; cin >> key; ToLower(key); if (key == "y") { cout << "Enter file the input file name: "; cin >> file; input.open(file.c_str()); if (input.fail()) { cout << "Cannot open the file." << endl << "Program terminated." << endl; cin.get(); cin.ignore(); return 0; } else { StreamToLinkedList(input); temp = temp->right; } } } while ( key == "y"); cout << "-------------------------------------------------------------------" << endl << "PRINTING AVAILABLE SERVICES (FUNCTIONS) TO BE CHOSEN FROM THE USERS" << endl << "-------------------------------------------------------------------" << endl << endl; printLinkedList(); cin.get(); cin.ignore(); return 0; } ``` What could be the code of the error ?
It appears the response from <http://www.merdeka.com> is gzipped compressed. Give this a try: ``` import gzip import urllib.request def __retrieve_html(self, address): with urllib.request.urlopen(address) as resp: html = resp.read() Helper.log('HTML length', len(html)) Helper.log('HTML content', html) if resp.info().get('Content-Encoding') == 'gzip': html = gzip.decompress(html) return html ``` How to decode your `html` object, I leave as an exercise to you. Alternatively, you could just use the Requests module: <http://docs.python-requests.org/en/latest/> Install it with: ``` pip install requests ``` Then execute like: ``` import requests r = requests.get('http://www.merdeka.com') r.text ``` Requests didn't appear to have any trouble with the response from <http://www.merdeka.com>
311714
I was trying to bind View Bag data to a HTML table. My C# code works fine. Can add data to View Bag as `ViewBag.employeeData = getDBData();` but when i try to access each item, > > Microsoft.CSharp.RuntimeBinder.RuntimeBinderException is thrown. > > > Below given my **C#** code ``` private IEnumerable<Employees> getDBData() { SqlConnection con = null; Employees empData; string sqlConn = ConfigurationManager.ConnectionStrings["dbConnectionString"].ConnectionString; try { using (con = new SqlConnection(sqlConn)) { SqlCommand command = new SqlCommand("SELECT ID,FirstName,LastName,Gender,Salary FROM Employees", con); con.Open(); SqlDataReader read = command.ExecuteReader(); List<Employees> empDetails = new List<Employees>(); while (read.Read()) { empData = new Employees(); empData.ID = Convert.ToInt32(read["ID"]); empData.FirstName = Convert.ToString(read["FirstName"]); empData.LastName = Convert.ToString(read["LastName"]); empData.Gender = Convert.ToString(read["Gender"]); empData.Salary = Convert.ToInt32(read["Salary"]); empDetails.Add(empData); } return empDetails; } } catch (Exception) { return null; } finally { con.Dispose(); } } ``` **Razor** ``` <table class="table table-striped table-condensed"> <thead> <tr> <th>ID</th> <th>FirstName</th> <th>LastName</th> <th>Gender</th> <th>Salary</th> </tr> </thead> <tbody> @foreach (var item in ViewBag.employeeData) { <tr> <td>@item.ID</td> <td>@item.FirstName</td> <td>@item.LastName</td> <td>@item.Gender</td> <td>@item.Salary</td> </tr> } </tbody> </table> ``` Exception: > > 'object' does not contain a definition for 'ID'. > > > How to resolve this?
You don't need to use the ViewBag (and I would recommend never using it). When you return a value from a controller: ``` return empDetails; ``` It becomes the *model* for the *view*. In the view, you need to declare the model type: ``` @model List<Employees> ``` Then you can: ``` @foreach (var item in Model) { <tr> <td>@item.ID</td> <td>@item.FirstName</td> <td>@item.LastName</td> <td>@item.Gender</td> <td>@item.Salary</td> </tr> } ``` > > How does razor know to hold List which is returned from controller class. > > > This is the designed convention of [asp.net-mvc](/questions/tagged/asp.net-mvc "show questions tagged 'asp.net-mvc'"). [Dynamic v. Strongly Typed Views](http://learn.microsoft.com/en-us/aspnet/mvc/overview/views/dynamic-v-strongly-typed-views).
312357
I am trying to perform a SQL query that removes all the rows where people have fewer than 10 people or more than 1000 people in their network.

```
SELECT * FROM table1 Where inNetwork > 10 AND < 1000,
```

Basically, what I mean by this query is to only show people with more than 10 and less than 1000 people in their network, but it isn't working. When I try only one condition, e.g.

```
SELECT * FROM table1 Where inNetwork > 10
```

then it works, but I want to filter on two conditions.
It is a bit more subtle. The `self()` call happens inside the `fun()` which is spawned. Since it is spawned, it has a different Pid than the one you expect. Hence, the semantics are different.
312474
I want to try and find the employee who has taught the most classes as the position `Teacher`. So in this I want to print out `Nick`, as he has taught the most classes as a `Teacher`. However, I am getting the error: > > ERROR: column "e.name" must appear in the GROUP BY clause or be used in an aggregate function Position: 24 > > > ``` CREATE TABLE employees ( id integer primary key, name text ); CREATE TABLE positions ( id integer primary key, name text ); CREATE TABLE teaches ( id integer primary key, class text, employee integer, position integer, foreign key (employee) references employees(id), foreign key (position) references positions(id) ); INSERT INTO employees (id, name) VALUES (1, 'Clive'), (2, 'Johnny'), (3, 'Sam'), (4, 'Nick'); INSERT INTO positions (id, name) VALUES (1, 'Assistant'), (2, 'Teacher'), (3, 'CEO'), (4, 'Manager'); INSERT INTO teaches (id, class, employee, position) VALUES (1, 'Dancing', 1, 1), (2, 'Gardening', 1, 2), (3, 'Dancing', 1, 2), (4, 'Baking', 4, 2), (5, 'Gardening', 4, 2), (6, 'Gardening', 4, 2), (7, 'Baseball', 4, 1), (8, 'Baseball', 2, 1), (9, 'Baseball', 4, 2); ``` The SQL statement I am trying to use: ``` SELECT count(t.class), e.name FROM positions p JOIN teaches t ON p.id = t.position JOIN employees e ON e.id = t.employee WHERE p.name = 'Teacher' GROUP BY t.employee; ``` I've been working on this on a sql fiddle: <http://www.sqlfiddle.com/#!17/a8e19c/3>
Your code would work if you make this a rowwise operation so each `start` and `end` represent a single value at a time. ``` library(data.table) a[,S := sum(series[start:end]), 1:nrow(a)] a # start end S #1: 1 4 10 #2: 2 5 14 #3: 3 6 18 ```
312976
I've got quite a few SQL statements like this:

`SELECT foo FROM things WHERE user_id IN (1,2,3..n)`

Is there a known limit to the number of elements that will safely fit in an IN clause like that?
The 1000 limit in PostgreSQL is not a hard limit, it is an optimization limit, i.e., past 1000 entries PostgreSQL doesn't handle it very well. Of course, I have to ask: what in the world are you doing with a 1000-entry IN clause?
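If the list really is that big, a common workaround is to give the planner something it can join against instead of a literal IN list. A rough sketch, reusing the hypothetical `things` table and `user_id` column from the question:

```
-- PostgreSQL: pass the ids as an array instead of a literal IN list
SELECT foo FROM things WHERE user_id = ANY (ARRAY[1, 2, 3]);

-- Or stage the ids in a temporary table and join against it
CREATE TEMPORARY TABLE wanted_ids (user_id integer PRIMARY KEY);
-- ... COPY or bulk-insert the ids into wanted_ids here ...
SELECT t.foo
FROM things t
JOIN wanted_ids w ON w.user_id = t.user_id;
```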
313324
I have a basic component that looks as follows. ``` class List extends React.Component { constructor() { super(...arguments); this.state = { selected: null, entities: new Map([ [0, { 'name': 'kot'} ], [1, { 'name': 'blini'} ] ]) }; } render() { return (<div> <ul>{this.renderItems()}</ul> </div>) } renderItems() { return Array.from(this.state.entities.entries()).map(s => { const [ id, entry ] = s; return <li key={id} onClick={() => this.setState(state => ({ selected: id }))} style={{ color: id === this.state.selected ? 'red' : 'black' }} >{entry.name}</li> }) } } ``` This works in order to allow me to click on any element and select it. A selected element will appear red. [codepen for easy editing](http://codepen.io/AlexFrazer/pen/wzWqLO?editors=1010). However, I want behavior that will unset any currently selected item if a click event was found that was not one of these `<li>` elements. How can this be done in React?
`quizForm` is never cleared, so we just keep appending to it forever. I added this before the loop that handles that div: `quizForm.innerHTML = "";`

```
function askQuestion(){
    choices = allQuestions[currentQuestion].choices;
    question = allQuestions[currentQuestion].question;

    if(currentQuestion < allQuestions.length){
        submitBtn.value = "Next";
        quizQuestion.innerHTML = "<p class='quizQuestion'>" + question + "</p>";

        // clear the previously rendered choices before appending the new ones
        quizForm.innerHTML = "";

        for(var i = 0; i < choices.length; i++){
            quizForm.innerHTML += " <label class='answerChoice'><input type= 'radio' name= 'radioOption' value=' " + choices[i] + "'/>" + choices[i] + "</label>" + "<br>";
        }

        if(currentQuestion == allQuestions.length - 1){
            submitBtn.value = "Submit";
        }
        else if(currentQuestion > allQuestions.length - 1){
            calcQuiz();
        }
    }
}
```
313721
I have the following map in my game,

```
x-------------
y +3 +5 +2 -1 -2
| -1 +4 +2 +1 -1
| -2 +1 -1 -1 -1
| -1 -1 -2 -1 +2
| +2 +2 +2 +2 +4
```

I want to draw the positive points as polygons, like the following:

![willingimg](https://i.stack.imgur.com/X2aRo.png)

How can I do that, and what algorithm/approach should I use?
```
use strict;
use warnings;

use HTML::TreeBuilder;

my $str = "Your HTML STRING";

# Now create a new tree to parse the HTML
my $tr = HTML::TreeBuilder->new_from_content($str);

# And now find all the required tags, e.g. li, and build an array from them
my @lists = map { $_->content_list } $tr->find_by_tag_name('li');

# And loop through the array, printing the value of each tag
foreach my $val (@lists) {
    print $val, "\n";
}
```

Do the same thing for all other tags. It is always recommended that you parse HTML instead of using regex for extraction purposes. It is very difficult to write a 100% accurate regex for the job.
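As a sketch of that, assuming the same `$tr` tree from above, extracting every anchor tag together with its `href` attribute looks much the same:

```
# Find all anchor tags in the same parsed tree
foreach my $a ($tr->find_by_tag_name('a')) {
    # attr() returns the attribute value, or undef when it is absent
    my $href = $a->attr('href');

    # as_text() flattens the element's content to plain text
    print $a->as_text, " => ", ($href // ''), "\n";
}
```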
313801
A drop-down contains values from an array. If I select one value from the drop-down, it removes that value from the array; that part is working. But on click of the reset button it should reset to the old values. Here is my code.

HTML code

```
<html>
<head>
    <script src="angular/angular.js"></script>
    <script src="js/controllers.js"></script>
</head>
<body ng-app="myApp">
    <div ng-controller="exerciseTypeCtrl">
        <select id="exerciseSuperCategory" data-role="listview" ng-options="Lay.id as Lay.value for Lay in Layer " ng-model="itemsuper" ng-change="changeData(itemsuper)">
        </select>
        <input type="button" ng-click="resetLayer()" value="Reset"/>
    </div>
</body>
</html>
```

AngularJS controller code

```
<script>
var myApp = angular.module('myApp',[]);
myApp.controller('exerciseTypeCtrl',function($scope)
{
    $scope.Layer = [
        { id: 1, value: '0.38'},
        { id: 2, value: '0.76'},
        { id: 3, value: '1.14'},
        { id: 4, value: '1.52'},
        { id: 5, value: '1.9'},
        { id: 6, value: '2.28'},
        { id: 7, value: '2.66'},
        { id: 8, value: '3.04'},
        { id: 9, value: '3.42'},
        { id: 10, value: '3.8'},
        { id: 11, value: '4.18'},
        { id: 12, value: '4.56'}
    ];

    $scope.changeData = function(value)
    {
        var coating = $scope.Layer;
        if(coating != null)
        {
            var j = coating.length;
            while(j > 0)
            {
                j = j - 1;
                var make = coating[j]['id'];
                var present = 0;
                if(make == value)
                {
                    coating.indexOf(make);
                    coating.splice(j,1);
                }
            }
        }
    }

    $scope.resetLayer -function()
    {
        $scope.Layer = $scope.Layer;
    }
});
</script>
```

Using splice I am removing the value selected in the dropdown, but on click of the button it's not resetting.

Thanks in advance
You should take a copy of the variable when you initialize/get the `Layer` data

```
var copyOfLayer = angular.copy($scope.Layer);
```

Then, when resetting, you need to assign the old array back to `$scope.Layer`, so rewrite your `resetLayer` function as below

```
$scope.resetLayer = function() {
    $scope.Layer = angular.copy(copyOfLayer)
}
```

[**Working Plunkr**](http://plnkr.co/edit/kmvO19mPf0cNZAmJysU3?p=preview)
314065
Here's the data I am working with right now:

```
$city = [
    [
        "name" => "Dhaka",
        "areas" => ["d1", "d2", "d3"]
    ],
    [
        "name" => "Chittagong",
        "areas" => ["c1", "c2", "c3"]
    ],
    [
        "name" => "Sylhet",
        "areas" => ["s1", "s2", "s3"]
    ],
    [
        "name" => "Barisal",
        "areas" => ["b1", "b2", "b3"]
    ],
    [
        "name" => "Khulna",
        "areas" => ["k1", "k2", "k3"]
    ],
    [
        "name" => "Rajshahi",
        "areas" => ["r1", "r2", "r3"]
    ],
];
```

I would like to get a list of the data associated with only the "name" key. I would also like to show the data associated with the "areas" key once the user goes for the corresponding "name". Any suggestions on how to better store this data for frequent usage in code would also be helpful.
As long as the names are unique you can do this simple trick: **Example 1** ``` $city = [ [ "name" => "Dhaka", "areas" => ["d1", "d2", "d3"] ], [ "name" => "Chittagong", "areas" => ["c1", "c2", "c3"] ], [ "name" => "Sylhet", "areas" => ["s1", "s2", "s3"] ], [ "name" => "Barisal", "areas" => ["b1", "b2", "b3"] ], [ "name" => "Khulna", "areas" => ["k1", "k2", "k3"] ], [ "name" => "Rajshahi", "areas" => ["r1", "r2", "r3"] ], ]; $names = array_column($city, null, 'name'); print_r($names); ``` Output ``` Array ( [Dhaka] => Array ( [name] => Dhaka [areas] => Array ( [0] => d1 [1] => d2 [2] => d3 ) ) [Chittagong] => Array ( [name] => Chittagong [areas] => Array ( [0] => c1 [1] => c2 [2] => c3 ) ) [Sylhet] => Array ( [name] => Sylhet [areas] => Array ( [0] => s1 [1] => s2 [2] => s3 ) ) [Barisal] => Array ( [name] => Barisal [areas] => Array ( [0] => b1 [1] => b2 [2] => b3 ) ) [Khulna] => Array ( [name] => Khulna [areas] => Array ( [0] => k1 [1] => k2 [2] => k3 ) ) [Rajshahi] => Array ( [name] => Rajshahi [areas] => Array ( [0] => r1 [1] => r2 [2] => r3 ) ) ) ``` Now you can lookup by name ``` print_r($city['Chittagong']['areas']); //["c1", "c2", "c3"] ``` If the names are not guaranteed to be unique, you'll have to build them using foreach, like this: **Example 2** ``` $city = [ [ "name" => "Dhaka", "areas" => ["d1", "d2", "d3"] ], [ "name" => "Chittagong", "areas" => ["c1", "c2", "c3"] ], [ /*--- ADDED to show duplication ---- */ "name" => "Chittagong", "areas" => ["x1", "x2", "x3"] ], [ "name" => "Sylhet", "areas" => ["s1", "s2", "s3"] ], [ "name" => "Barisal", "areas" => ["b1", "b2", "b3"] ], [ "name" => "Khulna", "areas" => ["k1", "k2", "k3"] ], [ "name" => "Rajshahi", "areas" => ["r1", "r2", "r3"] ], ]; $output = []; foreach($city as $k=>$v){ $k = $v['name']; if(!isset($output[$k])) $output[$k] = []; //initialize $output[$k][] = $v; } print_r($output); ``` Which will give you something like this: ``` Array ( [Dhaka] => Array ( [0] => Array ( [name] => Dhaka [areas] => Array ( [0] => d1 [1] => d2 [2] => d3 ) ) ) [Chittagong] => Array ( [0] => Array ( [name] => Chittagong [areas] => Array ( [0] => c1 [1] => c2 [2] => c3 ) ) [1] => Array ( [name] => Chittagong [areas] => Array ( [0] => x1 [1] => x2 [2] => x3 ) ) ) [Sylhet] => Array( ... ) ``` As you can see this adds an extra level in to contain the multiple occurrences. Of course you could just combine the `area` when duplicate (in the foreach). But that is up to you. Here is a quick example of that (same data as above): **Example 3** ``` $output = []; foreach($city as $k=>$v){ $k = $v['name']; if(!isset($output[$k])) $output[$k] = []; //initialize $output[$k] = array_merge($output[$k], $v['areas']); //merge the areas } print_r($output); ``` Output ``` Array ( [Dhaka] => Array ( [0] => d1 [1] => d2 [2] => d3 ) [Chittagong] => Array ( [0] => c1 [1] => c2 [2] => c3 [3] => x1 [4] => x2 [5] => x3 ) .... ) ``` If it was me, I would go with the last one, just because of the simplicity it will have working with it later. ***PS***. if your pulling data out of a Database with `PDO`, you can use `$stmt->fetchAll(PDO::FETCH_GROUP)` which will give you something like the second example automatically. The only caveat here is that the `name` column is the first one selected in the Query, which is the column it will group on. Cheers.
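To make that PDO note concrete, here is a rough sketch; the `$pdo` connection and the table/column names are assumptions, and the only real requirement is that the grouping column (`name`) is selected first:

```
// $pdo is an existing PDO connection (assumed)
$stmt = $pdo->query("SELECT name, area FROM city_areas");

// PDO::FETCH_GROUP keys the result by the first selected column (name),
// grouping the remaining columns under it, much like Example 2 above.
$grouped = $stmt->fetchAll(PDO::FETCH_GROUP | PDO::FETCH_ASSOC);

// PDO::FETCH_GROUP | PDO::FETCH_COLUMN would instead give name => [area, area, ...],
// which matches the shape of Example 3.
print_r($grouped);
```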
314303
Lets say I have `n` files in a directory with filenames: `file_1.txt, file_2.txt, file_3.txt .....file_n.txt`. I would like to import them into Python individually and then do some computation on them, and then store the results into `n` corresponding output files: `file_1_o.txt, file_2_o.txt, ....file_n_o.txt.` I've figured out how to import multiple files: ``` import glob import numpy as np path = r'home\...\CurrentDirectory' allFiles = glob.glob(path + '/*.txt') for file in allFiles: # do something to file ... ... np.savetxt(file, ) ??? ``` Not quite sure how to append the `_o.txt` (or any string for that matter) after the filename so that the output file is `file_1_o.txt`
Can you use the following snippet to build the output filename?

```
parts = in_filename.split(".")
out_filename = parts[0] + "_o." + parts[1]
```

where I assumed `in_filename` is of the form "file_1.txt". Of course it would probably be better to put `"_o."` (the suffix before the extension) in a variable, so that you can change it at will in just one place should you ever need a different suffix.

In your case it means

```
import glob
import numpy as np

path = r'home\...\CurrentDirectory'
allFiles = glob.glob(path + '/*.txt')
for file in allFiles:
    # do something to file
    ...
    ...
    parts = file.split(".")
    out_filename = parts[0] + "_o." + parts[1]
    np.savetxt(out_filename, ) ???
```

but you need to be careful: before you pass `out_filename` to `np.savetxt` you may need to build the full path, so you might need something like `np.savetxt(os.path.join(path, out_filename), )` or something along those lines.

If you would like to combine the change into basically one line, using the "suffix in a variable" idea mentioned before, you could have something like

```
hh = "_o." # variable suffix
.......... # inside your loop now
for file in allFiles:
    out_filename = hh.join(file.split("."))
```

which does the same thing another way, by using join on the split list, as mentioned by @NathanAck in his answer.
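As a variation (just a sketch, not required for the approach above), `os.path.splitext` avoids splitting on the first `.` by hand, which matters if the directory part of the path happens to contain dots:

```
import os

def add_suffix(filename, suffix="_o"):
    # splitext("dir/file_1.txt") -> ("dir/file_1", ".txt")
    root, ext = os.path.splitext(filename)
    return root + suffix + ext

# add_suffix("file_1.txt") gives "file_1_o.txt"
```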
314757
I am migrating a `.Net Framework` project to `.Net Core`. In my `.Net Framework` code I call a stored procedure and it returns a `list<int>` via a `datatable`:

```
DataTable dt = new DLDistinctWarehousePackingOperation().CreatePacking(packingId, userId, previousOrderId, totalAwaitingOrdersAllowed);
List<int> items = new List<int>();

foreach (DataRow dr in dt.Rows)
{
    items.Add(int.Parse(dr["orderid"].ToString()));
}

return items;
```

This is the `.Net Core` code I have come up with so far:

```
List<int> items = dbContext.FromSql("CreatePacking")
    .Select(t => new List<int> { t.OrderId.Value })
    .ToList();
```

But it doesn't work. The problem is that `FromSql` has to be called on an entity set of the context, like `dbContext.MyObject.FromSql(...)`, but in this case the result should just be a list of ints; the stored procedure is not connected to any specific object.

So how can I return a `List<int>`? Or does it perhaps have to be converted into a class?

Thanks in advance.

[![enter image description here](https://i.stack.imgur.com/LqjNm.png)](https://i.stack.imgur.com/LqjNm.png)
Assuming the stored procedure name is `CreatePacking`, define a property for the result in your DbContext. For example

```
public class MyContext : DbContext
{
    ...............

    [NotMapped]
    public DbSet<CreatePackingResult> CreatePackingResult { get; set; }

    ...............
    ...............
}
```

And then define the result class

```
public class CreatePackingResult
{
    public int OrderId { get; set; }
}
```

Finally

```
List<int> items = dbContext.CreatePackingResult.FromSqlRaw("EXECUTE [dbo].[CreatePacking]")
    .Select(t => t.OrderId)
    .ToList();
```
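One caveat, depending on the EF Core version: `[NotMapped]` on the `DbSet` may not be enough for `FromSqlRaw` to work against `CreatePackingResult`. A sketch of the common alternative (assuming EF Core 3.0 or later) is to register it as a keyless entity in `OnModelCreating` instead:

```
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Keyless entity: queryable with FromSqlRaw, never change-tracked,
    // and ToView(null) keeps migrations from creating a table for it.
    modelBuilder.Entity<CreatePackingResult>()
        .HasNoKey()
        .ToView(null);
}
```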
314787
I am looking for someone to verify my proof of the problem in the title. Of course, if you believe it to be wrong, you are welcome to shred it into a million pieces and if I made any logical errors please point them out. I'll write my solution out in the most coherent way I know how to. I am also interested, even if the proof provided below is correct, is there a "simpler" (whatever you consider that to mean) way to approach this? **Proof** We have that $c$ divides $a+b$. This means $\exists \lambda \in \mathbb{Z}$, \begin{equation\*} a+b = \lambda c. \end{equation\*} Note that $a=\lambda c -b$ and $b = \lambda c - a$. This will be immediately useful. Let $x \in \langle a \rangle + \langle c \rangle$. Then $x=sa+tc$ for some $s,t\in\mathbb{Z}$. $sa+tc = s(\lambda c - b) + tc = (-s)b + (s\lambda + t)c \Rightarrow x \in \langle b \rangle + \langle c \rangle$. This means $\langle a \rangle + \langle c \rangle \subseteq \langle b \rangle + \langle c \rangle$. It is easy to verify that $\langle b \rangle + \langle c \rangle \subseteq \langle a \rangle + \langle c \rangle$ in the same way and hence $\langle a \rangle + \langle c \rangle = \langle b \rangle + \langle c \rangle$. Therefore we have gcd$(a,c)=$gcd$(b,c)$. To show that $a$ is relatively prime to $c$ and $b$ is relatively prime to $c$ we suppose that there are some $f,g\in\mathbb{Z}$ with $f\neq \pm 1$ and $g\neq \pm 1$ such that $a=fc$ and $c=fa$, $b=gc$ and $c=gb$. Then \begin{equation\*} a=fc=(fg)b \end{equation\*} Now, since we have established that $f$ and $g$ are integers, $fg$ is an integer. Furthermore, $fg\neq \pm 1$. This is a contradiction, since $a$ and $b$ are relatively prime. Therefore we conclude that \begin{equation\*} \text{gcd}(a,c) = \text{gcd}(b,c) = 1. \end{equation\*}
Let $e\_i\in W$ satisfy $(e\_i)\_j=1$ if $i=j$ and $(e\_i)\_j=0$ otherwise. Then $$\langle Te\_i,e\_j\rangle=\cases{1\,\,{\rm if}\,\,j\leq i,\\0 \,\,{\rm if}\,\,j>i.}$$ Thus if $T^\*$ exists: $$\langle e\_i,T^\*e\_j\rangle=\cases{1\,\,{\rm if}\,\,j\leq i,\\0 \,\,{\rm if}\,\,j>i.}$$ So we have $(T^\*e\_j)\_i=1$ for all $i\geq j$ which contradicts $T^\*e\_j\in W$.