date (stringlengths 10) | nb_tokens (int64 60 to 629k) | text_size (int64 234 to 1.02M) | content (stringlengths 234 to 1.02M)
---|---|---|---
2018/03/22 | 746 | 2,010 | <issue_start>username_0: I'm using an R Notebook and I'd like my plots to automatically save to disk when the code is run, as well as display inline.
[knitr: include figures in report \*and\* output figures to separate files](https://stackoverflow.com/questions/27992239/knitr-include-figures-in-report-and-output-figures-to-separate-files) addresses this for R Markdown but the solution given doesn't work for an R Notebook. Is there a similar option for R Notebooks?<issue_comment>username_1: Since this is actually in JSON format the answer is easy:
```
require 'json'
id_json = "[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]"
ids = JSON.load(id_json)
```
The reason your solution "lops off" the first number is the way you're splitting. The first "number" in your series is actually `"[1"`, which to Ruby is not a number, so `to_i` converts it to 0 by default.
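For comparison, the same idea sketched in Python (my analogue, not part of the original Ruby answer): the bracketed string is already valid JSON, so a JSON parser handles it directly and no manual splitting is needed.

```python
import json

id_json = "[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]"
ids = json.loads(id_json)  # parses the whole list, brackets included
print(ids[0])  # -> 1, not 0
```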
Upvotes: 3 <issue_comment>username_2: If you do not wish to use `JSON`,
```
ids.scan(/\d+/).map(&:to_i)
#=> [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
If `ids` may contain the string representation of negative integers, change the regex to `/-?\d+/`.
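For reference, the same scan-based approach sketched in Python (an analogue I am adding; `re.findall` stands in for Ruby's `String#scan`), including the `-?` tweak for negative integers:

```python
import re

ids = "[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -11]"
# -? makes the sign optional, so negative integers are captured too
numbers = [int(m) for m in re.findall(r"-?\d+", ids)]
print(numbers)  # -> [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -11]
```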
Upvotes: 2 <issue_comment>username_3: To summarize: you get `0` as the first element in the array because of the non-digit character at the beginning of your string:
```
p '[1'.to_i #=> 0
```
Maybe there is a better way to receive the original string. If there is no other way to receive that string, you can simply get rid of the first character and your own solution will work:
```
ids = "[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -11]"
p ids[1..-1].split(",").map(&:to_i)
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -11]
```
[@username_1](https://stackoverflow.com/users/87189/username_1)'s and [@CarySwoveland](https://stackoverflow.com/users/256970/cary-swoveland)'s solutions work perfectly fine. Alternatively:
```
ids = "[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -11]"
p ids.tr("^-0-9", ' ').split.map(&:to_i)
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -11]
```
Keep in mind that `&:` first appeared in Ruby 1.8.7.
Upvotes: 0 |
2018/03/22 | 1,115 | 3,257 | <issue_start>username_0: ```html
[Home](index.html)
Home
```
Is there any logical difference between the two anchors created above? Any performance difference or how GET method is invoked?
I looked at the request/response headers for both anchors, and they are almost the same; there's only an extra '?' in the request URL when the second one is used.
```
Request URL:http://localhost:30272/index.html
Request Method:GET
Status Code:304 Not Modified
Remote Address:[::1]:30272
Referrer Policy:no-referrer-when-downgrade
Accept-Ranges:bytes
Request URL:http://localhost:30272/index.html?
Request Method:GET
Status Code:304 Not Modified
Remote Address:[::1]:30272
Referrer Policy:no-referrer-when-downgrade
Accept-Ranges:bytes
```
Edit 1: While trying to understand the GET in-depth, I encountered the following statement in RFC documentation.
>
> A payload within a GET request message has no defined semantics;
> sending a payload body on a GET request might cause some existing
> implementations to reject the request.
> <https://www.rfc-editor.org/rfc/rfc7231#section-4.3.1>
>
>
>
Does invoking GET methods using make any difference in caching and/or proxying?
Edit 2: As is not intended to create anchors, is it more vulnerable to [HTTP header injection](https://en.wikipedia.org/wiki/HTTP_header_injection)?
Edit 3: The Wikipedia article on [CGI](https://en.wikipedia.org/wiki/Common_Gateway_Interface) says,
>
> In brief, the CGI program receives HTTP forms data via Unix/Linux
> standard input, and most other data (such as URL paths, URL arguments,
> and HTTP header data) via well-known Unix/Linux process environment
> variables.
>
>
>
Does it mean GET request from and will be received differently by CGI program? |
2018/03/22 | 779 | 3,021 | <issue_start>username_0: I've got a global npm package installed, [TileServer](https://github.com/klokantech/tileserver-gl), that I run as a service through the command line. I'd like to convert this to a guest executable in Service Fabric, but I'm having trouble implementing it.
Pre-Guest executable, I would invoke the following command in cmd:
```
tileserver-gl-light --port=8788 map.mbtiles
```
My configuration for my guest executable is:
```
tileserver-gl-light
--port=8788 c:\maptiles.mbtiles
Work
```
Unfortunately, the error I get when trying to run the service just says "*There were deployment errors. Continue?*"
Any ideas on how to get this working?
Thanks!<issue_comment>username_1: Can you check to see if that application package validates on your local machine by calling Test-ServiceFabricApplicationPackage?
Usually SF expects the file to be
1. Present in the application package and
2. Some sort of executable file that the OS understands (for windows .bat, .exe, etc.)
In this case I think what you're saying is that the tileserver bits are actually outside the package (wherever your node packages are), and you're actually trying to use **node** to start it (since the tileserver-gl packages aren't natively executable by Windows).
If that's the case your program should probably be something like a batch file that just says "tileserver-gl-light" and then your command line args in it.
Include that batch file in your code package and reference that as your program (and straighten out how you want to pass the args) and you should be good to go.
Upvotes: 3 [selected_answer]<issue_comment>username_2: As @username_1 pointed out, the files must be present in the application package, and we must call the executable as something the OS understands. This presents particular challenges for node modules as:
1. They must be wrapped in a .bat
2. They can't be called using the global command
3. They need parameters
4. Their files (node\_modules) often run into file-path-length issues
My end service had the following file structure:
```
- ApplicationPackageRoot
- MyService.ServicePkg
- Code
- 7za.exe
- node.exe
- npm.7z
- start.bat
```
My ServiceManifest.xml had these values in it:
```
7za.exe
x npm.7z
CodeBase
start.bat
CodeBase
```
And finally, my `start.bat` was comprised of the following line:
```
.\node.exe npm/node_modules/tileserver-gl-light/sc/main.js --port=%Port% %TilePath%
```
So how did this work? The npm.7z in the Code folder consisted of my node\_modules, pre-zipped. 7za is the portable version of 7-Zip, which allows us to get around the file-path-length issues present on Windows. 7za is called in the SetupEntryPoint.
Then SF calls start.bat, which reads in the environment variables Port and TilePath. It has to call exactly the .js file that the normal tileserver-gl-light command would invoke.
Doing this results in a properly working node application in a service fabric guest executable.
Upvotes: 2 |
2018/03/22 | 760 | 2,774 | <issue_start>username_0: I'm using sample code from the documentation and I'm trying to connect to the server using Prosys OPC UA Client. I have tried opcua-commander and Integration Objects' OPC UA client, and it looks like the server works just fine.
Here's what is happening:
1. After entering endpointUrl, client adds to url `-- urn:NodeOPCUA-Server-default`.
2. Client asks to specify security settings.
3. Client asks to choose server - only 1 option and it's urn:NodeOPCUA-Server-default.
And it goes back to step 2 and 3 over and over.
If I just minimize prosys client without closing configuration after some time I get this info in terminal:
`Server: closing SESSION new ProsysOpcUaClient Session15 because of timeout = 300000 has expired without a keep alive
channel = fc00:db20:35b:7399::5:10.10.13.2 port = 51824`
I have tried this project and it works -> [node-opcua-htmlpanel](https://github.com/node-opcua/node-opcua-htmlpanel). What's missing in sample code then?
After opening the debugger I have noticed that each time I select security settings and hit OK, server\_publish\_engine reports:
`server_publish_engine:179 Cencelling pending PublishRequest with statusCode BadSecureChannelClosed (0x80860000) length = 0`<issue_comment>username_1: The issue has been handled at the [Prosys OPC Forum](https://forum.prosysopc.com/forum/opc-ua-client/connecting-to-node-opc-ua-server/#p3741):
>
> The error happens because the server sends different
> EndpointDescriptions in GetEndpointsResponse and
> CreateSessionResponse.
>
>
> In GetEndpoints, the returned EndpointDescriptions contain
> TransportProfileUri=<http://opcfoundation.org/UA-Profile/Transport/uatcp-uasc-uabinary>.
> In CreateSessionResponse, the corresponding TransportProfileUri is
> empty.
>
>
> In principle, the server application is not working according to
> specification. The part 4 of the OPC UA specification states that “The
> Server shall return a set of EndpointDescriptions available for the
> serverUri specified in the request. … The Client shall verify this
> list with the list from a DiscoveryEndpoint if it used a
> DiscoveryEndpoint to fetch the EndpointDescriptions. It is recommended
> that Servers only include the server.applicationUri, endpointUrl,
> securityMode, securityPolicyUri, userIdentityTokens,
> transportProfileUri and securityLevel with all other parameters set to
> null. Only the recommended parameters shall be verified by the
> client.”
>
>
>
Upvotes: 1 <issue_comment>username_2: This is due to a specific interoperability issue that was introduced in node-opcua@0.2.2. this will be fixed in next version of node-opcua. The resolution can be tracked here <https://github.com/node-opcua/node-opcua/issues/464>
Upvotes: 3 [selected_answer] |
2018/03/22 | 795 | 2,845 | <issue_start>username_0: I am running this code snippet in a Linux terminal using Node.js. Although the (key, value) pair is correctly set, the code prints undefined. Any explanation or workaround for this problem?
```
function trial() {
var obj1 = {};
var obj2 = {};
obj1.fund = 1;
obj1.target = 2;
obj2.fund = 1;
obj2.target = 2;
states = [];
states.push(obj1);
states.push(obj2);
actions = [1,2,3,4];
table = new Map();
for (var state of states) {
for (var action of actions) {
cell = {};
cell.state = state;
cell.action = action;
table.set(cell, 0);
}
}
var obj = {};
obj.state = states[1];
obj.action = actions[1];
console.log(table.get(obj));
}
```<issue_comment>username_1: Try this one. You will probably get what you need. I did a small beautify.
```
function trial() {
var obj1 = {
fund: 1,
target: 2
},
obj2 = {
fund: 1,
target: 2
},
obj = {},
states = [
obj1, obj2
],
actions = [1,2,3,4],
table = new Map(),
cell
;
for (var state of states) {
for (var action of actions) {
cell = {};
cell.state = state;
cell.action = action;
table.set(cell, 0);
}
}
obj.state = states[1];
obj.action = actions[1];
return(obj);
//console.log(table.get(obj));
}
console.log(trial())
```
Upvotes: 0 <issue_comment>username_2: You need the original object reference to match the key in table(`Map()`), let's hold every new `cell` which is each object reference to show that.
And even you have a deep clone of object, to `Map()`, it is not the same key.
```js
var obj1 = {};
var obj2 = {};
obj1.fund = 1;
obj1.target = 2;
obj2.fund = 1;
obj2.target = 2;
states = [];
states.push(obj1);
states.push(obj2);
actions = [1,2,3,4];
table = new Map();
var objRefs = [];
var count = 0;
for (var state of states) {
for (var action of actions) {
cell = {};
cell.state = state;
cell.action = action;
table.set(cell, count);
objRefs.push(cell); //hold the object reference
count ++;
}
}
for (var i = 0; i < objRefs.length; i++) {
console.log(table.get(objRefs[i]));
}
// Copy by reference
var pointerToSecondObj = objRefs[1];
console.log(table.get(pointerToSecondObj));
//Deep clone the object
var deepCloneSecondObj = JSON.parse(JSON.stringify(objRefs[1]));
console.log(table.get(deepCloneSecondObj));
```
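The reference-identity rule is not unique to JavaScript's `Map`. A Python sketch of the same idea (my analogy, not from the original answer), using the default identity-based hashing of plain objects:

```python
class Cell:
    pass  # default __eq__/__hash__ compare by identity, like Map keys

table = {}
cell = Cell()
table[cell] = 0

lookalike = Cell()         # structurally "equal", but a different object
print(cell in table)       # -> True: same reference
print(lookalike in table)  # -> False: new object, new identity
```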
Upvotes: 2 [selected_answer] |
2018/03/22 | 198 | 660 | <issue_start>username_0: I would like to fade an entity in A-Frame and I can't seem to find a solution using the documentation. Any help would be appreciated
Thanks!<issue_comment>username_1: For a-frame 0.9.0+ check out Kevins anwser. is deprecated since then.
---
You could use changing the `material.opacity` attribute like this:
```
here.
Live fiddle [here](https://jsfiddle.net/5wwraxcs/3/).
```
Upvotes: 2 <issue_comment>username_2: Include animaton component: <https://github.com/username_2/kframe/tree/master/components/animation>
Then:
```
```
I go through `components` for performant tweening that skips A-Frame setAttribute.
Upvotes: 2 |
2018/03/22 | 805 | 3,005 | <issue_start>username_0: I am trying to send an object from my component which is from the input of the user to the service I have.
This is my code..
>
> Sample.html
>
>
>
```html
Subject
{{subject.title}}
Date
Proceed
```
>
> Sample.ts
>
>
>
```js
public attendanceSet = {}
proceed(attendanceSet) {
this.navCtrl.push(CheckAttendancePage, {
setData: attendanceSet
});
}
```
>
> Service
>
>
>
```js
private attendanceListRef: AngularFireList
public setAttendance = {}
constructor(private db: AngularFireDatabase, private events: Events, public navParams: NavParams, private method: CheckAttendancePage){
this.attendanceListRef = this.db.list('attendance/')
}
addAttendance(attendance: Attendance) {
return this.attendanceListRef.push(attendance)
}
```
What I'm trying to do is getting data from the sample.html which I will later use to define the path of the "attendanceListRef" like this "attendance/subject/date"
Is there any proper or alternative way to do this? **Thanks!**<issue_comment>username_1: I tried to save the attendanceSet on an event and then add a listener in the service, but it was not successful. It seems like events do not work in services? When I added the listener to a different component it worked, but not in the service.
Upvotes: 0 <issue_comment>username_2: It's quite simple to send data to a `Service` with Angular. I'll give you three options in this example.
`service.ts`
```js
import { Injectable } from '@angular/core';
import { Events } from 'ionic-angular';
@Injectable()
export class MyService {
// option 1: Using the variable directly
public attendanceSet: any;
// option 3: Using Ionic's Events
constructor(events: Events) {
this.events.subscribe('set:changed', set => {
this.attendanceSet = set;
});
}
// option 2: Using a setter
setAttendanceSet(set: any) {
this.attendanceSet = set;
}
}
```
In your `Component` you can do the following.
```js
import { Component } from '@angular/core';
import { Events } from 'ionic-angular';
// import your service
import { MyService } from '../../services/my-service.ts';
@Component({
selector: 'something',
templateUrl: 'something.html',
providers: [MyService] // ADD HERE -> Also add in App.module.ts
})
export class Component {
newAttendanceSet = {some Object};
// Inject the service in your component
constructor(private myService: MyService) { }
proceed() {
// option 1
this.myService.attendanceSet = this.newAttendanceSet;
// option 2
this.myService.setAttendanceSet(this.newAttendanceSet);
// option 3
this.events.publish('set:changed', this.newAttendanceSet);
}
}
```
Basically I'd recommend option 1 or 2. 3 is considered bad practice in my eyes but it still works and sometimes allows for great flexibility if you need to update your value over multiple services/components.
Upvotes: 3 [selected_answer] |
2018/03/22 | 396 | 1,323 | <issue_start>username_0: I'm looking for a fast and effective way to draw a QPixmap with QPainter, but have the pixmap appear darker then normal. Is there some sort of filter or effect that can be applied to either the QPixmap or QPainter while drawing to create this effect?<issue_comment>username_1: You don't have pixel access in `QPixmap`, so going over the pixels and darkening them is out of the question.
You can however fill a pixmap with a transparent black brush, and use a number of composition modes to further customize the result.
```
QPainter painter(this);
QPixmap pm("d:/test.jpg");
painter.drawPixmap(QRect(0, 0, 400, 200), pm);
painter.translate(0, 200);
painter.drawPixmap(QRect(0, 0, 400, 200), pm);
painter.fillRect(QRect(0, 0, 400, 200), QBrush(QColor(0, 0, 0, 200)));
```
[![enter image description here](https://i.stack.imgur.com/Iy7UR.jpg)](https://i.stack.imgur.com/Iy7UR.jpg)
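The darkening from a semi-transparent black fill follows from source-over compositing: with a black source at alpha a, each destination channel becomes c * (1 - a). A quick Python check of that arithmetic (illustrative only, not Qt code):

```python
def over_black(channel, alpha):
    # source-over with a black source: out = 0 * alpha + channel * (1 - alpha)
    return round(channel * (1 - alpha))

# QColor(0, 0, 0, 200) has alpha 200/255, so a mid-gray value of 128 drops to:
print(over_black(128, 200 / 255))  # -> 28
```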
Upvotes: 3 <issue_comment>username_2: Some of the earlier comments will leave a dark gray rectangle on your screen. Using the composition mode, will darken everything visible, yet leaving any transparent areas still transparent.
```
painter.setCompositionMode (QPainter::CompositionMode_DestinationIn);
painter.fillRect (rect, QBrush (QColor (0,0,0,128)));
```
Upvotes: 3 [selected_answer] |
2018/03/22 | 1,142 | 4,360 | <issue_start>username_0: I'm very new to C# so please excuse me if i'm asking stupid question or i'm making my question incorrectly.
Here is my code:
```
using UnityEngine;
using System;
namespace DuloGames.UI
{
[Serializable]
public class UISpellInfo
{
public int ID;
public string Name;
public Sprite Icon;
public string Description;
public float Range;
public float Cooldown;
public float CastTime;
public float PowerCost;
public float test;
[BitMask(typeof(UISpellInfo_Flags))]
public UISpellInfo_Flags Flags;
}
}
```
Here is my `UIspellDatabase` class:
```
using UnityEngine;
namespace DuloGames.UI
{
public class UISpellDatabase : ScriptableObject {
#region singleton
private static UISpellDatabase m_Instance;
public static UISpellDatabase Instance
{
get
{
if (m_Instance == null)
m_Instance = Resources.Load("Databases/SpellDatabase") as UISpellDatabase;
return m_Instance;
}
}
#endregion
public UISpellInfo[] spells;
///
/// Get the specified SpellInfo by index.
///
/// Index.
public UISpellInfo Get(int index)
{
return (spells[index]);
}
///
/// Gets the specified SpellInfo by ID.
///
/// The SpellInfo or NULL if not found.
/// The spell ID.
public UISpellInfo GetByID(int ID)
{
for (int i = 0; i < this.spells.Length; i++)
{
if (this.spells[i].ID == ID)
return this.spells[i];
}
return null;
}
}
}
```
Here is how i foreach the existing instances of `UISpellInfo`:
```
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using WorldServer;
using Spells;
using DuloGames.UI;
public class Character : MonoBehaviour
{
private void SeeAllInstances()
{
foreach (var item in UISpellDatabase.Instance.spells)
{
Debug.Log("ID->" + item.ID + " name: " + item.Name);
}
}
```
I can clearly see all the instances i have for this class `UISpellInfo`.
What i want to achieve is just to add more instances to array `UISpellInfo[] spells;`.
As far as i understand all the instances of the class `UISpellInfo` are loaded here `UISpellInfo[] spells;` then on the foreach i do in `SeeAllInstances()` i can see all the instances and the data hold for every instance.
My question if i'm even asking correctly is how to add more instances in `UISpellInfo[] spells;` ?
Can someone help me on this?<issue_comment>username_1: I think you need to use [`List`](https://msdn.microsoft.com/en-us/library/6sh2ey19(v=vs.110).aspx) for your scenario, as an `Array` is declared with a fixed size, and we would have to keep track, when adding or removing elements, of how many elements are in it and which slots have values, so that we can add a new item at the last available index.
For the situations where we don't know in advance the number of elements that will go in it `List` is a better option and if you are using that then you just need to call `Add()` method on your instance to add new item in it.
Your code in that case would be like:
```
public List spells;
```
and this will hold all the items of type `UISpellInfo`; now, somewhere down in your code, you can do the following to add one:
```
UISpellDatabase.Instance.spells.Add(new UISpellInfo());
```
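The trade-off reads the same in most languages: a fixed-size array must be reallocated to grow, while a dynamic list tracks its own count. A small Python sketch of the contrast (illustrative only; the question itself is C#):

```python
# Fixed-size "array" (like UISpellInfo[]): growing means building a new, larger one
spells = [None] * 3
spells = spells + ["new spell"]  # reallocation; length is now 4

# Dynamic list (like C#'s List<T>): the container tracks the count for us
dyn = []
dyn.append("fireball")
dyn.append("frostbolt")
print(len(dyn))  # -> 2
```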
Upvotes: 3 [selected_answer]<issue_comment>username_2: To add objects to an array, you would need to reconstruct the array with the new element, and assign it to `UISpellInfo[] spells;` Here's a method that you can use to accomplish this:
```cs
public void AddUISpellInfo(UISpellInfo infoToAdd) {
    // Allocate a new, larger array and copy the old elements over
    Array.Resize(ref this.spells, this.spells.Length + 1);
    this.spells[this.spells.Length - 1] = infoToAdd;
}
```
The method takes whatever item you wish to add as an explicit parameter, as well as a reference to the existing array via the implicit parameter this. You can see more on `Array.Resize()` [here](https://msdn.microsoft.com/en-us/library/bb348051(v=vs.110).aspx "MSDN - Array.Resize(T) Method").
Upvotes: 2 |
2018/03/22 | 742 | 2,317 | <issue_start>username_0: I created a toy application that adds vertices and edges to a CosmosDB graph collection, and prints the consumed RUs for each request.
This is the output:
```
Running g.V().drop()
RUs: 34.88 //clean slate
Running g.V('1').addE('knows').to(g.V('1'))
RUs: 1.97 //don't actually create any edge, no vertices present
Running g.addV('person').property('id', '1').property('tenantId', '1')
RUs: 5.71
Running g.V('1').addE('knows').to(g.V('1'))
RUs: 11.4 //1st edge, 1 vertex present
Running g.addV('person').property('id', '2').property('tenantId', '1')
RUs: 5.71 //constant vertex creation cost
Running g.V('1').addE('knows').to(g.V('1'))
RUs: 11.76 //2nd edge, 2 vertices + 1 edge present - cost goes up
Running g.addV('person').property('id', '3').property('tenantId', '2')
RUs: 5.71 //constant vertex creation cost - this vertex is on a different partition
Running g.V('1').addE('knows').to(g.V('1'))
RUs: 12.1 //3rd edge, 3 vertices + 2 edges present - cost goes up
Running g.V('1').addE('knows').to(g.V('1'))
RUs: 12.28 // 4th edge, 3 vertices + 3 edges present - cost goes up
Running g.V('1').addE('knows').to(g.V('1'))
RUs: 12.46 // 5th edge, 3 vertices + 4 edges present - cost goes up
```
The cost of adding a vertex is constant, but the cost of adding an edge increases with the number of vertices and edges already in the graph.
Any idea why this is happening?
EDIT:
I tried the same thing on a collection WITHOUT partitions and whaddaya know?
All creation costs are constant!
I'd really like to understand what inter-partition communication is going on if the collection is partitioned.<issue_comment>username_1: What are the gremlin queries for adding vertices and edges?
Note that adding a vertex is a single write, while adding an edge can be two reads and a write.
Now, writes are typically independent, but reads can be correlated with the amount of data in the system, depending on how you are reading them.
Upvotes: 0 <issue_comment>username_1: You need a slight modification of the query,
g.V('1').has('tenantID', '1').addE('knows').to(g.V('1').has('tenantId', '1'))
for partitioned collection. Querying a vertex by ‘id’ is inefficient for partitioned collection, as the vertex will be searched in all the partitions.
Upvotes: 3 [selected_answer] |
2018/03/22 | 630 | 2,845 | <issue_start>username_0: I've inherited a lotus notes application and one of the things that really irks me is every function/sub/property has onerror statements and errorhandler labels that aside from typos all do the exact same thing. Additionally, and unfortunately this application has gone through several revisions and some errorhandler: labels have revisions where as other don't. I'd like to standardize and centralize this behavior.
Is there a way to have a single error handler for a given document, where if an error is raised anywhere in the document, that particular error handler is called?
Thank you,<issue_comment>username_1: You can have one error handler per script execution. You cannot have one global to a document. Each event that fires in a document results in a new script execution.
That said, it is generally advantageous to have one error handler per function, but that advantage is lost if they are actually exactly the same. The better practice is to customize them so that each error handler records the name of the current function. (Of course, due to copy/paste laziness, this is frequently more effective in theory than in practice.)
Upvotes: 3 [selected_answer]<issue_comment>username_2: If you have an `On Error Goto SomeLabel` statement (where `SomeLabel` is whatever label the code actually uses), the label must exist in the same Sub/Function that contains that statement so, technically, you need a separate handler for each Sub/Function.
However, some things might simplify matters...
If one Sub/Function calls another Sub/Function, and the inner one doesn't have an error handler but the outer one (the caller) does, then an error in the inner Sub/Function will be caught by the handler in the caller.
This setup gives you less information (you can't get the line number on which the error occurred in the inner Sub/Function), but it might be helpful if you have any Subs/Functions that you're sure either can't produce an error, or only have one line on which an error could occur.
If you have some static message-text or logging which is identical in many error handlers, you could have a Sub/Function in the Form Globals (or in a script library to absolutely minimise code duplication) that contains the static parts of the error handlers, and takes arguments for the variable parts (error message, line number, and Sub/Function name).
Finally, this code will produce the name of the current Sub/Function and makes it easier to use the same error handler in many places, as long as the code declarations contain `%include "lsconst.lss"` or you use a script library containing the same `%include` statement:
`GetThreadInfo(LSI_THREAD_PROC)`
Another function, `LSI_Info`, can also give you the name of the current Sub/Function, but isn't supported by IBM, and should be avoided.
Upvotes: 0 |
2018/03/22 | 1,057 | 4,009 | <issue_start>username_0: When using `System.Threading.Tasks.Dataflow`, if I link block `a` to block `b`, will the link keep `b` alive? Or do I need to keep a reference to `b` around to prevent it from being collected?
```
internal class SomeDataflowUser
{
public SomeDataflowUser()
{
_a = new SomeBlock();
var b = new SomeOtherBlock();
_a.LinkTo(b);
}
public void ReactToIncomingMessage( Something data )
{
// might b be collected here?
_a.Post( data );
}
private ISourceBlock \_a;
}
```<issue_comment>username_1: You're confusing variables with variable contents. They can have completely different lifetimes.
Local variable `b` is no longer a root of the GC once control leaves the block. The object that was referenced by the reference stored in `b` is a managed object, and the GC will keep it alive at least as long as it's reachable from a root.
Now, note that the GC is allowed to treat local variables as dead *before* control leaves the block. If you have:
```
var a = whatever;
a.Foo();
var b = whatever;
// The object referred to by `a` could be collected here.
b.Foo();
return;
```
Because for example maybe the jitter decides that `b` can use the same local store as `a` since their usages do not overlap. **There is no requirement that the object referred to by `a` stays alive as long as `a` is in scope.**
This can cause issues if the object has a destructor with a side effect that you need to delay until the end of the block; this particularly happens when you have unmanaged-code calls in the destructor. In that case use a keep-alive (`GC.KeepAlive`) to keep it alive.
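The point that liveness is about reachability, not lexical scope, can be illustrated outside .NET too. A Python sketch with `weakref` (my analogue; CPython frees on the last strong reference, while the .NET jitter may release a local early, but the liveness-not-scope idea carries over):

```python
import gc
import weakref

class Resource:
    pass

r = Resource()
probe = weakref.ref(r)  # observe the object without keeping it alive

del r         # drop the last strong reference "mid-scope"
gc.collect()  # force a collection for determinism

print(probe() is None)  # -> True: the object is gone before scope ends
```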
Upvotes: 2 <issue_comment>username_2: In addition to @Eric's great explanation of GC behavior, I want to address the special case related to TPL Dataflow. You can easily see the behavior that `LinkTo` yields from a simple test. Notice that nothing, to my knowledge, is holding on to `b` except for the link from `a`.
```
[TestFixture]
public class BlockTester
{
private int count;
[Test]
public async Task Test()
{
var inputBlock = BuildPipeline();
var max = 1000;
foreach (var input in Enumerable.Range(1, max))
{
inputBlock.Post(input);
}
inputBlock.Complete();
//No reference to block b
//so we can't await b completion
//instead we'll just wait a second since
//the block should finish nearly immediately
await Task.Delay(TimeSpan.FromSeconds(1));
Assert.AreEqual(max, count);
}
public ITargetBlock BuildPipeline()
{
var a = new TransformBlock(x => x);
var b = new ActionBlock(x => count = x);
a.LinkTo(b, new DataflowLinkOptions() {PropagateCompletion = true});
return a;
}
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: Yes, linking a dataflow block is enough to prevent it from being garbage collected. Not only that, but even with no references whatsoever, by just having work to do, the block stays alive until its work is done. Here is a [runnable](https://dotnetfiddle.net/hztVc7) example:
```
public static class Program
{
static void Main(string[] args)
{
StartBlock();
Thread.Sleep(500);
for (int i = 5; i > 0; i--)
{
Console.WriteLine($"Countdown: {i}");
Thread.Sleep(1000);
GC.Collect();
}
Console.WriteLine("Shutting down");
}
static void StartBlock()
{
var block = new ActionBlock(item =>
{
Console.WriteLine("Processing an item");
Thread.Sleep(1000);
});
for (int i = 0; i < 10; i++) block.Post(i);
}
}
```
Output:
```xml
Processing an item
Countdown: 5
Processing an item
Countdown: 4
Processing an item
Countdown: 3
Processing an item
Countdown: 2
Processing an item
Countdown: 1
Processing an item
Shutting down
Press any key to continue . . .
```
As long as there is a foreground thread still alive in the process, the block keeps going.
Upvotes: 1 |
2018/03/22 | 1,150 | 3,604 | <issue_start>username_0: ```
{
"uri" : "http://www.edamam.com/ontologies/edamam.owl#recipe_f9e656dd9d2b4db9816340687d01722c",
"calories" : 38,
"totalWeight" : 113.25,
"dietLabels" : [ "LOW_FAT" ],
"healthLabels" : [ "SUGAR_CONSCIOUS", "VEGAN", "VEGETARIAN", "PEANUT_FREE", "TREE_NUT_FREE", "ALCOHOL_FREE" ],
"cautions" : [ ],
"totalNutrients" : {
"ENERC_KCAL" : {
"label" : "Energy",
"quantity" : 38.505,
"unit" : "kcal"
},
"FAT" : {
"label" : "Fat",
"quantity" : 0.41902500000000004,
"unit" : "g"
},
"FASAT" : {
"label" : "Saturated",
"quantity" : 0.044167500000000005,
"unit" : "g"
},
"FAMS" : {
"label" : "Monounsaturated",
"quantity" : 0.0124575,
"unit" : "g"
},
"FAPU" : {
"label" : "Polyunsaturated",
"quantity" : 0.043035000000000004,
"unit" : "g"
}
}
}
```
/\*
\* networking method
\*/
```
func getNutritionData(url: String) {
Alamofire.request(url, method: .get)
.responseString { response in
if response.result.isSuccess {
print("Sucess! Got the Nutrition data")
let nutritionJSON : JSON = JSON(response.result.value!)
//print(nutritionJSON)
self.updateNutritionData(json: nutritionJSON)
} else {
print("Error: \(String(describing: response.result.error))")
self.caloriesLable.text = "Connection Issues"
}
}
}
func updateNutritionData(json: JSON) {
let calories = json["calories"].intValue
print(calories)
}
```
^ When I try to get the calories for example, I get nil
In the nutritionData method I tried using .responseJSON but it was throwing an error so I switched to .responseString. I want to get the "totalNutrients" information from that JSON. Help would be appreciated<issue_comment>username_1: It's better to use method with `responseJSON` instead of `responseString`
```
Alamofire.request(url, method: .get, parameters:nil, encoding: JSONEncoding.default).responseJSON { response in
print(response)
if let json = response.result.value as? [String:Any] {
print(json["calories"])
}
}
```
and then try
Upvotes: 1 <issue_comment>username_2: Ary alamofire.response, and then directly parse data in to JSON
```
Alamofire.request(url, method: .get).response { response in
if response.result.isSuccess {
let nutritionJSON : JSON = JSON(response.data)
self.updateNutritionData(json: nutritionJSON)
} else {
print("Error: \(String(describing: response.result.error))")
self.caloriesLable.text = "Connection Issues"
}
}
func updateNutritionData(json: JSON) {
let calories = json["calories"].intValue
print(calories)
}
```
Upvotes: 0 <issue_comment>username_3: You should definitely be using `responseJSON()` if you just wanna serialize JSON. If you're trying to use it with a `Decodable` object, you should be using `responseData()` and decoding one of your custom types with the data. Always validate.
```
func getNutritionData(url: String) {
Alamofire.request(url)
.validate(statusCode: 200...200)
.validate(contentType: ["application/json"])
.responseJSON { (response) in
switch response.result {
case .success(let json):
// Do something with the json.
 if let dict = json as? [String: Any] {
 print(dict["calories"] ?? "no calories key")
 }
case .failure(let error):
print(error)
}
}
}
```
Upvotes: 0 |
2018/03/22 | 2,358 | 7,503 | <issue_start>username_0: First off, I want to be able to encrypt some data in one of my Java classes and write it to a text file on the phone. Then my Unity class in C# reads it, decrypts it, and the data can be used.
At the moment, my Java class can encrypt and decrypt its own data. My C# can do the same as well. The problem is, my C# code can't decrypt what Java previously encrypted. I'm 100% sure they have the same key (I printed a log to compare and it's the same). There seems to be something different between my encryption in Java and C#.
Here is the error I'm getting when trying to decrypt in c# what was previously crypted by java:
```
03-22 13:32:57.034 14264 14351 E Unity : CryptographicException: Bad PKCS7 padding. Invalid length 197.
03-22 13:32:57.034 14264 14351 E Unity : at Mono.Security.Cryptography.SymmetricTransform.ThrowBadPaddingException (PaddingMode padding, Int32 length, Int32 position) [0x00000] in :0
03-22 13:32:57.034 14264 14351 E Unity : at Mono.Security.Cryptography.SymmetricTransform.FinalDecrypt (System.Byte[] inputBuffer, Int32 inputOffset, Int32 inputCount) [0x00000] in :0
03-22 13:32:57.034 14264 14351 E Unity : at Mono.Security.Cryptography.SymmetricTransform.TransformFinalBlock (System.Byte[] inputBuffer, Int32 inputOffset, Int32 inputCount) [0x00000] in :0
03-22 13:32:57.034 14264 14351 E Unity : at System.Security.Cryptography.RijndaelManagedTransform.TransformFinalBlock (System.Byte[] inputBuffer, Int32 inputOffset, Int32 inputCount) [0x00000] in :0
03-22 13:32:57.034 14264 14351 E Unity : at GoogleReader.Decrypt (System.String text) [0x00000] in :0
```
JAVA CODE :
```
public static String key;
public static String Crypt(String text)
{
try
{
// Get the Key
if(com.UQAC.OceanEmpire.UnityPlayerActivity.myInstance != null){
key = Base64.encodeToString(com.UQAC.OceanEmpire.UnityPlayerActivity.myInstance.key.getEncoded(),Base64.DEFAULT);
com.UQAC.OceanEmpire.UnityPlayerActivity.myInstance.SendMessageToUnity(key);
} else {
return "";
}
Cipher cipher = Cipher.getInstance("AES/CBC/PKCS7PADDING");
byte[] iv = { 1, 2, 3, 4, 5, 6, 6, 5, 4, 3, 2, 1, 7, 7, 7, 7 };
IvParameterSpec ivspec = new IvParameterSpec(iv);
Key secretKey = new SecretKeySpec(Base64.decode(key,Base64.DEFAULT), "AES");
cipher.init(Cipher.ENCRYPT_MODE, secretKey, ivspec);
return Base64.encodeToString(cipher.doFinal(text.getBytes()),Base64.DEFAULT);
}
catch (Exception e)
{
System.out.println("[Exception]:"+e.getMessage());
}
return null;
}
public static String Decrypt(String text)
{
try
{
Cipher cipher = Cipher.getInstance("AES/CBC/PKCS7PADDING");
byte[] iv = { 1, 2, 3, 4, 5, 6, 6, 5, 4, 3, 2, 1, 7, 7, 7, 7 };
IvParameterSpec ivspec = new IvParameterSpec(iv);
Key SecretKey = new SecretKeySpec(Base64.decode(key,Base64.DEFAULT), "AES");
cipher.init(Cipher.DECRYPT_MODE, SecretKey, ivspec);
byte DecodedMessage[] = Base64.decode(text, Base64.DEFAULT);
return new String(cipher.doFinal(DecodedMessage));
}
catch (Exception e)
{
System.out.println("[Exception]:"+e.getMessage());
}
return null;
}
```
C# CODE :
```
public static string keyStr;
public static string Decrypt(string text)
{
RijndaelManaged aes = new RijndaelManaged();
aes.BlockSize = 128;
aes.KeySize = 256;
aes.Mode = CipherMode.ECB;
//aes.Padding = PaddingMode.None;
byte[] keyArr = Convert.FromBase64String(keyStr);
byte[] KeyArrBytes32Value = new byte[32];
Array.Copy(keyArr, KeyArrBytes32Value, Math.Min(KeyArrBytes32Value.Length, keyArr.Length));
byte[] ivArr = { 1, 2, 3, 4, 5, 6, 6, 5, 4, 3, 2, 1, 7, 7, 7, 7 };
byte[] IVBytes16Value = new byte[16];
Array.Copy(ivArr, IVBytes16Value, Math.Min(ivArr.Length, IVBytes16Value.Length));
aes.Key = KeyArrBytes32Value;
aes.IV = IVBytes16Value;
ICryptoTransform decrypto = aes.CreateDecryptor();
byte[] encryptedBytes = Convert.FromBase64CharArray(text.ToCharArray(), 0, text.Length);
byte[] decryptedData = decrypto.TransformFinalBlock(encryptedBytes, 0, encryptedBytes.Length);
return Encoding.UTF8.GetString(decryptedData);
}
public static string Encrypt(string text)
{
RijndaelManaged aes = new RijndaelManaged();
aes.BlockSize = 128;
aes.KeySize = 256;
aes.Mode = CipherMode.ECB;
byte[] keyArr = Convert.FromBase64String(keyStr);
byte[] KeyArrBytes32Value = new byte[32];
Array.Copy(keyArr, KeyArrBytes32Value, Math.Min(KeyArrBytes32Value.Length, keyArr.Length));
byte[] ivArr = { 1, 2, 3, 4, 5, 6, 6, 5, 4, 3, 2, 1, 7, 7, 7, 7 };
byte[] IVBytes16Value = new byte[16];
Array.Copy(ivArr, IVBytes16Value, Math.Min(ivArr.Length, IVBytes16Value.Length));
aes.Key = KeyArrBytes32Value;
aes.IV = IVBytes16Value;
ICryptoTransform encrypto = aes.CreateEncryptor();
byte[] plainTextByte = Encoding.UTF8.GetBytes(text);
byte[] CipherText = encrypto.TransformFinalBlock(plainTextByte, 0, plainTextByte.Length);
return Convert.ToBase64String(CipherText);
}
```
Test to encrypt and decrypt a string in C#
```
Original String : Hello World
Crypted : 7Zmd4yvxgR6Mg0nUDQumBA==
Decrypted : Hello World
```
Test to encrypt and decrypt a string in Java
```
Original String : Hello World
Crypted : zQjSpJqU8YkHhMDHw8wuTQ==
Decrypted : Hello World
```
The Key during those test :
```
FuVf/CNYkHdyBqejq3eoHQ==
```
As you can see, they have the same key but encrypted the data differently. I can't seem to figure out what I'm doing wrong.<issue_comment>username_1: Your C# code is in ECB mode but your Java code is in CBC mode.
Did you actually write the code you have or just copy and paste it? Looks like the latter. Don't copy and paste security code. You should be writing it with intent and understanding. Not "oh look here I'll throw this together and hope it works".
Also, the way you handle your IV is very insecure. Never use a fixed IV. Under the right circumstances it can completely reveal your plaintext. Use a randomly generated IV for every encryption operation.
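To make the fixed-IV risk concrete, here is a toy, language-agnostic illustration in Python. The XOR "cipher", key, and messages are all made up for the demo; this is NOT real cryptography, only a sketch of why the IV must be fresh per message:

```python
import os

def toy_encrypt(key: bytes, iv: bytes, msg: bytes) -> bytes:
    # Toy keystream mixing key and IV -- NOT a real cipher, illustration only.
    stream = bytes(key[i % len(key)] ^ iv[i % len(iv)] for i in range(len(msg)))
    return bytes(m ^ s for m, s in zip(msg, stream))

fixed_iv = bytes(16)  # a constant IV, like the hard-coded array in the question
c1 = toy_encrypt(b"secret", fixed_iv, b"attack at dawn")
c2 = toy_encrypt(b"secret", fixed_iv, b"attack at dusk")
print(c1[:10] == c2[:10])  # True: identical message prefixes leak through

# A fresh random IV per message hides the repetition:
r1 = toy_encrypt(b"secret", os.urandom(16), b"attack at dawn")
r2 = toy_encrypt(b"secret", os.urandom(16), b"attack at dawn")
print(r1 == r2)  # almost certainly False
```

In real code the IV comes from a CSPRNG (`SecureRandom` in Java, `RandomNumberGenerator` in .NET) and is typically prepended to the ciphertext so the receiver can use it for decryption.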
Upvotes: 0 <issue_comment>username_2: Found the solution! Everything works.
After following some hints, here is the fix:
1. Made sure everything was in ECB. In the Java, instead of having this:
Cipher cipher = Cipher.getInstance("AES/CBC/PKCS7PADDING");
I simply put this in both crypt and decrypt functions :
```
Cipher cipher = Cipher.getInstance("AES");
```
2. In C#, the key size and IV size need to depend on the actual original key and the IV array (not an arbitrary number like 32 or 16; they need to be the actual size of the key)
3. Instead of writing a string to the file, I now write the bytes directly, exactly as they are right after I encrypted them. This made it easier for the C# side to just read everything and decrypt it.
So the major problem was writing strings to the file instead of bytes. I don't know why this caused an error (**error : invalid block size**).
**Warning**: Using ECB mode is insecure, in some circumstances, it is trivial to retrieve the plaintext. Don't use this code in production or in any scenario where security is required.
Upvotes: 2 [selected_answer] |
2018/03/22 | 891 | 3,389 | <issue_start>username_0: I am trying to set an initial state for a component, from a prop that I have passed through. The prop is passed from App.js where it is populated from an API call.
When I check for a value for the component state there is none. How could this be?
Here is code setting initial state from [ListBooks.js](https://github.com/robineyre29/reactnd-project-myreads-starter/blob/master/src/ListBooks.js)
```
class ListBooks extends Component {
constructor(props) {
super(props);
this.state = {
x: props.books
};
console.log("Check if state has been set")
console.log(this.state.x)
}
```
Here is where prop is populated in [App.js](https://github.com/robineyre29/reactnd-project-myreads-starter/blob/master/src/App.js)
```
class BooksApp extends React.Component {
state = {
books: []
}
componentDidMount() {
console.log("Fetching All Books from API")
BooksAPI.getAll().then((books) => {
this.setState({books})
})
}
```
My overall goal is to be able to update the local state in the ListBooks component - so it updates immediately (haven't finished the code for this part) and then to update the server in background via API. To save a long reload after each update.<issue_comment>username_1: `console.log(this.state.x)` is undefined since the following code:
```
componentDidMount() {
console.log("Fetching All Books from API")
BooksAPI.getAll().then((books) => {
this.setState({books})
})
}
```
Includes two asynchronous calls that are not guaranteed to complete before it is rendered: `BooksAPI.getAll()` and `setState`.
But as the second comment to your question indicated, it is strange that you set the state in `ListBooks` from the passed props. The point in passing props is not to need to set local state in the component that gets the props.
If, as you wrote in the 4th comment on the question, you modify it inside `ListBooks`, then you probably need to call `BooksAPI` inside `ListBooks` and not pass props.
Upvotes: 0 <issue_comment>username_2: You shouldn't do this for sure.
First of all read this [tutorial](https://reactjs.org/docs/thinking-in-react.html#step-3-identify-the-minimal-but-complete-representation-of-ui-state) in the docs. It will show you the right way to organize your state.
If you can get values which you need from props - you shouldn't keep it in state - just use props in `render`.
>
> My overall goal is to be able to update the local state in the ListBooks component - so it updates immediately (haven't finished the code for this part) and then to update the server in background via API. To save a long reload after each update.
>
>
>
This is not a good idea because you need to be sure that the data is changed on the server successfully; otherwise you will display unreliable data, which can confuse the user.
**For example:** Let's imagine you have a list of books which you got from a server. Then the user adds a new book to this list, you update `state` immediately, and then send a request to the API, but something goes wrong there (you sent invalid data, for example). After that the user decides to change the info of this newly added book, but when they try to do it the server will probably return 404 because there is no such book. It's confusing, isn't it? Does it make sense? I hope so.
Good luck and enjoy your coding :)
Upvotes: 1 |
2018/03/22 | 690 | 2,603 | <issue_start>username_0: Can anyone help how to hide the field and title if the value is not there. For example if there is no Fax No., it should not show the title name "Fax:" also.
```
<?php
$name = get_sub_field('name');
$address = get_sub_field('address');
$fax = get_sub_field('fax');
?>
<h6><?php echo (isset($name)) ? $name : ''; ?></h6>
P.O. Box: <?php echo (isset($address)) ? $address : ''; ?>
Fax: <?php echo (isset($fax)) ? $fax : ''; ?>
```
2018/03/22 | 1,464 | 4,589 | <issue_start>username_0: This question may look like a duplicate, but I am facing some issues while extracting country names from strings. I have gone through [Extracting Country Name from Author Affiliations](https://stackoverflow.com/questions/5318076/extracting-country-name-from-author-affiliations) but was not able to solve my problem. I have tried grepl and a for loop for text matching and replacement, but my data column consists of more than 300k rows, so using grepl and a for loop for pattern matching is very, very slow.
I have a column like this.
```
org_loc
Zug
Zug Canton of Zug
Zimbabwe
Zigong
Zhuhai
Zaragoza
York United Kingdom
Delhi
Yalleroi Queensland
Waterloo Ontario
Waterloo ON
Washington D.C.
Washington D.C. Metro
New York
df$org_loc <- c("zug", "zug canton of zug", "zimbabwe",
"zigong", "zhuhai", "zaragoza","York United Kingdom", "Delhi","Yalleroi Queensland","Waterloo Ontario","Waterloo ON","Washington D.C.","Washington D.C. Metro","New York")
```
the string may contain the name of a state, city or country. I just want Country as output. Like this
```
org_loc
Switzerland
Switzerland
Zimbabwe
China
China
Spain
United Kingdom
India
Australia
Canada
Canada
United States
United States
United States
```
I am trying to convert the state (if a match is found) to its country using the countrycode library but am not able to do so. Any help would be appreciated.<issue_comment>username_1: ```
library(countrycode)
df <- c("zug switzerland", "zug canton of zug switzerland", "zimbabwe",
"zigong chengdu pr china", "zhuhai guangdong china", "zaragoza","York United Kingdom", "Yamunanagar","Yalleroi Queensland Australia","Waterloo Ontario","Waterloo ON","Washington D.C.","Washington D.C. Metro","USA")
df1 <- countrycode(df, 'country.name', 'country.name')
```
It didn't match a lot of them, but that should do what you're looking for, based on the reference manual for `countrycode`.
Upvotes: 0 <issue_comment>username_2: With function geocode from package ggmap you may accomplish, with good but not total accuracy your task; you must also use your criterion to say "Zaragoza" is a city in Spain (which is what geocode returns) and not somewhere in Argentina; geocode tends to give you the biggest city when there are several homonyms.
(remove the $country to see all of the output)
```
library(ggmap)
org_loc <- c("zug", "zug canton of zug", "zimbabwe",
"zigong", "zhuhai", "zaragoza","York United Kingdom",
"Delhi","Yalleroi Queensland","Waterloo Ontario","Waterloo ON","Washington D.C.","Washington D.C. Metro","New York")
geocode(org_loc, output = "more")$country
```
as geocode is provided by Google, it has a query limit of 2,500 per day per IP address; if it returns NAs it may be because of an inconsistent limit check, so just try again.
Upvotes: 0 <issue_comment>username_3: You can use your [`City_and_province_list.csv`](https://github.com/girijesh18/dataset/blob/master/City_and_province_list.csv) as a custom dictionary for `countrycode`. The custom dictionary can not have duplicates in the origin vector (the `City` column in your `City_and_province_list.csv`), so you'll have to remove them or deal with them somehow first (as in my example below). Currently, you don't have all of the possible strings in your example in your lookup CSV, so they are not all converted, but if you added all of the possible strings to the CSV, it would work completely.
```
library(countrycode)
org_loc <- c("Zug", "Zug Canton of Zug", "Zimbabwe", "Zigong", "Zhuhai",
"Zaragoza", "York United Kingdom", "Delhi",
"Yalleroi Queensland", "Waterloo Ontario", "Waterloo ON",
"Washington D.C.", "Washington D.C. Metro", "New York")
df <- data.frame(org_loc)
city_country <- read.csv("https://raw.githubusercontent.com/girijesh18/dataset/master/City_and_province_list.csv")
# custom_dict for countrycode cannot have duplicate origin codes
city_country <- city_country[!duplicated(city_country$City), ]
df$country <- countrycode(df$org_loc, "City", "Country",
custom_dict = city_country)
df
# org_loc country
# 1 Zug Switzerland
# 2 Zug Canton of Zug
# 3 Zimbabwe
# 4 Zigong China
# 5 Zhuhai China
# 6 Zaragoza Spain
# 7 York United Kingdom
# 8 Delhi India
# 9 Yalleroi Queensland
# 10 Waterloo Ontario
# 11 Waterloo ON
# 12 Washington D.C.
# 13 Washington D.C. Metro
# 14 New York United States of America
```
Upvotes: 2 |
2018/03/22 | 681 | 2,433 | <issue_start>username_0: In my TaskList.vue I have the code below:
```
*
export default {
data(){
return{
tasks: [],
newTask: ''
}
},
created(){
axios.get('tasks')
.then(
response => (this.tasks = response.data)
);
Echo.channel('task').listen('TaskCreated', (data) => {
this.tasks.push(data.task.body);
});
},
methods:{
addTask(){
axios.post('tasks', { body: this.newTask })
this.tasks.push(this.newTask);
this.newTask = '';
}
}
}
```
When I hit the axios.post('tasks') endpoint, I get a duplicate result in the current tab where I input the value, while the other tab gets only one value, which is correct.
To avoid this, I tried to use
broadcast(new TaskCreated($task))->toOthers();
OR
I put $this->dontBroadcastToCurrentUser() in the constructor of my TaskCreated.php
However, neither method is working. Any idea why?
The image below is from my network tab. One of the requests is pending; is that what caused the problem?
<https://ibb.co/jpnsnx> (sorry I couldn't post image as it needs more reputation)<issue_comment>username_1: I solved this issue on my Laravel Spark project by manually sending the X-Socket-Id with the axios post.
The documentation says the header is added automatically if you're using vue and axios, but for me (because of spark?) it was not.
Below is an example to show how I manually added the X-Socket-Id header to an axios post using the active socketId from Laravel Echo:
```
axios({
method: 'post',
url: '/api/post/' + this.post.id + '/comment',
data: {
body: this.body
},
headers: {
"X-Socket-Id": Echo.socketId(),
}
})
```
Upvotes: 3 <issue_comment>username_2: Laravel looks for the header X-Socket-ID when using $this->dontBroadcastToCurrentUser(). Through this ID, Laravel identifies which user (the current user) to exclude from the event.
You can add an interceptor for requests in which you can add the id of your socket instance to the headers of each of your requests:
```
/**
* Register an Axios HTTP interceptor to add the X-Socket-ID header.
*/
Axios.interceptors.request.use((config) => {
config.headers['X-Socket-ID'] = window.Echo.socketId() // Echo instance
// the function socketId () returns the id of the socket connection
return config
})
```
Upvotes: 3 <issue_comment>username_3: ```
window.axios.defaults.headers.common['X-Socket-Id'] = window.Echo.socketId();
```
This one works for me!
Upvotes: 0 |
2018/03/22 | 713 | 2,166 | <issue_start>username_0: I have many files to loop through. In each file I have to filter out specific values.
Example.
I have to filter out numbers `1, 3, 5, 7, 9` and keep everything else
beginning
```
1
2
3
4
9
11
15
```
result
```
2
4
11
15
```
I came up with this code, but it proved to be useless as it's not working:
```
Rows("1:1").Select
Selection.AutoFilter
wkb.Sheets("temp").Range("$A$1:$AC$72565").AutoFilter Field:=9, Criteria1:="Main"
wkb.Sheets("temp").Range("$A$1:$AC$72565").AutoFilter Field:=10, Criteria1:="C"
wkb.Sheets("temp").Range("$A$1:$AC$72565").AutoFilter Field:=11, Criteria1:="N"
'this part of the code is where I try to exclude values
wkb.Sheets("temp").Range("$A$1:$AC$72565").AutoFilter Field:=1, Criteria1:=Array("<>1", "<>3", "<>5", "<>7", "<>9")
```<issue_comment>username_1: Consider inverse the logic and just filter what you want to delete?
```
Sub test()
Dim rng As Range
Set rng = wkb.Sheets("temp").Range("$A$1:$AC$72565")
rng.AutoFilter Field:=1, Criteria1:=Array("1", "3", "5", "7", "9"), Operator:=xlFilterValues
' In case your data has header used Offset(1,0) to preserve the header on the first row
rng.Offset(1, 0).SpecialCells(xlCellTypeVisible).EntireRow.Delete
rng.AutoFilter
End Sub
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: Try this (all necessary comments in code):
```
Sub filter()
'declaration of variables
    Dim numbers(4) As Long, cell As Range, number As Variant
    'fill this array according to your needs (note that you'd have to change the array size in the declaration)
numbers(0) = 1
numbers(1) = 3
numbers(2) = 5
numbers(3) = 7
numbers(4) = 9
    'here we loop through all cells in the first row; this lengthy formula finds the last filled column in the first row
For Each cell In Range(Cells(1, 1), Cells(1, Cells(1, Columns.Count).End(xlToLeft).Column))
'here we loop through specified numbers to filter out these values
For Each number In numbers
If number = cell.Value Then
'if match is found, clear the cell and leave the loop
cell.Value = ""
Exit For
End If
Next
Next
End Sub
```
Upvotes: 0 |
2018/03/22 | 742 | 2,630 | <issue_start>username_0: I have two different dataframes populated with different name sets. For example:
```
t1 = pd.DataFrame(['<NAME>', '<NAME>', '<NAME>', '<NAME>'], columns=['t1'])
t2 = pd.DataFrame(['<NAME>', '<NAME>', '<NAME>', '<NAME>'], columns=['t2'])
```
I need to find the intersection between the two data sets. My solution was to split by comma, and then groupby last name. Then I'll be able to compare last names, and then see if the first names of t2 are contained within t1. ['<NAME>'] is the only one that should be returned in above example.
What I need help on is how to compare values within two different dataframes that are grouped by a common column. Is there a way to intersect the results of a groupby for two different dataframes and then compare the values within each key value pair? I need to extract the names in t2 that are *not* in t1.
I have an idea that it should look something like this:
```
for last in t1:
print(t2.get_group(last)) #BUT run a compare between the two different lists here
```
The only problem is that if the last name doesn't exist in the second groupby, it throws an error, so I can't even proceed to the next step mentioned in the comment, of comparing the values in the groups (first names).
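For the pandas name-matching question above, here is a merge-based sketch. The names below are made up ("Last, First" form), since the question's own examples are anonymized, and the match is exact on (last, first); the "first name contained within" refinement would need an extra step. Using `merge` with `indicator=True` also sidesteps the `KeyError` that `get_group` raises when a last name is missing from one side:

```python
import pandas as pd

# Hypothetical names in "Last, First" form (the question's examples are anonymized).
t1 = pd.DataFrame({"t1": ["Smith, John", "Jones, Mary"]})
t2 = pd.DataFrame({"t2": ["Smith, John", "Smith, Anna", "Brown, Tim"]})

# Split each column into last/first parts.
a = t1["t1"].str.split(", ", expand=True).rename(columns={0: "last", 1: "first"})
b = t2["t2"].str.split(", ", expand=True).rename(columns={0: "last", 1: "first"})

# Left-merge t2 against t1; "_merge" marks rows with no counterpart in t1.
merged = b.merge(a, on=["last", "first"], how="left", indicator=True)
only_in_t2 = merged.loc[merged["_merge"] == "left_only", ["last", "first"]]
print(only_in_t2.values.tolist())  # [['Smith', 'Anna'], ['Brown', 'Tim']]
```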
2018/03/22 | 512 | 1,568 | <issue_start>username_0: How can I maximize the Chrome browser window to make it fit the screen size? I have tried doing that using the function `driver.maximize_window()`, but it only maximizes the height, not the width.
The version I'm using is: selenium-server-standalone-3.11.0.jar and python 3.6.4
Here's the code I'm using:
```
from selenium import webdriver
# get initial window size
driver = webdriver.Chrome()
print(driver.get_window_size())
# set window size
driver.set_window_size(480, 320)
print(driver.get_window_size())
# maximize window
driver.maximize_window()
print(driver.get_window_size())
driver.set_window_size(2560, 1440)
print(driver.get_window_size())
driver.quit()
```
Output:
```
{'width': 1200, 'height': 1369}
{'width': 480, 'height': 320}
{'width': 1536, 'height': 1417}
{'width': 2560, 'height': 1417}
```<issue_comment>username_1: Here is what might help you with your problem if you want to start with an already maximized window:
```
options = webdriver.ChromeOptions()
options.add_argument("--start-maximized")
options.add_argument('window-size=2560,1440')
driver = webdriver.Chrome(chrome_options=options)
```
You can also edit the window size as you prefer. The information on this website might also come in handy: <http://selenium-python.readthedocs.io>
Upvotes: 3 <issue_comment>username_2: You can use the code below to maximize the browser:
```
driver.maximize.window()
```
Upvotes: 0 <issue_comment>username_3: `driver.maximize.window()` did not work for me.
This worked.
```
browser.maximize_window()
```
Upvotes: 2 |
2018/03/22 | 356 | 1,014 | <issue_start>username_0: My regex should allow a-z, A-Z, 0-9, single quote ('), space ( ), question mark (?), dash (-), full-stop (.).
The regex looks like this, but it also flags underscore and ampersand as valid characters:
```
^[\.\w?\-' ]+$
```
How do I fix this?<issue_comment>username_1: A `'` inside a character class matches that character literally; to match `'`, just use `'`.
As <NAME>. commented, `\w` is equivalent to `[a-zA-Z0-9_]`.
Resulting regex:
```
^[a-zA-Z0-9.?\-' ]+$
```
Try it online [here](https://regex101.com/r/nxm4Mi/1).
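A quick sanity check of that character class with Python's `re` module (the test strings are made up; any regex engine behaves the same here):

```python
import re

# Allowed: letters, digits, '.', '?', '-', "'" and space; '_' and '&' are excluded.
pattern = re.compile(r"^[a-zA-Z0-9.?\-' ]+$")

print(bool(pattern.match("It's fine - no. 9?")))  # True
print(bool(pattern.match("snake_case")))          # False: '_' was allowed by \w before
print(bool(pattern.match("Tom & Jerry")))         # False
```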
Upvotes: 2 [selected_answer]<issue_comment>username_2: Expanding the previous answers,
if you want to match the escaped `'` as stated in the question then you can use
```
^(?:[\.a-zA-Z0-9.-\? ]|')+$
```
It uses a non-capturing group `(?:exp1|exp2)+` to match one or more of exp1 or exp2, as opposed to `[exp1exp2]+`, which matches one or more of the characters e, x, p, 1 or 2.
you can try it out [here](https://regex101.com/r/FpXinV/4)
Upvotes: 2 |
2018/03/22 | 1,260 | 4,998 | <issue_start>username_0: I am relatively new to using Promises and MongoDB / Mongoose, and am trying to chain a flow through several database queries into an efficient and reliable function.
I want to know if my final function is a good and reliable way of achieving what I want, or if there are any issues or any improvements that can be made.
The process is as follows:
1) Check whether or not the user already exists
```
usersSchema.findOne({
email: email
}).then(res => {
if(res==null){
// user does not exist
}
}).catch(err => {});
```
2) Add the new user to the database
```
var new_user = new usersSchema({ email: email });
new_user.save().then(res => {
// new user id is res._id
}).catch(err => {});
```
3) Assign a free promotional code to the user
```
codeSchema.findOneAndUpdate({
used: false,
user_id: true
},{
used: true,
user_id: mongoose.Types.ObjectId(res._id)
}).then(res => {
// user's code is res.code
}).catch(err => {});
```
Obviously, each query needs to execute in sequence, so after a lot of research and experimentation into how to do this I have combined the queries into the following function, which seems to be working fine so far:
```
function signup(email){
// check email isn't already signed up
return usersSchema.findOne({
email: email
}).then(res => {
if(res==null){
// add to schema
var new_user = new usersSchema({ email: email });
// insert new user
return new_user.save().then(res => {
var result = parse_result(res);
// assign a code
return codesSchema.findOneAndUpdate({
used: false,
user_id: true
},{
used: true,
user_id: mongoose.Types.ObjectId(result._id),
});
});
}else{
return 'The user already exists';
}
});
}
signup('<EMAIL>').then(res => {
console.log('success, your code is '+res.code);
}).catch(err => {
console.log(err);
});
```
I'm still trying to get my head around exactly how and why this works - the function is returning a promise, and each nested promise is returning a promise.
My main concerns are that there is a lot of nesting going on (is there perhaps a way to do this by chaining .then() callbacks instead of nesting everything?) and that the nested promises don't appear to have error catching, although as the signup() function itself is a promise this seems to catch all the errors.
Is anyone knowledgeable on the subject able to confirm whether my process looks good and reliable or not? Thanks!<issue_comment>username_1: Your code can be improved in this way:
```
function signup(email){
// check email isn't already signed up
return usersSchema.findOne({
email: email
}).then(res => {
if(res==null){ // add to schema
var new_user = new usersSchema({ email: email });
// insert new user
return new_user.save()
}else{
return Promise.reject(new Error('The user already exists'));
}
})
.then(res => {
var result = parse_result(res);
// assign a code
 return codesSchema.findOneAndUpdate(
 { used: false, user_id: true },
 { used: true, user_id: mongoose.Types.ObjectId(result._id) }
 );
});
}
signup('<EMAIL>').then(res => {
console.log('success, your code is '+res.code);
}).catch(err => {
console.log(err);
});
```
Upvotes: 2 <issue_comment>username_2: To avoid indentation-hell, if you return a value from the function passed to a Promise's `.then()`-method, you can chain multiple `.then()` as a neat and tidy pipeline. Note that you can also return a Promise that has pending status, and then next function in line will execute when it has resolved.
```
function signup (email) {
return usersSchema.findOne({
email: email
}).then(res => {
if (res) throw 'The user already exists'
var new_user = new usersSchema({ email: email })
return new_user.save()
}).then(res => {
var result = parse_result(res)
return codesSchema.findOneAndUpdate({
used: false,
user_id: true
},{
used: true,
user_id: mongoose.Types.ObjectId(result._id)
})
})
}
```
Even better, if you have the possibility to use `async/await` (Node v7.6 or above), your code can look like normal blocking code:
```
async function signup (email) {
let user = await usersSchema.findOne({ email: email })
if (user) throw 'The user already exists'
let new_user = await new usersSchema({ email: email }).save()
let result = parse_result(new_user)
return codesSchema.findOneAndUpdate({
used: false,
user_id: true
},{
used: true,
user_id: mongoose.Types.ObjectId(result._id)
})
}
```
Your original function call code works on both without changes.
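As a minimal self-contained sketch of that chaining rule (plain Promises, no mongoose; the `delayed` helper is made up purely for illustration):

```javascript
// Each .then() hands its return value to the next handler.
// If that return value is a pending Promise, the chain waits for it first.
function delayed(value) {
  return new Promise(resolve => setTimeout(() => resolve(value), 10));
}

delayed(1)
  .then(v => v + 1)            // plain value: passed straight to the next .then()
  .then(v => delayed(v * 10))  // pending Promise: next handler waits for it
  .then(v => console.log(v))   // prints 20
  .catch(err => console.log(err)); // one catch covers every step of the chain
```

A rejection (or `throw`) anywhere in the chain skips the remaining `.then()` handlers and lands in that single `.catch()`.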
Upvotes: 3 [selected_answer] |
2018/03/22 | 853 | 2,319 | <issue_start>username_0: I've looked through the forums and can't seem to figure this out. I have the following data. I assume the answer lies in the "groupby" function but I can't seem to work it out.
```
Date Hour Value 3DAverage
1/1 1 57 53.33
1/1 2 43 42.33
1/1 3 44 45.33
1/2 1 51 ...
1/2 2 40 ...
1/2 3 42 ...
1/3 1 56 ...
1/3 2 42
1/3 3 48
1/4 1 53
1/4 2 45
1/4 3 46
1/5 1 56
1/5 2 46
1/5 3 48
1/5 4 64 *
1/6 1 50
1/6 2 41
1/6 3 42
1/7 1 57
1/7 2 43
1/7 3 45
1/8 1 58
1/8 2 49
1/8 3 41
1/9 1 53
1/9 2 46
1/9 3 47
1/10 1 58
1/10 2 49
1/10 3 40
```
What I am trying to do is add the "3DAverage" column. I would like this column to produce an average of the "Value" column for the PRIOR 3 corresponding hour values. *I want to fill this column down for the entire series*. For example, the value 53.33 is an average of the value for hour 1 on 1/2, 1/3, and 1/4. I would like this to continue down the column using only the prior 3 values for each "HourValue".
Also, please note that there are instances such as 1/5 hour 4. Not all dates have the same number of hours, so I am looking for the last 3 hour values for dates in which those hours exist.
I hope that makes sense. Thanks so much in advance for your help !<issue_comment>username_1: You can `groupby` on Date column and do the following:
```
df['3DAverage'] = df['Hour'].map(df.groupby('Hour').apply(lambda x: x.loc[x['Date'].isin(['1/2','1/3','1/4']),'Value'].mean()))
df.head(6)
Date Hour Value 3DAverage
0 1/1 1 57 53.333333
1 1/1 2 43 42.333333
2 1/1 3 44 45.333333
3 1/2 1 51 53.333333
4 1/2 2 40 42.333333
5 1/2 3 42 45.333333
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can try a rolling mean:
```
df['3D Average'] = df.iloc[::-1].groupby('Hour').Value.rolling(window = 3).mean()\
.shift().sort_index(level = 1).values
```
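For a self-contained check of this reverse/roll/shift idea, here is a sketch using `transform` on a trimmed-down copy of the question's data (hours 1 and 2 of 1/1 through 1/5 only; pandas assumed available):

```python
import pandas as pd

df = pd.DataFrame({
    'Date':  ['1/1', '1/1', '1/2', '1/2', '1/3', '1/3', '1/4', '1/4', '1/5', '1/5'],
    'Hour':  [1, 2, 1, 2, 1, 2, 1, 2, 1, 2],
    'Value': [57, 43, 51, 40, 56, 42, 53, 45, 56, 46],
})

def next_three_mean(s):
    # Roll over the reversed series so the window looks forward,
    # then shift so each row averages the three following dates.
    return s[::-1].rolling(window=3).mean()[::-1].shift(-1)

df['3DAverage'] = df.groupby('Hour')['Value'].transform(next_three_mean)
print(df.head(2))
# 1/1 hour 1 -> (51 + 56 + 53) / 3 = 53.33 and
# 1/1 hour 2 -> (40 + 42 + 45) / 3 = 42.33, matching the question
```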
Upvotes: 2 |
2018/03/22 | 433 | 1,100 | <issue_start>username_0: How do I reshape a 1D numpy array into a 2D numpy array,
filling the added columns with zeroes?
For example:
Input:
```
a = np.array([1,2,3])
```
Expected output:
```
np.array([[0, 0, 1],
[0, 0, 2],
[0, 0, 3]])
```
How do I do this?<issue_comment>username_1: Create a function that creates a zero array of m x n x 3 dimensionality. Then go through your original matrix and assign its values to those parts of the new zero matrix that should be non-zero.
Upvotes: 0 <issue_comment>username_2: For your specific example:
```
a = np.array([1,2,3])
a.resize([3, 3])
a = np.rot90(a, k=3)
```
hope this helps
Upvotes: 1 <issue_comment>username_3: ```
a = np.array([1,2,3])
```
**Option 1**
`np.pad` (this should be fast)
```
np.pad(a[:, None], ((0, 0), (2, 0)), mode='constant')
array([[0, 0, 1],
[0, 0, 2],
[0, 0, 3]])
```
---
**Option 2**
Assign a slice to `np.zeros` (also very fast)
```
b = np.zeros((3, 3))
b[:, -1] = a
array([[0., 0., 1.],
[0., 0., 2.],
[0., 0., 3.]])
```
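One more equivalent sketch, using `np.column_stack` to glue two zero columns onto the left of the values (the zero-column count is taken from the example above):

```python
import numpy as np

a = np.array([1, 2, 3])

# Two zero columns first, then the original values as the last column.
out = np.column_stack([np.zeros((a.size, 2), dtype=a.dtype), a])
print(out)
# [[0 0 1]
#  [0 0 2]
#  [0 0 3]]
```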
Upvotes: 3 [selected_answer] |
2018/03/22 | 425 | 1,172 | <issue_start>username_0: I need to select an anchor element within an HTML document. The only issue is that I cannot find a single ID or Class within the document.
Whenever I run
```
document.querySelector('a')
```
Or
```
document.querySelectorAll('a')
```
I get the error:
>
> Object doesn't support property or method
>
>
>
Any thoughts?
2018/03/22 | 261 | 1,013 | <issue_start>username_0: K8s has this feature to [encrypt secret data](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/), which requires modification of the kube-apiserver config. How can I do this in GKE?<issue_comment>username_1: Short answer is, you can't.
The Kubernetes Engine master is managed by Google, so you can't change its runtime parameters. Nonetheless, while the data may not be encrypted inside the etcd running on the master node, [the contents of the master node itself are encrypted](https://cloud.google.com/security/encryption-at-rest/default-encryption/#encryption_of_data_at_rest) as the link Will pointed to explains.
Upvotes: 2 <issue_comment>username_2: It appears you can, now. From the latest `gcloud` release notes:
```
### Kubernetes Engine
* Promoted the `--database-encryption-key` flag of `gcloud container clusters
create` to beta. The flag enables support for encryption of Kubernetes Secrets.
```
<https://cloud.google.com/sdk/docs/release-notes>
Upvotes: 0 |
2018/03/22 | 715 | 2,755 | <issue_start>username_0: Currently working on an ASP.NET web application. I would like to deploy to Azure App Service but when running on App Service, the connection to my Azure SQL server can not be found.
[![enter image description here](https://i.stack.imgur.com/XczGP.png)](https://i.stack.imgur.com/XczGP.png)
When I run on my local machine, the connection to the Azure SQL server works. I can also connect to my Azure SQL server via SQL Management Studio.
On the Azure SQL server I have set up a Firewall rule to allow all IP addresses, and allowed access to Azure services:[![enter image description here](https://i.stack.imgur.com/a8wKP.png)](https://i.stack.imgur.com/a8wKP.png)
I have done considerable searching for solutions to this problem over the past couple of days, but I can't find a solution.<issue_comment>username_1: You can easily find all of the App Service outbound IP addresses under Properties,
but you need to add all of those outbound IPs to your firewall rules.
[![enter image description here](https://i.stack.imgur.com/fsZ26.jpg)](https://i.stack.imgur.com/fsZ26.jpg)
Upvotes: 0 <issue_comment>username_2: I ran into what could be this same problem. Changing the server admin password resolved the issue. I was able to connect running locally, but the published application was unable to log in with the server admin credentials until I reset the server admin password.
Upvotes: 0 <issue_comment>username_3: I have added the Virtual IP for the Firewall rule and it fixed the issue.
[![enter image description here](https://i.stack.imgur.com/XENkh.png)](https://i.stack.imgur.com/XENkh.png)
For more troubleshooting use App insights. Most of the time it shows the exact issue.
Upvotes: 2 <issue_comment>username_4: On the Azure portal, on the SQL database overview page, click Set Server Firewall.
'Allow Azure services and resources to access this server' should be set to Yes, and the rest of the settings can stay at their defaults.
Save your changes.
It works for me with an ASP.NET app and SQL DB both located in the Azure environment.
Upvotes: 0 <issue_comment>username_5: After spending a few hours tweaking my `appsettings.json`, Azure firewall settings, permissions, ... it ended up being that I had my **App Service > Settings > Configuration > Connection String** previously set. And Azure seems to completely ignore your `appsettings.json` in this case.
[![Azure Screenshot](https://i.stack.imgur.com/1hsDp.png)](https://i.stack.imgur.com/1hsDp.png)
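For context, a sketch of the local `appsettings.json` section that this App Service setting silently overrides (server, database and credentials below are placeholders, not values from this thread):

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=tcp:your-server.database.windows.net,1433;Initial Catalog=your-db;User ID=your-user;Password=your-password;Encrypt=True;"
  }
}
```

If an entry with the same name exists under Configuration > Connection Strings in the portal, it wins over this file.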
Upvotes: 3 <issue_comment>username_6: Works with the solution given by @username_5. Just an additional setting to remember, as you're mentioning azure sql, don't forget to change the type to SQLAzure.
[![enter image description here](https://i.stack.imgur.com/1ufWD.png)](https://i.stack.imgur.com/1ufWD.png)
Upvotes: 0 |
2018/03/22 | 961 | 2,979 | <issue_start>username_0: example:
```
var A = function (z) {
this.z = z
}
// attach 1 method
A.prototype.sayHay = function(message) {
this.message = message
console.log(message)
}
// attach 2 methods ?????
A.prototype.sayHay.prototype.inReverse = function () {
// how to avoid the first console.log
console.log(this.message.split('').reverse().join(''))
}
var b = new A()
b.sayHay('Hola').inReverse()
// this should print 'aloH'
```
how can I also override the first console.log? For example, if sayHay prints 'Hi', then inReverse would return 'iH'.
E D I T E D - - - - - - - - - - - - - - - - - - - - - - - - - - -
how can I make it so that executing only sayHay('Hola') returns 'Hola', while executing sayHay('Hola').inReverse() returns 'aloH'?<issue_comment>username_1: A prototype is used in situations where you create an object using the keyword `new`.
**For Example:**
```
new A().sayHay(...).[Any method from the prototype]
```
So, you can initialize the variable `b` and then execute the function `sayHay` as a constructor:
```
new b.sayHay('Hola').inReverse();
```
```js
var A = function (z) {
this.z = z
}
// attach 1 method
A.prototype.sayHay = function(message) {
this.message = message
//console.log(message);
}
A.prototype.sayHay.prototype.inReverse = function () {
// how to avoid the first console.log
console.log(this.message.split('').reverse().join(''))
}
var b = new A();
new b.sayHay('Hola').inReverse();
```
To avoid the first `console.log(message)` you can replace that function at runtime, or remove that line.
Another alternatives is avoiding the second prototyping and return an object:
```js
var A = function (z) {
this.z = z
}
// attach 1 method
A.prototype.sayHay = function(message) {
return {
message: message,
inReverse: function() {
console.log(this.message.split('').reverse().join(''));
}
}
}
var b = new A();
b.sayHay('Hola').inReverse();
```
Upvotes: 0 <issue_comment>username_2: Here is another way using regular classes
```js
class Sayer {
constructor (message) {
this.message = message
}
inReverse() {
return this.message.split('').reverse().join('')
}
toString() {
return this.message
}
}
class A {
sayHay(message) {
return new Sayer(message)
}
}
console.log(new A().sayHay('Hola').inReverse())
console.log(new A().sayHay('Hola').toString())
```
Upvotes: 0 <issue_comment>username_3: ```
var A = function (z) {
this.z = z
}
// attach 1 method
A.prototype.sayHay = function(message) {
this.message = message;
return {
log : function(){console.log(message)},
rev : function () { console.log(message.split('').reverse().join('')) } // use the closure variable `message`; `this` here is the returned object, not the A instance
};
}
var b = new A();
b.sayHay('Hola').rev();
// this should print 'aloH'
```
I have tried to do it by way of closures. Check whether this combination of prototype and closure achieves your goal.
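A runnable sketch of this closure idea, trimmed so that `inReverse` returns the reversed string instead of printing it (method names follow the thread; the no-print `sayHay` also answers the edited question about suppressing the first log):

```javascript
var A = function (z) {
  this.z = z;
};

A.prototype.sayHay = function (message) {
  // Return a small object that closes over `message`,
  // so chained calls never need to reach back through `this`.
  return {
    log: function () { console.log(message); },
    inReverse: function () { return message.split('').reverse().join(''); }
  };
};

var b = new A();
console.log(b.sayHay('Hola').inReverse()); // prints aloH
b.sayHay('Hola').log();                    // prints Hola
```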
Upvotes: 2 [selected_answer] |
2018/03/22 | 503 | 1,930 | <issue_start>username_0: I am trying to render the response of a simple api I've created but when I try to iterate through the items, it comes as undefined. I know it has to do with asynchronous programming but can't figure out how to solve it. Could anybody help ?
```js
import React from 'react';
export default class Home extends React.Component {
constructor(props) {
super(props);
this.state = {
items: undefined
};
}
componentDidMount() {
fetch('https://petzie-cajuhqnpqy.now.sh/breeds')
.then(res => res.json())
.then(data => {
this.setState(() => {
return {
items: data.result
};
});
console.log(this.state);
})
.catch(err => console.log(err));
}
render() {
  return (
    <ul>
      {this.state.items.map((item) => {
        return <li>{item.name}</li>;
      })}
    </ul>
  );
}
}
```<issue_comment>username_1: If you have async calls you need to do something like this:
```
render() {
  if (!this.state.items) {
    return null; // render must return something; null renders nothing
  }
  return (
    <ul>
      {this.state.items.map((item) => {
        return <li>{item.name}</li>;
      })}
    </ul>
  );
}
```
Is one way to do it.
Upvotes: 0 <issue_comment>username_2: First of all, if you are going to `.map` through `this.state.items`, it can't be undefined at any point, otherwise you will throw an error (`.map` expects an array). So, in your constructor, define it as an empty array:
```
constructor(props) {
super(props);
this.state = { items: [] };
}
```
Second, `setState` is async. Therefore, when you call `setState` and `this.state` right after it, the object was not updated yet. But because of React nature, whenever it gets updated, your render function will be called again and this time `this.state.items` will have the data fetched from the API.
Upvotes: 2 [selected_answer] |
2018/03/22 | 481 | 1,766 | <issue_start>username_0: I have a list of files in which I need to select few lines which has CommonChar in them and work on creating a dictionary.
The lines in a file which I am interested in look like these:
```
CommonChar uniqueName field value
CommonChar uniqueName field1 value1
CommonChar uniqueName1 field value
CommonChar uniqueName1 field1 value1
```
So with this I would want a final dictionary which will store internal dictionaries named on uniqueName and key value pairs as (field: value, field1:value1)
This way I will have a main dict with internal dict based on uniqueName and key value pairs.
Final output should look like this:
```
mainDic={'uniqueName': {'field':'value', 'field1':value1, 'field2':value2},
'uniqueName1':{'field':'value', 'field1':value1, 'field2':value2} }
```
2018/03/22 | 833 | 3,133 | <issue_start>username_0: I have a class with embedded Id
```
@Entity
@Table(name="account_table")
public class AccountLink {
@EmbeddedId
private AccountLinkKey accountLinkKey;
//Rest of code
}
@Embeddable
public class AccountLinkKey {
//Rest code here
@ColumnName(name="i am hiding")
private String segAccount;
@ColumnName(name="i am hiding")
private String secAccount;
//Rest of code
}
```
Now in my repository I want to have a method to search only by `secAccount`, so how should I write `findBy..`
```
List<AccountLink> findBy...(String secAccount);
```
I tried `findByAccountLinkKeySecAccount(String secAccount)` but still no success.<issue_comment>username_1: I have rewritten your class and could add repository function as well. Here's my implementation which you might consider taking a look at.
Here's the class `AccountLink.java`.
```
@Entity
@Table(name = "account_table")
public class AccountLink {
@Column(name = "id")
public Long id;
@EmbeddedId
private AccountLinkKey accountLinkKey;
@Embeddable
public class AccountLinkKey {
@Column(name = "seg_account")
private String segAccount;
@Column(name = "sec_account")
private String secAccount;
}
}
```
And the `AccountLinkRepository.java`.
```
public interface AccountLinkRepository extends JpaRepository<AccountLink, AccountLinkKey> {
 AccountLink findAccountLinkByAccountLinkKey_SecAccount(String secAccount);
}
```
I have added an `id` column in the `AccountLink` class as the primary key of the `account_table`. You might need something like this as well for your table.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Normally, when you want to perform some action on embedded columns in JPA, you bind your columns with an underscore ('_').
Example :
```
findByAccountLinkKey_SecAccount(...)
```
Upvotes: 1 <issue_comment>username_3: This is also useful for derived delete methods when an `@Entity` uses an `@Embedded` key.
1. Your method name should append '_' between the key and the variable of the Embeddable class.
example: pkId_firstName (pkId is the `@EmbeddedId` key and firstName is a variable of the Embeddable class.)
2. The method name must traverse from the key to the variable with the underscore symbol; without the '_' it does not work.
Upvotes: 0 <issue_comment>username_4: I used this example: <https://www.baeldung.com/spring-jpa-embedded-method-parameters>
**You define the Composite Key Class for the id:**
```
@Embeddable
public class BookId implements Serializable {
private String author;
private String name;
// standard getters and setters
}
```
**You use it as id in your entity:**
```
@Entity
public class Book {
@EmbeddedId
private BookId id;
private String genre;
private Integer price;
//standard getters and setters
}
```
**You finally make your calls to the repository:**
```
@Repository
public interface BookRepository extends JpaRepository<Book, BookId> {
 List<Book> findByIdName(String name);
 List<Book> findByIdAuthor(String author);
}
```
Example of code where I used it: [Quiz Engine with JetBrainsAcademy](https://github.com/vinny59200/JBAQuizEngineWithCompositeKey/blob/master/src/engine/vv/model/VVQuiz.java)
Upvotes: 3 |
2018/03/22 | 1,229 | 4,389 | <issue_start>username_0: at the moment I have over 150k thumbnails uploaded on my server. All 150k images are stored in different directories like this ( 2018/03/22/image1/image1.jpg ).
By checking with tools like GTmetrix or Pingdom I noticed that each image can be losslessly compressed for a reduction of around 5%.
For each image I can save around 500B. That is not much, but considering that the homepage has 80 images, I can save around 40KiB.
I have read a lot of things on the web during these days, but there are some things/doubts that I still have.
1) Each image is stored in a different path, for example ( 2018/03/22/image1/image1.jpg - 2018/03/22/image2/image2.jpg - 2018/03/22/image3/image3.jpg - 2018/03/22/image4/image4.jpg ) - can this create any performance problems? Assuming that the homepage loads 80 images, this means that it loads 80 images from 80 different paths. Would uploading all the images to a single path make any difference for performance?
2) It seems crazy to save all the images to the desktop, use software for Windows or Mac to bulk optimise all the 150k images, and re-upload them to the server. Is there any solution to optimize the images by using cPanel?
3) I see a lot of tools that CloudFlare offers, but I still do not understand whether any of them can do this.
Things already tried:
- I started to optimise all images by downloading them one by one to the desktop, but after around 1 hour I realised that is impractical.
- I have tried to check a lot of the tools that CloudFlare provides, but I'm still unable to tell whether any of them can do this.
- I have tried to check the cPanel tools, but as far as I can tell none of them can do this.
Any answer will be very helpful - thanks.<issue_comment>username_1: Path shouldn't have any impact on performance. Cloudflare offers Polish and WebP on its paid plans:
<https://support.cloudflare.com/hc/en-us/articles/360000607372-Using-Polish-to-compress-images-on-Cloudflare>
<https://www.cloudflare.com/plans/>
Upvotes: 0 <issue_comment>username_2: >
> 1) Each image is stored in different path, .... By uploading all the
> images in a single path can have any importance for the performance?
>
>
>
No
>
> 2) ... Is there any solution to optimize the images by using cPanel?
>
>
>
I don't know of a cPanel solution, I can think of two other options though.
You could zip and download the directory holding all your images, unzip it locally, then drop that directory onto a tool like <https://imageoptim.com> . ImageOptim gives excellent results, then reverse the process and upload the optimised images and overwrite your existing images.
Another tool would be ShortPixel, it's a commercial service and looks like it would cost you $99 for 170k images. They have command line tool and a php API: <https://shortpixel.com/api-tools>
>
> 3) I see a lot of tools that CloudFlare have but I still not
> understand if any tool can do this.
>
>
>
If it's an option for you, Google's PageSpeed modules gives very good results for image compression.
<https://www.modpagespeed.com/doc/filter-image-optimize>
Another suggestion which would give immediate results is to lazy load the images. With this in place, even though your homepage has 80 images, if only 10 of them are in view when the page opens, then just those 10 will be downloaded. There are all sorts of lazy load options available, [here are some to get you started.](https://www.sitepoint.com/five-techniques-lazy-load-images-website-performance/)
Good luck!
Upvotes: 1 <issue_comment>username_3: The easiest thing to do is use CloudFlare on the domain. With their Pro plan (or better) it will optimize and cache the images for you.
But if you want to 'own' the optimized images you'll have to re-compress them yourself.
On Mac I use ImageOptim: <https://imageoptim.com/mac>
You'd have to transfer the files to your mac. Run ImageOptim (this will take a REALLY long time). I suggest the following settings...
[![enter image description here](https://i.stack.imgur.com/wjaPe.png)](https://i.stack.imgur.com/wjaPe.png)
Then transfer the images back to your server.
If the server is a Windows Machine, you can use FileOptimizer: <https://nikkhokkho.sourceforge.io/static.php?page=FileOptimizer>
That will process the files directly on the server. It's convenient - just make sure to give that process lowest priority.
Upvotes: 0 |
2018/03/22 | 624 | 1,829 | <issue_start>username_0: I have an array of objects, each with non-correlative ids.
I would like to know if it is possible, given an integer, to find the object with the next closest id value.
Example
```
const array = [{id: 4}, {id: 10}, {id: 15}]
```
given any value `x`, if `10 < x <= 15`, it should return `15`
given any value `x`, if `4 < x <= 10` , it should return `10`
and so on.
Thank you<issue_comment>username_1: If the array is sorted by `id` value, you can use `Array.find()` to find the first `id` that is greater than or equal to `x`:
```js
const array = [{id: 4}, {id: 10}, {id: 15}]
const findClosestId = (x) => (array.find(({ id }) => x <= id) || {}).id
console.log(findClosestId(11)); // 15
console.log(findClosestId(6)); // 10
console.log(findClosestId(100)); // undefined
```
Upvotes: 2 <issue_comment>username_2: If you always want the next higher one, you can check this.
Find the first item whose `id` is higher than your `x`. Also, I think you first need to sort your array based on `id`, to avoid issues with unsorted input.
```js
const x = 5;
const array = [{id: 4}, {id: 10}, {id: 15}];
const max = array.sort((a,b) => a.id - b.id)
.find(item => item.id > x);
console.log(max);
```
Upvotes: 2 <issue_comment>username_3: If your array is not sorted, and you don't want to use the `.sort` method (for example: maybe because you don't want to mutate your array), you can use `.reduce` to find the best result:
```js
const array = [{id: 4}, {id: 10}, {id: 20}, {id: 15}]
function findClosest(x) {
return array.reduce((best, current) => {
return (current.id >= x && (!best || current.id < best.id))
? current
: best;
}, undefined);
}
console.log(findClosest(11))
console.log(findClosest(16))
```
Upvotes: 4 [selected_answer] |
2018/03/22 | 813 | 2,549 | <issue_start>username_0: I have a view that I created that reads from a table as well as includes hard-coded values. The view was created as expected, except I want to order by a column in the definition of the view.
View:
```
create VIEW [dbo].[RPTLV_PICKIDS] ( [Fixed], [Item], [Desc] ) AS select 0, 'test1', 'test1' union select 0, 'test2', 'test2' union select 0, cast(PickID as varchar(11)), cast(PickID as varchar(11)) FROM dbo.table group by pickid
```
As a note, I don't have control of the ORDER BY when running the select.
The results I am looking for would be when I run the following:
```
select * from dbo.RPTLV_PICKIDS order by item
```
Behind the scenes a query is just run to select from the view, which returns the following:
```
Fixed Item Desc
0 1 1
0 2 2
0 test1 test1
0 test2 test2
```
Desired Results:
```
Fixed Item Desc
0 test1 test1
0 test2 test2
0 1 1
0 2 2
```
I need to alter the creation of the view somehow to have it ordered by Item.
Any help would be much appreciated.
2018/03/22 | 804 | 2,416 | <issue_start>username_0: I have some text being piped to sed (any other tool is okay). I want to extract the line which ends with a question mark and the line before that. How can we achieve this?
eg text
```
& 434.0K
-
....
What are you trying to filter
out of screencap's output?
Student
xcvdgfdg
srrtgg
```<issue_comment>username_1: Could you please try following and let me know if this helps you.
```
awk '/\?$/{print prev ORS $0} {prev=$0}' Input_file
```
For your shown Input following will be the output.
```
What are you trying to filter
out of screencap's output?
```
on a Solaris/SunOS system, change `awk` to `/usr/xpg4/bin/awk` , `/usr/xpg6/bin/awk` , or `nawk`.
***Explanation of code:***
`/\?$/`: Searching for a line which is ending with `?` if yes then do following.
`{print prev ORS $0}`: Using `awk`'s print function here by which we are printing variable named `prev` and `$0`(current line)'s value here.
`{prev=$0}`: So Here I am storing the current line's value to variable named prev so that in case of a match of `?` at line's last, could print the previous line.
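To see the one-liner run without creating a file, the sample text can be piped straight in (a quick sketch; any POSIX awk should behave the same):

```shell
printf '%s\n' \
  '& 434.0K' \
  '-' \
  '....' \
  'What are you trying to filter' \
  "out of screencap's output?" \
  'Student' \
  'xcvdgfdg' \
  'srrtgg' |
awk '/\?$/{print prev ORS $0} {prev=$0}'
# Prints only the matching line and the one before it:
# What are you trying to filter
# out of screencap's output?
```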
Upvotes: 3 [selected_answer]<issue_comment>username_2: A sed solution:
```
sed -n '/?$/{x;G;p;};h'
```
Rough translation: in general, don't print anything. If this line ends with '?', prepend it with the line in the hold space and print it. And save this line in the hold space.
Upvotes: 1 <issue_comment>username_3: You can use `grep` to match the line in question then use `-A num` for lines after or `-B num` for lines before the match.
Given:
```
$ echo "$txt"
& 434.0K
-
....
What are you trying to filter
out of screencap's output?
Student
xcvdgfdg
srrtgg
```
One line before:
```
$ echo "$txt" | grep -B1 '\?$'
What are you trying to filter
out of screencap's output?
```
One line after:
```
$ echo "$txt" | grep -A1 '\?$'
out of screencap's output?
Student
```
etc.
POSIX and GNU grep supported.
---
If you have multiple matches, `grep` will put `--` between each match:
```
$ echo "$txt$txt" | grep -A1 '\?$'
out of screencap's output?
Student
--
out of screencap's output?
Student
```
If that is a problem, pipe through `sed` to delete it:
```
$ echo "$txt$txt" | grep -A1 '\?$' | sed '/^--*$/d'
out of screencap's output?
Student
out of screencap's output?
Student
```
Upvotes: 0 <issue_comment>username_4: Another sed solution
```
sed '$d;N;/?$/!D' infile
```
Upvotes: 0 |
2018/03/22 | 1,486 | 4,946 | <issue_start>username_0: I need to accept a user name and return the password and user ID to determine whether or not they have access to a particular Form in a project.
I have searched for a solution and viewed similar questions on this site, but to no avail. Please, help if you can.
The parameter values returned when run in SQL Server debug are:
```
Joseph
0x6D616368696E653100000000000000000000000000000000000000000000000000000000000000000000
34DFCAA9-1A5F-4AC4-AC43-4A025DD84063
```
My stored procedure is:
```
CREATE PROCEDURE [dbo].[RMVerifyUser]
@UserName AS VARCHAR(40),
@Password AS BINARY(42) OUTPUT,
@ClerkID AS UNIQUEIDENTIFIER OUTPUT
AS
SELECT
@ClerkID = IDClerk,
@Password = (CONVERT(binary(8), ClkPassCode, 0)) -- ClkPassCode is varchar(50) in the table.
FROM
tblClerkInfo
WHERE
ClkName = @UserName
AND ISNULL(Canceled, 0) = 0
RETURN
```
And my C# code is:
```
private string retrievePassword(object userName)
{
string strPassword = "";
//uidClerkID = new Guid();
connect.Open();
try
{
SqlCommand command = new SqlCommand("RMVerifyUser");
command.Connection = connect;
command.CommandType = CommandType.StoredProcedure;
SqlParameter retUserName = new SqlParameter("@UserName", userName); // userName is passed in from another method.
retUserName.ParameterName = "@UserName";
retUserName.Value = userName;
command.Parameters.Add(retUserName);
SqlParameter retPassword = new SqlParameter("@Password", strPassword); // I suspect the error is occurring somewhere in the declaration/definition of this parameter.
retPassword.ParameterName = "@Password";
retPassword.Direction = ParameterDirection.Output;
retPassword.DbType.Equals(DbType.String);
retPassword.Size = 50;
retPassword.Value = strPassword;
command.Parameters.Add(retPassword); // ClkPassCode is nvarchar(50) in the data table.
SqlParameter retUserID = new SqlParameter("@ClerkId", uidClerkID); // uidClerkID is a property in a separate class, but I also tried - uidClerkID = new Guid() - in this method. Same error thrown.
retUserID.ParameterName = "@ClerkId";
retUserID.Direction = ParameterDirection.Output;
retUserID.Value = uidClerkID;
command.Parameters.Add(retUserID);
command.ExecuteNonQuery();
uidClerkID = Guid.Parse(retUserID.Value.ToString());
if (strPassword == DBNull.Value.ToString())
{
ReferenceEquals(strPassword, null);
}
else
{
strPassword = retPassword.Value.ToString();
return strPassword;
}
}
catch (Exception ex)
{
....
}
}
```<issue_comment>username_1: Could you please try following and let me know if this helps you.
```
awk '/\?$/{print prev ORS $0} {prev=$0}' Input_file
```
For your shown Input following will be the output.
```
What are you trying to filter
out of screencap's output?
```
on a Solaris/SunOS system, change `awk` to `/usr/xpg4/bin/awk` , `/usr/xpg6/bin/awk` , or `nawk`.
***Explanation of code:***
`/\?$/`: Search for a line ending with `?`; if it matches, do the following.
`{print prev ORS $0}`: Use `awk`'s print function to output the variable `prev` followed by `$0` (the current line).
`{prev=$0}`: Store the current line's value in the variable `prev`, so that when a line ends with `?`, the previous line can be printed.
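For illustration only, the same keep-the-previous-line idea can be sketched in Python (`pairs_before_question` is a made-up name, not part of the awk solution):

```python
def pairs_before_question(lines):
    """Collect each line ending in '?' together with the line before it."""
    out = []
    prev = None
    for line in lines:
        if line.endswith('?') and prev is not None:
            out.append(prev)   # the remembered previous line
            out.append(line)   # the line ending in '?'
        prev = line            # remember the current line for the next pass
    return out
```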
Upvotes: 3 [selected_answer]<issue_comment>username_2: A sed solution:
```
sed -n '/?$/{x;G;p;};h'
```
Rough translation: in general, don't print anything. If this line ends with `?`, prepend it with the line in the hold space and print both. And save the current line in the hold space.
Upvotes: 1 <issue_comment>username_3: You can use `grep` to match the line in question then use `-A num` for lines after or `-B num` for lines before the match.
Given:
```
$ echo "$txt"
& 434.0K
-
....
What are you trying to filter
out of screencap's output?
Student
xcvdgfdg
srrtgg
```
One line before:
```
$ echo "$txt" | grep -B1 '\?$'
What are you trying to filter
out of screencap's output?
```
One line after:
```
$ echo "$txt" | grep -A1 '\?$'
out of screencap's output?
Student
```
etc.
Supported by GNU and BSD grep (the `-A`/`-B` options are extensions; POSIX grep does not define them).
---
If you have multiple matches, `grep` will put `--` between each match:
```
$ echo "$txt$txt" | grep -A1 '\?$'
out of screencap's output?
Student
--
out of screencap's output?
Student
```
If that is a problem, pipe through `sed` to delete it:
```
$ echo "$txt$txt" | grep -A1 '\?$' | sed '/^--*$/d'
out of screencap's output?
Student
out of screencap's output?
Student
```
Upvotes: 0 <issue_comment>username_4: Another sed solution
```
sed '$d;N;/?$/!D' infile
```
Upvotes: 0 |
2018/03/22 | 320 | 1,043 | <issue_start>username_0: How to convert an EPS file to PNG and specify the size of the PNG?
```
convert file.eps -resize 800x1200 file.png
```
This converts the EPS, but with the wrong dimensions.
Sample EPS file:
<https://ufile.io/zrv3n><issue_comment>username_1: Either ignore aspect ratio with `!`
```
convert file.eps -resize '800x1200!' file.png
```
Or resize to fill, and clip to size.
```
convert file.eps -resize '800x1200^' -gravity Center -extent 800x1200 file.png
```
Other techniques are covered in the [Resize or Scaling](http://www.imagemagick.org/Usage/resize/) article.
Upvotes: 3 [selected_answer]<issue_comment>username_2: In ImageMagick, the proper way to change the output raster size when converting an EPS to PNG is to specify -density before reading the EPS. For higher quality, you can render at a large density and then resize down. For example:
```
convert -density 300 file.eps -resize 800x1200 file.png
```
You can force the dimensions to be exact with distortion, crop, or padding, as per username_1's answer.
Upvotes: 1 |
2018/03/22 | 898 | 3,096 | <issue_start>username_0: I'm getting an error when I try to create an sql table which I can't fix:
```
CREATE TABLE IF NOT EXISTS `#__websitemanager_magazine` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`asset_id` INT(10) unsigned NOT NULL DEFAULT 0 COMMENT 'FK to the #__assets table.',
`date` DATE NOT NULL DEFAULT '',
`magazine_file` TEXT NOT NULL,
PRIMARY KEY (`id`),
```
) ENGINE=MyISAM AUTO\_INCREMENT=0 DEFAULT CHARSET=utf8;
it throws "Invalid default value for 'date'". I'm using MySQL 5.7.<issue_comment>username_1: Don't use an empty string as the default value for a date column. Try something like:
```
`date` DATE NOT NULL DEFAULT CURRENT_TIMESTAMP
```
Upvotes: 0 <issue_comment>username_2: You do not need quotes around your field names. You also have an empty string at the end of your `date` line which you do not need, and which could be throwing the error.
Upvotes: 1 <issue_comment>username_3: You can update your query
```
CREATE TABLE IF NOT EXISTS `#__websitemanager_magazine` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`asset_id` INT(10) unsigned NOT NULL DEFAULT 0 COMMENT 'FK to the #__assets table.',
`date` DATE NOT NULL DEFAULT CURRENT_TIMESTAMP,
`magazine_file` TEXT NOT NULL,
PRIMARY KEY (`id`),
) ENGINE=MyISAM AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
```
Whenever you declare a date column, either make it nullable with `DEFAULT NULL`, or, if you don't want a nullable column, declare it `NOT NULL DEFAULT CURRENT_TIMESTAMP`.
Upvotes: 0 <issue_comment>username_4: You are getting the error due to using an empty string as a default value for a `date` data type. The other answers are off in the solution, as using functions such as `NOW` or `CURRENT_TIMESTAMP` is not supported for the `date` data type in MySQL. These functions do not provide constant values.
[Create Table Documentation](https://dev.mysql.com/doc/refman/5.7/en/create-table.html)
>
> The default value must be a constant; it cannot be a function or an expression. This means, for example, that you cannot set the default for a date column to be the value of a function such as NOW() or CURRENT\_DATE. The exception is that you can specify CURRENT\_TIMESTAMP as the default for a TIMESTAMP or DATETIME column
>
>
>
When declaring a `date` as `NOT NULL`, [MySQL](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-types.html) allows the usage of the zero value `0000-00-00` in place of null values. To fix your SQL statement, you need to change the default to `DEFAULT '0000-00-00'`.
>
> MySQL permits you to store a “zero” value of '0000-00-00' as a “dummy date.” This is in some cases more convenient than using NULL values, and uses less data and index space. To disallow '0000-00-00', enable the NO\_ZERO\_DATE mode.
>
>
>
You can also leave out the `DEFAULT` clause altogether, as the [Data Type Defaults](https://dev.mysql.com/doc/refman/5.7/en/data-type-defaults.html) documentation states `0000-00-00` is the implicit default for dates:
>
> For date and time types other than TIMESTAMP, the default is the appropriate “zero” value for the type
>
>
>
Upvotes: 2 [selected_answer] |
2018/03/22 | 839 | 2,922 | <issue_start>username_0: Im working with a Particle Photon to make a digital watch + alarm clock. I am using a TFT screen, and i have drawn some digits on it which i can increment and change "digit index" with some buttons that i connected to my Photon - with the purpose of setting the time of a scheduled alarm. The function under is the one i call when a button is clicked, and its parameter is what "digit index" the "cursor" on my screen is at, which i can navigate with my buttons. This works fine however, all the way until 9 - where my screen starts displaying ASCII smileys and other letters instead of digits. I tried putting an if statement, that when the variable temp is higher than 9, it is set to 0 - which is what i ultimately want to achieve. This did not work, and it keeps displaying ASCII letters when it exceeds 9. Then i decided to try to print temp to console at button click, but then the first letter it prints is 49 and not 0? What??
I am not too seasoned with C/C++, but I do have a hunch that before incrementing temp, I am doing the conversion from the char array timeSet wrong. What do you guys think?
```
void replaceDigitFromIndex(int selectorIndex) {
tft.setTextSize(4);
tft.setTextColor(ST7735_RED);
tft.setCursor(4, 20);
tft.print(timeSet);
int temp = 0;
if(selectorIndex == 1) {
temp = (int) timeSet[0];
temp++;
timeSet[0] = (char) temp;
}
if(selectorIndex == 2) {
temp = (int) timeSet[1];
temp++;
timeSet[1] = (char) temp;
}
if(selectorIndex == 3) {
temp = (int) timeSet[3];
temp++;
timeSet[3] = (char) temp;
}
if(selectorIndex == 4) {
temp = (int) timeSet[4];
temp++;
timeSet[4] = (char) temp;
}
}
```<issue_comment>username_1: The characters `'0'`-`'9'` are [ASCII Characters](https://www.asciitable.com/) 48-57\*.
If you want to add one and have them loop, you can do:
```
//conversion to int is implicit, no need for (int)
int temp = timeSet[i] - '0'; //Subtract char '0' to map to ints [0,9]
temp = (temp + 1) % 10; //loop around 10
timeSet[i] = temp + '0'; //Add back char '0' to map back to chars ['0', '9']
```
---
\*For you at least (since you've told us that `'1'` prints 49, you must be using ASCII or some superset of ASCII like UTF-8), but there's no guarantee every compiler will behave like this. This and readability are the reasons I do `- '0'` in my code. The digit characters are always contiguous, though. Thanks @Pete!
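The same subtract-`'0'`, wrap with `% 10`, add-`'0'` trick can be sketched in Python for illustration (the question's code is C++ on a Photon; `bump_digit` is a hypothetical helper name):

```python
def bump_digit(c):
    """Increment an ASCII digit character, wrapping '9' back around to '0'."""
    return chr((ord(c) - ord('0') + 1) % 10 + ord('0'))

print(bump_digit('5'))  # 6
print(bump_digit('9'))  # 0
```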
Upvotes: 4 [selected_answer]<issue_comment>username_2: ASCII only represents single characters; for numbers, just the digits '0' (char 48) through '9' (char 57).
When you go over 9, you'll get the char `'9'+1`, which is `':'`.
You are right to add an if statement for when the int goes over 9,
but make sure the if operates on the int, not on the character.
Upvotes: 2 |
2018/03/22 | 697 | 2,646 | <issue_start>I wrote a Python program and am trying to debug it, but I can't get rid of the following error message. What am I doing wrong? I would appreciate your help.
The problem seems to be `lines`, but I have already declared `lines` as an array.
```
NameError Traceback (most recent call last)
in ()
16 csvreader = csv.reader(csvfile)
17 for line in csvreader:
---> 18 lines.append(line)
19 return lines
20 def loadDatasetFinal(filename, split, trainingSet = [], testSet = []):
NameError: name 'lines' is not defined
```
Below is the program code:
```
#kNN implementation in Python 3
import csv
import random
import math
import operator
import urllib.request
# Old Way Commented - Out### FUNCTION GET LINES: Get the files, either directly online or by saving it locally:
def getLines(filename):
lines = []
#if (filename.startswith(('http', 'ftp', 'sftp'))): #Skip downloading it and open directly online:
# response = urllib.request.urlopen(filename)
#lines = csv.reader(response.read().decode('utf-8').splitlines())
#else :#TutorialsPoint IDE requires 'r', not 'rb'
with open(r'C:\Users\Radha\Desktop\iris.data.csv','r') as csvfile: #csvreader is an object that is essentially a list of lists
csvreader = csv.reader(csvfile)
for line in csvreader:
lines.append(line)
return lines
```<issue_comment>username_1: Right before your `with open(r'C:\`... line, add `lines = []`.
This should solve your problem. Currently you haven't defined the variable `lines`.
Upvotes: 2 <issue_comment>username_2: Your `lines` variable is defined within the function, but you are asking to return the value of `lines` outside of your function. You would need to return the value just after the for loop, but not outside of your function:
```
for line in csvreader:
lines.append(line)
return lines
```
You won't be able to return `lines` from outside the function. I would move the entire `with open(...)` block, and everything that uses it, inside the function definition where the empty list `lines` is created.
Upvotes: 1 <issue_comment>username_3: Currently the variable lines is only scoped within the getLines() function here.
```
def getLines(filename):
lines = []
```
You're trying to reference lines outside of that scope and it doesn't exist outside of it yet.
You'll need to create a separate list called lines in the same scope as the function using it, or the outer scope.
You have it commented out...
```
#lines = csv.reader(response.read().decode('utf-8').splitlines())
```
Uncomment that, or otherwise redefine `lines` in the same scope, and you won't get a `NameError` for `lines` anymore.
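Putting that advice together, a minimal corrected sketch of the function (assuming the goal is simply to read a CSV file into a list of rows; the hard-coded Windows path from the question is replaced by the `filename` parameter):

```python
import csv

def get_lines(filename):
    lines = []  # defined inside the function, in the same scope that appends to it
    with open(filename, 'r', newline='') as csvfile:
        for line in csv.reader(csvfile):
            lines.append(line)
    return lines  # the return also stays inside the function body
```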
Upvotes: 3 [selected_answer] |
2018/03/22 | 649 | 2,796 | <issue_start>username_0: I am updating an array of Realm objects:
```
for offer in offers {
if(offer.isReward == isReward){
offer.isSeen = true
}
}
```
but it is throwing an error:
`Attempting to modify object outside of a write transaction - call beginWriteTransaction on an RLMRealm instance first.`
I know I can get it working by doing:
```
for offer in offers {
if(offer.isReward == isReward){
try! offer.realm?.write {
offer.isSeen = true
}
}
}
```
Is there any way I can hold off on writing to Realm until I have made changes to all the elements in the array, and just commit the entire array? It seems inefficient to have a write transaction for each object that needs to be changed.
My original plan was to modify the items and bulk update
```
try realm.write {
realm.add(offers, update: true)
}
```<issue_comment>username_1: If you keep an instance of the `realm` itself around, call `beginWrite()`, perform your loop of changes, then `commitWrite()`:
```
let realm = try! Realm()
//let offers = realm.... look up your objects
realm.beginWrite()
for offer in offers {
if(offer.isReward == isReward){
offer.isSeen = true
}
}
try! realm.commitWrite()
```
Upvotes: 1 <issue_comment>username_2: All changes to managed Realm objects need to happen inside a write transaction. You cannot modify a managed object, then later write the change to Realm, since a managed object "never leaves" the Realm and hence any changes you make to it will be saved immediately.
Without seeing more context about your code I cannot give you a specific implementation, but in general there are (at least) two ways to get around this issue.
Firstly, you can always create copies of managed objects, then modify the values of these copies one-by-one and once you're done update all managed objects from their copies in a single write transaction. However, this would require storing twice the objects in memory, so it's not an optimal solution.
The second solution is more optimal, but only works in certain scenarios. You can simply wrap the whole code you currently have in a write transaction and iterate through the managed objects inside the write transaction. This way, you only use a single transaction, but update all managed objects without making copies of them. However, you shouldn't be doing computationally expensive processes in a write transaction, so in case you need to perform long-lasting tasks when updating your objects, this isn't an optimal solution either.
There's no single good way of doing write transactions. There's a trade-off between the number of objects being updated in a single transaction and using several transactions to update a small number of objects.
Upvotes: 3 [selected_answer] |
2018/03/22 | 789 | 3,016 | <issue_start>I have the following Java regex `https://.*?/api/FHIR/DSTU2/` to match a URL in the following pattern <https://somserver.esm.somedomain.edu/TST/api/FHIR/DSTU2/> in a large JSON result,
and replace it with another URL like
<https://api.anotherdomain.edu/FHIR/DSTU2/>.
The URL I'm replacing occurs multiple times in the JSON result. I am using Java's replaceAll function and sometimes it fails and I get the error 'Expected ',' instead of '''. This happens because the JSON result gets messed up during the replacement process, and can't be properly parsed. What's strange is that I used the same JSON result that's failing in a Java regex tester, and the regex seems to be working fine. Do you see something out of whack here? Thanks!<issue_comment>username_1: I wrote a JUnit case for this, trying to reproduce the error. But with the JSON input your last link provides, there is no error and everything works fine every time. Could you try out this test case please?
```
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.commons.io.IOUtils;
import org.junit.Before;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.io.InputStream;
import static org.junit.Assert.fail;
public class RegexReplacementTest {
private static final Logger logger = LoggerFactory.getLogger(RegexReplacementTest.class);
private static final String replacement = "https://api.anotherdomain.edu/FHIR/DSTU2/";
private static final String pattern = "(https://.*?/api/FHIR/DSTU2/)";
private String json;
@Before
public void setup() throws IOException {
InputStream resourceAsStream = this.getClass().getResourceAsStream("/path/to/your/test.json");
json = IOUtils.toString(resourceAsStream);
resourceAsStream.close();
validateJson(json);
}
@Test
public void testReplace() {
String replacedString = json.replaceAll(pattern, replacement);
validateJson(replacedString);
}
private void validateJson(String json) {
ObjectMapper objectMapper = new ObjectMapper();
try {
objectMapper.readTree(json);
}
catch (IOException e) {
logger.error("Error upon testing json!" , e);
fail("Error upon testing json! " + e.getMessage());
}
}
}
```
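As a quick language-neutral sanity check of the pattern itself, the same non-greedy substitution can be exercised with Python's `re` module (illustration only; the sample JSON string here is made up, and for this pattern the behavior matches Java's `String.replaceAll`):

```python
import re

pattern = r"https://.*?/api/FHIR/DSTU2/"
replacement = "https://api.anotherdomain.edu/FHIR/DSTU2/"

sample = '{"url": "https://somserver.esm.somedomain.edu/TST/api/FHIR/DSTU2/Patient/123"}'
print(re.sub(pattern, replacement, sample))
# {"url": "https://api.anotherdomain.edu/FHIR/DSTU2/Patient/123"}
```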
Upvotes: 0 <issue_comment>username_2: Try this:
1. An Object to String transformer with return class `java.lang.String`
2. An Expression component with the code below:
```
import java.util.regex.*;
payload=payload.replaceAll("https[:]//.*./api/FHIR/DSTU2/", "https://api.anotherdomain.edu/FHIR/DSTU2/");
return payload;
```
The final XML flow will be:
```
import java.util.regex.*;
payload=payload.replaceAll("https[:]//.*./api/FHIR/DSTU2/", "https://api.anotherdomain.edu/FHIR/DSTU2/");
return payload;
```
[![enter image description here](https://i.stack.imgur.com/RHZaN.png)](https://i.stack.imgur.com/RHZaN.png)
Upvotes: 1 |
2018/03/22 | 457 | 1,721 | <issue_start>username_0: I'm combining two tables with an inner join where the Primary ID is the movie ID, and I'd like to group them by number of movies per actor per year. However - possibly because the movie ID is the primary key - the results that I'm getting aren't correct. Below is my code. The DB is from iMDB, it has a cast table with movies and actors, an actors table and a movies table with the year.
```
SELECT
c.actor_id, m.year, COUNT(m.title)
FROM
cast c
INNER JOIN movies m
WHERE
c.movie_id = m.id
GROUP BY m.year
ORDER BY COUNT(m.title) DESC;
```<issue_comment>username_1: You need to include `actor_id` in the `GROUP BY` clause because you are counting the number of movies per actor per year.
```
SELECT
c.actor_id, m.year, COUNT(m.title)
FROM
cast c
INNER JOIN movies m
WHERE
c.movie_id = m.id
GROUP BY c.actor_id, m.year
ORDER BY COUNT(m.title) DESC;
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Everything not aggregated must be in `GROUP BY`. Most RDBMSs would error out on your SQL, but MySQL would give you "a free pass" on it:
```
SELECT
c.actor_id, m.year, COUNT(m.title)
FROM
cast c
 INNER JOIN actor_movie am ON am.actor_id = c.actor_id
INNER JOIN movies m ON am.movie_id=m.movie_id
WHERE
c.movie_id = m.id
GROUP BY c.actor_id, m.year
ORDER BY COUNT(m.title) DESC;
```
Note that inner join needs an `ON` clause which you did not include. I added one possible implementation with an actor-movie table that I suppose you have (or should have).
Upvotes: 1 |
2018/03/22 | 1,096 | 3,348 | <issue_start>username_0: I have a simple html document, containing one parent div and two additional divs (one is left and the other one is right, both of them having max-width: 200px).
What I want to achieve is having the heights of these two divs equal, while the left one contains text and the right one an image, both of them horizontally centered. If the image is taller than the text, the text is also vertically centered, and of course if the text is taller than the image, the image is vertically centered.
I use `display: table` for the parent and `display: table-cell` for the two divs. So far it works as I want, but for some reason the height of the image is less than the original one. For instance, an image with a height of 480px will be reduced to 300px, and so on. Here is my code:
```css
html {
margin: 0;
padding: 0;
height: 100%;
}
body {
margin: 0;
padding: 0;
min-height: 100%;
}
#parent {
border: 1px solid black;
display: table;
}
#left {
width: 200px;
text-align: center;
display: table-cell;
vertical-align: middle;
}
#right {
width: 200px;
display: table-cell;
vertical-align: middle;
}
#right img {
max-width: 200px;
display: block;
margin-left: auto;
margin-right: auto;
}
```
```html
A parent div (the black border) will contain two inner divs, one on the left and one on the right. The
two
inner divs each have a fixed width of 200 pixels. The inner div on the left contains centered text
![](http://s1.1zoom.me/b7559/911/Sky_Fields_Clouds_Trees_526506_320x480.jpg)
```<issue_comment>username_1: ```
#right img {
max-width: 200px;
max-height: 200px;
display: block;
margin-left: auto;
margin-right: auto;
}
```
Upvotes: 1 <issue_comment>username_2: Make it a CSS background image instead, using:
```
background
background-size
background-position
```
Upvotes: -1 <issue_comment>username_3: Your image size is responding to the max-width you placed on its container. If the image width is larger than 200px, the height will be reduced in the same ratio, e.g. if the original width/height is 320/480, the new size will be 200/300 (both scaled down by a factor of 1.6).
Upvotes: 0 <issue_comment>username_4: Everything you did seems correct, there is nothing incorrect in your code for sure.
**So what is that giving you a hard time?**
**Answer:** Image sets maintaining the aspect ratio.
---
A little deep dive...
**What is Aspect Ratio:** The ratio of the width to the height.
In the case here, the ratio of image width to the image height. That is `320px/480px` as width of image is `320px` and height of image is `480px`.
So the ratio `320/480` would be equal to `0.66666`.
Now, if we change the width for the same ratio; the height would change too.
So let us say the width is `200px` (same as the situation above), which gives us `200px/height` = `0.66666`.
**So `height` = `200/0.66666` = `300.00300` ~ `300px`**.
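The arithmetic above can be double-checked with a few lines of Python (purely illustrative):

```python
orig_w, orig_h = 320, 480   # original image dimensions
aspect = orig_w / orig_h    # ~0.66666

new_w = 200                 # max-width imposed by the container
new_h = new_w / aspect      # height that preserves the aspect ratio
print(new_h)                # 300.0
```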
Update:
=======
>
> **You:** "*for some reason the height of the image is less than the original one.*"
>
>
>
If you notice, not ONLY the `height` of the image is reduced, but ALSO the `width`. That is the actual reason.
Phew! we took the detour to some math stuff.
Hope this might help you.
Upvotes: 2 [selected_answer] |
2018/03/22 | 989 | 2,898 | <issue_start>username_0: So I have this javascript function, and I want to convert it to a java function. This is the original function:
```
function mailExistsCheck() {
request({
uri: "https://iag.ksmobile.net/1/cgi/is_exist",
method: "POST",
headers: {
'User-Agent': "FBAndroidSDK.0.0.1", // DONE
'Accept-Language': "en_US", // DONE
'appid': "135301",
'ver': "3.8.20",
'sid': "17DBB0C2B30D01B590B704501A8AC054",
'sig': "lbOfPDrq0kJNvZX8eYZH9c0Mo7o",
'Host': "iag.ksmobile.net",
'Content-Type': "multipart/form-data; boundary=3i2ndDfv2rTHiSisAbouNdArYfORhtTPEefj3q2f",
'Transfer-Encoding': "chunked",
'Accept-Encoding': "gzip"
},
body: `--3i2ndDfv2rTHiSisAbouNdArYfORhtTPEefj3q2f
Content-Disposition: form-data; name="cmversion"
38201849
--3i2ndDfv2rTHiSisAbouNdArYfORhtTPEefj3q2f
Content-Disposition: form-data; name="name"
` + address + `
--3i2ndDfv2rTHiSisAbouNdArYfORhtTPEefj3q2f`
}, (error, response, body) => {
if (error) {
console.error(error);
}
if (body.indexOf("12018") != -1) {
register();
} else {
console.error(body);
console.error("MAIL EXISTENCE CHECK FAILED!");
process.exit(1);
}
});
```
I'm using this library <http://kevinsawicki.github.io/http-request/> to create the http request, so far I have this:
```
public static String mailexiste(String address) {
String contentType = HttpRequest.post("https://iag.ksmobile.net/1/cgi/is_exist")
.userAgent("FBAndroidSDK.0.0.1")
.header("Accept-Language", "en_US")
.header("appid", "135301")
.header("ver", "3.8.20")
.header("sid", "17DBB0C2B30D01B590B704501A8AC054")
.header("sig", "lbOfPDrq0kJNvZX8eYZH9c0Mo7o")
.header("Host", "iag.ksmobile.net")
.contentType("multipart/form-data; boundary=3i2ndDfv2rTHiSisAbouNdArYfORhtTPEefj3q2f")
.acceptGzipEncoding()
.contentType(); //Gets response header
return contentType;
```
I think I did the headers the right way; my only issue is that I'm having problems with the body part. I don't understand most of these requests, because I'm just now starting to learn this, and I would like to do the program by myself, but I'm really stuck on this part. Also, how do I do the ifs in Java? Thanks in advance!<issue_comment>username_1: if() is pretty much the same in Java as in JavaScript.
Upvotes: 0 <issue_comment>username_2: * Drop the boundary - unless your server is non-HTTP-compliant, any string should do
* Drop the `Host` header, it will be added by the library automatically
* Use `.part(String name,String value)` to add parts in a multipart:
You should get something like:
```
.contentType("multipart/form-data")
.part("cmversion", "38201849")
.part(name, address)
```
Upvotes: 1 |
2018/03/22 | 818 | 2,895 | <issue_start>How is it possible to pass an array of elements while keeping the `data` attributes using jQuery? Currently I've got the following:
```
$('body').on('click', '.selectItem', function() {
data: {
'hash': $(this).data("hash"),
'id': $(this).data("id"),
'inspect': $(this).data("inspect"),
'stickers': $(this).data("stickers")
}
});
```
How would I be able to do something like this?
```
$('.getItems').click(function(event) {
data: {
'items': $('.selectItem').andAttributes().toArray()
}
});
```
I'm guessing I could do something like a `foreach` & then add them into an array for each element, but doesn't jQuery have a simple solution?
My expected result from doing *something like this* `$('.selectItem').andAttributes().toArray()` would be something like:
```
{
0: {
'hash': $(this).data("hash"),
'id': $(this).data("id"),
'inspect': $(this).data("inspect"),
'stickers': $(this).data("stickers")
},
1: {
'hash': $(this).data("hash"),
'id': $(this).data("id"),
'inspect': $(this).data("inspect"),
'stickers': $(this).data("stickers")
}
2: {
'hash': $(this).data("hash"),
'id': $(this).data("id"),
'inspect': $(this).data("inspect"),
'stickers': $(this).data("stickers")
}
etc....
}
```<issue_comment>username_1: You're probably looking for [`map`](http://api.jquery.com/map/) (jQuery's, not Array's) combined with [`get`](http://api.jquery.com/get/) (to get the actual array). You can also use the no-arguments version of `data` to get an object to pick those properties from:
```
var a = $(/*...selector for elements...*/).map(function() {
var data = $(this).data(); // Get the data as an object
// Grab the properties from it you want
return {
'hash': data.hash,
'id': data.id,
'inspect': data.inspect,
'stickers': data.stickers
}
}).get();
```
`a` will be an array of objects with properties `hash`, `id`, etc.
(BTW: No need to put those property names in quotes.)
Live Example *(with just a couple of attrs)*:
```js
var a = $(".stuff").map(function() {
var data = $(this).data();
return {
hash: data.hash,
id: data.id/*,
inspect: data.inspect,
stickers: data.stickers*/
}
}).get();
console.log(a);
```
```css
.as-console-wrapper {
max-height: 100% !important;
}
```
```html
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: The `.data()` method with no arguments will return an object containing all the data attributes. Use `.map()` to iterate over all the elements matching the selector, and `.get()` to convert the resulting collection to an array.
```
data: $('.selectItem').map(function() {
return $(this).data();
}).get()
```
Upvotes: 0 |
2018/03/22 | 1,900 | 6,328 | <issue_start>username_0: Problem:
--------
Jupyter is not able to save, create (and I imagine delete) any file type. But I can load them fine.
An example of creating a file
---------------------------
Creating File Failed: An error occurred while creating a new file.
>
> ''' Unexpected error while saving file: untitled.txt [Errno 2]
> No such file or directory:
> 'C:\Users\me\Documents\jupyter\_notebooks\untitled.txt' '''
>
>
>
An example of saving a file
-------------------------
```
C:\WINDOWS\system32>jupyter notebook
[I 17:15:51.888 NotebookApp] JupyterLab beta preview extension loaded from c:\users\me\appdata\local\programs\python\python35\lib\site-packages\jupyterlab
[I 17:15:51.888 NotebookApp] JupyterLab application directory is c:\users\me\appdata\local\programs\python\python35\share\jupyter\lab
[I 17:15:52.091 NotebookApp] Serving notebooks from local directory: C:\Users\me\Documents\jupyter_notebooks
[I 17:15:52.091 NotebookApp] 0 active kernels
[I 17:15:52.091 NotebookApp] The Jupyter Notebook is running at:
[I 17:15:52.091 NotebookApp] http://localhost:8888/?token=*******************************01<PASSWORD>
[I 17:15:52.091 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 17:15:52.127 NotebookApp]
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://localhost:8888/?token=*******************************<PASSWORD>
[I 17:15:52.472 NotebookApp] Accepting one-time-token-authenticated connection from ::1
[I 17:16:01.628 NotebookApp] Kernel started: 77255bad-ad7a-4714-8221-1dd90a3a34b5
[W 17:16:11.656 NotebookApp] Timeout waiting for kernel_info reply from 77255bad-ad7a-4714-8221-1dd90a3a34b5
[I 17:16:25.646 NotebookApp] Adapting to protocol v5.0 for kernel 77255bad-ad7a-4714-8221-1dd90a3a34b5
[I 17:16:26.160 NotebookApp] Saving file at /jupyter_jupyter_lab.ipynb
[E 17:16:26.543 NotebookApp] Error while saving file: jupyter_jupyter_lab.ipynb [Errno 2] No such file or directory: 'C:\\Users\\me\\Documents\\jupyter_notebooks\\.~jupyter_jupyter_lab.ipynb'
```
**Tried:** Still the same problem
I double checked the folder location and it **matches**
>
> C:\Users\me\Documents\jupyter\_notebooks
>
>
>
**I also tried:** It install and runs, able to open and read but no changes allowed
```
pip3 install --upgrade --force-reinstall --no-cache-dir jupyter
~ https://stackoverflow.com/a/42667069/6202092
```
**System:**
* Windows 10
* Python environment: 2.7 & 3.5
* package manager: pip / pip3
**Possible contributing factors:**
I made some system changes two days ago, and the problem started since then. I believe it is a permissions issue. I recall deselecting a "permissions check box" for executing scripts, but it only seemed to affect Jupyter.
**Not acceptable solutions:**
"Try Anaconda", "Reinstall Windows" - the system was working fine and continues to work, with the exception of Jupyter's inability to make changes.
Thank you in advance
~eb<issue_comment>username_1: I am the author of the question...
**The problem was a permissions issue**. As I mentioned earlier, I had modified the system and could not remember what I had done that prevented Jupyter Notebook from working as before. I spent the better part of three days researching the problem and could not find an answer; in frustration, today I posted the problem. With the comments and suggestions from a couple of the users I was able to take a better look at the problem and try a few different approaches.
I had enabled a feature in **"Windows Defender Security Center"** that prevented Jupyter from working as before, preventing me from running `Notebooks` in different locations, more specifically the "Documents" folder.
**Should this happen to you:**
1. **Go** to "**Windows Defender Security Center**"
2. **Scroll & Click on** "**Virus & threat protection settings**"
3. **Scroll down** to "**Controlled folder access**"
4. Make sure it is **Disabled**
This was the "**global**" solution I was looking for
That should fix a similar problem
~ eb
Upvotes: 4 <issue_comment>username_2: As username_1 mentioned above, this behavior can be caused by having "**Controlled folder access**" enabled in the **Windows Defender Security Center**. Disabling it fixes the issue.
If you'd like to leave "Controlled folder access" enabled, follow Enrique's steps 1-3, which I have quoted here.
>
> 1. Go to "**Windows Defender Security Center**"
> 2. Scroll & Click on "**Virus & threat protection settings**"
> 3. Scroll down to "**Controlled folder access**"
>
>
>
Then:
4. Click "**Allow an app through Controlled folder access**".
5. Click on "**Add an allowed app**" and select your **python.exe**, **jupyter.exe**, and **jupyter-notebook.exe** files. You can find *python.exe* in your main Python installation directory and the other two in the "*Scripts*" subdirectory of your main Python installation directory.
6. Restart *jupyter* if it was running when you made these changes.
You have to add the files one at a time, so it will take a bunch of clicking. I was able to save my jupyter notebooks without issue after doing this.
*NOTE: If you have both Python 2 and 3 installed, you must do this for each Python installation.*
Upvotes: 2 <issue_comment>username_3: If your OS is Windows 10, you may have to do the following:
1. Right-click the "Anaconda Navigator" icon
2. In the pop-up, open the "Anaconda Navigator" icon's Properties
3. Click the "Advanced" properties button
4. In the "Select advanced properties for this shortcut" dialog, check the checkbox labeled "Run as administrator" and save it.
Upvotes: 2 <issue_comment>username_4: I had a similar issue on Windows 10. The resolution was as follows:
1. open a cmd window as administrator
2. `takeown /R /F path_to_folder`
3. `icacls path_to_folder /T /C /RESET`
Further details in <https://www.deskmodder.de/wiki/index.php?title=NTFS_Berechtigungen_f%C3%BCr_Dateien_und_Ordner_zur%C3%BCcksetzen_Windows_10>
Upvotes: 0 <issue_comment>username_5: Are you using the Windows command prompt or the Anaconda bash prompt? You can enter a conda env from a Windows command prompt, but it will not let you create a new notebook. If you open the "Anaconda Prompt" from the programs menu and go into "bash", you get the necessary permissions.
Upvotes: -1 |
2018/03/22 | 2,025 | 5,457 | <issue_start>username_0: I am trying to utilize the MAX() OVER PARTITION BY function to evaluate the most recent receipt for a specific part that my company has bought. Below is an example table of the information for a few parts from the last year:
```none
| VEND_NUM | VEND_NAME | RECEIPT_NUM | RECEIPT_ITEM | RECEIPT_DATE |
|----------|--------------|-------------|----------|--------------|
| 100 | SmallTech | 2001 | 5844HAJ | 11/22/2017 |
| 100 | SmallTech | 3188 | 5521LRO | 12/31/2017 |
| 200 | RealSolution | 5109 | 8715JUI | 05/01/2017 |
| 100 | SmallTech | 3232 | 8715JUI | 11/01/2017 |
| 200 | RealSolution | 2101 | 4715TEN | 01/01/2017 |
```
As you can see, the third and fourth row show two different vendors for the SAME part number.
Here is my current query:
```
WITH
-- various other subqueries above...
AllData AS
(
SELECT VEND_NUM, VEND_NAME, RECEIPT_NUM, RECEIPT_ITEM, RECEIPT_DATE
FROM tblVend
INNER JOIN tblReceipt ON VEND_NUM = RECEIPT_VEND_NUM
WHERE
VEND_NUM = '100' OR VEND_NUM = '200' AND RECEIPT_DATE >= '01-Jan-2017'
),
SELECT MAX(RECEIPT_DATE) OVER PARTITION BY(RECEIPT_ITEM) AS "Recent Date", RECEIPT_ITEM
FROM AllData
```
My return set looks like:
```none
| Recent Date | RECEIPT_ITEM |
|-------------|--------------|
| 11/22/2017 | 5844HAJ |
| 12/31/2017 | 5521LRO |
| 11/01/2017 | 8715JUI |
| 11/01/2017 | 8715JUI |
| 01/01/2017 | 4715TEN |
```
However, it should look like this:
```none
| Recent Date | RECEIPT_ITEM |
|-------------|--------------|
| 11/22/2017 | 5844HAJ |
| 12/31/2017 | 5521LRO |
| 11/01/2017 | 8715JUI |
| 01/01/2017 | 4715TEN |
```
Can anybody please offer advice as to what I'm doing wrong? It looks like it is simply replacing the most recent date, not giving me just the row I want that is most recent.
Ultimately, I would like for my table to look like this. However, I don't know how to use the MAX() or MAX() OVER PARTITION BY() functions properly to allow for this:
```none
| VEND_NUM | VEND_NAME | RECEIPT_NUM | RECEIPT_ITEM | RECEIPT_DATE |
|----------|--------------|-------------|----------|--------------|
| 100 | SmallTech | 2001 | 5844HAJ | 11/22/2017 |
| 100 | SmallTech | 3188 | 5521LRO | 12/31/2017 |
| 100 | SmallTech | 3232 | 8715JUI | 11/01/2017 |
| 200 | RealSolution | 2101 | 4715TEN | 01/01/2017 |
```<issue_comment>username_1: Use window function `ROW_NUMBER() OVER (PARTITION BY receipt_item ORDER BY receipt_date DESC)` to assign a sequence number to each row. The row with the most recent `receipt_date` for a `receipt_item` will be numbered as 1.
```
WITH
-- various other subqueries above...
AllData AS
(
SELECT VEND_NUM, VEND_NAME, RECEIPT_NUM, RECEIPT_ITEM, RECEIPT_DATE,
ROW_NUMBER() OVER (PARTITION BY RECEIPT_ITEM ORDER BY RECEIPT_DATE DESC ) AS RN
FROM tblVend
INNER JOIN tblReceipt ON VEND_NUM = RECEIPT_VEND_NUM
WHERE
VEND_NUM IN ( '100','200') AND RECEIPT_DATE >= '01-Jan-2017'
)
SELECT VEND_NUM, VEND_NAME, RECEIPT_NUM, RECEIPT_ITEM, RECEIPT_DATE
FROM AllData WHERE RN = 1
```
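The effect of filtering on `ROW_NUMBER()` is easy to verify outside of Oracle too. Below is a minimal, self-contained sketch using Python's built-in `sqlite3` module (SQLite 3.25+ supports window functions); the table name and rows are made up to mirror the question's data:

```python
import sqlite3

# Tiny in-memory table mirroring the question's receipt data (made-up rows).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE receipts (vend_num TEXT, receipt_item TEXT, receipt_date TEXT);
    INSERT INTO receipts VALUES
        ('200', '8715JUI', '2017-05-01'),
        ('100', '8715JUI', '2017-11-01'),
        ('100', '5844HAJ', '2017-11-22');
""")

# Number the rows per item, newest first, then keep only rn = 1.
rows = conn.execute("""
    SELECT vend_num, receipt_item, receipt_date
    FROM (
        SELECT vend_num, receipt_item, receipt_date,
               ROW_NUMBER() OVER (PARTITION BY receipt_item
                                  ORDER BY receipt_date DESC) AS rn
        FROM receipts
    )
    WHERE rn = 1
    ORDER BY receipt_item
""").fetchall()
print(rows)
```

Each `receipt_item` comes back exactly once, paired with its most recent receipt date, which is what the `WHERE RN = 1` filter achieves.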
Upvotes: 4 [selected_answer]<issue_comment>username_2: THIS ANSWERS THE ORIGINAL VERSION OF THE QUESTION.
Your `where` clause should probably look like this:
```
WHERE VEND_NUM IN ('100', '200') AND RECEIPT_DATE >= DATE '2017-01-01'
```
It is quite possible that what you want is simply:
```
SELECT DISTINCT RECEIPT_DATE, RECEIPT_ITEM
FROM tblVend INNER JOIN
tblReceipt
ON VEND_NUM = RECEIPT_VEND_NUM
WHERE VEND_NUM IN ('100', '200') AND RECEIPT_DATE >= DATE '2017-01-01';
```
At the very least, this returns what you want to return.
Upvotes: 0 <issue_comment>username_3: I see a couple of issues here. One, the syntax for using the aggregate function `MAX()` as an analytic function (which is what Oracle helpfully calls a window function) looks like this:
```
MAX(receipt_date) OVER ( PARTITION BY receipt_item )
```
(note the position of the parentheses). Second, from your desired result set, you don't actually want a window function, you want to aggregate. A window (or analytic) function will always return a row for each row in its partition; that's just the way it works. So I think what you want is this:
```
WITH
-- various other subqueries above...
AllData AS
(
SELECT VEND_NUM, VEND_NAME, RECEIPT_NUM, RECEIPT_ITEM, RECEIPT_DATE
FROM tblVend
INNER JOIN tblReceipt ON VEND_NUM = RECEIPT_VEND_NUM
WHERE ( VEND_NUM = '100' OR VEND_NUM = '200' ) AND RECEIPT_DATE >= DATE'2017-01-01'
)
SELECT VEND_NUM, VEND_NAME, RECEIPT_NUM, RECEIPT_ITEM, MAX(RECEIPT_DATE)
FROM AllData
GROUP BY VEND_NUM, VEND_NAME, RECEIPT_NUM, RECEIPT_ITEM;
```
Now I made some small changes to the above, such as wrapping parentheses around the `OR` conditions (using `IN ('100','200')` might be even better) since `AND` takes precedence over `OR` (so your query would have gotten results where `VEND_NUM = '100' OR ( VEND_NUM = '200' AND RECEIPT_DATE >= DATE'2017-01-01' )` ... but maybe that's what you wanted?).
Upvotes: 2 <issue_comment>username_4: Just passing by but I think you have to format the date to a 'YYYY-MM-DD' format so that it doesn't consider the "time".
Upvotes: 1 |
2018/03/22 | 1,141 | 3,418 | <issue_start>username_0: How to make this syntax below more efficient because i have to update more than 20 fields for one record?
```
UPDATE TRANS SET A = @XA WHERE UNIQX = @XUNIQX AND STS_TRANS = 0 and A <> @XA
UPDATE TRANS SET B = @XB WHERE UNIQX = @XUNIQX AND STS_TRANS = 0 and B <> @XB
UPDATE TRANS SET C = @XC WHERE UNIQX = @XUNIQX AND STS_TRANS = 0 and C <> @XC
UPDATE TRANS SET D = @XD WHERE UNIQX = @XUNIQX AND STS_TRANS = 0 and D <> @XD
UPDATE TRANS SET E = @XE WHERE UNIQX = @XUNIQX AND STS_TRANS = 0 and E <> @XE
```
2018/03/22 | 1,087 | 3,198 | <issue_start>username_0: here is a link : <https://api.nanopool.org/v1/etn/balance/etnkGqH26XNVZ9cJv7jh95Dwj7LYPupYX5F5QVf85icHXq2sRJcXSxDT6SL4gD2Vn1WBKi5wqd9x5En2LLmDhnKs8pqrcM75VJ> to a json file and i would like to get the data on java.
So how can I download the file, or parse it directly from the internet?
Thank you
2018/03/22 | 551 | 2,218 | <issue_start>username_0: I have a big table in redshift I need to automate the process of archiving monthly data.
The current approach is as follows (manual):
1. unload the redshift query result to s3
2. create new backup table
3. copy files from s3 to redshift table
4. remove data from the original table
I need to automate this approach,
Is using aws data pipeline a good approach?
Please suggest any other effective approach, examples appreciated.
Thanks for the help!<issue_comment>username_1: My suggested approach is to set up Airflow in a small instance to run the scheduling, or if that is too much work, set up a crontab.
1. using the redshift unload command, copy the data that you want to archive to s3, use a sub folder for
each archive (e.g. monthly - use the year and month as the folder
name)
2. delete the data from your redshift table.
3. set up a redshift spectrum external table definition for that data
in s3, you can set that up to include all of the subfolders if you
wish.
I suggest using gzip format and limiting the size to about 20-100mb per file
That way the data is outside redshift but can be accessed from redshift whenever you need it.
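To automate the year/month-keyed S3 prefixes from step 1, it can help to generate the `UNLOAD` statement programmatically. A small Python sketch follows; the table, bucket, and IAM role names are hypothetical placeholders, and the exact `UNLOAD` options should be double-checked against the Redshift documentation:

```python
from datetime import date

def build_archive_unload(table: str, date_col: str, bucket: str, month: date) -> str:
    """Build a Redshift UNLOAD statement that writes one month's rows to a
    year/month-keyed S3 prefix. All resource names here are placeholders."""
    start = month.replace(day=1)
    # First day of the following month (handles the December rollover).
    end = (start.replace(year=start.year + 1, month=1)
           if start.month == 12 else start.replace(month=start.month + 1))
    prefix = f"s3://{bucket}/{table}/{start:%Y/%m}/"
    # Inside UNLOAD, single quotes in the inner query must be doubled.
    query = (f"SELECT * FROM {table} "
             f"WHERE {date_col} >= ''{start:%Y-%m-%d}'' "
             f"AND {date_col} < ''{end:%Y-%m-%d}''")
    return (f"UNLOAD ('{query}') TO '{prefix}' "
            f"IAM_ROLE 'arn:aws:iam::123456789012:role/my-unload-role' GZIP")

sql = build_archive_unload("big_table", "created_at", "my-archive-bucket", date(2018, 3, 15))
print(sql)
```

The generated statement can then be run on a schedule by Airflow or cron, one month at a time.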
Upvotes: 2 <issue_comment>username_2: I do not know if this is a situation Amazon considers "common" enough to suggest practices, but here are several options (Note: the popular consensus seems to be that data pipeline is good for simple data loading, but it isn't intended as an archive method.)
1. Create a snapshot each month and then remove data from that table. You can use snapshot API actions or console scheduling that could be automated and your snapshots would have date stamps.
2. Copy the data over to an external table in S3 and then delete from the Redshift table. I think you can use Spectrum for this.
3. Use a third party [backup solution](https://n2ws.com/product) such as N2WS, Panopoly.
4. Use AWS Glue (or perhaps Data Pipelines, I haven't used that) to pull out the table structure/data and then truncate the original table.
5. Use an external ETL product to do the same as Glue. Some are free, or your firm might already use one.
I didn't come across any best practices around this type of data copy.
Upvotes: 1 |
2018/03/22 | 988 | 3,557 | <issue_start>username_0: our project is now over, we only have two weeks to give back the project for our final year's studies at University. Our teacher told us that now that development phase was over we would have to go deploy it in production phase.
We used ReactJS for the frontend (hosted in localhost://3000) and NodeJS for server/database management (hosted in localhost://3004).
I've done some research so far on **how to deploy it** (teacher said that ideally it was a zip that we can insert into a webpage, just like a html one, and that it would work).
So far I haven't managed to do that. Many people recommend using "webpack" for managing this, but we didn't configure it in the beginning because we had no clue about ReactJS, and I don't know if we can do it right now. I've read tutorials etc. and tried some things with NODE\_ENV, but so far I'm stuck with errors.
Can anyone help me & my teammates ?
Thanks in advance!<issue_comment>username_1: To deploy a React app built with the `create-react-app` CLI, first you need to set the host in your `package.json`. Assuming you will host your app in the root directory of your web server, add this line:
```
"homepage": "http://localhost/",
```
your `package.json` will look something like this
```
{
"name": "nameofyourapp",
"version": "0.1.0",
"private": true,
"homepage":"http://localhost/",
"dependencies": {
.....
},
"scripts": {
......
}
}
```
then inside your app folder run this command:
```
npm run build
```
then copy what is inside your build folder and paste it into the root directory of your server.
You will end up with files like this:
```
htdocs //root directory of your webserver
|
|-static
|
|-index.html
|-serviceworker.js
```
After that you can access your app from your browser at <http://localhost/>.
Upvotes: 2 <issue_comment>username_2: I assumed you used create-react-app to generate your react project. My folder structure is like:
* client (it is my react app)
* config
* controllers
* routes
* models
* index.js (express app is in index.js)
* package.json
These are the steps I used to deploy my app to Heroku:
1. In package.json, added this line to the scripts `"heroku-postbuild":
"NPM_CONFIG_PRODUCTION=false npm install --prefix client && npm run
build --prefix client"`.
2. Then added this `"engines": { "node" : "[your node version]" }` after scripts.
3. In index.js, put the following code after your routes set up
```
if (process.env.NODE_ENV === "production") {
app.use(express.static("client/build"));
const path = require("path");
app.get("*", (req, res) => {
res.sendFile(path.resolve(__dirname, "client", "build", "index.html"));
});
}
```
4. I assume that you use git for version control and have already installed the Heroku CLI. Open your terminal, then run `heroku login` -> `heroku create` -> `git push heroku master`. If you do not get any errors, you have successfully deployed your app.
This is my GitHub <https://github.com/username_2/instagrom> where I deployed my app to Heroku. I hope it will give you some idea of how to do so.
Hope you will get it to work!
Upvotes: 3 [selected_answer]<issue_comment>username_3: In 2019 this does work for me:
in node.js server:
```
app.use('/react', express.static(path.join(__dirname, '../client/build')));
```
and in the React `package.json`:
```
"homepage": "http://localhost:server-port/client"
```
and in browser just type
```
http://localhost:server-port/client .
```
You should see react homepage now.
Hopefully helpful for somebody.
Upvotes: 0 |
2018/03/22 | 2,818 | 5,091 | <issue_start>username_0: I have an array($coords) in php which contains some coordinates.
```
print_r($coords)
```
gives
`Array ( [0] => 80.23338342111089 ,26.52748985468579 [1] => 80.21915925983853 ,26.51065783417543 [2] => 80.23036859334592 ,26.49785585262899 [3] => 80.24667293123426 ,26.51075259323626 [4] => 80.23338342111089 ,26.52748985468579 )`
Then I used `$string = implode(",",$coords);
print_r($string);`
It gave me
`80.23338342111089 ,26.52748985468579,80.21915925983853 ,26.51065783417543,80.23036859334592 ,26.49785585262899,80.24667293123426 ,26.51075259323626,80.23338342111089 ,26.52748985468579`
I passed this to Javascript like this-
```
var Coordinates = [];
Coordinates.push(<?php echo ($string); ?>);
```
now I want to stringify this so I used-
```
var JSON_Coordinates = JSON.stringify(Coordinates);
```
Now the value of JSON\_Coordinates is-
```
[80.23338342111089 ,26.52748985468579,80.21915925983853 ,26.51065783417543,80.23036859334592 ,26.49785585262899,80.24667293123426 ,26.51075259323626,80.23338342111089 ,26.52748985468579]
```
But it should be-
```
[[80.23338342111089 ,26.52748985468579],[80.21915925983853 ,26.51065783417543],[80.23036859334592 ,26.49785585262899],[80.24667293123426 ,26.51075259323626],[80.23338342111089 ,26.52748985468579]]
```
Someone please tell me where am i wrong?
Thanks.<issue_comment>username_1: Would need to see how the array is being established in your PHP code. Are they fetched as pairs or all in one big array?
Pretty straightforward. Just has to do with structure of the array you are converting to JSON.
```
$foo = array(1, 2, 3, 4);
// [1,2,3,4]
$bar = array(array(1, 2), array(3, 4));
// [[1,2],[3,4]]
```
I'd clean up the data server side, so you just have to JSON encode it, and then pass that client side.
Upvotes: 0 <issue_comment>username_2: You should write your JS logic like that way. At this moment you are doing nothing in your code that will create a desired 2d array.
You should loop through whole array and increment counter by 2 to get 2 elements at a time and create a new temporary array that will push in your original arrCoordinates.
>
> Coordinates supply as array
>
>
>
```js
var coordinates = [80.23338342111089 ,26.52748985468579,80.21915925983853 ,26.51065783417543,80.23036859334592 ,26.49785585262899,80.24667293123426 ,26.51075259323626,80.23338342111089 ,26.52748985468579];
var arrCoordinates =[];
for(var index=0; index< coordinates.length; index+=2){
arrCoordinates.push([coordinates[index], coordinates[index+1]]);
}
console.log(arrCoordinates);
```
>
> Coordinates supply as string
>
>
>
```js
var coordinates = "80.23338342111089 ,26.52748985468579,80.21915925983853 ,26.51065783417543,80.23036859334592 ,26.49785585262899,80.24667293123426 ,26.51075259323626,80.23338342111089 ,26.52748985468579";
var arrCoordinates =[];
var inputCoordinates = coordinates.split(",");
for(var index=0; index< inputCoordinates.length; index+=2){
arrCoordinates.push([inputCoordinates[index].trim(), inputCoordinates[index+1].trim()]);
}
console.log(arrCoordinates);
```
Upvotes: 0 <issue_comment>username_3: After you get the array of coordinates, you can loop over it to build the desired output.
An alternative is using a `for-loop`.
```js
var coordinates = [80.23338342111089 ,26.52748985468579,80.21915925983853 ,26.51065783417543,80.23036859334592 ,26.49785585262899,80.24667293123426 ,26.51075259323626,80.23338342111089 ,26.52748985468579];
var newCoordinates = [];
for (var i = 0; i < coordinates.length; i++) {
if (i % 2 !== 0) newCoordinates.push([coordinates[i - 1], coordinates[i]]);
}
console.log(newCoordinates);
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
According to your initial array, you can loop, split and then build the nested arrays:
**Initial input data:** `["c1, c2", "c1, c2", ...]`
This way, you don't need to call the function `stringify`.
```js
var array = ["80.23338342111089 ,26.52748985468579", "80.21915925983853 ,26.51065783417543","80.23036859334592 ,26.49785585262899", "80.24667293123426 ,26.51075259323626","80.23338342111089 ,26.52748985468579" ];
var result = [];
for(var i = 0; i < array.length; i++) {
var split = array[i].split(',');
var lt = split[0].trim();
var ln = split[1].trim();
result.push([+lt.trim(), +ln.trim()])
}
console.log(result);
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
Upvotes: 0 <issue_comment>username_4: Use array\_chunk to split the array to pairs
```
$arr = [80.23338342111089 ,26.52748985468579,80.21915925983853 ,26.51065783417543,80.23036859334592 ,26.49785585262899,80.24667293123426 ,26.51075259323626,80.23338342111089 ,26.52748985468579];
?>
var Coordinates = <?= json_encode(array_chunk($arr,2)) ?>;
// var Coordinates = [[80.233383421111,26.527489854686],[80.219159259839,26.510657834175],[80.230368593346,26.497855852629],[80.246672931234,26.510752593236],[80.233383421111,26.527489854686]];
```
[**demo**](https://eval.in/977101)
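The `array_chunk` idea translates directly to other languages as well; for comparison, here is the same flat-list-into-pairs step sketched in Python (not part of the PHP answer itself):

```python
coords = [80.23338342111089, 26.52748985468579,
          80.21915925983853, 26.51065783417543]

# Slice the flat list into consecutive two-element pairs.
pairs = [coords[i:i + 2] for i in range(0, len(coords), 2)]
print(pairs)
```

Serializing `pairs` to JSON then yields the nested `[[lng, lat], ...]` form the question asks for.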
Upvotes: 1 |
2018/03/22 | 750 | 2,804 | <issue_start>username_0: I'm currently working on an angular 5 application. I try to alter a member variable of my component in an input on the view and use the variable in my component after the change.
My structure is as follows:
**Folder**: my-test
* my-test.component.html
* my-test.component.css
* my-test.component.ts
*1) my-test.component.html:*
```
```
*2) my-test.component.ts:*
```
import { Component, OnChanges, SimpleChanges } from '@angular/core';
@Component({
selector: 'my-test',
templateUrl: './my-test.component.html',
styleUrls: ['./my-test.component.css']
})
export class MyTestComponent implements OnChanges {
hello: string = "bonjour";
constructor() { }
ngOnChanges(changes: SimpleChanges) {
// I'd like to log the changes of my field 'hello',
// once it get's changed in the input on the ui...
console.log(changes);
}
}
```
Unfortunately this solution doesn't work. Do you know how to change a component's variable in the UI and use it in the component afterwards?
Thank you!!<issue_comment>username_1: You can use `(ngModelChange)=functionToCall($event)` to call the function on model change and get updated value. It's pretty useful, and you can use it with regular `[(ngModel)]` on the same element. In this case you can use just `[ngModel]` instead of regular `[(ngModel)]` and set new value to variable from `functionToCall` function, but it depends on your needs. Here is a small demo (check the console to see updated values):
<https://stackblitz.com/edit/angular4-rnvmhm?file=app%2Fapp.component.html>
Upvotes: 1 <issue_comment>username_2: you can use the (ngModelChange) directive
```
<input [(ngModel)]="hello" (ngModelChange)="myFunction()">
```
code:
```
import { Component } from '@angular/core';
@Component({
selector: 'my-test',
templateUrl: './my-test.component.html',
styleUrls: ['./my-test.component.css']
})
export class MyTestComponent {
hello: string = "bonjour";
constructor() { }
myFunction() {
console.log(this.hello);
}
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_3: Use the banana in the box syntax [(ngModel)] to get the two way data binding for your variable(hello), if you just want to use the hello variable in some other method inside the component then there's no need to watch the value change manually, because **ngModel** will keep property(hello) and view(input) in sync, so methods using 'hello' property will always get the updated value.
But if you want to do something on every time the value changes then use **ngModelChange** property to listen for the value change of the property.
```
{{hello}}
```
listen for change in value of the property
```
{{hello}} //will also get updated
{{hello}} //will not get updated
```
Upvotes: 1 |
2018/03/22 | 1,313 | 4,624 | <issue_start>username_0: I'm able to read a simple JSON file which has key-value pairs, and the code is
```
import java.io.FileInputStream;
import java.io.FileReader;
import java.io.IOException;
import org.codehaus.jackson.JsonParseException;
import org.codehaus.jackson.map.JsonMappingException;
import org.codehaus.jackson.map.ObjectMapper;
public class readconfig {
public static void main(String Args[]) throws JsonParseException, JsonMappingException, IOException{
// String inputjson = "{\"key1\":\"value1\", \"key2\":\"value2\", \"key3\":\"value3\"}";
ObjectMapper mapper= new ObjectMapper();
// readjson mp =mapper.readValue(inputjson, readjson.class);
readjson mp =mapper.readValue(new FileInputStream("sam.json"), readjson.class );
System.out.println(mp.getKey1());
}
}
import java.io.*;
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import org.codehaus.jackson.map.ObjectMapper;
public class readjson {
public String k1;
public String k2;
public String k3;
public key1 key;
public key1 getKey() {
return key;
}
public void setKey(key1 key) {
this.key = key;
}
public String getKey1() {
return k1;
}
public void setKey1(String key1) {
this.k1 = key1;
}
public String getKey2() {
return k2;
}
public void setKey2(String key2) {
this.k2 = key2;
}
public String getKey3() {
return k3;
}
public void setKey3(String key3) {
this.k3 = key3;
}
}
```
The problem is that my JSON file is complex.
Here is my JSON file, which is a map of maps where the inner maps' values are lists:
```
{
"key1":
{ "value1":
["listele1",
"listele2"
]
},
"key2":
{ "value2":
["listele1",
"listele2"
]
},
"key3":
{ "value3":
["listele1",
"listele2"
]
}
}
```
Can you help me read the inner map's values (which are lists) and also deserialize the JSON into an object?<issue_comment>username_1: First you should stick a little bit more to Java's naming conventions… such as `ReadJson` instead of `readjson` or `String[] args` instead of `String Args[]` – it is just more convenient and easier to read.
Now to your problem… mind that your `ReadJson` has to reflect the same structure like your JSON data – so it has to be a map with string keys to map values. The latter ones again with string keys and list of string values.
This set you can use the following code to deserialize:
```
Map<String, Map<String, List<String>>> readJson = MAPPER.readValue(inputjson, new TypeReference<Map<String, Map<String, List<String>>>>() {});
```
So to be complete:
```
public class LearningTest {
public static void main(String[] args) throws JsonParseException, JsonMappingException, IOException {
String inputJson = "{ \"key1\": { \"value1\": [\"listele1\", \"listele2\" ] }, \"key2\": { \"value2\": [\"listele1\", \"listele2\" ] } }";
        Map<String, Map<String, List<String>>> readJson = new ObjectMapper().readValue(inputJson,
                new TypeReference<Map<String, Map<String, List<String>>>>() {
                });
System.out.println(readJson.get("key1").get("value1").get(0));
}
}
```
As you see, you no longer need a dedicated data class (ReadJson.java) for this approach.
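Since the structure is just a map of maps of lists, it can also be handy to sanity-check the navigation in a scripting language before committing to the Java types. A quick Python check of the same JSON shape (independent of the Jackson code):

```python
import json

raw = """
{"key1": {"value1": ["listele1", "listele2"]},
 "key2": {"value2": ["listele1", "listele2"]},
 "key3": {"value3": ["listele1", "listele2"]}}
"""
doc = json.loads(raw)

# Inner list access, analogous to readJson.get("key1").get("value1").get(0)
first = doc["key1"]["value1"][0]

# Flatten every inner list, mirroring the Java stream/flatMap approach
all_items = [item for inner in doc.values()
             for lst in inner.values()
             for item in lst]
print(first, len(all_items))
```

The list comprehension does one pass over the outer map's values, one over each inner map's values, and one over each list.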
If you want all list elements in a single list, you reach this like so:
```
List<String> elements = readJson.values() // Collection<Map<String, List<String>>>
 .stream() // Stream<Map<String, List<String>>>
 .flatMap(value -> value.values().stream()) // Stream<List<String>>
 .flatMap(listOfStrings -> listOfStrings.stream()) // Stream<String>
 .collect(Collectors.toList()); // List<String>
System.out.println(elements);
```
Or you can access single lists like:
```
List<String> listOfFirstInnerMap = readJson.get("key1").get("value1");
List<String> listOfSecondInnerMap = readJson.get("key2").get("value2");
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: Your json maps to this Java object:
```
Map<String, Map<String, String[]>>
```
Here is some code you can build off of:
```
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.HashMap;
import java.util.Map;
public class TestJson {
public static void main(String[] args) throws Exception {
        Map<String, Map<String, String[]>> myFile = new HashMap<>();
        Map<String, String[]> subMap1 = new HashMap<>();
        Map<String, String[]> subMap2 = new HashMap<>();
        Map<String, String[]> subMap3 = new HashMap<>();
String[] myArray = new String[] {"listele1", "listele2"};
subMap1.put("value1", myArray);
subMap2.put("value2", myArray);
subMap3.put("value3", myArray);
myFile.put("key1", subMap1);
myFile.put("key2", subMap2);
myFile.put("key3", subMap3);
ObjectMapper mapper = new ObjectMapper();
System.out.println(mapper.writerWithDefaultPrettyPrinter().writeValueAsString(myFile));
}
}
```
Upvotes: 0 |
2018/03/22 | 874 | 3,705 | <issue_start>username_0: ```
class TimerFactory {
var second: Int = 0;
var timer: Timer = Timer();
init(second: Int) {
if(second > 0) {
self.second = second;
}
}
func runTimer(_ vc: ViewController) -> Void {
self.timer = Timer.scheduledTimer(timeInterval: 1, target: vc, selector: (#selector(vc.updateViewTimer)), userInfo: nil, repeats: true)
}
}
class ViewController: UIViewController {
override func viewDidLoad() {
super.viewDidLoad()
}
@objc func updateViewTimer() {
print("Hello")
}
}
extension ViewController {
@IBAction func startCounter(_ sender: UIBarButtonItem) {
let timer = TimerFactory(second: 60);
timer.runTimer(self)
}
}
```
Sorry for my question, I'm new to Swift. I want to create a timer whose selector calls `updateViewTimer` in my `ViewController`, but this code doesn't work, although calling the method on `self` directly works.
What is the problem here?<issue_comment>username_1: The problem is that the target should contain the method to be invoked. In your timer declaration you set it to `self`, so it will look for the method inside that class. Can you try:
```
self.timer = Timer.scheduledTimer(timeInterval: 1, target: vc , selector: (#selector(vc.updateViewTimer)), userInfo: nil, repeats: true)
```
Upvotes: 0 <issue_comment>username_2: Your target should be `vc` as that's where the timer will look for the method `updateViewTimer`.
```
func runTimer(_ vc: ViewController) -> Void {
self.timer = Timer.scheduledTimer(timeInterval: 1,
target: vc,
selector: (#selector(vc.updateViewTimer)),
userInfo: nil,
repeats: true)
}
```
When target was `self` it referred to `TimerFactory`.
---
**EDIT** *(based on comments)*:
```
func runTimer(_ vc: ViewController) -> Void {
self.timer = Timer.scheduledTimer(timeInterval: 1,
target: vc,
selector: (#selector(vc.updateViewTimer(timer:))),
userInfo: self,
repeats: true)
}
```
---
```
@objc func updateViewTimer(_ timer: Timer) {
if let factory = timer.userInfo as? TimerFactory {
factory.second -= 1
print(factory.second)
if factory.second == 0 {
print("completed")
timer.invalidate()
}
}
}
```
---
**EDIT 2** *(suggestion)*:
If you plan to reuse `TimerFactory` in another class then you'd need to ensure that `updateViewTimer()` method exists in this class or else the code will crash.
Instead... I would rather suggest a more robust approach that will use a closure and drop the dependency on the `updateViewTimer(_:)` method:
```
func runTimer(_ vc: ViewController, updateHandler: @escaping (_ second: Int)->Void) -> Void {
self.timer = Timer.scheduledTimer(withTimeInterval: 1,
repeats: true,
block: { (timer) in
self.second -= 1
updateHandler(self.second)
if self.second == 0 {
timer.invalidate()
}
})
}
```
---
```
func startCounter() {
let timer = TimerFactory(second: 10);
timer.runTimer(self) { (second) in
print(second)
if second == 0 {
print("completed")
}
}
}
```
Upvotes: 2 [selected_answer] |
2018/03/22 | 789 | 2,671 | <issue_start>username_0: I have a series of 3 SVG elements inside a wrapper element and I have rotated this wrapper by -90 deg with a CSS transform.
I'm trying to get it so the yellow containing block is pinned to the left of the screen, so I can then alter how far from the left (% or vw) this yellow wrapper sits.
I've reduced the size of the yellow wrapper element so it's easier to see the problem. The yellow wrapper must remain vertically centred like it is now.
I can't for the life of me get it to stick to the left - the transform seems to be making everything a lot harder. I need it so that even if the window is resized it stays a set % (or vw) from the far left - for argument's sake say 5% / 5vw.
P.S I'm using SVG text because it's part of a clip-path animation, but I have removed this code to make things easier to read.
Codepen: <https://codepen.io/emilychews/pen/qojZae>
```css
body {
margin: 0;
display: flex;
justify-content: center;
align-items: center;
width: 100%;
height: 100vh;
position: relative;
}
#wrapper {
display: flex;
width: 25%;
justify-content: center;
background: yellow;
transform: rotate(-90deg);
position: absolute;
}
#wrapper svg {margin: 1rem}
text {font-size: 1rem}
```
```html
Some Text 1
Some Text 2
Some Text 3
```<issue_comment>username_1: You can use `left` on your `#wrapper` like this:
```
#wrapper {
display: flex;
width: 25%;
justify-content: center;
background: yellow;
transform: rotate(-90deg);
position: absolute;
left: 5%;
}
```
Upvotes: 0 <issue_comment>username_2: You can set a transform origin and then adjust the offset from the transform like this. I also removed the Flex properties on `body` to accomplish this correctly. You can adjust to your liking from this point.
```css
body {
margin: 0;
/*display: flex;
justify-content: center;
align-items: center;*/
width: 100%;
height: 100vh;
position: relative;
}
#wrapper {
display: flex;
justify-content: center;
background: yellow;
transform: rotate(-90deg) translate(-100%, 0);
transform-origin: left top;
position: absolute;
top: 0px;
left: 0px;
}
#wrapper svg {margin: 1rem}
text {font-size: 1rem}
```
```html
Some Text 1
Some Text 2
Some Text 3
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: If I understand the question correctly it would be sufficient to remove the `justify-content: center` on the `body` element.
Why would you need it there anyway?
If the `justify-content` is removed, you can then use `padding(-left)` to move your wrapper more to the center of the screen
Upvotes: -1 |
2018/03/22 | 686 | 2,020 | <issue_start>username_0: ```
var people = [
{
name: "Mike",
age: 12,
gender: "male"
},{
name: "Madeline",
age: 80,
gender: "female"
}
]
```
How do I loop through the array and log "old enough" to the console if they are 18 or older, and "not old enough" if they aren't? Any help is appreciated, thanks in advance.<issue_comment>username_1: ```
//Iterate your objects, check their age property to see if greater than equal to 18
for(let x = 0; x < people.length; x++) {
if(people[x].age >= 18){
console.log(people[x].name + " Is Old Enough");
}else {
console.log(people[x].name + " Is Not Old Enough");
}
}
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: ```
for(const person of people)
  console.log(`is ${person.age < 18 ? "not " : ""}old enough`);
```
Upvotes: 1 <issue_comment>username_3: ```
people.forEach((node)=>{
  if(node.age >= 18) {
console.log("old enough")
}else {
console.log("not old enough")
}
})
```
Upvotes: 0 <issue_comment>username_4: You can use the forEach Method.
```
people.forEach((element)=>{
element.age >= 18 ? console.log('Old Enough') : console.log('Not old enough')
});
```
Upvotes: 0 <issue_comment>username_5: ```js
var people = [{
name: "Mike",
age: 12,
gender: "male"
}, {
name: "Madeline",
age: 80,
gender: "female"
}]
for (var i = 0; i < people.length; i++) {
if (people[i].age < 18) {
console.log("too young message");
} else {
console.log("old enough message");
}
}
```
Upvotes: 0 <issue_comment>username_6: ```js
var people = [
{ name: "Mike", age: 12, gender: "male" },{ name: "Madeline", age: 80, gender: "female" } ];
people.forEach(person=>{
console.log(person.age >= 18 ? "Old enough": "Not old enough");
return;
});
```
Upvotes: 0 <issue_comment>username_7: ```
people.map(
item => {
console.log(
(item.age>=18)?item.name+' is old enough':item.name+' is not old enough'
)
}
)
```
Upvotes: 0 |
2018/03/22 | 774 | 2,437 | <issue_start>username_0: I have a question regarding the use of tf.cond. I am trying to run 2 images through a CNN and only backprop using the lower cross entropy loss value. The code is as follows:
`train_cross_entropy = tf.cond(train_cross_entropy1 < train_cross_entropy2,
lambda: train_cross_entropy1,
lambda: train_cross_entropy2)`
Using this train\_cross\_entropy is just as slow as writing
`train_cross_entropy = train_cross_entropy1 + train_cross_entropy2`
Which suggests to me that it is backpropping through both parts of the graph instead of just the one.
I would hope that it would be almost as fast as writing
`train_cross_entropy = train_cross_entropy1`
It would be greatly appreciated if anybody had any ideas on how to accomplish this! Thanks.
2018/03/22 | 933 | 3,858 | <issue_start>username_0: I'm using the PowerShell syntax:
```
Start-Process -FilePath $file -ArgumentList $argList -NoNewWindow -Wait
```
The process being started is an installer, which installs a Windows service and then starts that service, then the process stops. What's happening is that my script hangs indefinitely, even after the process has completed successfully and exited (as I can see in Task Manager; when I run it from a command line, without PowerShell, it's done in under a minute). Apparently, it's still waiting because there's a child process which was spawned from the original process. How can I tell PowerShell to wait only for the initial process ($file) and not any children?
Thanks<issue_comment>username_1: I have dealt with this for years. Generally speaking if the install is pretty short and simple I will just put a wait into the script (Start-Sleep 60 or whatever). If however the install runs for a very variable amount of time then I actually catalog the msiexec processes that are running right before I start the install then loop and check every 60 seconds or so to see if all the new processes have ended. It gets even more tricky if the install is calling actual child installs. Another approach that I have used to a lesser or greater degree of success depending on the install is to again loop checking every 60 seconds or so to see if a query against the WMI Win32\_Product class has an instance for the product being installed. You pretty much have to tailor the solution to the install and your environment.
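The "catalog the processes, then poll until the new ones are gone" idea described above is language-agnostic. Here is a minimal Python sketch of that pattern (the function and parameter names are my own, purely for illustration; in PowerShell the `list_pids` role would be played by something like a `Get-Process msiexec` call):

```python
import time

def wait_for_spawned_processes(list_pids, start_install, poll_seconds=60, timeout=3600):
    """Snapshot the interesting process ids, start the install, then poll
    until every process that appeared after the snapshot has ended."""
    before = set(list_pids())      # snapshot taken right before the install
    start_install()
    # note: assumes the install's processes have already started by the first poll
    waited = 0.0
    while waited < timeout:
        spawned = set(list_pids()) - before
        if not spawned:            # all newly spawned processes have exited
            return True
        time.sleep(poll_seconds)
        waited += poll_seconds
    return False                   # gave up waiting
```

The same loop also covers the Win32\_Product variant mentioned above: swap `list_pids` for a callable that reports whether the product entry exists yet.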
Upvotes: 0 <issue_comment>username_2: In the past, I've done the opposite, where I wanted to wait until a certain process starts in order to continue, and I've used WMI events to do it. It's worth a shot to see if the same technique helps you:
```
# Start a process to track
Start-Process "Notepad.exe"
# Get WMI to track process exits
Register-WmiEvent -Query "SELECT * FROM __instancedeletionevent WITHIN 5 WHERE targetinstance isa 'win32_process'" `
-SourceIdentifier "WMI.ProcessDeleted" `
-Action {
if($eventArgs.NewEvent.TargetInstance.Name -eq "Notepad.exe")
{
Write-Host "Notepad exited!"
# Raise an event to notify completion
New-Event "PowerShell.ProcessDeleted" -Sender $sender -EventArguments $EventArgs.NewEvent.TargetInstance | Out-Null
}
} | Out-Null
# Wait for the event signifying that we completed the work (timeout after 10 min)
Wait-Event -SourceIdentifier "PowerShell.ProcessDeleted" -Timeout 600 | Out-Null
# Remove any events we created
Get-Event -SourceIdentifier "PowerShell.ProcessDeleted" -ErrorAction SilentlyContinue | Remove-Event
# Unregister the event monitor
Unregister-Event -Force -SourceIdentifier "WMI.ProcessDeleted"
```
Just replace "Notepad.exe" with the name of your installer process, and remove the call to Write-Host (or replace with something more useful). If you close Notepad within 10 minutes (adjust to suit) you should see a message, otherwise we continue anyway so the script doesn't hang indefinitely.
Upvotes: 0 <issue_comment>username_3: In the end, I decided to simply do it the .NETy way, even though I had preferred a Powershelly way of doing it. Code below is copied from <https://stackoverflow.com/a/8762068/1378356>:
```
$pinfo = New-Object System.Diagnostics.ProcessStartInfo
$pinfo.FileName = $file
$pinfo.UseShellExecute = $false
$pinfo.Arguments = $argList
$p = New-Object System.Diagnostics.Process
$p.StartInfo = $pinfo
$p.Start() | Out-Null
$p.WaitForExit()
Write-Host ("exit code: " + $p.ExitCode)
```
This way worked without issue.
Upvotes: 3 [selected_answer] |
2018/03/22 | 773 | 2,742 | <issue_start>username_0: I want to have my own method in JavaScript which can access some property of the preceding method/function
Eg:
```
//default
var arr = [1,6,3,5];
alert(arr.length);
```
This will alert the length of arr.
But I want to use my own custom method instead of the **length** property, say **len**
Eg:
```
//custom
prototype.len = ()=>{
return prototype.length;
}
var arr = [1,6,3,5];
alert(arr.len);
```<issue_comment>username_1: You can create your custom methods or properties to `built-in objects`, in the example you gave, you want to alter the `Array` built-in object and you have to be specific. For instance:
```
Array.prototype.len = function() { return this.length; };
[1, 2, 3].len(); // this will return 3
```
Keep in mind that `this` refers to the array you're calling the method and if you use an ES6 arrow function, the scope will be affected and therefore the `this` value.
Upvotes: -1 <issue_comment>username_2: You can do it like this :
```
Array.prototype.len = function () {
return this.length;
};
```
But there can be a better option, see comments below.
Upvotes: 0 <issue_comment>username_3: You may define your own method for prototype:
```
// also you may use Object.defineProperty
Array.prototype.len = function() {
return this.length // --> this will be an array
}
```
and then call the function
```
[1, 2, 3].len()
```
**BUT**
It is bad practice to define new functions on built-in prototypes - you can accidentally redefine an existing method, or a method with the same name may be added in the future, and it will cause unpredictable behavior.
A better approach is either to create your own prototype and use it
```
function MyArray () {
// your code
}
MyArray.prototype = Object.create(Array.prototype) // inheritance
MyArray.prototype.len = function () {
return this.length
}
```
or just create simple function and pass an array as an argument to it or as `this`:
as argument:
```
function len (array) {
return array.len
}
```
as `this`:
```
function len() {
  return this.length
}
// invoking function
len.call(array)
```
Upvotes: 2 <issue_comment>username_4: To define a "getter" without using ES6 class syntax you can use `Object.defineProperty`:
```
Object.defineProperty(Array.prototype, 'len', {
get: function() {
return this.length;
}
});
```
Use of `.defineProperty` will (by default) create a *non-enumerable* property which ensures that your new property doesn't appear inadvertently in the results of a `for .. in ...` loop.
The question of whether it's appropriate to add such a property to a built-in class is a matter of some debate. Some languages explicitly encourage it, some don't.
Upvotes: 3 [selected_answer] |
2018/03/22 | 1,141 | 3,821 | <issue_start>username_0: I have a project and the professor gave us some code. There is a line in the code that has me confused:
```
arrayListName.sort(null);
```
What does the call to `sort(null)` do exactly?
The doc says: "If the specified comparator is null then all elements in this list must implement the Comparable interface and the elements' natural ordering should be used. This list must be modifiable, but need not be resizable." What does it mean by the list's natural ordering? The elements we are trying to sort are phone numbers.
Note: I read the javadoc and it is not clear to me what it means. English is not my first language and the professor does not teach the class in English. I tried to google the question but am still confused as to what it means specifically.<issue_comment>username_1: I typed it up and it did not throw an error. It actually sorted the list correctly. This is my code:
```
ArrayList<Integer> nums = new ArrayList<>();
nums.add(2);
nums.add(4);
nums.add(3);
nums.add(1);
System.out.println(nums);
nums.sort(null);
System.out.println(nums);
```
The output was:
```
[2, 4, 3, 1]
[1, 2, 3, 4]
```
The sort method accepts a Comparator object, and if null is passed it defaults to natural ordering.
Upvotes: 2 <issue_comment>username_2: Explanation
-----------
Supposed `arrayListName` is actually a variable of type `ArrayList`, then you are calling the `List#sort` method here. From its [documentation](https://docs.oracle.com/javase/8/docs/api/java/util/List.html#sort-java.util.Comparator-):
>
> `default void sort(Comparator<? super E> c)`
>
>
> Sorts this list according to the order induced by the specified `Comparator`.
>
>
> If the **specified comparator** is `null` then all elements in this list must implement the `Comparable` interface and the elements' **natural ordering** should be used.
>
>
>
So the method uses the **natural ordering** of elements when the comparator is `null`.
This natural ordering is given by a `compareTo` method on the items when they implement the interface `Comparable` ([documentation](https://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html)). For `int` this sorts in increasing order. For `String` this sorts based on the [lexicographical order](https://en.wikipedia.org/wiki/Lexicographical_order).
Examples after sorting with natural ordering:
```
1, 2, 3, 3, 3, 8, 11
"A", "B", "H", "Helicopter", "Hello", "Tree"
```
Many classes implement this interface already. Take a look at the [documentation](https://docs.oracle.com/javase/9/docs/api/java/lang/Comparable.html). It counts `287` classes currently.
---
Details
-------
Let's compare that to the actual [implementation](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8u40-b25/java/util/ArrayList.java#ArrayList.sort%28java.util.Comparator%29):
```
@Override
@SuppressWarnings("unchecked")
public void sort(Comparator<? super E> c) {
final int expectedModCount = modCount;
Arrays.sort((E[]) elementData, 0, size, c);
if (modCount != expectedModCount) {
throw new ConcurrentModificationException();
}
modCount++;
}
```
The comparator `c` is passed to the method `Arrays#sort`, let's take a look at an excerpt from its [implementation](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8u40-b25/java/util/Arrays.java#Arrays.sort%28java.lang.Object%5B%5D%2Cint%2Cint%2Cjava.util.Comparator%29):
```
if (c == null) {
sort(a, fromIndex, toIndex);
}
```
We follow that call to another `Arrays#sort` method ([implementation](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8u40-b25/java/util/Arrays.java#Arrays.sort%28java.lang.Object%5B%5D%2Cint%2Cint%29)). This method sorts elements based on their **natural ordering**. So no comparator is used.
Upvotes: 3 |
2018/03/22 | 1,752 | 5,359 | <issue_start>username_0: I am going to record video but it's not working
```
public void startRecord() {
if (preVideo()) {
mRecorder.start();
}
isRecording = true;
}
public boolean preVideo() {
if (mCamera == null) {
return false;
}
mRecorder = new MediaRecorder();
mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
CamcorderProfile cpHigh = CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH);
mRecorder.setProfile(cpHigh);
mRecorder.setOutputFile(createFile().toString());
mRecorder.setVideoSize(mFrameWidth, mFrameHeight);
return true;
}
```
and the logcat output:
>
> FATAL EXCEPTION: main
> Process: pt.chambino.p.pulse, PID: 6578
> java.lang.IllegalStateException
> at android.media.MediaRecorder.start(Native Method)
> at org.opencv.android.MyJavaCameraView.startRecord(MyJavaCameraView.java:375)
> at pt.chambino.p.pulse.App.onRecord(App.java:241)
> at pt.chambino.p.pulse.App.onOptionsItemSelected(App.java:213)
> at android.app.Activity.onMenuItemSelected(Activity.java:2600)
> at com.android.internal.policy.impl.PhoneWindow.onMenuItemSelected(PhoneWindow.java:1019)
> at com.android.internal.view.menu.MenuBuilder.dispatchMenuItemSelected(MenuBuilder.java:735)
> at com.android.internal.view.menu.MenuItemImpl.invoke(MenuItemImpl.java:152)
> at com.android.internal.view.menu.MenuBuilder.performItemAction(MenuBuilder.java:874)
> at com.android.internal.view.menu.ActionMenuView.invokeItem(ActionMenuView.java:546)
> at com.android.internal.view.menu.ActionMenuItemView.onClick(ActionMenuItemView.java:119)
> at android.view.View.performClick(View.java:4569)
> at android.view.View$PerformClick.run(View.java:18553)
> at android.os.Handler.handleCallback(Handler.java:733)
> at android.os.Handler.dispatchMessage(Handler.java:95)
> at android.os.Looper.loop(Looper.java:212)
> at android.app.ActivityThread.main(ActivityThread.java:5151)
> at java.lang.reflect.Method.invokeNative(Native Method)
> at java.lang.reflect.Method.invoke(Method.java:515)
> at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868)
> at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:684)
> at dalvik.system.NativeStart.main(Native Method)
>
>
>
2018/03/22 | 1,111 | 4,271 | <issue_start>username_0: There are so many ETL tools out there. Not many that are Free. And of the Free choices out there they don't appear to have any knowledge of or support for ArangoDB. If anyone has dealt with the migration of their data over to ArangoDB and automated this process I would love to hear how you accomplished this. Below I have listed out several choices we have for ETL Tools. These choices I actually took from the 2016 Spark Europe presentation by <NAME>.
```
* IBM InfoSphere DataStage
* Oracle Warehouse Builder
* Pervasive Data Integrator
* PowerCenter Informatica
* SAS Data Management
* Talend Open Studio
* SAP Data Services
* Microsoft SSIS
* Syncsort DMX
* CloverETL
* Jaspersoft
* Pentaho
* NiFi
```<issue_comment>username_1: I was able to utilize `Apache NiFi` in order to accomplish this goal. Below is an extremely basic overview of what I did in order to get data out of a source Database into ArangoDB.
Using NiFi you are able to extract data from many of the standard databases out there. There are many Java Drivers out there that are already created to work with databases such as MySQL, SQLite, Oracle, etc....
I was able to use two processors to pull data out of a source database using:
`QueryDatabaseTable`
`ExecuteSQL`
The output of these is in NiFi's Avro format, which I then converted to JSON using the `ConvertAvroToJSON` processor. This converts the output to a JSON list.
While there really isn't anything within NiFi specifically built for use with ArangoDB there is one feature that comes built in with ArangoDB and that is it's API.
I was able to Bulk Insert data into ArangoDB using NiFi's InvokeHTTP Processor with a POST method into a Collection named Cities.
The value I used as the RemoteURL:
`http://localhost:8529/_api/import?collection=cities&type=list&details=true`
[![enter image description here](https://i.stack.imgur.com/wRUG0.png)](https://i.stack.imgur.com/wRUG0.png)
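To make the bulk-import call concrete, here is a minimal Python sketch of the same HTTP request that InvokeHTTP is sending (the endpoint shape is taken from the RemoteURL above; the localhost address and the helper's name are my own assumptions, not part of NiFi):

```python
import json
from urllib import parse, request

def build_import_request(collection, docs, base="http://localhost:8529"):
    """Build a POST against ArangoDB's bulk import API.

    type=list tells ArangoDB the body is one JSON array of documents,
    which matches the output of the ConvertAvroToJSON processor."""
    query = parse.urlencode({"collection": collection, "type": "list", "details": "true"})
    body = json.dumps(docs).encode("utf-8")
    return request.Request(
        base + "/_api/import?" + query,
        data=body,                                     # data present => POST by default
        headers={"Content-Type": "application/json"},
    )

# req = build_import_request("cities", [{"name": "Berlin"}, {"name": "Oslo"}])
# with request.urlopen(req) as resp:   # requires a running ArangoDB
#     print(json.load(resp))           # e.g. counts of created documents and errors
```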
Below is a screenshot of NiFi. I could have definitely used this to kick start my research. I hope this helps someone else. Ignore some of the extra processors as I have them in there for testing purposes and was messing around with JOLT to see if I can use it to 'Transform' my JSON. The `"T"` in ETL.
[![enter image description here](https://i.stack.imgur.com/ZVzLj.png)](https://i.stack.imgur.com/ZVzLj.png)
Upvotes: 3 [selected_answer]<issue_comment>username_2: I wanted to add a comment above, but was unable to do so.
Based on username_1's response, I too used NiFi to move data into ArangoDB. In my case, I moved data from SQL Server on a Windows desktop machine to ArangoDB on a Linux desktop machine with both machines on the same network. For 9.7M records = 5.4GB of uncompressed JSON data, the flow took approximately 12 minutes - reasonable performance.
I made a minor change to the above flow by using the ExecuteSQLRecord processor. This step negates the need to convert from AVRO to JSON. In total, you can move the data with two processors: ExecuteSQLRecord and InvokeHTTP.
For ExecuteSQLRecord, based on my testing I recommend submitting in many small batches (~10,000 per batch) versus a few large batches (~500,000 per batch) to avoid ArangoDB bottlenecks.
For InvokeHTTP, if you run NiFi on a different machine than the ArangoDB machine, you need to (1) make sure your ArangoDB machine firewall port is open and (2) change the server address in the .conf files from 127.0.0.1 to your actual ArangoDB machine IP address. The .conf files can be found in the /etc/arangodb3 folder.
For the T (processors on the side above), I would generally let SQL perform the transformations rather than JOLT unless JSON specific formatting changes are needed.
Last, you could do the above by using the following three processors: ExecuteSQLRecord, PutFile, and ExecuteProcess
For ExecuteSQLRecord, you need to change the Output Grouping property to One Line Per Object (i.e. jsonl). For ExecuteProcess, you ask NiFi to invoke arangoimport with the appropriate options. I did not complete this process entirely in NiFi, but some testing suggests the time to complete is comparable to the ExecuteSQLRecord and InvokeHTTP flow.
I concur that NiFi is an excellent way to move data into ArangoDB.
Upvotes: 2 |
2018/03/22 | 427 | 1,430 | <issue_start>username_0: I am able to connect to IONIC DEVAPP via my personal computer but unable to connect it when hosted on the office laptop even on my home wifi network.
I can see my app on both office and personal laptop.
```
$ionic serve -c
```
I am able to access the Devapp hosted URL via my browser
```
http://192.168.1.115:8100/?devapp=true
```
But, from my Android device, on my office laptop it gives the following error
>
> net::ERR:CONNECTION\_TIMED\_OUT( <http://192.168.1.115:8100/?devapp=true>)
>
>
>
It seems more like a firewall issue than a network issue.<issue_comment>username_1: I had the same problem and thought, as you did, that this was a firewall issue. As I am on Ubuntu, using the `ufw` firewall...
```
sudo ufw allow 8100
```
in a terminal did the trick for me, and `IONIC DEVAPP` connected to my app.
Upvotes: 1 <issue_comment>username_2: You just need to open the port in the Windows firewall, since you said that you're using Windows...
* Go to Control Panel and find Windows Firewall.
* Click the Advanced Settings.
* On the left you must click Inbound Rules and under Actions, click the New Rule.
* Select Port.
* Type the ports you want to open.
* Allow the Connection and configure the rest.
Done.
Add this line to your config.xml file:
```
```
Now test your DevApp again.
You can use Android Emulator on your device instead if you want.
Upvotes: 3 [selected_answer] |
2018/03/22 | 633 | 2,227 | <issue_start>username_0: ```
string name[size] = {"<NAME>", "<NAME>", "<NAME>", "<NAME>",
"<NAME>", "<NAME>", "<NAME>", "<NAME>",
"<NAME>", "<NAME>", "<NAME>", "<NAME>",
"<NAME>", "<NAME>", "<NAME>", "<NAME>",
"<NAME>", "<NAME>", "<NAME>", "<NAME>"};
int binarySearchIterative(string name[], int size, string empName) {
int low = 0;
int high = size - 1;
while (low <= high) {
int mid = (low + high) / 2;
if (empName == name[mid]) {
return mid;
} else if (empName < name[mid]) {
high = mid - 1;
} else {
low = mid + 1;
}
}
return -1;
}
```
So when I type in a name to search for, I have to type it in exactly as listed in the array. For example, if I want to find <NAME>. I have to type it as <NAME>. If I were to type in <NAME>, it would tell me that the employee is not found. I need to be able to search the name First Last, and also without using the comma. If you need to see more of my code let me know.<issue_comment>username_1: Binary search requires a sorted input, and you can check this easily with a pencil test:
Suppose you have a series of numbers like `1,5,8,10,12`, and you are looking for `5`; First guess, you split the series and test the number in the middle, i.e. `8`. Since `5` is lower than `8`, it is clear that you don't have to look for `5` in the half above the `8`. Yet this assumption only holds if the series is sorted; otherwise the `5` might be in the upper part as well.
So sort your input, and then see if it works. Otherwise use a debugger :-)
Upvotes: 2 <issue_comment>username_2: You will need to implement a comparison function (or code) that extracts the first name and last name (as separate variables) from the string.
Once extracted, you can easily compare by first name or last name.
Ideally, you should have a structure that models the data, e.g. a first name field and a last name field.
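The normalization described above is language-agnostic, so here is a minimal sketch of it in Python (the sample names are hypothetical, since the real list isn't shown; a C++ version would apply the same transformation to both the array entries and the query before sorting and searching):

```python
from bisect import bisect_left

def normalize(name):
    """Turn "Last, First" or "First Last" into one canonical "first last" key."""
    if "," in name:
        last, first = [part.strip() for part in name.split(",", 1)]
        name = first + " " + last
    return name.lower()

def find_employee(names, query):
    """Binary search that ignores name order, the comma, and case.

    Returns an index into the sorted key list, or -1 if absent."""
    keys = sorted(normalize(n) for n in names)   # binary search needs sorted input
    target = normalize(query)
    i = bisect_left(keys, target)
    return i if i < len(keys) and keys[i] == target else -1
```

Note that the returned index points into the sorted, normalized key list rather than the original array; keep a parallel list of original indices if you need the source position.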
Upvotes: 0 |
2018/03/22 | 1,278 | 4,848 | <issue_start>username_0: My Code:
```
protected void Page_Load(object sender, EventArgs e)
{
ABC ABCDetails;
if(condition)
{
//ABCClient is a SOAP service that returns ABCDetails, which contains ABCDetails.ABCLineItemsAuto. It is declared as a List on the service end, but for some reason it shows up as an array[] when returned from the service.
ABCDetails = new ABCClient().GetABC(param1,param2).FirstOrDefault();
foreach (var item in ABCDetails.ABCLineItemsAuto)
{
if (filterItemsDB.Any(c => c.itemCODE == item.itemType))
{
//if its in the filterItemsDB which ia a list from databse table do something
}
else
{
//if its not remove the item from the ABCDetails.ClaimLineItemsAuto which is an ARRAY[]
ABCDetails.ABCLineItemsAuto.ToList().Remove(item);
}
}
}
}
```
My Goal:
* to remove the item from the list if it does not belong to the table
from DB
My Issue:
* the Remove(item) runs fine and does not throw any error but does not
remove the item from ABCDetails.ABCLineItemsAuto array.
My effort:
* researched and tried different methods for removing an item from a list or array; none of them have this issue
Not sure what's wrong here.
I am unable to use ABCDetails.ABCLineItemsAuto.Remove(item), as .Remove() is not available for the array[] ABCDetails.ABCLineItemsAuto.
**ABCClient is a SOAP service that returns ABCDetails, which contains ABCDetails.ABCLineItemsAuto. It is declared as a List on the service end, but for some reason it shows up as an array[] when returned from the service.**<issue_comment>username_1: ```
ABCDetails.ABCLineItemsAuto.ToList().Remove(item);
```
The `ToList` actually creates a new List and removes the itme from that list. It does not alter the original list in `ABCDetails.ABCLineItemsAuto`.
You could rewrite the whole `foreach` block with 1 statement like this using [RemoveAll](https://msdn.microsoft.com/en-us/library/wdka673a(v=vs.110).aspx):
```
var lst = ABCDetails.ABCLineItemsAuto.ToList();
lst.RemoveAll(item => filterItemsDB.All(c => c.itemCODE != item.itemType));
ABCDetails.ABCLineItemsAuto = lst.ToArray();
```
This is a somewhat roundabout way of doing it, but it is needed because, as you indicated, `ABCDetails.ABCLineItemsAuto` is an array.
Upvotes: 2 [selected_answer]<issue_comment>username_2: The statement:
```
ABCDetails.ABCLineItemsAuto.ToList()
```
will give you a new collection, i.e. a `List`, and when you call `Remove()` it removes the element from that new `List`, not from `ABCLineItemsAuto`.
What you need to do is keep a reference to that new `List` and then call `Remove()` on it.
When we call `ToList()` we get a new `List` back.
Upvotes: 1 <issue_comment>username_3: You have the logic slightly backwards. When you call `.ToList()`, you're creating a copy of the original list. Then when you call `.Remove()`, you're removing the item from that copy.
What you can do instead is call `.ToList()` on the list in the `foreach` condition (to avoid getting an error about modifying the enumeration), and then remove the item from the *actual* list inside the body:
**Update:** Since the type you're dealing with is an `Array`, an easy way around this is to convert it to a `List` first, since it has the handy [`Remove()`](https://referencesource.microsoft.com/#mscorlib/system/collections/generic/list.cs,3d46113cc199059a,references) method. Then at the end, you can take the list and assign it back to the array if you need to. I updated the code to show how that might look:
```
var lineItemsList = ABCDetails.ABCLineItemsAuto.ToList();

// Use a copy of the list in your foreach condition
foreach (var item in lineItemsList.ToList())
{
    if (filterItemsDB.Any(c => c.itemCODE == item.itemType))
    {
        // Do something here, details aren't relevant to this question
    }
    else
    {
        lineItemsList.Remove(item);
    }
}

ABCDetails.ABCLineItemsAuto = lineItemsList.ToArray();
```
Upvotes: 2 <issue_comment>username_4: When you call `ToList()` a new `List` is instantiated. The item is being removed from this new copy, but it is kept in the source array. Since `ABCLineItemsAuto` is an array (which has no `Remove()` method), rebuild and reassign it instead.
You can achieve this without the loop too:
```
if (condition)
{
    // ABCClient is a soap service that returns ABCDetails
    ABCDetails = new ABCClient().GetABC(param1, param2).FirstOrDefault();

    // Keep only the items whose type exists in filterItemsDB; ToArray() is
    // required because ABCLineItemsAuto is an array.
    ABCDetails.ABCLineItemsAuto = ABCDetails.ABCLineItemsAuto
        .Where(i => filterItemsDB.Any(f => f.itemCODE == i.itemType))
        .ToArray();
}
```
Upvotes: 1 |
2018/03/22 | 1,032 | 3,860 | <issue_start>username_0: Is it correct to access variables from a child class in a parent class? Is this a good OOP approach? I don't need to create instances of the Animal class, but if I do, the `make_sound` method will raise `AttributeError`, which is bothering me.
```
class Animal:
    def make_sound(self):
        print(self.sound)


class Cat(Animal):
    sound = 'meow'


class Dog(Animal):
    sound = 'bark'


cat = Cat()
cat.make_sound()
dog = Dog()
dog.make_sound()
```<issue_comment>username_1: Yes, this is good OOP. Alternatively, you could define the methods in the parent class and call them on a Cat or Dog instance, so that you add nothing to the Cat/Dog classes themselves: for example:
```
class Animal():
    def make_sound(self):
        print(self.sound)

    def make_dog_sound(self):
        print('bark')

    def make_cat_sound(self):
        print('meow')


class Cat(Animal):
    pass


class Dog(Animal):
    pass


cat = Cat()
cat.make_cat_sound()
```
But what you did is also correct.
Upvotes: -1 <issue_comment>username_2: There's nothing inherently wrong with this approach. It really depends on the scope and significance of this class, and where its being used. Building a parent class to use implicitly defined attributes is quick, and in many cases perfectly OK. But, sometimes those implicit attributes can get out of hand, and you might want to ensure that anyone making new subclasses *has* to define those attributes.
There are a couple approaches to this. Some of this may not work depending on what version of Python you are using. I believe the usage of ABC like this works in Python 3.4+.
Python (and many OO languages) have the concept of an [Abstract Base Class](https://docs.python.org/3/library/abc.html). This is a class that can never be instantiated, and it enforces that any subclasses must implement methods or properties defined as abtract in order to be instantiated.
Here's how you could provide a `make_sound` method, and still be 100% sure that anyone subclassing Animal is indeed making that sound.
```
from abc import ABC, abstractmethod


class Animal(ABC):
    def make_sound(self):
        print(self.sound)

    @property
    @abstractmethod
    def sound(self):
        """ return the sound the animal makes """


class Dog(Animal):
    @property
    def sound(self):
        return "bark"


class Cat(Animal):
    sound = "meow"


class Thing(Animal):
    """ Not an animal """


dog = Dog()
dog.make_sound()
cat = Cat()
cat.make_sound()
# thing = Thing() this will raise a TypeError, complaining that it's abstract
# animal = Animal() as will this
```
This shows the many different ways to do this. Using the `@property` decorator allows you to set instance variables, or more complex logic, that affect it. Setting sound in the class is (somewhat) like setting a static member in a Java class. Since all cats meow, this probably makes sense in this case.
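Coming back to the concern in the question — without the ABC, a plain `Animal` instance only fails when `make_sound` is actually called, not at construction time. A quick sketch of that failure mode (plain Python, separate from the code above):

```python
class Animal:
    def make_sound(self):
        print(self.sound)


animal = Animal()  # constructing a bare Animal succeeds...
try:
    # ...but no subclass ever defined `sound`, so the attribute lookup
    # fails here, at call time.
    animal.make_sound()
except AttributeError as exc:
    print("raised:", exc)
```

The ABC version moves this failure up to `Animal()` itself, which is usually easier to debug.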
Upvotes: 6 [selected_answer]<issue_comment>username_3: ```
class Animal:
    def make_sound(self):
        print(self.sound)

    def make_class_sound(self):
        print(self.class_sound)


class Cat(Animal):
    class_sound = "Meow"

    def __init__(self):
        self.sound = 'meow'


class Dog(Animal):
    class_sound = "Bark"

    def __init__(self):
        self.sound = 'bark'


cat = Cat()
cat.make_sound()
cat.make_class_sound()
dog = Dog()
dog.make_sound()
dog.make_class_sound()
```
```
$ python tt.py
meow
Meow
bark
Bark
```
This code snippet should help you not only understand how to access a child's variables from the parent class, but also distinguish an instance variable from a class variable: when the parent evaluates `self.class_sound`, Python first searches the instance's own attributes and only then falls back to the class variable defined on the child class.
Upvotes: 0 |
2018/03/22 | 850 | 2,637 | <issue_start>username_0: Why does this code return the error java.lang.NullPointerException:
```
Object obj = null;
Long lNull = null;
Long res = obj == null ? lNull : 10L;
```
But the following way works without any errors:
```
Object obj = null;
Long res = obj == null ? null : 10L;
```<issue_comment>username_1: In the second case, the compiler can infer that the `null` must be a `Long` type. In the first case, it doesn't - and assumes that the expression returns a `long`. You can see that this is the case (i.e. fix it) like,
```
Long res = obj == null ? lNull : (Long) 10L;
```
which does not yield a NullPointerException.
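To see the fix at runtime — a small sketch; with the cast in place both branches have type `Long`, so no unboxing occurs and `res` is simply `null`:

```java
Object obj = null;
Long lNull = null;

// Without the cast, the conditional expression would have the primitive
// type long, and unboxing lNull would throw a NullPointerException.
Long res = obj == null ? lNull : (Long) 10L;
System.out.println(res); // prints "null"
```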
Upvotes: 4 [selected_answer]<issue_comment>username_2: The [JLS, Section 15.25](https://docs.oracle.com/javase/specs/jls/se8/html/jls-15.html#jls-15.25) talks about the type of the conditional operator expression for various combinations of the types of the second and third operands. There are lots of tables mapping the two types in all relevant combinations to the result type.
>
>
> ```
> 3rd → long
> 2nd ↓
> ...
> Long long
> ...
> null lub(null,Long)
>
> ```
>
>
Your first example has a `Long` and a `long`, which yields `long`. This requires `lNull` to be unboxed, which explains the `NullPointerException`.
Your second example has a `null` *literal* (not a `null` variable) and a `long`. This results in "lub(null,Long)" or `Long`, and no unboxing is performed, so no NPE is observed.
You can avoid the NPE in your first example by casting `10L` to a `Long`, because a `Long` and a `Long` yield a `Long`.
>
>
> ```
> 3rd → Long
> 2nd ↓
> ...
> Long Long
>
> ```
>
>
Upvotes: 2 <issue_comment>username_3: The error happens because in your case [the standard requires unboxing the value of a boxed type](https://docs.oracle.com/javase/specs/jls/se7/html/jls-15.html#jls-15.25):
>
> If one of the second and third operands is of primitive type `T`, and the type of the other is the result of applying boxing conversion (§5.1.7) to `T`, then the type of the conditional expression is `T`.
>
>
>
In your case `T` is `long`, because `10L` is `long` and `lNull` is `Long`, i.e. the result of applying boxing conversion to `long`.
The standard further says that
>
> If necessary, unboxing conversion is performed on the result.
>
>
>
This is what causes the exception. Note that if you invert the condition, the exception would no longer be thrown:
```
Long res = obj != null ? lNull : 10L;
```
You can fix the problem by explicitly asking for `Long` instead of using `long`, i.e.
```
Long res = obj == null ? lNull : Long.valueOf(10L);
```
Upvotes: 2 |
2018/03/22 | 795 | 2,685 | <issue_start>username_0: When trying out express-session, I couldn't retrieve anything from my previous entries when I stored a variable to a session. Not only that, my session ID has changed. Why is this?
Here is my server.js:
```
const express = require('express')
const session = require('express-session')
const bodyParser = require('body-parser')
const app = express()
app.use(express.static(__dirname + "/src"))
app.use(bodyParser.json())
app.use(session({secret: 'tester', saveUninitialized: true, resave: false, cookie: {maxAge: 5*60000}}))
app.get("/",(request, response) => {
response.sendFile(__dirname + "/front_end.html")
//response.end('This is a test for stuff')
})
app.post("/resources", (request, response)=>{
request.session.myvar = "Hello Person"
console.log(request.session.myvar);
console.log(request.sessionID)
response.json({message: "Success", url: "/respond"})
})
app.get("/respond", (request, response)=>{
console.log(request.session.myvar)
console.log(request.sessionID)
response.send('stuff')
})
app.listen(3000, (err) => {
if (err) {
console.log('Server is down');
return false;
}
console.log('Server is up');
})
```
Here is the front end html:
```
<button id="test">Test</button>
```
And this is my front end js:
```
document.getElementById("test").addEventListener("click", () => {
fetch('/resources', {
method: "POST",
headers: {
"Content-Type": "application/json"
},
body: JSON.stringify({
"name": "Test",
"pass": "<PASSWORD>" //ignore the fact that this actually doesn't do anything yet.
})
}).then((response) => {
return response.json()
}).then((json)=>{
window.location.assign(json.url)
});
});
```
Here is the result from the command line(server end):
```
Server is up
Hello Person
DQH0s5Mqu9FasN5It49xD1ZAtkCbj0P5
undefined
p5_imn5FXxxO-i2lR62TNGXEd-o2WHGP
```<issue_comment>username_1: Try without the cookie `maxAge`; it might help.
```
app.use(session({
secret: 'xyz',
resave: false,
saveUninitialized: true
}));
```
Upvotes: -1 <issue_comment>username_2: fetch API does not send cookies automatically.
Use `credentials: "include"` so that the fetch request sends and stores cookies.
<https://developers.google.com/web/updates/2015/03/introduction-to-fetch>
```
fetch('/resources', {
method: "POST",
credentials: "include",
headers: {
"Content-Type": "application/json"
},
body: JSON.stringify({
"name": "Test",
"pass": "<PASSWORD>" //ignore the fact that this actually doesn't do anything yet.
})
})
```
Upvotes: 2 [selected_answer] |
2018/03/22 | 364 | 1,196 | <issue_start>username_0: I am making a Python program where I need to inspect individual four-letter parts of a variable. If the variable is, for example, **help_me_code_please**, it should output **ease**, then **e_pl**, etc. I attempted it with
```
a=0
b=3
repetitions=5
word="10011100110000111010"
for x in range (1,repetitions):
    print(word[a:b])
    a=a+4
    b=b+4
```
however it just outputs empty lines.
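For reference, two separate issues hide in the snippet above: `b = 3` makes `word[a:b]` a three-character slice (indices 0, 1, 2), and `range(1, repetitions)` runs one iteration short. Stepping through the string with `range` and a step of four avoids both — a minimal sketch:

```python
word = "10011100110000111010"

# Take the string four characters at a time.
for start in range(0, len(word), 4):
    print(word[start:start + 4])
```

For the 20-character string above this prints `1001`, `1100`, `1100`, `0011`, `1010`.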
Thanks so much for any help in advance. |
2018/03/22 | 301 | 1,105 | <issue_start>username_0: I have a `loginActivity` and a `MainActivity`
I have used `SharedPreferences` to store some values in some variables in the `loginActivity`. Now I want to use these values (they're `boolean` values) in MainActivity without re-opening MainActivity. How do I do this?
Also, the `SharedPreferences` are written in an `onClick` method for a `button`.
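For reference, a sketch of how this is usually done (the preferences-file name and key here are made up, and this needs the Android SDK, so treat it as illustrative only):

```java
// In loginActivity's onClick, after computing the values:
SharedPreferences prefs =
        getSharedPreferences("app_prefs", Context.MODE_PRIVATE);
prefs.edit().putBoolean("is_logged_in", true).apply();

// In MainActivity, read the value back whenever it is needed;
// the same file name and key must be used:
boolean isLoggedIn = getSharedPreferences("app_prefs", Context.MODE_PRIVATE)
        .getBoolean("is_logged_in", false); // false is the fallback default
```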
Thank you! |
2018/03/22 | 536 | 2,161 | <issue_start>username_0: When designing an HTTP RESTful API, is it OK to have parameters with default values when they are omitted? Or is that confusing?
for example:
`/posts?categories=20,21,18`
is missing a `limit` parameter, so we set the limit to `limit=100` by default
`/posts?categories=20,21,18&limit=200`
sets the limit to 200 overriding the default.
**Is it okay to have defaults for params in an API? Or will this simply confuse developers trying to understand the API? Should the responsibility for default params be on the client consuming the API?**<issue_comment>username_1: Of course it is OK. And it won't confuse devs as long as your documentation is well maintained and shows which params are required and which ones have default values. Have a look at the REST API docs of GitLab for example.
Upvotes: 0 <issue_comment>username_2: While the answer to this question largely depends on the circumstance, providing reasonable defaults is very common.
For example, we can look at how Google does things with their search. When searching for cats you can use their `q` parameter: <https://www.google.com/search?q=cats>. Google won't return all 635,000,000 results because you didn't specify a limit, they make a reasonable assumption that they can limit the results to a set number and wait for you to request more.
Looking further into your example, you really only have two options for when the client consuming your API omits the limit param:
1. Return an error
2. Set a default
Generally you want to avoid returning errors unless something actually goes wrong (for instance if the endpoint has a required field that is essential to the process).
So we set a default. In the case of the `limit` param, aside for responding with an error, there is no way to avoid setting a default. Whether you respond with every possible entry, 100 entries, 1 entry, or none, all of those are a type of default. In essence, not choosing is a choice.
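To make the server-side option concrete, here is a minimal sketch of default handling in plain Python, independent of any web framework (the names and limits are illustrative):

```python
DEFAULT_LIMIT = 100
MAX_LIMIT = 1000


def resolve_limit(query_params):
    """Return the effective limit: the client's value if present, else the default."""
    raw = query_params.get("limit")
    if raw is None:
        return DEFAULT_LIMIT  # parameter omitted -> fall back to the default
    limit = int(raw)          # a non-numeric value raises -> answer 400 Bad Request
    if not 1 <= limit <= MAX_LIMIT:
        raise ValueError("limit out of range")
    return limit


print(resolve_limit({"categories": "20,21,18"}))                  # 100
print(resolve_limit({"categories": "20,21,18", "limit": "200"}))  # 200
```

Documenting the default and the maximum next to the parameter keeps the behavior discoverable for API consumers.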
Upvotes: 2 [selected_answer]<issue_comment>username_3: It is perfectly fine to specify a default if you know that the caller of the endpoint may omit the parameter in the URL. This way client code will not break.
Upvotes: 0 |
2018/03/22 | 542 | 2,169 | <issue_start>username_0: My goal is to hide each row that contains a cell filled with red. Here is my code:
```
Sub AstInv()
    Dim myTable As ListObject
    Dim myArray As Variant
    Set myTable = ActiveSheet.ListObjects("Sheet2")
    Set myArray = myTable.ListColumns("Código de Barras2").Range
    For Each cell In myArray
        If Rng.Interior.Color = vbRed Then
            rw.EntireRow.Hidden = True
        End If
    Next
End Sub
```
Each time I run it, I get this error:
>
> Compile error: Invalid outside procedure
>
>
>
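For what it's worth, "Invalid outside procedure" usually means some statement ended up outside a `Sub`/`Function` in the module; beyond that, `Rng` and `rw` are never declared — the loop variable is `cell`. A corrected sketch (untested here; it assumes the table really is named `Sheet2` and the column header matches exactly):

```vba
Sub AstInv()
    Dim myTable As ListObject
    Dim cell As Range

    Set myTable = ActiveSheet.ListObjects("Sheet2")

    ' DataBodyRange skips the column's header cell.
    For Each cell In myTable.ListColumns("Código de Barras2").DataBodyRange
        If cell.Interior.Color = vbRed Then
            cell.EntireRow.Hidden = True
        End If
    Next cell
End Sub
```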
Please help! Thanks! |
2018/03/22 | 299 | 1,087 | <issue_start>username_0: I am in the process of migrating an NDK application from AOSP 7 to 8 and right away I have two (related) issues.
1) I used to build my module with `mm -B`, but now it seems `-B` is no longer an accepted option.
2) How can I do a clean of just my module? [This answer](https://stackoverflow.com/questions/19074431/how-to-do-an-android-mm-clean) was pre AOSP 8 and instructed doing an `mm -B` which is no longer an option.<issue_comment>username_1: The short answer is that you don't. Soong tracks dependencies properly (unlike our old make system) so you don't ever need to do the equivalent of `mm -B`. If you find any cases to the contrary that's a bug.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Google has people in charge of Android that are, literally, learning as they go, like username_1. So I would expect answers of the type "You cannot do this...because we say so". No reasoning behind it. It is one of the worst build environments to work with on the planet. In your case, I don't think there is anything you can do about it.
Upvotes: 0 |
2018/03/22 | 278 | 1,052 | <issue_start>username_0: I am trying to parse information from an HTML page which looks like this:
Column 1 | Column 2 | Column 3 ....
This is the code I have so far:
```
from bs4 import BeautifulSoup as BS
import urllib.request
html=urllib.request.urlopen(url)
soup=BS(html,"lxml")
```
But I can't seem to figure out how I can extract, say, column 1 from that HTML page and put it into a dataframe in Python. |
2018/03/22 | 780 | 2,483 | <issue_start>username_0: I'm trying to print each element individually, which is fine but also repeat each element based on position eg. "abcd" = A-Bb-Ccc-Dddd etc
So my problems are making print statements print x times based off their position in the string. I've tried a few combinations using len and range but i often encounter errors because i'm using strings not ints.
Should i be using len and range here? I'd prefer if you guys didn't post finished code, just basically how to go about that specific problem (if possible) so i can still go about figuring it out myself.
```
user_string = input()
def accum(s):
for letter in s:
pos = s[0]
print(letter.title())
pos = s[0 + 1]
accum(user_string)
```<issue_comment>username_1: You could try having a counter that will increase by 1 as the loop traverses through the string. Then, within the loop that you currently have, have another for loop to loop the size of the counter. If you want it to print out the first letter capitalized then you will need to account for that along with the dashes.
Upvotes: 0 <issue_comment>username_2: You can enumerate iterables (lists, strings, ranges, dictkeys, ...) - it provides the index and a value:
```
text = "abcdef"
for idx,c in enumerate(text):
print(idx,c)
```
Output:
```
(0, 'a')
(1, 'b')
(2, 'c')
(3, 'd')
(4, 'e')
(5, 'f')
```
You can use that to print something multiple times. The print command takes 2 optional parameters :
```
print("Bla","blubb", sep=" --->", end=" Kawumm\n")
```
Output:
```
Bla --->blubb Kawumm
```
that specify what is printed between outputs and on the end of output - you can specify an `end=""` - so you can continue printing on the same line.
Doku:
* [Print](https://docs.python.org/3/library/functions.html#print)
* [Enumerate](https://docs.python.org/3/library/functions.html#enumerate)
---
Edit:
```
user_string = input()
def accum(s):
t = [] # list to store stuff into
for count, letter in enumerate(s):
total = letter.upper() + letter * (count) # 1st as Upper, rest as is
t.append(total) # add to list
print(*t, sep="-") # the * "unpacks" the list into its parts
accum(user_string)
```
Unpacking:
```
print( [1,2,3,4,5], sep=" +++ ") # its just 1 value to print, no sep needed
print(*[1,2,3,4,5], sep=" +++ ") # 5 values to print, sep needed
```
Output:
```
[1, 2, 3, 4, 5]
1 +++ 2 +++ 3 +++ 4 +++ 5
```
Upvotes: 2 [selected_answer] |
2018/03/22 | 483 | 1,731 | <issue_start>username_0: What I'm trying to do is something like this:
```
var textareas = document.getElementsByTagName("textarea");
var inputs = document.getElementsByTagName("input");
var selects = document.getElementsByTagName("select");
var concatenated = textareas.concat(inputs).concat(selects);
for(var i = 0; i < concatenated.length; i++) {
//Do something on each concatenated[i] item
}
```<issue_comment>username_1: You can use ES6:
```
const concatenated = [...textareas, ...inputs, ...selects];
```
Upvotes: 2 <issue_comment>username_2: From ES6 onwards, you can simply [spread](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax) them into another array:
```
var concatenated = [...textareas, ...inputs, ...selects];
```
Upvotes: 2 <issue_comment>username_3: You can use `Array.prototype` to make use of `Array` methods on array-like objects. In this case, you can create a small helper function to call `slice` on your collections, turning them into arrays:
```
function toArray(collection){ return Array.prototype.slice.call(collection, 0) }
```
So your queries get structured like this instead:
```
var textareas = toArray(document.getElementsByTagName("textarea"));
```
Then, you're free to use `concat` to join them.
Even better, you can use `document.querySelectorAll` to just get them all in the first place, then loop over them:
```
var concatenated = document.querySelectorAll('textarea, input, select')
```
Upvotes: 1 <issue_comment>username_4: How about using 1 queryselector
```
var z = document.querySelectorAll("input, textarea");
//now z has input and textarea
for(i=0;i<z.length;i++)
{
alert( z[i]);
}
```
Upvotes: 3 [selected_answer] |
2018/03/22 | 683 | 2,220 | <issue_start>username_0: I am trying to understant the switchs and args in awk and sed
For instance, to get the number next to nonce in the line form the file response.xml:
```
WWW-Authenticate: ServiceAuth realm="WinREST", nonce="1828HvF7EfPnRtzSs/h10Q=="
```
I use by suggestion of another member
```
nonce=$(sed -nE 's/.*nonce="([^"]+)"/\1/p' response.xml)
```
to get the numbers next to the word idOperation in the line below I was trying :
```
idOper=$(sed -nE 's/.*idOperation="([^"]+)"/\1/p' zarph_response.xml)
```
line to extract the number:
```
{"reqStatus":{"code":0,"message":"Success","success":true},"idOperation":"185-16-6"}
```
how do I get the 185-16-6 ?
and if the data to extract has no ""
like the 1 next to operStatus ?
```
{"reqStatus":{"code":0,"message":"Success","success":true},"operStatus":1,"amountReceived":0,"amountDismissed":0,"currency":"EUR"}
```<issue_comment>username_1: You can use ES6:
```
const concatenated = [...textareas, ...inputs, ...selects];
```
Upvotes: 2 <issue_comment>username_2: From ES6 onwards, you can simply [spread](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax) them into another array:
```
var concatenated = [...textareas, ...inputs, ...selects];
```
Upvotes: 2 <issue_comment>username_3: You can use `Array.prototype` to make use of `Array` methods on array-like objects. In this case, you can create a small helper function to call `slice` on your collections, turning them into arrays:
```
function toArray(collection){ return Array.prototype.slice.call(collection, 0) }
```
So your queries get structured like this instead:
```
var textareas = toArray(document.getElementsByTagName("textarea"));
```
Then, you're free to use `concat` to join them.
Even better, you can use `document.querySelectorAll` to just get them all in the first place, then loop over them:
```
var concatenated = document.querySelectorAll('textarea, input, select')
```
Upvotes: 1 <issue_comment>username_4: How about using 1 queryselector
```
var z = document.querySelectorAll("input, textarea");
//now z has input and textarea
for(i=0;i<z.length;i++)
{
alert( z[i]);
}
```
Upvotes: 3 [selected_answer] |
2018/03/22 | 1,280 | 3,963 | <issue_start>username_0: How can I validate that a `String` encodes a valid American Social Security Number (SSN), with a pattern like `123-45-6789`, using Swift?
Validate means to check if the given string matches a pattern `123-45-6789`.<issue_comment>username_1: **Simply:**
Step1. create a method:
```
func isValidSsn(_ ssn: String) -> Bool {
let ssnRegext = "^(?!(000|666|9))\\d{3}-(?!00)\\d{2}-(?!0000)\\d{4}$"
return ssn.range(of: ssnRegext, options: .regularExpression, range: nil, locale: nil) != nil
}
```
Step2. Use it:
```
let ssn = "123-45-6789"
isValidSsn(ssn) // true
```
Hope it helps!
Upvotes: 4 [selected_answer]<issue_comment>username_2: It's best to use a custom type for this. It has a failable initializer that you use to initialize it from a String. Once you have an non-nil instance of an SSN, you know that it's a valid SSN, and you never need to check again.
I'm parsing the SSN into a `UInt32`, and serializing it back to a String whenever necessary. This prevents ARC and heap storage overhead associated with allocating and passing around strings. Here's the implementation:
```
import Foundation
struct SSN {
// Stores aaa-bb-ccc as aaabbccc
let ssn: UInt32
init?(_ string: String) {
let segments = string.split(separator: "-")
guard segments.map({ $0.count }) == [3, 2, 3] else {
debugPrint("SSN segments must be length 3, 2, 3 (e.g. 123-45-678).")
return nil
}
guard !zip(segments, ["000", "00", "000"]).contains(where: {$0.0 == $0.1}) else {
debugPrint("SSN segments cannot be all zeros.")
return nil
}
let firstSegment = segments[0]
guard firstSegment != "666", !firstSegment.hasPrefix("9") else {
debugPrint("The first SSN segment (\(firstSegment)) cannot be 666, or be in the range 900-999.")
return nil
}
let dashesRemoved = string.replacingOccurrences(of: "-", with: "")
self.ssn = UInt32(dashesRemoved)!
}
}
extension SSN: ExpressibleByStringLiteral {
init(stringLiteral literalString: String) {
self.init(literalString)!
}
}
extension SSN: CustomStringConvertible {
var description: String {
let formatter = NumberFormatter()
formatter.minimumIntegerDigits = 3
let segment1 = formatter.string(from: NSNumber(value: self.ssn / 100000))!
formatter.minimumIntegerDigits = 2
let segment2 = formatter.string(from: NSNumber(value: (self.ssn / 1000) % 100))!
formatter.minimumIntegerDigits = 3
let segment3 = formatter.string(from: NSNumber(value: self.ssn % 1000))!
return "\(segment1)-\(segment2)-\(segment3)"
}
}
```
And some tests:
```
let testSSNString = "123-45-678"
let optionalTestSSN = SSN(testSSNString)
guard let testSSN = optionalTestSSN else {
assertionFailure("Valid SSN (\(testSSNString)) wasn't successfully parsed.")
fatalError()
}
assert(testSSN.description == testSSNString, "SSN (\(testSSN)) was not correctly converted back to String.")
func assertSSNShouldBeNil(_ string: String, because reason: String) {
assert(SSN(string) == nil, reason + " should be nil.")
}
assertSSNShouldBeNil("123-45-678-9", because: "SSN with too many segment")
assertSSNShouldBeNil("123-45", because: "SSN with too few segments")
assertSSNShouldBeNil("", because: "Empty SSN")
assertSSNShouldBeNil("000-12-345", because: "SSN with all-zero segment 1")
assertSSNShouldBeNil("123-00-456", because: "SSN with all-zero segment 2")
assertSSNShouldBeNil("123-45-000", because: "SSN with all-zero segment 3")
assertSSNShouldBeNil("666-12-345", because: "SSN starting with 666")
assertSSNShouldBeNil("900-12-345", because: "SSN starting with number in range 900-999")
assertSSNShouldBeNil("999-12-345", because: "SSN starting with number in range 900-999")
```
Upvotes: 0 |
2018/03/22 | 1,427 | 4,668 | <issue_start>username_0: I can't drop databases or collections using Hibernate OGM. I've tried using these native queries but an exception is thrown for both
```
entityManagerFactory = Persistence.createEntityManagerFactory("myPersistence-unit");
EntityManager entityManager = openEntityManager( entityManagerFactory);
entityManager.getTransaction().begin();
String queryDropCollection = "db.Person.drop()";
String queryDropDB = "db.dropDatabase()";
entityManager.createNativeQuery(queryDropCollection).executeUpdate();
entityManager.createNativeQuery(queryDropDB).executeUpdate();
entityManager.getTransaction().commit();
entityManager.close();
```
The exception for dropping the collection:
```
Exception in thread "main" com.mongodb.util.JSONParseException:
db.Person.drop()
^
```
The exception for dropping the database:
```
Exception in thread "main" com.mongodb.util.JSONParseException:
db.dropDatabase()
^
```
2018/03/22 | 1,612 | 5,285 | <issue_start>username_0: I have a following example simple page:
**App.js**:
```
export default class App extends React.Component {
render() {
return
};
};
```
**ArticlesPage.js**:
```
export default class ArticlesPage extends React.Component {
constructor(props) {
super(props);
}
render() {
return
articles
;
}
};
```
**SearchPage.js**:
```
export default class SearchPage extends React.Component {
constructor(props) {
super(props);
const {q} = queryString.parse(location.search);
this.state = {
query: q
};
}
render() {
return
search {this.state.query}
;
}
};
```
**SearchBox.js**:
```
export default class SearchBox extends React.Component {
constructor(props) {
super(props);
this.state = {
q: ''
};
}
onFormSubmit = (e) => {
e.preventDefault();
const {router} = this.context;
router.history.push('/search?q=' + this.state.q);
};
handleChange = (e) => {
this.setState({q: e.target.value});
};
render() {
return
this.searchInput = i} onChange={this.handleChange} />
;
}
};
```
Now, when I'm on the index page and type something in the input and submit the form, React renders SearchPage.js and correctly shows the text `search *what I typed*`. But if I then type something else and submit the form again, React still shows my previous text (it does not re-render).
What can be wrong with this simple page?
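(For the React question above.) The `SearchPage` constructor runs only once — when the route first mounts — so state copied from `location.search` there never updates on later navigations; the fix is to derive the query on every render instead of caching it in the constructor. A sketch, where `parseQuery` is a minimal hypothetical stand-in for the `query-string` package's `parse` used in the question:

```javascript
// Minimal stand-in for queryString.parse from the 'query-string' package.
function parseQuery(search) {
  const params = {};
  search.replace(/^\?/, "").split("&").forEach(pair => {
    const [key, value] = pair.split("=");
    if (key) params[key] = decodeURIComponent(value || "");
  });
  return params;
}

// Sketch of the fixed component: read the query in render(), not the constructor.
// render() {
//   const { q } = parseQuery(this.props.location.search);
//   return <div>search {q}</div>;
// }
```

Alternatively, keep the state but update it when the router passes new props (e.g. in `componentWillReceiveProps` / `getDerivedStateFromProps`); deriving directly from props simply avoids the stale-state problem entirely.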
2018/03/22 | 1,407 | 4,274 | <issue_start>username_0: I have a bootstrap image gallery with lightbox and I'd like to show the close button X on the modal lightbox. This is my HTML:
```
[![](https://farm1.staticflickr.com/665/23578574946_b6f90e1ca8_k.jpg)](https://farm1.staticflickr.com/665/23578574946_b6f90e1ca8_k.jpg)
```
And this is my JS:
```
$(document).on('click', '[data-toggle="lightbox"]', function(event) {
event.preventDefault();
$(this).ekkoLightbox();
})
```
I know, I should set 'alwaysShowClose' to 'true' but I don't understand where.
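(For the lightbox question above.) `alwaysShowClose` goes in the options object passed to the `ekkoLightbox()` call inside the click handler — a sketch, assuming ekko-lightbox accepts an options object as documented:

```javascript
// Options for ekko-lightbox; alwaysShowClose keeps the X button visible.
const lightboxOptions = { alwaysShowClose: true };

// $(document).on('click', '[data-toggle="lightbox"]', function (event) {
//   event.preventDefault();
//   $(this).ekkoLightbox(lightboxOptions);   // <-- options are passed here
// });
```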
2018/03/22 | 1,516 | 4,302 | <issue_start>username_0: I was inspired by this [post](https://stackoverflow.com/questions/8523374/mysql-get-most-recent-record?answertab=votes#tab-top). But what I'm going to solve is more complex.
In the table below we have three columns, `id`,`rating`,`created`, call it `test_table`,
```
+----+--------+----------------------+
| id | rating | created |
+----+--------+----------------------+
| 1 | NULL | 2011-12-14 09:25:21 |
| 1 | 2 | 2011-12-14 09:26:21 |
| 1 | 1 | 2011-12-14 09:27:21 |
| 2 | NULL | 2011-12-14 09:25:21 |
| 2 | 2 | 2011-12-14 09:26:21 |
| 2 | 3 | 2011-12-14 09:27:21 |
| 2 | NULL | 2011-12-14 09:28:21 |
| 3 | NULL | 2011-12-14 09:25:21 |
| 3 | NULL | 2011-12-14 09:26:21 |
| 3 | NULL | 2011-12-14 09:27:21 |
| 3 | NULL | 2011-12-14 09:28:21 |
+----+--------+----------------------+
```
I want to write a query which selects the most recent non-`null` `rating` for every `id`. If all of the ratings are `null` for a specific `id`, we select the most recent row anyway. The desired result is as follows:
```
+----+--------+----------------------+
| id | rating | created |
+----+--------+----------------------+
| 1 | 1 | 2011-12-14 09:27:21 |
| 2 | 3 | 2011-12-14 09:27:21 |
| 3 | NULL | 2011-12-14 09:28:21 |
+----+--------+----------------------+
```<issue_comment>username_1: ```
select a.* from #test a join (select id, max(created) created
from #test
where rating is not null
group by id )b on a.id=b.id and a.created=b.created
union
select a.* from #test a join
(select id, max(created) created
from #test
where rating is null
and id not in
(select id from (select id, max(created) created
from #test
where rating is not null
group by id )d
group by id)
group by id )b on a.id=b.id and a.created=b.created
```
Upvotes: 0 <issue_comment>username_2: The following gets the creation date:
```
select t.id,
coalesce(max(case when rating is not null then creation_date end),
creation_date
) as creation_date
from t
group by t.id;
```
You can then do this as:
```
select t.*
from t
where (id, creation_date) in (select t.id,
coalesce(max(case when rating is not null then creation_date end),
creation_date
) as creation_date
from t
group by t.id
);
```
Upvotes: 1 <issue_comment>username_3: One possible answer is this. Create a list of max(create) date per id and id having all NULL rating.
```
select t1.*
from myTable t1
join (
select id, max(created) as created
from myTable
where rating is not NULL
group by id
UNION ALL
select id, max(created) as created
from myTable t3
where rating is NULL
group by id
having count(*) = (select count(*) from myTable t4 where t4.id=t3.id)
) t2
where t1.id=t2.id
and t1.created=t2.created
order by t1.id;
```
Upvotes: 1 [selected_answer]<issue_comment>username_4: This query should work:
```
select a.id, a.rating, b.m from test_table a
join (
select id, max(created) as m from test_table
where rating is not null
group by id
) b on b.id = a.id and b.m = a.created
union
select a.id, a.rating, b.m from test_table a
join(
select id, max(created) as m from test_table a
where not exists
(select 1 from test_table b where a.id = b.id and b.rating is not null)
group by id
)b on b.id = a.id and b.m = a.created
```
Upvotes: 0 <issue_comment>username_5: You can get the `created` value in a correlated `LIMIT 1` subquery:
```
select t.id, (
select created
from mytable t1
where t1.id = t.id
order by rating is null asc, created desc
limit 1
) as created
from (select distinct id from mytable) t
```
If you also need the `rating` column, you will need to join the result with the table again:
```
select t.*
from (
select t.id, (
select created
from mytable t1
where t1.id = t.id
order by rating is null asc, created desc
limit 1
) as created
from (select distinct id from mytable) t
) x
natural join mytable t
```
Demo: <http://sqlfiddle.com/#!9/49e68c/8>
Upvotes: 0 |
2018/03/22 | 1,342 | 4,579 | <issue_start>username_0: I read somewhere that to find the real latency of the ram you can use the following rule :
```
1/((RAMspeed/2)/1000) x CL = True Latency in nanoseconds
```
e.g. for DDR1 with a 400 MHz data rate, it seems logical to me to divide by 2 to get the FSB speed, or the real bus speed, which is 200 MHz in this case. So the rule above seems to be correct for DDR1.
From the other side, DDR2 also doubles the freq of the bus in relation to the previous DDR1 generation (ie 4 bits per clock cycle) according the article "*What Every Programmer Should Know About Memory*".
So, in the case of a DDR2 with a 800Mhz clock speed, to find the "True Latency" the above rule should be accordingly changed to
```
1/((RAMspeed/4)/1000) x CL = True Latency in nanoseconds
```
Is that correct? Because in all the cases I read that the correct way is to take `**RAMspeed/2**` no matter if it's DDR, DDR2, DDR3 or DDR4.
Which is the correct way to get the true latency?<issue_comment>username_1: Ok I found the answer.
Every time the manufacturers increased the memory clock speed, they did it at a **constant ratio**: it was always *double (2x) the FSB clock speed*, i.e.
```
MEM CLK FSB
-------------------
DDR200 100 MHz
DDR266 133 MHz
DDR333 166 MHz
DDR400 200 MHz
DDR2-400 200 MHz
DDR2-533 266 MHz
DDR2-667 333 MHz
DDR2-800 400 MHz
DDR2-1066 533 MHz
DDR3-800 400 MHz
DDR3-1066 533 MHz
DDR3-1333 666 MHz
DDR3-1600 800 MHz
```
So, the memory module has always the double speed of the FSB.
Upvotes: 1 [selected_answer]<issue_comment>username_2: **The CAS latency is in memory-bus clock cycles. This is always one half the transfers-per-second number**. e.g. DDR3-1600 has a memory clock of 800MHz, doing 1600M transfers per second (during a burst transfer).
DDR2, DDR3, and DDR4 still use a double-pumped 64-bit memory bus (transferring data on the rising and falling edges of the clock signal), not quad-pumped. This is why they're still called [Double Data-Rate (DDR) SDRAM](https://en.wikipedia.org/wiki/DDR_SDRAM).
---
**The FSB speed has nothing to do with it**.
On old CPUs without integrated memory controllers, i.e. systems that actually *have* an FSB, its frequency is often configurable (in the BIOS) separately from the memory speed. See [Front Side Bus and RAM speed](https://superuser.com/questions/343192/front-side-bus-and-ram-speed); on even older systems, the FSB and memory clocks were synchronous.
Normally systems were designed with a fast enough FSB to keep up with the memory controller. Running the FSB at the same clock speed as the memory can reduce latency by avoiding buffering between clock domains.
---
So yes, **the CAS latency in seconds is `cycle_count / frequency`**, or more like your formula
`1000ns/us * CL / RAMspeed * 2 transfers/clock`, where RAMspeed is in mega-transfers per second.
Higher CL numbers at a higher memory frequency often work out to a similar absolute latency (in seconds). In other words, modern RAM has higher CAS latency timing numbers because more clock cycles happen in the same amount of time.
Bandwidth has vastly improved, while latency has stayed nearly constant, [according to these graphs from Crucial](http://www.crucial.com/usa/en/memory-performance-speed-latency) which explain CL vs. frequency.
---
Of course **this is not "the memory latency", or the "true" memory latency**.
It's the CAS latency of the DRAM itself, and is the most important factor in latency between the memory controller and the DRAM, but **is only a part of the latency between a CPU core and memory**. There is non-negligible latency inside the CPU between the core and uncore (L3 and memory controller). Uncore is Intel terminology; IDK what AMD calls the parts of the memory hierarchy in their various microarchitectures.
Especially many-core Xeon CPUs have significant latency to L3 / memory controller, because of the large ring bus(es) connecting all the cores. A many-core Xeon has worse L3 and memory latency than a similar dual or quad-core with the same memory and CPU clock frequencies.
This extra latency actually limits single-thread / single-core bandwidth on a big Xeon to worse than a laptop CPU, because a single core can't keep enough requests in flight to fill the memory pipeline with that much latency. [Why is Skylake so much better than Broadwell-E for single-threaded memory throughput?](https://stackoverflow.com/questions/39260020/why-is-skylake-so-much-better-than-broadwell-e-for-single-threaded-memory).
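To make the CAS-latency arithmetic above concrete, a small sketch (using DDR3-1600 CL9 and DDR-400 CL3 as the examples):

```python
def cas_latency_ns(transfer_rate_mt_s, cas_cycles):
    """CAS latency in ns: CL cycles divided by the memory clock,
    where the clock (MHz) is half the transfer rate (MT/s) for every DDR generation."""
    clock_mhz = transfer_rate_mt_s / 2.0
    return cas_cycles / clock_mhz * 1000.0

print(cas_latency_ns(1600, 9))  # DDR3-1600 CL9 -> 11.25 ns
print(cas_latency_ns(400, 3))   # DDR-400  CL3 -> 15.0 ns
```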
Upvotes: 2 |
2018/03/22 | 1,291 | 4,209 | <issue_start>username_0: ```
df1 <- data_frame(time1 = c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9),
time2 = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10),
id = c("a", "b", "c", "d", "e", "f", "g", "h", "i", "j"))
df2 <- data_frame(time = sort(runif(100, 0, 10)),
C = rbinom(100, 1, 0.5))
```
For every row in df1, I want to find the rows in df2 that overlap for time, then assign the median C value for this group of df2 rows to a new column in df1. I'm sure there's some simple way to do this with dplyr's between function, but I'm new to R and haven't been able to figure it out. Thanks!
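(For the dplyr question above.) In R this can be done with a rowwise mutate plus `between`, roughly `df1 %>% rowwise() %>% mutate(medC = median(df2$C[between(df2$time, time1, time2)]))` — an untested sketch. The underlying per-row logic, illustrated in plain Python with small stand-in data:

```python
import statistics

# Stand-ins for the data frames in the question.
df1 = [(0, 1, "a"), (1, 2, "b")]                  # (time1, time2, id)
df2 = [(0.2, 1), (0.5, 0), (0.9, 1), (1.5, 0)]    # (time, C)

def median_c(row, observations):
    """Median of C over the df2 rows whose time falls in [time1, time2]."""
    lo, hi, _ = row
    in_window = [c for t, c in observations if lo <= t <= hi]
    return statistics.median(in_window) if in_window else None

print([median_c(r, df2) for r in df1])  # [1, 0]
```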
2018/03/22 | 801 | 2,947 | <issue_start>username_0: I have the following class **Room** and the enum class **Direction**, how would I write a method to pick a random key from the HashMap **exits** and return it, or return `null` if there are no exits in that Room?
```
public class Room
{
private String description;
private HashMap exits; // stores exits of this room.
public Set chars; // stores the characters that are in this room.
/\*\*
\* Create a room described "description". Initially, it has
\* no exits. "description" is something like "a kitchen" or
\* "an open court yard".
\* @param description The room's description.
\* Pre-condition: description is not null.
\*/
public Room(String description)
{
assert description != null : "Room.Room has null description";
this.description = description;
exits = new HashMap();
chars = new HashSet();
sane();
}
/\*\*
\* Define an exit from this room.
\* @param direction The direction of the exit.
\* @param neighbor The room to which the exit leads.
\* Pre-condition: neither direction nor neighbor are null;
\* there is no room in given direction yet.
\*/
public void setExit(Direction direction, Room neighbor)
{
assert direction != null : "Room.setExit gets null direction";
assert neighbor != null : "Room.setExit gets null neighbor";
assert getExit(direction) == null : "Room.setExit set for direction that has neighbor";
sane();
exits.put(direction, neighbor);
sane();
assert getExit(direction) == neighbor : "Room.setExit has wrong neighbor";
}
```
Direction:
```
public enum Direction
{
NORTH("north"), WEST("west"), SOUTH("south"), EAST("east");
private String name;
/**
* Constructor with parameter.
* Pre-condition: name is not null.
*/
private Direction(String name)
{
assert name != null : "Direction.Direction has null name";
this.name = name;
assert toString().equals(name) : "Direction.Direction produces wrong toString";
}
/**
* Return the direction name.
*/
public String toString()
{
return name;
}
```
}<issue_comment>username_1: Something like that:
```
private Direction getRandomDirection() {
if (exits.isEmpty()) {
return null;
}
int rnd = new Random().nextInt(exits.keySet().size());
return new ArrayList<>(exits.keySet()).get(rnd);
}
```
Upvotes: -1 <issue_comment>username_2: **Note :**
Since your map key is the enum Direction, you should always get values using that same key type; otherwise `get` will return null.
Example: `map.get(Direction.NORTH);`
**Solution**
```
Step 1: Set<Direction> directions = exits.keySet();
Step 2: if (directions.isEmpty()) { return null; } // check if there is no exit
Step 3: Object[] dirArray = directions.toArray();
Step 4: Random random = new Random();
        int randIndex = random.nextInt(dirArray.length); // nextInt(n) already excludes n
Step 5: System.out.println(dirArray[randIndex]); // random Direction here
```
Let me know if you have any doubt, I have tested above code snippet
Upvotes: 0 |
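Combining the answers above into a compilable sketch (the method name `randomExit` is hypothetical, and the key type is made generic so the snippet stands alone without the `Direction` enum):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Random;

class RandomExitSketch {
    /** Returns a uniformly random key from exits, or null if there are none. */
    static <K, V> K randomExit(HashMap<K, V> exits, Random rng) {
        if (exits.isEmpty()) {
            return null;                            // no exits in this room
        }
        List<K> keys = new ArrayList<>(exits.keySet());
        return keys.get(rng.nextInt(keys.size()));  // nextInt(n) yields 0..n-1
    }
}
```

Inside `Room` itself this would be a `public Direction getRandomExit()` over the `exits` field, preferably reusing a single `Random` instance rather than creating one per call.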
2018/03/22 | 575 | 2,100 | <issue_start>username_0: *With Android Studio 3.0 / android\_gradle\_version = '3.0.1' / gradle-4.5*
Let's say I have two android modules
```
module-base
module-a
```
When I want to access sources from module-base in module-a , I just need to write this in my
module-a.gradle
```
dependencies {
implementation project(path: ':module-base')
}
```
**But, what if I want to access test sources from module-base in test of module-a**?
*Here does not work approach like above*
```
dependencies {
testImplementation project(path: ':module-base')
}
```
I found lot of advices (few years old) which says something like
```
compileTestJava.dependsOn tasks.getByPath(':module-base:testClasses')
testCompile files(project(':module-base').sourceSets.test.output.classesDir)
```
or
`testCompile project(':module-base).sourceSets.test.classes`
But no one from mentioned works. There is always something wrong from the compiler point of view :-/
**Can someone help me create an Android test-code dependency between two modules?**<issue_comment>username_1: Actually I found just a workaround for this. Don't use test sources from **module-base**; instead use sources from a test-related module, **module-testutils**, which is defined like this
```
dependencies{
testImplementation project(':module-testutils')
}
```
Thus you can share common test code which will be excluded for non-testable apk.
Upvotes: 3 <issue_comment>username_2: In case anyone else ends up here, one way to accomplish this is to define the target module as a source set. Let's assume **test-module** is the module we want to import stuff from; we can do it this way:
```java
android {
sourceSets {
// non-instrumental test example
test {
java.srcDir project(":test-module").file("src/test/java")
}
// instrumental test example
androidTest {
java.srcDir project(":test-module").file("src/androidTest/java")
}
}
}
```
Reference: <https://jonnycaley.medium.com/sharing-test-code-in-a-multi-module-android-project-7ce4de788016>
Upvotes: 4 |
2018/03/22 | 658 | 2,511 | <issue_start>username_0: I do not have the best understanding of SQL, doing my best to learn. I have built two separate queries, and I am looking to take the result from query 1 and divide it by query 2. I have used the union statement with my two queries.
```
select
*
from
(select count(*) as Numerator
from
(select
*,
datediff(second, yy, xx) as SecondDiff,
datediff(day, yy, xx) as DayDiff
from
database1.dbo.table1
where
month(datecompleted) = month(dateadd(month, -1, current_timestamp))
and year(datecompleted) = year(dateadd(month, -1, current_timestamp))
and datediff(day, yy,xx) <= 15) temptable
union
select count(*) as Denominator
from
(select
*,
datediff(second, yy, xx) as SecondDiff,
datediff(day, yy, xx) as DayDiff
from
database2.dbo.table2
where
month(datecompleted) = month(dateadd(month, -1, current_timestamp))
and year(datecompleted) = year(dateadd(month, -1, current_timestamp))) temptable1
) finaltable
```
When this query is executed, I get the following results;
```
Numerator
-----------
114
131
```
I want to divide 114 by 131 and display a new column named 'x' as the result.
All tips and advice are greatly appreciated.
Thank you
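(For the division question above.) Rather than `UNION`-ing the two counts into separate rows, each count can be a scalar subquery, so the division happens in one `SELECT`; multiplying by `1.0` avoids integer division. A runnable sketch against SQLite — the same shape works in SQL Server with your two filtered queries placed inside the subqueries:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE table1 (id INTEGER)")
cur.execute("CREATE TABLE table2 (id INTEGER)")
cur.executemany("INSERT INTO table1 VALUES (?)", [(i,) for i in range(114)])
cur.executemany("INSERT INTO table2 VALUES (?)", [(i,) for i in range(131)])

# Numerator and denominator as scalar subqueries; 1.0 forces float division.
(x,) = cur.execute(
    "SELECT (SELECT COUNT(*) FROM table1) * 1.0"
    "     / (SELECT COUNT(*) FROM table2) AS x"
).fetchone()
print(round(x, 4))  # 0.8702
```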
2018/03/22 | 663 | 2,117 | <issue_start>username_0: How can I save the state of my toggle button using userdefaults? Here is the code, please help me.
```
import UIKit
class LawFigure: UIViewController {
//Outlets:
@IBOutlet weak var likeButton: UIButton!
//Variables:
var isButtonOn = false
override func viewDidLoad() {
super.viewDidLoad()
customiseUI()
}
func customiseUI() {
likeButton.layer.borderWidth = 3.0
likeButton.layer.borderColor = UIColor(red: 0/255, green: 166/255, blue: 221/255, alpha: 1.0).cgColor
likeButton.layer.cornerRadius = 20.0
likeButton.clipsToBounds = true
}
@IBAction func followActionPressed(_ sender: Any) {
self.activateButton(bool: !isButtonOn)
}
func activateButton(bool: Bool) {
isButtonOn = bool
likeButton.backgroundColor = bool ? UIColor(red: 0/255, green: 166/255, blue: 221/255, alpha: 1.0) : .clear
likeButton.setTitle(bool ? "unlike": "like", for: .normal)
likeButton.setTitleColor(bool ? .white : UIColor(red: 0/255, green: 166/255, blue: 221/255, alpha: 1.0), for: .normal)
}
}
```
2018/03/22 | 624 | 2,039 | <issue_start>username_0: This is a follow-up on [this question](https://stackoverflow.com/questions/19125091/pandas-merge-how-to-avoid-duplicating-columns?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa)
I have two dataframes that I want to merge, but I want to avoid having duplicate columns, so I'm doing:
```
cols_to_use = df2.columns-df1.columns
```
If I print cols\_to\_use I get this:
```
Index([col1,col2,col3...],dtype=object)
```
However, there is one column that I need to keep in both dfs: co\_code. That's because I'm going to merge on that column.
My question is: how to add one extra column to cols\_to\_use? I need it to look like this:
```
Index([co_code,col1,col2,col3...],dtype=object)
```
I tried different syntaxes, but nothing seemed to work:
```
cols_to_use = df2.columns-df1.columns+'co_code'
cols_to_use = df2.columns-df1.columns+['co_code']
cols_to_use = df2.columns-df1.columns+df2['co_code'].columns
``` |
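For reference, a sketch of one way to do what the question asks with pandas: `Index.difference` for the columns unique to `df2`, plus a plain list concatenation for the merge key. The sample frames below are made up for illustration:

```python
import pandas as pd

df1 = pd.DataFrame({"co_code": [1, 2], "col0": ["a", "b"]})
df2 = pd.DataFrame({"co_code": [1, 2], "col0": ["a", "b"],
                    "col1": [10, 20], "col2": [30, 40]})

# Columns that exist only in df2, plus the shared merge key:
cols_to_use = df2.columns.difference(df1.columns).tolist() + ["co_code"]
print(cols_to_use)  # ['col1', 'col2', 'co_code']

# Merge without duplicating the overlapping columns:
merged = df1.merge(df2[cols_to_use], on="co_code")
print(sorted(merged.columns))  # ['co_code', 'col0', 'col1', 'col2']
```

Note that `Index.difference` returns a sorted `Index`, so converting it with `.tolist()` before appending the key keeps everything as an ordinary list.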
2018/03/22 | 2,206 | 7,479 | <issue_start>username_0: I want to compute the maximum value of a function in python, then print the maximum value, as well as values of the variables at that maximum.
I think I have gotten pretty far. In my example below, the function is just the values from `x` multiplied by the values from `y` multiplied by the values from `z` (in theory this function could be anything). The maximum of which would be `3 * 6 * 9 = 162`.
My problem is: if these lists are hundreds of numbers long, I don't really want to print the entire list of inputs, which could be a massive list hundreds of thousands of entries long.
I am only interested in the final set of values (so `3 6 9` here), but I just can't seem to work out how to print only this set.
Any help would be greatly appreciated.
Thanks!
```
p=[1,2,3]
q=[6,5,4]
r=[7,8,9]
def f(x,y,z):
    maximum=0
    for a in x:
        for b in y:
            for c in z:
                if a*b*c > maximum:
                    maximum = a*b*c
                    print (a,b,c)
    print (maximum)
f(p,q,r)
```
Result:
```
1 6 7
1 6 8
1 6 9
2 6 7
2 6 8
2 6 9
3 6 7
3 6 8
3 6 9
162
```<issue_comment>username_1: Turns out writing your question out is a good way of solving it. Thanks anyway!
```
def f(x,y,z):
    maximum=0
    for a in x:
        for b in y:
            for c in z:
                if a*b*c > maximum:
                    j=(a,b,c)
                    maximum = a*b*c
    print (maximum)
    print (j)
```
Upvotes: 2 <issue_comment>username_2: Here is a solution that is a bit more streamlined and `pythonic`. Since you want this to be a learning example, I have also commented heavily so you can follow along:
```
# importing this is necessary in python3, as `reduce` was moved into functools
import functools

p = [1,2,3]
q = [6,5,4]
r = [7,8,9]

def f(x,y,z):
    # 1. Combine the list arguments together into 1 list "[x,y,z]"
    # 2. Iterate over each list in the group of lists "for li in [x,y,z]"
    # 3. Get the max number within each list "max(li)"
    # 4. Assign the resulting list to the maxv variable "maxv = ..." yielding [3,6,9]
    maxv = [max(li) for li in [x,y,z]]
    # Print the 3 largest integers used to multiply next
    print( maxv )
    # Use reduce to multiply the list elements by each other
    product = functools.reduce(lambda x,y: x*y, maxv)
    # Print the output = 162 in this case
    print( product )

f(p,q,r)
```
Update
======
In case you don't know how many lists of integers are going to be passed into the function (perhaps more than 3), you could iterate over the entire argument tuple and make that your list instead of hard-coding `[x,y,z]`, like so:
```
p = [1,2,3]
q = [6,5,4]
r = [7,8,9]
s = [8,9,10]
t = [9,10,11]

def d(*args):
    maxv = [max(li) for li in [x for x in args]]
    print( maxv )
    product = functools.reduce(lambda x,y: x*y, maxv)
    print( product )

d(p,q,r)        # yields 162
d(p,q,r,s)      # yields 1620
d(p,q,r,s,t)    # yields 17820
```
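On Python 3.8+, the same reduction can also be written with `math.prod` instead of `functools.reduce`; a small, self-contained variant of the function above (returning instead of printing):

```python
import math

def d(*args):
    # max of each list, then the product of those maxima
    maxv = [max(li) for li in args]
    return math.prod(maxv)

print(d([1, 2, 3], [6, 5, 4], [7, 8, 9]))              # 162
print(d([1, 2, 3], [6, 5, 4], [7, 8, 9], [8, 9, 10]))  # 1620
```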
Upvotes: 2 <issue_comment>username_3: The easiest way to do it would be to keep track of the values in the same way that you keep track of the current maximum:
```
def f(x,y,z):
    maximum = 0
    for a in x:
        for b in y:
            for c in z:
                if a*b*c > maximum:
                    maximum = a*b*c
                    a_max = a
                    b_max = b
                    c_max = c
    print(a_max, b_max, c_max)
    print(maximum)
```
You can make your code more robust by using a technique different than setting `maximum = 0` in the beginning. For example, what if your list only has negative numbers? The result will be wrong. You can use what is called a sentinel value, like `None`, to indicate the first iteration. A sentinel is a marker that indicates that an "invalid" value. That will ensure that the correct maximum is always found:
```
def f(x,y,z):
    maximum = None
    for a in x:
        for b in y:
            for c in z:
                if maximum is None or a*b*c > maximum:
                    maximum = a*b*c
                    abc_max = (a, b, c)
    print(*abc_max, sep=', ')
    print(maximum)
```
You can improve the design of your code by *returning* the numbers you are interested in, instead of just printing them. That will allow you to manipulate them in any way you want, including printing them, afterwards. Python allows multiple return values using tuples, so you can do something like this:
```
def f(x,y,z):
    maximum = None
    for a in x:
        for b in y:
            for c in z:
                if maximum is None or a*b*c > maximum:
                    maximum = a*b*c
                    abc_max = (a, b, c)
    return maximum, abc_max

max, pqr = f(p, q, r)
print(*pqr, sep=', ')
print(max)
```
An optimization you may want to consider is not evaluating your "function" more than once. This is especially important for more complex cases and functions that don't return the same value twice. You can replace the beginning of the `if` block with something like
```
...
current_value = a * b * c
if current_value > maximum:
    maximum = current_value
...
```
If you are really interested in making this work on arbitrary functions, you can use Python to do that. Functions are objects, and can be passed around just like lists, numbers and strings:
```
def f(fn, x, y, z):
    maximum = None
    for a in x:
        for b in y:
            for c in z:
                current = fn(a, b, c)
                if maximum is None or current > maximum:
                    maximum = current
                    abc_max = (a, b, c)
    return maximum, abc_max

max, pqr = f(lambda x, y, z: x * y * z, p, q, r)
```
If you are interested in further generalizations, you can take a look at [`itertools`](https://docs.python.org/3/library/itertools.html), which has the function [`product`](https://docs.python.org/3/library/itertools.html#itertools.product) for running a "flattened" version of your nested `for` loop. This will allow you to accept any number of input iterables, not just three. Python allows an arbitrary number of positional arguments using a `*` in the argument list. The argument with the `*` in its name will be a tuple of all the remaining positional arguments. It can be used in a lambda as well:
```
from functools import reduce
from itertools import product
from operator import mul

def f(fn, *iterables):
    max_values = max(product(*iterables), key=lambda x: fn(*x))
    return fn(*max_values), max_values

max, pqr = f(lambda *args: reduce(mul, args), p, q, r)
```
You could consider adding a comparator to do something other than computing the maximum. However, that would actually be overkill, since you can convert any other operation to a maximum computation using a properly implemented `fn`. That is the same logic to implementing Pythons sort functions in terms of a key instead of a comparator. In the example above, you could find the minimum value by flipping the sign of the key: `lambda *args: -reduce(mul, args)`. You could find the combination whose product is closest to it's sum with something like `lambda *args: -abs(sum(args) - reduce(mul, args))`. The possibilities are vast.
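To make the key-flipping idea concrete, here is a small self-contained demo (the generalized `f` is restated so the snippet runs on its own; the variable names are just illustrative):

```python
from functools import reduce
from itertools import product
from operator import mul

def f(fn, *iterables):
    # Pick the combination that maximizes fn over the cartesian product.
    max_values = max(product(*iterables), key=lambda x: fn(*x))
    return fn(*max_values), max_values

p, q, r = [1, 2, 3], [6, 5, 4], [7, 8, 9]

# Maximum product, as before:
best, combo = f(lambda *args: reduce(mul, args), p, q, r)
print(best, combo)      # 162 (3, 6, 9)

# Minimum product, by flipping the sign of the key:
neg, combo_min = f(lambda *args: -reduce(mul, args), p, q, r)
print(-neg, combo_min)  # 28 (1, 4, 7)
```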
Here is a link to an IDEOne example demonstrating the last version shown above: <https://ideone.com/QNj55T>. IDEOne is a handy tool for showing off or testing small snippets. I have no affiliation with it but I use it quite often.
Upvotes: 3 [selected_answer] |
2018/03/22 | 1,121 | 4,189 | <issue_start>username_0: I am a newbie to Python. Instead of asking the user to give an input file with a path, how can I automatically select input files one after another from a folder? I.e., it should pick the first image file from the folder, do some processing, then pick the 2nd file, then the 3rd file, and so on until all the files in that folder have been processed, and perform an action when a condition from a called function is satisfied.
I am trying compare\_images:
```
def compare_images(img1, img2):
    # normalize to compensate for exposure difference
    img1 = to_grayscale(imread(img1).astype(float))
    img2 = to_grayscale(imread(img2).astype(float))
    img1 = normalize(img1)
    img2 = normalize(img2)
    # calculate the difference and its norms
    diff = img1 - img2  # elementwise for scipy arrays
    m_norm = sum(abs(diff))  # Manhattan norm
    s = m_norm/img1.size
    return s
```
This is where I'm calling the compare\_images function. But this doesn't work: it either simply runs and stops without producing any output, even on the console, or it throws an error saying it is unable to find the file even when the file exists. I feel I'm going wrong in my approach. Help.
```
path=os.getcwd()
folder1 = os.listdir(path)
folder2 = os.path.join(path,"cd\\")
for filename1 in folder1:
    for filename2 in os.listdir(folder2):
        if filename1.endswith(".jpg"):
            s = compare_images(filename1,filename2)
            print(s)
            if s > 10:
                shutil.copy(filename1,folder2)
```
Please point out where I'm going wrong. How do I copy files only when a condition is met, where that condition is drawn from another function?<issue_comment>username_1: Take a look at the link I suggested: [How can I iterate over files in a given directory?](https://stackoverflow.com/questions/10377998/how-can-i-iterate-over-files-in-a-given-directory)
Adapted from that answer:
```
directory = "/some/directory/with/images/"
for filename in os.listdir(directory):
    if filename.endswith(".png") or filename.endswith(".jpg"):
        # do image processing for the file here
        # instead of just printing the filename
        print(filename)
```
Upvotes: 1 <issue_comment>username_1: Based on the changes to the original question, and the clarifying comments below it, let me attempt another answer.
Assuming that your `compare_images` function works as expected (I cannot verify this), the following code should do what you are looking for. This checks if all of the images in the target folder are above the comparison threshold. If so, the file needs to be copied (it is a different image). If not, there's at least one similar-enough image, so the current one is skipped.
Note that performance may suffer for a large number of images. Depending on the complexity of the `compare_images` computations, and the number of images it needs to process, this may not be the most efficient solution.
```
# [...other code... (e.g. imports, etc)]

def compare_images(img1, img2):
    # normalize to compensate for exposure difference
    img1 = to_grayscale(imread(img1).astype(float))
    img2 = to_grayscale(imread(img2).astype(float))
    img1 = normalize(img1)
    img2 = normalize(img2)
    # calculate the difference and its norms
    diff = img1 - img2  # elementwise for scipy arrays
    m_norm = sum(abs(diff))  # Manhattan norm
    s = m_norm/img1.size
    return s

def above_threshold(img1, img2):
    s = compare_images(img1, img2)
    return s > 10

def process_files():
    folder1 = os.getcwd()
    folder2 = os.path.join(folder1, "cd")
    print("Folder1: " + folder1)
    print("Folder2: " + folder2)
    for filename1 in os.listdir(folder1):
        print("File: " + filename1)
        if filename1.endswith(".png"):
            if all(above_threshold(filename1, filename2) for filename2 in os.listdir(folder2)):
                print("  Copying (similar image was not found)")
                shutil.copy(filename1, folder2)
            else:
                print("  Skipping (found similar image)")
        else:
            print("  Skipping (not a png file)")
```
Upvotes: 1 [selected_answer] |
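As a footnote on the directory-iteration half of the question, here is a `pathlib`-based sketch of the same loop; the helper name and the extension set are my own choices for illustration:

```python
from pathlib import Path

def iter_images(folder):
    """Yield image paths in `folder` in a stable (sorted) order."""
    extensions = {".png", ".jpg", ".jpeg"}
    for path in sorted(Path(folder).iterdir()):
        if path.is_file() and path.suffix.lower() in extensions:
            yield path

# Example: process every image in the current directory, one after another.
for image_path in iter_images("."):
    print(image_path.name)
```

Sorting the entries gives a deterministic "first file, second file, ..." order, which `os.listdir` does not guarantee on its own.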
2018/03/22 | 1,033 | 3,957 | <issue_start>username_0: Here is my code:
```
Labs
var left = $('.entry-content').width();
$('.entry-content').scrollLeft(left);
![](map.png)
```
I want the scroll to display to the far right automatically when the user visits the page. Currently it scrolls to the left by default. Please tell me what's wrong with my code.<issue_comment>username_1: generally with positional situations like this you'd just say `var left = $('.entry-content').width() * -1;`, does that not work in this case?
Upvotes: 0 <issue_comment>username_2: You should execute your code when the page has finished loading (and so the image width is known) and for that you need to insert your code in the handler for the `window.load` event, like this:
```
Labs
$(window).on('load', function() {
var left = $('.entry-content').width();
$('.entry-content').scrollLeft(left);
});
![](map.png)
```
Upvotes: 0 <issue_comment>username_3: Make sure you include that script **after** the DOM element, or wrap it in a [DOMContentLoaded handler](https://developer.mozilla.org/en-US/docs/Web/Events/DOMContentLoaded). For example:
```
document.addEventListener('DOMContentLoaded', function() {
var left = $('.entry-content').width();
$('.entry-content').scrollLeft(left);
});
```
In your example it's impossible for the selectors `$('.entry-content')` to select the actual element, because it does not yet exist when the javascript code runs. The DOM is parsed top to bottom, and any scripts that are found along the way are executed immediately.
---
As username_2 also points out in his answer, it is also a good idea to keep in mind that the image might take a while to load. Only once it is loaded, the image will start to occupy the actual space it needs. So, if you measure the dimensions of `.entry-content` before the image loads, you get a number that is inaccurate if the image requires more horizontal space than the containing element naturally occupies.
To fix this you can listen for the [`"load"` event](https://developer.mozilla.org/en-US/docs/Web/Events/load) on `window` instead, which will always fire **after** DOMContentLoaded, and not before all resources (among which your image) have been loaded:
```
window.addEventListener('load', function() {
var left = $('.entry-content').width();
$('.entry-content').scrollLeft(left);
});
```
*This example is equivalent to [username_2's answer](https://stackoverflow.com/a/49436811/6361314)*.
Upvotes: 2 [selected_answer]<issue_comment>username_4: Try following script:
```
$(document).ready(function(){
var left = document.querySelector('div.entry-content').getBoundingClientRect().left;
window.scrollBy(left,0);
});
```
I used `getBoundingClientRect()` to get the position of the div element with the `.entry-content` class attribute. The `left` property gives the x-offset from the left of the browser window. Now I use that value to scroll the window with the `window.scrollBy(x,y)` method.
**For your snippet to work**, you need to make some changes in your html:
```
$(document).ready(function(){
var left = $('.entry-content').width();
$('.entry-content').scrollLeft(left);
});
![](map.png)
```
Notice that I have *restricted the size to 30% for the div, and setting overflow to auto* will cause the div to show scrollbars if the image width is more than the width of the div. Now `left` will contain the width of the div element with the `.entry-content` class applied, and since the content of the div can now be scrolled with the scrollbars available, you can call the `scrollLeft(value)` method on the div to scroll its contents to the left. **If you want to scroll to the extreme right of the image**, you should pass `amountToScroll = widthOfImage - widthOfDivContainingImage` to `scrollLeft(amountToScroll)`, i.e.
```
$(document).ready(function(){
var amountToScroll = $('img').width() - $('.entry-content').width()
$('.entry-content').scrollLeft(amountToScroll);
});
```
Hope this helps.
Upvotes: 0 |
2018/03/22 | 446 | 1,656 | <issue_start>username_0: I have a function that has a select statement. I want this function to return a value which will be assigned to a cell, but I keep getting errors. Here is my code that calls the select statement function:
```
Sub ChangeSizes()
For i = 2 To 50
Range("G" & i) = LookupSize(Range("G" & i).value)
Next i
End Sub
```
And here is my function that I want to return a value that will be assigned to the range:
```
Public Function LookupSize(Size) As String
Select Case Size
Case Is = 1
Return '5/8" or 1/4"'
Case Is = 43
Return '3/8"'
End Select
End Function
```
However, as it stands, I get an error: `Return without Gosub`
How do I return the result of the case statement and assign it to a range?<issue_comment>username_1: You assign the return value to the function itself. Single quotes create comments; double quotes are used for quoted strings.
```
Public Function LookupSize(Size) As String
    Select Case Size
        Case 1
            ' to embed a quote mark in a string, double it
            LookupSize = "5/8"" or 1/4"""
        Case 43
            LookupSize = "3/8"""
    End Select
End Function
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Functions in VBA do not work using `Return`, but using the Function name as a variable. If you want to end the function after assigning it a value, you must also use `Exit Function`. Your code would look like this:
```
Public Function LookupSize(Size) As String
    Select Case Size
        Case 1
            LookupSize = "5/8"" or 1/4"""  ' or whatever you want
        Case 43
            LookupSize = "3/8"""           ' or whatever you want
    End Select
End Function
```
Upvotes: 2 |
2018/03/22 | 563 | 2,414 | <issue_start>username_0: I'm trying to [verify transaction results](https://docs.connect.squareup.com/payments/checkout/setup#optional-add-code-to-receive-and-verify-the-transaction) on the redirect URL after the [Square Checkout process](https://docs.connect.squareup.com/payments/checkout/overview#verify-transaction-results) using the [Square Connect C# SDK](https://github.com/square/connect-csharp-sdk/), but RetrieveTransaction is returning the following exception:
```
{
"errors":
[{
"category": "INVALID_REQUEST_ERROR",
"code": "NOT_FOUND",
"detail": "Location `XXXXXXX` does not have a transaction with ID `XXXXXXX`.",
"field":"transaction_id"
}]
}
```
I have verified that both the location ID and the returned transaction ID are correct. In fact, if I make the exact same call a minute or two later, it successfully returns the correct transaction details. It only fails immediately after the user completes the checkout.
It seems like the transaction hasn't actually been created yet when the user is redirected back to the Redirect URL. Is that correct? [Square's PHP example](https://docs.connect.squareup.com/payments/checkout/setup#optional-add-code-to-receive-and-verify-the-transaction) does not mention anything about waiting for the transaction to exist.
Do I need to implement some sort of delay before I try to verify the transaction?
UPDATE: I just tried running a test where I repeatedly called RetrieveTransaction until it was successful. It looks like the transaction doesn't exist until a second or two after the redirect happens. If that's normal, then the documentation severely needs an update.
Is it possible for it to take even longer before the transaction exists?<issue_comment>username_1: You are correct in that you have to wait a few seconds after the user is redirected to retrieve the transaction.
We’re constantly working to improve our products and services based on the feedback we receive from customers. I’ll be sure to share this with the appropriate team.
Feel free to let me know if I can assist you further. I’m happy to help.
Upvotes: 2 [selected_answer]<issue_comment>username_2: We also ran into this. It is quite a big and fatal problem, since the charge is made but you still get an error. Not convenient at all. We added a sleep to our function to make it work. Not optimal, I would say, but it works.
Upvotes: 0 |
2018/03/22 | 380 | 1,465 | <issue_start>username_0: Why is this working
```
val foo: kotlin.collections.List = java.util.ArrayList()
```
The `ArrayList` is not inheriting the Kotlin `List`, is it?<issue_comment>username_1: That's because `kotlin.collections.List`, among other types, is a [mapped type](https://kotlinlang.org/docs/reference/java-interop.html#mapped-types): during compilation for the JVM, its usages are compiled into the corresponding usages of the Java `java.util.List` interface.
Upvotes: 4 [selected_answer]<issue_comment>username_2: The reason this works is that the Kotlin compiler will map the Java platform-specific type (e.g. `java.util.ArrayList()`) to a Kotlin platform-independent type (`kotlin.collections.List`). Not surprisingly, this is why they are called [Mapped Types](http://kotlinlang.org/docs/reference/java-interop.html#mapped-types).
The key fact to understand is that after compilation, you only need binary compatibility. If all Kotlin-code on the java platform uses the `java.util.List` interface when using a `kotlin.collections.List` object, the stack manipulations on the java engine work just fine. The same is also true of the opposite direction, of course.
As for the other direction: As [this question](https://discuss.kotlinlang.org/t/how-can-java-util-arraylist-inherit-from-mutablelist/6146) points out, at least on the Java platform, `kotlin.collections.ArrayList` is essentially a `typealias` for `java.util.ArrayList`.
Upvotes: 2 |
2018/03/22 | 931 | 3,352 | <issue_start>username_0: I'm trying to learn how to use SoapUI to integrate Web Services to my website. I've been trying to follow PHP's documentation, but it is very confusing. My question is: how can I translate this soap code to PHP, so I can call the SOAP function. This is what I got so far:
```
$wsdl = "http://api.rlcarriers.com/1.0.2/ShipmentTracingService.asmx?wsdl";
$request = [
'APIKey' => '<KEY>',
'traceNumbers' => $pro,
'TraceType' => 'PRO',
'FormatResults' => 'false',
'IncludeBlind' => 'false',
'OutputFormat' => 'Standard'
];
$client = new SoapClient($wsdl);
$result = $client->TraceShipment($request);
print_r($result);
```
However, this is not working. I don't know what I'm doing wrong. I appreciate any help provided. I've spent hours trying to figure it out and it's driving me crazy. This is the soap request code that I get with SoapUI by following this wsdl file: <http://api.rlcarriers.com/1.0.2/ShipmentTracingService.asmx?wsdl>
```
<TraceShipment>
  <APIKey>******************</APIKey>
  <request>
    <TraceNumbers>
      <string>143248716</string>
    </TraceNumbers>
    <TraceType>PRO</TraceType>
    <FormatResults>false</FormatResults>
    <IncludeBlind>false</IncludeBlind>
    <OutputFormat>Standard</OutputFormat>
  </request>
</TraceShipment>
```<issue_comment>username_1: First mistake is to use the function name as a method of the `SoapClient`.
Correct is to use the native method [SoapClient::\_\_soapCall()](http://php.net/manual/en/soapclient.soapcall.php) and pass the function name as the first parameter, like this:
```
$client = new SoapClient($wsdl);
$result = $client->__soapCall('TraceShipment', $request);
```
For easier debugging, use a try...catch block, which gives you access to the messages returned from the server:
```
try {
    $result = $client->__soapCall('TraceShipment', $request);
} catch (Exception $e) {
    print_r($e);
    print_r($client);
}
```
Second mistake
The argument `$request` should be an array of associative arrays, i.e. a two-level array, to be accepted by the SoapServer:
```
$request = [[
//...
]];
```
Third mistake
The mandatory arguments are `APIKey` and `request`.
So update your `$request` array with the `request` key (updated based on username_2's post):
```
$request = [[
    'APIKey' => '<KEY>',
    'request' => [
        'TraceNumbers' => [
            'string' => $pro
        ],
        'TraceType' => 'PRO',
        'FormatResults' => 'false',
        'IncludeBlind' => 'false',
        'OutputFormat' => 'Standard'
    ]
]];
```
---
When fixed you could get response like:
```
stdClass Object
(
    [TraceShipmentResult] => stdClass Object
        (
            [WasSuccess] => 1
            [Messages] => stdClass Object
                (
                )
            [Result] => stdClass Object
                (
                )
        )
)
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: I don't know if anyone will ever need this, but I figured out the problem. I'm just learning about SOAP now, and realized that in order to translate the SOAP call to PHP one needs to treat all tags as arrays. Therefore, given the structure of the request call, the PHP request should look like this:
```
$request = array(
    'APIKey' => '***********************',
    'request' => array(
        'TraceNumbers' => array(
            'string' => $pro
        ),
        'TraceType' => 'PRO',
        'FormatResults' => 'false',
        'IncludeBlind' => 'false',
        'OutputFormat' => 'Standard'
    )
);
```
Upvotes: 1 |
2018/03/22 | 381 | 1,133 | <issue_start>username_0: `list = [[1,'abc'], [2, 'bcd'], [3, 'cde']]`
I have a list that looks like above.
I want to parse the list and try to get a list that only contains the second element of all lists.
`output = ['abc','bcd','cde']`
I think I can do it by going over the whole list and adding only the second element to a new list, like:
```
output = []
for i in list:
    output.append(i[1])
```
something like that, but please tell me if there is a better way.<issue_comment>username_1: You can use a list comprehension:
```
l = [[1,'abc'], [2, 'bcd'], [3, 'cde']]
new_l = [b for _, b in l]
```
Output:
```
['abc', 'bcd', 'cde']
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Another way to do it is to use `itemgetter` from `operator` module:
```
from operator import itemgetter
l = [[1,'abc'], [2, 'bcd'], [3, 'cde']]
result = list(map(itemgetter(1), l))
print(result)
```
Output:
```
['abc', 'bcd', 'cde']
```
Another not so elegant way is to use the `zip` function, take the second element from the newly created list and cast it to `list`:
```
result = list(list(zip(*l))[1])
```
Upvotes: 2 |
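For completeness, a quick self-contained check that the three approaches above agree (note that shadowing the built-in name `list`, as the question does, is best avoided):

```python
from operator import itemgetter

data = [[1, 'abc'], [2, 'bcd'], [3, 'cde']]

by_comprehension = [b for _, b in data]
by_itemgetter = list(map(itemgetter(1), data))
by_zip = list(list(zip(*data))[1])

assert by_comprehension == by_itemgetter == by_zip == ['abc', 'bcd', 'cde']
print(by_comprehension)  # ['abc', 'bcd', 'cde']
```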