257589
Matt Ridley in [The Rational Optimist](https://en.wikipedia.org/wiki/The_Rational_Optimist:_How_Prosperity_Evolves) says that countries are prosperous when divided, because of competition among governments, and backward when united. According to this theory, China advanced after dividing: the Song dynasty rose out of the chaos that followed the Tang dynasty. During the Ming dynasty, the Chinese sent out one big freakin fleet. After the fleet's reasonable success, the one unified government unilaterally stopped the expeditions and prohibited all shipbuilding to prevent private enterprise from sailing ahead. Such stupid decisions can't be made collectively by Europeans: Europe consists of many countries, and no one idiotic emperor can prevent the whole civilization from exploring. This explains why Europe became far more advanced than China. Yet Europeans did not seem to prosper when the Roman Empire fell and they were divided into many smaller tribes fighting each other. So what's the catch? Does Mr. Ridley's theory stand up when examined across history? Are there counterexamples or alternative compelling theories?
Ridley's argument as presented above is very simplistic. Fragmentation may sometimes lead to prosperity, but only if a number of other conditions are fulfilled. To my mind, these other conditions are more important and more interesting than the number of countries in the world, which seems incidental and arbitrary. Does Ridley think more fragmentation is always better, or are there diminishing returns once you reach [the levels of the Holy Roman Empire](https://en.wikipedia.org/wiki/List_of_states_in_the_Holy_Roman_Empire)? Is there something magical about our 195-some countries today, or would we for some reason be more prosperous if the U.S. split up into 50 countries? What are some of these preconditions for prosperity amid political fragmentation? For example, fragmentation of government does not lead to wealth if [every minor baron demands a toll to use his road or float barrels down his river](https://en.wikipedia.org/wiki/Road_toll_(historic)). This is grossly inefficient. The Roman Empire (which, mind you, also had the revenue to invest in roads and other infrastructure) was more conducive to growth. Political fragmentation can also lead to a lack of standardized currency, volatile exchange rates, and jealous attempts to protect domestic industry, all of which can slow growth. The United States' economy [puttered](https://en.wikipedia.org/wiki/America's_Critical_Period#Debt_and_Taxes) under the Articles of Confederation, but started to prosper after the new Constitution sought further political integration of the states--which led to further economic integration. Note that it was the merchants of New England and the pro-growth Hamiltonians who wanted closer political integration in order to achieve economic integration. Note also that governance can exist at many levels. The modern state system has been accompanied by a proliferation of trade-related international agreements. 
These often seek to harmonize domestic laws, establish international standards, standardize payment methods, protect property rights, and tear down tariffs and customs barriers. International political economy faces enormous pressures for coordinated governance at the highest levels--and that level of coordination appears all the higher when we consider the role of behemoth transnational corporations, which generate a "private" layer of world governance. It's unlikely that the modern economy could sustain such high levels of economic integration without these many, many public and private coordinating mechanisms. **TLDR:** A fragmented world is conducive to growth in some ways (innovation; distribution of wealth) but deleterious in other ways (incompatible standards; higher barriers to trade). However, the fact that governance exists at subnational levels means that even very large political entities can enjoy great periods of economic growth and innovation; the fact that governance also exists at supranational levels means that even a world as fragmented as today's can enjoy the benefits of political integration. The number of countries is only loosely related to the quality of governance, and so it's better to write directly about the policies and institutions conducive to growth, trade, and innovation.
257615
I am losing my mind here: I cannot see what I am doing wrong, but each time this macro is run I keep getting a blank row between the column headers and the actual data. The data that is being returned is correct, but I cannot see why I am getting an extra line at the top! Can I have a fresh pair of eyes, please? Thanks

```
Dim LSearchRow As Long
Dim LCopyToRow As Long
Dim wks As Worksheet

On Error GoTo Err_Execute

For Each wks In Worksheets
    LSearchRow = 4
    LCopyToRow = 4

    ThisWorkbook.Worksheets.Add After:=Worksheets(Worksheets.Count)
    Set wksCopyTo = ActiveSheet

    wks.Rows(3).EntireRow.Copy wksCopyTo.Rows(3)

    While Len(wks.Range("A" & CStr(LSearchRow)).Value) > 0
        If wks.Range("AB" & CStr(LSearchRow)).Value = "Yes" And _
           wks.Range("AK" & CStr(LSearchRow)).Value = "Yes" And _
           wks.Range("BB" & CStr(LSearchRow)).Value = "Y" Then

            Rows(CStr(LSearchRow) & ":" & CStr(LSearchRow)).Select
            Selection.Copy
            wksCopyTo.Select
            wksCopyTo.Rows(CStr(LCopyToRow) & ":" & CStr(LCopyToRow)).Select
            wksCopyTo.Paste

            'Move counter to next row
            LCopyToRow = LCopyToRow + 1

            'Go back to Sheet1 to continue searching
            wks.Select
        End If

        LSearchRow = LSearchRow + 1
    Wend

    Application.CutCopyMode = False
    Range("A3").Select
    MsgBox "All matching data has been copied."
Next wks

Exit Sub

Err_Execute:
MsgBox "An error occurred."
```
> Can I have a fresh pair of eyes, please!

Maybe because you are missing the worksheet name before `Rows(CStr(LSearchRow) & ":" & CStr(LSearchRow)).Select`? After the code executes this line

```
ThisWorkbook.Worksheets.Add After:=Worksheets(Worksheets.Count)
```

the current sheet is the new sheet, and hence the unqualified `Rows(...)` will refer to the newly created sheet. Only later does `wks.Select` return control back to your main sheet. So change that line to

```
wks.Rows(CStr(LSearchRow) & ":" & CStr(LSearchRow)).Select
```

Also, your entire sub can be re-written as (**UNTESTED**)

```
Option Explicit

Sub Sample()
    Dim LSearchRow As Long, LCopyToRow As Long
    Dim wks As Worksheet, wksCopyTo As Worksheet

    On Error GoTo Err_Execute

    For Each wks In Worksheets
        LSearchRow = 4: LCopyToRow = 4

        With wks
            ThisWorkbook.Worksheets.Add After:=Worksheets(Worksheets.Count)
            Set wksCopyTo = ActiveSheet

            .Rows(3).EntireRow.Copy wksCopyTo.Rows(3)

            While Len(Trim(.Range("A" & LSearchRow).Value)) > 0
                If .Range("AB" & LSearchRow).Value = "Yes" And _
                   .Range("AK" & LSearchRow).Value = "Yes" And _
                   .Range("BB" & LSearchRow).Value = "Y" Then
                    .Rows(LSearchRow).Copy wksCopyTo.Rows(LCopyToRow)
                    LCopyToRow = LCopyToRow + 1
                End If
                LSearchRow = LSearchRow + 1
            Wend
        End With

        MsgBox "All matching data has been copied."
    Next wks

    Exit Sub
Err_Execute:
    MsgBox "An error occurred."
End Sub
```
257815
I'm trying to stop certain fields being changed by the user. However, I don't know what columns those fields will be in, only what value they will initially contain. My current approach is this:

```
Private Sub Workbook_SheetChange(ByVal Sh As Object, ByVal Target As Range)
    Dim columnHeaderRange As Range

    Set shtData = Worksheets("Data")
    Set columnHeaderRange = Union(shtData.Columns(ColumnNumber(5, "example1")), _
                                  shtData.Columns(ColumnNumber(5, "example2")), _
                                  shtData.Columns(ColumnNumber(5, "example3")))
    Set columnHeaderRange = Application.Intersect(Target, columnHeaderRange)

    If Not (columnHeaderRange Is Nothing) Then
        With Application
            .EnableEvents = False
            .Undo
            MsgBox "Change is not possible.", 16
            .EnableEvents = True
        End With
    Else
        Exit Sub
    End If
End Sub
```

My ColumnNumber function in the above code takes the row and field value as parameters and returns the column number. Since I'm using fixed field values, though, this breaks once a field has already been changed, so my Union call fails. Is there a way to have this code run when a user attempts to change the value of a cell but before the actual value of the cell is changed? Alternatively, can anyone suggest a better approach?
Further to my comments:

**EXAMPLE 1**

Create a sheet called `List`, which will store your values. The best part about this method is that you do not have to amend the code every time you want to add/delete items from your list.

See screenshot:

![enter image description here](https://i.stack.imgur.com/lKK0A.png)

And let's say this is your main sheet:

![enter image description here](https://i.stack.imgur.com/73hiH.png)

Paste this code in the sheet code area:

```
Dim rngList As Range, aCell As Range
Dim RowAr() As Long

Private Sub Worksheet_Change(ByVal Target As Range)
    Dim i As Long

    On Error GoTo Whoa
    Application.EnableEvents = False

    For Each aCell In Target
        If aCell.Row = 5 Then
            With Application
                For i = LBound(RowAr) To UBound(RowAr)
                    If RowAr(i) = aCell.Column Then
                        MsgBox "Change is not possible."
                        .Undo
                        GoTo Letscontinue
                    End If
                Next
            End With
        End If
    Next
Letscontinue:
    Application.EnableEvents = True
    Exit Sub
Whoa:
    MsgBox Err.Description
    Resume Letscontinue
End Sub

Private Sub Worksheet_SelectionChange(ByVal Target As Range)
    Dim wsList As Worksheet
    Dim n As Long, lrow As Long

    Set wsList = ThisWorkbook.Sheets("list")

    With wsList
        lrow = .Range("A" & .Rows.Count).End(xlUp).Row
        Set rngList = .Range("A1:A" & lrow)
    End With

    n = 0
    ReDim RowAr(n)

    For Each aCell In Range("5:5")
        If Len(Trim(aCell.Value)) <> 0 Then
            If Application.WorksheetFunction.CountIf(rngList, aCell.Value) > 0 Then
                n = n + 1
                ReDim Preserve RowAr(n)
                RowAr(n) = aCell.Column
                Debug.Print aCell.Column
            End If
        End If
    Next
End Sub
```

![enter image description here](https://i.stack.imgur.com/lPT1f.png)

**Code in Action**

![enter image description here](https://i.stack.imgur.com/U7A7b.png)

**EXAMPLE 2**

This uses a hardcoded list. 
```
Option Explicit

Dim RowAr() As Long, aCell As Range

Private Sub Worksheet_SelectionChange(ByVal Target As Range)
    Dim MyList As String, MyAr() As String
    Dim n As Long, i As Long

    '~~> This is the list
    MyList = "Header 1,Header 2"
    MyAr = Split(MyList, ",")

    n = 0
    ReDim RowAr(n)

    For Each aCell In Range("5:5")
        If Len(Trim(aCell.Value)) <> 0 Then
            For i = LBound(MyAr) To UBound(MyAr)
                If aCell.Value = MyAr(i) Then
                    n = n + 1
                    ReDim Preserve RowAr(n)
                    RowAr(n) = aCell.Column
                End If
            Next
        End If
    Next
End Sub

Private Sub Worksheet_Change(ByVal Target As Range)
    Dim i As Long

    On Error GoTo Whoa
    Application.EnableEvents = False

    For Each aCell In Target
        If aCell.Row = 5 Then
            With Application
                For i = LBound(RowAr) To UBound(RowAr)
                    If RowAr(i) = aCell.Column Then
                        MsgBox "Change is not possible."
                        .Undo
                        GoTo Letscontinue
                    End If
                Next
            End With
        End If
    Next
Letscontinue:
    Application.EnableEvents = True
    Exit Sub
Whoa:
    MsgBox Err.Description
    Resume Letscontinue
End Sub
```
258012
Seems like a very common & frustrating topic... but my wifi is at a super-slow crawl. I am getting speeds slower than dial-up modems, which gets maddening when you are trying to develop a website.

**TEMPORARY SOLUTION** After trying -EVERY- solution below, nothing worked. So I used Timeshift and rolled my machine back to Oct. 22nd, then did NOT do any updates or upgrades.. and now wifi seems to be working pretty well. So I plan to ignore updates & upgrades for the next month and see if the problem gets corrected elsewhere. It was definitely a recent update that caused this problem.

This is my hardware:

```
$ sudo lshw -class network -short && nmcli device status
H/W path       Device      Class     Description
===============================================================
/0/100/14.3    wlp0s20f3   network   Killer Wi-Fi 6 AX1650i 160MHz Wireless Network Adapter
```

My current kernel is: 5.11.0-38-generic

**More Details**

$ lsb\_release -a

```
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.3 LTS
Release:        20.04
Codename:       focal
```

I have tried -all- of the solutions found here: [Ubuntu 20.04 Network Performance Extremely Slow](https://askubuntu.com/questions/1230525/ubuntu-20-04-network-performance-extremely-slow) and nothing has worked. Does anybody have any new tricks that I can try? Even a 1MB download would be a blessing from the heavens. 
**Additional Requested Info**

Machine Brand/Model: Acer Aspire 5 A515-55G

Command 1:

```
$ sudo lshw -class network
[sudo] password for dragonpharaoh:
  *-network
       description: Wireless interface
       product: Killer Wi-Fi 6 AX1650i 160MHz Wireless Network Adapter (201NGW)
       vendor: Intel Corporation
       physical id: 14.3
       bus info: pci@0000:00:14.3
       logical name: wlp0s20f3
       version: 30
       serial: 34:cf:f6:61:c0:38
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi pciexpress msix bus_master cap_list ethernet physical wireless
       configuration: broadcast=yes driver=iwlwifi driverversion=5.11.0-38-generic firmware=59.601f3a66.0 Qu-c0-hr-b0-59.uc ip=192.168.0.20 latency=0 link=yes multicast=yes wireless=IEEE 802.11
       resources: iomemory:600-5ff irq:16 memory:6013104000-6013107fff
  *-network
       description: Ethernet interface
       product: RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
       vendor: Realtek Semiconductor Co., Ltd.
       physical id: 0
       bus info: pci@0000:03:00.0
       logical name: enp3s0
       version: 15
       serial: b4:a9:fc:b0:b0:d1
       capacity: 1Gbit/s
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi pciexpress msix bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=5.11.0-38-generic firmware=rtl8168h-2_0.0.2 02/26/15 latency=0 link=no multicast=yes port=twisted pair
       resources: irq:16 ioport:4000(size=256) memory:51104000-51104fff memory:51100000-51103fff
```

Command 2:

```
$ ls -al /etc/pm/config.d/
ls: cannot access '/etc/pm/config.d/': No such file or directory
```

Command 3:

```
$ ls -al /etc/modprobe.d/
total 64
drwxr-xr-x   2 root root  4096 Sep 20 17:55 .
drwxr-xr-x 177 root root 12288 Oct 29  2021 ..
-rw-r--r--   1 root root  2507 Jul 31  2015 alsa-base.conf
-rw-r--r--   1 root root   154 Feb 16  2020 amd64-microcode-blacklist.conf
-rw-r--r--   1 root root   325 Mar 12  2020 blacklist-ath_pci.conf
-rw-r--r--   1 root root  1518 Mar 12  2020 blacklist.conf
-rw-r--r--   1 root root   210 Mar 12  2020 blacklist-firewire.conf
-rw-r--r--   1 root root   677 Mar 12  2020 blacklist-framebuffer.conf
-rw-r--r--   1 root root   156 Jul 31  2015 blacklist-modem.conf
lrwxrwxrwx   1 root root    41 Dec 28  2020 blacklist-oss.conf -> /lib/linux-sound-base/noOSS.modprobe.conf
-rw-r--r--   1 root root   583 Mar 12  2020 blacklist-rare-network.conf
-rw-r--r--   1 root root   127 Jan 22  2020 dkms.conf
-rw-r--r--   1 root root   154 Jun 11  2020 intel-microcode-blacklist.conf
-rw-r--r--   1 root root   347 Mar 12  2020 iwlwifi.conf
-rw-r--r--   1 root root    95 Sep 20 17:55 v4l2loopback.conf
```

Command 3:

```
$ lspci -knn | grep Net -A2
00:14.3 Network controller [0280]: Intel Corporation Killer Wi-Fi 6 AX1650i 160MHz Wireless Network Adapter (201NGW) [8086:34f0] (rev 30)
        Subsystem: Intel Corporation Killer Wi-Fi 6 AX1650i 160MHz Wireless Network Adapter (201NGW) [8086:0074]
        Kernel driver in use: iwlwifi
        Kernel modules: iwlwifi
```

Command 4:

```
$ modinfo iwlwifi
Too long, pastebin: https://pastebin.com/ThKWTSyf
```

Command 5:

```
$ sudo iwlist freq
lo        no frequency information.

enp3s0    no frequency information.

wlp0s20f3  32 channels in total; available frequencies :
          Channel 01 : 2.412 GHz
          Channel 02 : 2.417 GHz
          Channel 03 : 2.422 GHz
          Channel 04 : 2.427 GHz
          Channel 05 : 2.432 GHz
          Channel 06 : 2.437 GHz
          Channel 07 : 2.442 GHz
          Channel 08 : 2.447 GHz
          Channel 09 : 2.452 GHz
          Channel 10 : 2.457 GHz
          Channel 11 : 2.462 GHz
          Channel 12 : 2.467 GHz
          Channel 13 : 2.472 GHz
          Channel 36 : 5.18 GHz
          Channel 40 : 5.2 GHz
          Channel 44 : 5.22 GHz
          Channel 48 : 5.24 GHz
          Channel 52 : 5.26 GHz
          Channel 56 : 5.28 GHz
          Channel 60 : 5.3 GHz
          Channel 64 : 5.32 GHz
          Channel 100 : 5.5 GHz
          Channel 104 : 5.52 GHz
          Channel 108 : 5.54 GHz
          Channel 112 : 5.56 GHz
          Channel 116 : 5.58 GHz
          Channel 120 : 5.6 GHz
          Channel 124 : 5.62 GHz
          Channel 128 : 5.64 GHz
          Channel 132 : 5.66 GHz
          Channel 136 : 5.68 GHz
          Channel 140 : 5.7 GHz
          Current Frequency:2.412 GHz (Channel 1)

wg-client1  no frequency information.

vboxnet0  no frequency information.
```

Command 6:

```
$ sudo iwconfig
lo        no wireless extensions.

enp3s0    no wireless extensions.

wlp0s20f3  IEEE 802.11  ESSID:"8D"
          Mode:Managed  Frequency:2.412 GHz  Access Point: 1C:AB:C0:21:E1:C8
          Bit Rate=144.4 Mb/s   Tx-Power=22 dBm
          Retry short limit:7   RTS thr:off   Fragment thr:off
          Encryption key:off
          Power Management:off
          Link Quality=70/70  Signal level=-31 dBm
          Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
          Tx excessive retries:0  Invalid misc:900   Missed beacon:0

wg-client1  no wireless extensions.

vboxnet0  no wireless extensions.
``` Command 7: ``` $ inxi -Fxz System: Kernel: 5.11.0-38-generic x86_64 bits: 64 compiler: N/A Desktop: Gnome 3.36.9 Distro: Ubuntu 20.04.3 LTS (Focal Fossa) Machine: Type: Laptop System: Acer product: Aspire A515-55G v: V1.13 serial: <filter> Mobo: IL model: Doc_IL v: V1.13 serial: <filter> UEFI: Insyde v: 1.13 date: 10/15/2020 Battery: ID-1: BAT1 charge: 39.5 Wh condition: 39.5/53.0 Wh (75%) model: PANASONIC AP19B5L status: Full CPU: Topology: Quad Core model: Intel Core i5-1035G1 bits: 64 type: MT MCP arch: Ice Lake rev: 5 L2 cache: 6144 KiB flags: avx avx2 lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx bogomips: 19046 Speed: 2256 MHz min/max: 400/3600 MHz Core speeds (MHz): 1: 2995 2: 3104 3: 3389 4: 3109 5: 2424 6: 1099 7: 3223 8: 3275 Graphics: Device-1: Intel vendor: Acer Incorporated ALI driver: i915 v: kernel bus ID: 00:02.0 Device-2: NVIDIA GP107M [GeForce MX350] vendor: Acer Incorporated ALI driver: nvidia v: 460.91.03 bus ID: 02:00.0 Display: x11 server: X.Org 1.20.11 driver: modesetting,nvidia unloaded: fbdev,nouveau,vesa resolution: 1920x1080~60Hz OpenGL: renderer: GeForce MX350/PCIe/SSE2 v: 4.6.0 NVIDIA 460.91.03 direct render: Yes Audio: Device-1: Intel Smart Sound Audio vendor: Acer Incorporated ALI driver: snd_hda_intel v: kernel bus ID: 00:1f.3 Sound Server: ALSA v: k5.11.0-38-generic Network: Device-1: Intel Killer Wi-Fi 6 AX1650i 160MHz Wireless Network Adapter driver: iwlwifi v: kernel port: 6000 bus ID: 00:14.3 IF: wlp0s20f3 state: up mac: <filter> Device-2: Realtek RTL8111/8168/8411 PCI Express Gigabit Ethernet vendor: Acer Incorporated ALI driver: r8169 v: kernel port: 4000 bus ID: 03:00.0 IF: enp3s0 state: down mac: <filter> IF-ID-1: vboxnet0 state: up speed: 10 Mbps duplex: full mac: <filter> IF-ID-2: wg-client1 state: unknown speed: N/A duplex: N/A mac: N/A Drives: Local Storage: total: 2.96 TiB used: 2.16 TiB (73.0%) ID-1: /dev/nvme0n1 vendor: Kingston model: RBUSNS8154P3256GJ1 size: 238.47 GiB ID-2: /dev/sda vendor: Seagate model: 
ST1000LM035-1RK172 size: 931.51 GiB temp: 41 C ID-3: /dev/sdb type: USB vendor: Western Digital model: WD Elements 25A2 size: 1.82 TiB Partition: ID-1: / size: 91.17 GiB used: 57.70 GiB (63.3%) fs: ext4 dev: /dev/nvme0n1p5 ID-2: /home size: 821.50 GiB used: 547.91 GiB (66.7%) fs: ext4 dev: /dev/sda2 Sensors: System Temperatures: cpu: 74.0 C mobo: N/A gpu: nvidia temp: 70 C Fan Speeds (RPM): N/A Info: Processes: 425 Uptime: 33m Memory: 19.33 GiB used: 9.02 GiB (46.7%) Init: systemd runlevel: 5 Compilers: gcc: 9.3.0 Shell: bash v: 5.0.17 inxi: 3.0.38 ``` Command 8: ``` $ nmcli device wifi list IN-USE BSSID SSID MODE CHAN RATE SIGNAL BARS SECURITY * 1C:AB:C0:21:E1:C8 8D Infra 1 270 Mbit/s 100 β–‚β–„β–†β–ˆ WPA1 WPA2 1C:AB:C0:22:3F:B8 8_A Infra 6 270 Mbit/s 67 β–‚β–„β–†_ WPA1 WPA2 0C:9D:92:53:8F:71 CHU_ Home Infra 8 195 Mbit/s 52 β–‚β–„__ WPA2 1C:AB:C0:DB:BC:B8 77-7F Infra 11 130 Mbit/s 50 β–‚β–„__ WPA1 WPA2 F4:30:B9:E1:8D:0A DIRECT-09-HP DeskJet 5820 series Infra 11 65 Mbit/s 49 β–‚β–„__ WPA2 98:DA:C4:F9:E4:CE CCL Infra 3 270 Mbit/s 42 β–‚β–„__ WPA2 60:A4:B7:3D:37:A7 TANT Infra 4 130 Mbit/s 37 β–‚β–„__ WPA2 5C:92:5E:C3:E0:30 mkmkmk Infra 5 270 Mbit/s 35 β–‚β–„__ WPA2 74:DA:88:B2:89:B6 DiDihome Infra 2 195 Mbit/s 34 β–‚β–„__ WPA1 WPA2 40:9B:CD:A4:38:A0 dlink-38A0 Infra 1 130 Mbit/s 32 β–‚β–„__ WPA1 WPA2 FC:4A:E9:4D:BF:32 56N9F Infra 8 130 Mbit/s 32 β–‚β–„__ WPA2 AC:20:2E:EB:CE:58 110-8FB Infra 6 270 Mbit/s 27 β–‚___ WPA1 WPA2 B8:55:10:44:FA:D4 pumpkin Infra 6 135 Mbit/s 27 β–‚___ WPA2 CA:6C:87:FD:EE:54 stobene Infra 11 65 Mbit/s 27 β–‚___ WPA2 C4:12:F5:40:89:A8 D-Link_DIR-612 Infra 1 270 Mbit/s 20 β–‚___ WPA1 WPA2 74:DA:88:B2:89:B5 DiDihome_5G Infra 157 270 Mbit/s 17 β–‚___ WPA1 WPA2 ``` iperf test output following @matigo 's solution (post-reboot) ``` # iperf -s ------------------------------------------------------------ Server listening on TCP port 5001 TCP window size: 128 KByte (default) ------------------------------------------------------------ [ 4] local 
173.18.61.138 port 5001 connected with 49.212.186.177 port 58582 [ ID] Interval Transfer Bandwidth [ 4] 0.0-14.7 sec 7.75 MBytes 4.43 Mbits/sec [ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 47048 [ 4] 0.0- 5.0 sec 35.0 Bytes 56.0 bits/sec [ 5] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 48766 (peer 29797.29556.29797-unk) [ 5] 0.0- 5.0 sec 4.00 Bytes 6.34 bits/sec [ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 49782 [ 4] 0.0- 5.0 sec 35.0 Bytes 56.0 bits/sec [ 5] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 52368 [ 5] 0.0- 5.0 sec 4.00 Bytes 6.40 bits/sec [ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 53856 [ 4] 0.0- 5.0 sec 35.0 Bytes 56.0 bits/sec [ 5] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 56368 [ 5] 0.0- 5.0 sec 243 Bytes 389 bits/sec [ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 57994 (peer 29797.29556.29797-unk) [ 4] 0.0- 5.1 sec 4.00 Bytes 6.33 bits/sec [ 5] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 60388 [ 5] 0.0- 5.0 sec 1.00 Bytes 1.60 bits/sec [ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 33430 [ 4] 0.0- 5.0 sec 5.00 Bytes 8.00 bits/sec [ 5] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 35588 [ 5] 0.0- 5.0 sec 289 Bytes 463 bits/sec [ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 36804 (peer 29797.29556.29797-unk) [ 4] 0.0- 5.1 sec 4.00 Bytes 6.27 bits/sec [ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 39110 [ 4] 0.0- 5.0 sec 1.00 Bytes 1.60 bits/sec [ 5] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 40280 [ 5] 0.0- 5.0 sec 18.0 Bytes 28.8 bits/sec recvn abort failed: Resource temporarily unavailable [ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 42074 [ 4] 0.0- 0.0 sec 0.00 Bytes 0.00 bits/sec recvn abort failed: Resource temporarily unavailable [ 5] local 173.18.61.138 
port 5001 connected with 9.132.28.227 port 42078 [ 5] 0.0- 0.0 sec 0.00 Bytes 0.00 bits/sec [ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 42088 (peer 12992.43009.275-unk) [ 4] 0.0- 5.0 sec 22.0 Bytes 34.9 bits/sec [ 5] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 43546 [ 5] 0.0- 4.8 sec 20.0 Bytes 33.1 bits/sec [ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 45790 (peer 3338.3338.18245-alpha) [ 4] 0.0- 4.6 sec 40.0 Bytes 69.0 bits/sec connect failed: Connection refused connect failed: Connection refused [ 5] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 47048 (peer 14901.12336.12557-unk) [ 5] 0.0- 5.1 sec 45.0 Bytes 69.9 bits/sec [ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 48896 (peer 3338.3338.18245-alpha) [ 4] 0.0- 5.1 sec 40.0 Bytes 62.3 bits/sec [ 5] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 50210 (peer 3338.3338.18245-alpha) [ 5] 0.0- 5.1 sec 40.0 Bytes 63.4 bits/sec recvn abort failed: Resource temporarily unavailable [ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 52430 [ 4] 0.0- 0.0 sec 0.00 Bytes 0.00 bits/sec [ 5] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 52444 (peer 116.25971.29812) [ 5] 0.0- 5.1 sec 37.0 Bytes 57.8 bits/sec [ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 53996 [ 4] 0.0- 5.0 sec 52.0 Bytes 83.2 bits/sec [ 5] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 56104 (peer 29797.29556.29797-unk) [ 5] 0.0- 5.1 sec 4.00 Bytes 6.33 bits/sec [ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 57490 [ 4] 0.0- 5.0 sec 50.0 Bytes 80.0 bits/sec [ 5] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 59838 [ 5] 0.0- 5.0 sec 210 Bytes 336 bits/sec [ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 32774 [ 4] 0.0- 5.0 sec 18.0 Bytes 28.8 bits/sec [ 5] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 
34698 (peer 18245.21536.12064-unk)
[ 5] 0.0- 5.1 sec 18.0 Bytes 28.5 bits/sec
[ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 35982 (peer 12081.11824.3338)
[ 4] 0.0- 5.0 sec 52.0 Bytes 82.4 bits/sec
[ 5] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 36596
[ 5] 0.0- 5.0 sec 22.0 Bytes 34.9 bits/sec
[ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 38688
[ 4] 0.0- 5.1 sec 21.0 Bytes 33.2 bits/sec
[ 5] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 39754
[ 5] 0.0- 5.0 sec 1.00 Bytes 1.60 bits/sec
[ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 41736 (peer 29797.29556.29797-unk)
[ 4] 0.0- 5.1 sec 4.00 Bytes 6.29 bits/sec
[ 4] local 173.18.61.138 port 5001 connected with 9.132.28.227 port 42864 (peer 29556.24948.29556-unk)
[ 4] 0.0- 5.0 sec 4.00 Bytes 6.34 bits/sec
[ 4] local 173.18.61.138 port 5001 connected with 49.212.186.177 port 59900
[ 4] 0.0-12.7 sec 6.88 MBytes 4.55 Mbits/sec

Client
Client connecting to 49.212.186.177, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.10.10.2 port 59900 connected with 49.212.186.177 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3] 0.0-11.0 sec  6.88 MBytes  5.26 Mbits/sec
```

While the Bandwidth and Transfer figures on the client side seem high... that is not what I am experiencing most of the time... I'm still lucky to break an 800KB transfer rate. To be clearer: there are periods when wifi works as it should and I can get 3, even 4 MB transfer rates.. but this is transitory... it might last for 30 minutes, an hour, then instantly drop to 50KB/s at random. Also, it is always slow at boot time.
It just depends on what you want. Since your system specs are OK, you can have any Ubuntu flavour you want; let me give you a brief idea. You can find the alternatives to Ubuntu, i.e. the Ubuntu flavours, [here](https://ubuntu.com/download/flavours)

---

Lubuntu
=======

A good Linux distro. If your PC isn't great on specs, Lubuntu can help you get maximum performance even from an ultra-potato PC. It also offers an OK GUI and comes with a handful of apps and stuff, so I guess it would be fine for your PC specs. [Official page of Lubuntu](https://lubuntu.me) [Wikipedia Page of Lubuntu](https://en.m.wikipedia.org/wiki/Lubuntu)

---

Xubuntu
=======

Good performance with very minimal memory usage; this is what I recommend for your PC. It would be good in this case, as your PC is not bad, just average. Lubuntu is designed for very minimal machines, but Xubuntu targets the average PC and works well even on very good PCs. <https://xubuntu.org/about/>

---

Kubuntu
=======

Good for a beautiful GUI. It may cost some performance, but it is beautiful. With your specs it will not give much performance, so it's not for heavy use, but it is good if you are a GUI lover. More info can be found at <https://kubuntu.org/about-us/>

---

Ubuntu Vanilla
==============

Good, but I don't recommend it for your specs. Don't install it.

---

I cannot tell you everything about all the distros, so check them out yourself
==============================================================================

---

CONCLUSION
==========

It depends on your preference; every distribution is good, but I would personally recommend Xubuntu for you. 
---

**There are many other distros too; check out Gentoo, Mint, Kali or Garuda. All of these are the best in their own forms.**

---

"Every distribution has its abilities"
======================================

---

Your specs are good enough to use any of them, so go ahead and pick one today.
258043
Right now I'm using an N-channel MOSFET. I am feeding a 2V pk-pk square wave into the gate, the drain is connected to a bare wire (that is connected to nothing), and the source is connected to ground. For some reason there's a voltage reading at the drain of my MOSFET. Is this normal?
If you see just the "edges" (positive/negative peaks at the times when there are rising/falling edges in the input signal), it can be explained by the capacitance between gate and drain: each fast edge at the gate is coupled through that capacitance onto the otherwise floating drain.
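As a rough sanity check of that explanation, here is a capacitive-divider estimate of how large the coupled spikes could be. The capacitance values below are illustrative assumptions, not datasheet figures for any particular FET or probe:

```python
# Rough estimate of the voltage coupled from gate to a floating drain.
# A fast edge at the gate divides across the gate-drain capacitance (C_gd)
# in series with whatever capacitance loads the drain (probe + stray).
C_gd = 20e-12       # assumed gate-drain (Miller) capacitance, farads
C_load = 15e-12     # assumed scope-probe plus stray capacitance at the drain
V_gate_step = 2.0   # gate square-wave amplitude, volts pk-pk

# Series-capacitor voltage divider: the drain sees the fraction C_gd/(C_gd+C_load)
V_drain_step = V_gate_step * C_gd / (C_gd + C_load)
print(f"coupled drain swing ~ {V_drain_step:.2f} V pk-pk")
```

With these assumed values the drain would see spikes of roughly half the gate swing, which decay as the charge leaks away; a probe with more capacitance, or any resistive load at the drain, would shrink the reading.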
258951
After a lot of help yesterday, I came up against a known error in the ASP.NET 4 beta, so I upgraded to VS2012 RC Express (4.5), and now I'm getting an internal server error, and I can't see why. I'm creating a web API:

**Model**

```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.ComponentModel.DataAnnotations;
using System.Data.Entity.ModelConfiguration.Conventions;
using System.Data.Entity;
using System.ComponentModel.DataAnnotations.Schema;

namespace MvcApplication6.Models
{
    public class tblCustomerBooking
    {
        [Key()]
        public int customer_id { get; set; }
        public string customer_name { get; set; }
        public string customer_email { get; set; }
        public virtual ICollection<tblRental> tblRentals { get; set; }
    }

    public class tblRental
    {
        [Key()]
        public int rental_id { get; set; }
        public int room_id { get; set; }
        public DateTime check_in { get; set; }
        public DateTime check_out { get; set; }
        public decimal room_cost { get; set; }
        public int customer_id { get; set; }
        [ForeignKey("customer_id")]
        public virtual tblCustomerBooking tblCustomerBooking { get; set; }
    }
}
```

I then used the Add Controller wizard, selected "Template: API controller with read/write actions, using Entity Framework", and chose tblCustomerBooking as my Model Class. The generated data context is:

```
using System.Data.Entity;

namespace MvcApplication6.Models
{
    public class BookingsContext : DbContext
    {
        public BookingsContext() : base("name=BookingsContext")
        {
        }

        public DbSet<tblCustomerBooking> tblCustomerBookings { get; set; }
    }
}
```

My controller (BookingsController.cs), automatically generated by Visual Studio 2012 Express, is:

```
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.Entity.Infrastructure;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web;
using System.Web.Http;
using MvcApplication6.Models;

namespace MvcApplication6.Controllers
{
    public class BookingsController : ApiController
    {
        private BookingsContext db = new BookingsContext();

        // GET api/Bookings
        public IEnumerable<tblCustomerBooking> GettblCustomerBookings()
        {
            return db.tblCustomerBookings.AsEnumerable();
        }
    }
}
```

I added a breakpoint at the "return db..." line above and checked the Watch window in VS - it clearly shows the object, with the customer and the associated rentals:

![Snapshot of customer and rental objects](https://i.stack.imgur.com/CUf6J.png)

However, if I allow the script to continue, I just get an HTTP 500 error (as shown in Fiddler below):

![Fiddler screenshot showing HTTP500](https://i.stack.imgur.com/vCNpJ.png)

Is there any more code I can add to the controller to let me see why it is erroring? Or can anyone see what may be wrong? VS appears to retrieve the data OK, as shown in the first screenshot, but doesn't seem to be able to send it out.

Thanks for any help or pointers,

Mark

**Update**

Hi - am I simply asking too much of the API? Is it not possible (out of the box) for it to simply return objects with one-to-many relationships? Can it only really produce a single object list?

Thanks, Mark
You might want to add a global error handler to your project. It can trap and log any odd errors that are happening in background threads. This S/O article talks about some solid approaches. They will save you lots of time in any project: [ASP.NET MVC Error Logging in Both Global.asax and Error.aspx](https://stackoverflow.com/questions/832046/asp-net-mvc-error-logging-in-both-global-asax-and-error-aspx)
259148
SQL Server 2008 Reporting Services (SP1, CU4) installed on Windows 2008 R2. The service account is configured to use Network Service. Port 80 is open in the firewall. I continue to get an "HTTP Error 503. The service is unavailable." error no matter what I try. I need advice on what else I can check, or where I can look for errors with more detail besides the 503. I have tried turning on customErrors but still cannot figure this out. Any help would be greatly appreciated. Thanks
Restarting the SSRS service solved the issue for me. Environment: SSRS 2016, Windows Server 2012 R2 Standard
259378
Right now I need to keep changing the line below. Is there a programmatic way to check if I am running the site locally vs. on production ? The closest I got was this post but it seems to be referring to Silverlight and I am just calling from an asp.net site. [Silverlight application cannot accesss WCF services on other machines](https://stackoverflow.com/questions/2210777/silverlight-application-cannot-accesss-wcf-services-on-other-machines/2210960#2210960) ``` <client> <endpoint address="http://www.punkoutersoftware.com/Service1.svc" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_IService1" contract="ServiceReference1.IService1" name="BasicHttpBinding_IService1" /> </client> ```
Is this for an ASP.NET application? If so, consider using VS2010's [Web.config Transformation](http://msdn.microsoft.com/en-us/library/dd465326.aspx), which can alter your web.config based on your build type (Release, Debug, etc). --- [Here's a better introductory tutorial](http://www.codeguru.com/csharp/.net/net_asp/miscellaneous/article.php/c17407/ASPNET-Tutorial-WebConfig-Transformation.htm)
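For illustration, such a transform in a `Web.Release.config` might look like the sketch below (it reuses the production address from the question; the `xdt` attributes are the standard transform syntax):

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.serviceModel>
    <client>
      <!-- On Release builds, rewrite the endpoint address to production.
           The Locator matches the endpoint by its name attribute. -->
      <endpoint name="BasicHttpBinding_IService1"
                address="http://www.punkoutersoftware.com/Service1.svc"
                xdt:Transform="SetAttributes(address)"
                xdt:Locator="Match(name)" />
    </client>
  </system.serviceModel>
</configuration>
```

The base `web.config` keeps the local development address; publishing with the Release configuration applies the transform automatically.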
259614
I'm creating a web application which connects to a database using an ODBC connector. I'm using the following code to do this ``` private void SetupConnection() { conn.ConnectionString = ConfigurationManager.ConnectionStrings["live"].ConnectionString; OdbcDataAdapter da = new OdbcDataAdapter("SELECT * FROM MTD_FIGURE_VIEW1", conn); da.Fill(ds); } ``` I want to create a try/catch statement which will keep on trying to connect to the database. Pseudocode: try the above function; if it can't connect, try again; if it connects without any errors, carry on. Can someone please help me with this? **Question relevant update from the comments:** > > I just want it to keep on trying. The reason for this is that the web application I'm making never switches off; it's constantly refreshing data, but the database is switched off for two hours every night for backup. During this period I want the app to keep on trying to connect. > > >
``` private void SetupConnection() { conn.ConnectionString = ConfigurationManager.ConnectionStrings["live"].ConnectionString; bool success = false; while (!success) { try { OdbcDataAdapter da = new OdbcDataAdapter("SELECT * FROM MTD_FIGURE_VIEW1", conn); da.Fill(ds); success = true; } catch (Exception e) { Log(e); Thread.Sleep(_retryPeriod); // _retryPeriod: delay between attempts } } } ``` Note as people have commented, this is **really not a good idea**. You need some way to distinguish between an "expected" exception where the database is turned off and you want to keep retrying, and the case that something unexpected has happened and you don't want to keep trying.
259759
I am trying to plot a 3D wireframe plot from a CSV file. Data format: the first column x is CPU in percentage (range 10-90%), the second column y is memory in percentage (range 10-80%), and the last column is drop rate in percentage (range 10-70%). Sample data: ``` 10,10,30 10,20,10 10,30,5 10,40,30 20,10,4 20,20,30 20,30,40 20,40,20 ``` My code: ``` import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D sample_data = np.genfromtxt("data.csv", delimiter=",", names=["x", "y","z"]) x, y, z = zip(*sample_data) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.set_xlabel('CPU') ax.set_ylabel('Memory') ax.set_zlabel('Rate') ax.plot_wireframe(x,y,z, color='green') plt.show() ``` I am getting the error below ``` if Z.ndim != 2: AttributeError: 'tuple' object has no attribute 'ndim' ```
The problem is that you are expecting too much from `zip`. `zip` is a Python function that generates tuples from iterables. Python tuples are not numpy data structures and therefore cannot respond to an ndim request. If you want to access the data points associated with `x`, `y`, and `z` as ndarrays then you want to do something like the example below: ``` from io import StringIO import numpy as np txt = """10,10,30 10,20,10 10,30,5 10,40,30 20,10,4 20,20,30 20,30,40 20,40,20 """ s = StringIO(txt) sample_data = np.genfromtxt(s, delimiter=",", names=["x", "y","z"]) sample_data["z"].ndim ``` ``` 1 ```
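As for the original `AttributeError`: `plot_wireframe` expects 2D arrays, not flat sequences. A minimal sketch, assuming (as in the question's sample) that the points form a regular grid — here two CPU values by four memory values:

```python
import numpy as np

# Sample data from the question: columns are x (CPU), y (memory), z (drop rate).
data = np.array([
    [10, 10, 30], [10, 20, 10], [10, 30, 5], [10, 40, 30],
    [20, 10, 4], [20, 20, 30], [20, 30, 40], [20, 40, 20],
])

# plot_wireframe requires 2D arrays. Since the points lie on a regular
# 2x4 grid, each column can be reshaped into that grid shape.
X = data[:, 0].reshape(2, 4)
Y = data[:, 1].reshape(2, 4)
Z = data[:, 2].reshape(2, 4)

print(Z.ndim)  # 2 -- the dimensionality plot_wireframe expects
```

Passing `X`, `Y`, `Z` in this 2D form to `ax.plot_wireframe` avoids the `ndim` error; if the data does not form a regular grid, `plot_trisurf` may be a better fit.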
259811
What do I need to do to change the RPCNFSDCOUNT setting without a full service restart? I need to reload the NFS configuration without restarting the service. The RPCNFSDCOUNT thread count is too low for the workload but I cannot get the management to agree on a schedule for a change window. A normal service can frequently do this with a SIGHUP. I did attempt this with a `kill -HUP $(pidof rpc.mountd)`, but that was unsuccessful in getting the new RPCNFSDCOUNT setting applied from /etc/sysconfig/nfs on this old Fedora 8 box. The man pages for the other NFS daemons incline me to think that HUP'ing those processes won't be of any benefit, and I'm rather reluctant to HUP the kthreadd process that is the parent process of the nfsd threads themselves.
And after an obvious mental lapse, I remembered the /proc filesystem. /proc/fs/nfsd, specifically, controls the runtime settings of the nfsd service. In my case, this meant `echo '32' > /proc/fs/nfsd/threads` sets the number of threads to 32.
259907
I'm learning some more PHP and after PHP itself to work, I can't seem to get it to validate any form field correctly. My goal is to check if the firstname field is empty, and if it is, it will give a message in a red color. The message in the red works, but only because the echo script is being called by the form submission, not because it has detected any empty field, because when I made an else statement to say "wassup" if its not empty, I got the same message for when the field is empty. Also, is there a way to check off multiple input fields at once like you could with JavaScript? For example if input1 == '' || input2 == '' and so on. Here is my HTML: ``` <html> <head> <title>Welcome</title> </head> <body> <form action="welcome.php" method="post"> <fieldset> <legend>Personal Info</legend> First name <input name="name" type="text"> Middle name <input name="middlename" type="text"> Surname <input name="lastname" type="text"> Age <input name="age" type="number"> Date of birth <input name="dob" type="date"> </fieldset> <fieldset> <legend>Regional & location info</legend> Continent <select> <option value="europe">Europe</option> <option value="americas">America</option> <option value="africa">Africa</option> <option value="asia">Asia</option> <option value="australia">Australia</option> <option value="eurasia">Eurasia</option> </select> Country <input name="country" type="text"> State <input type="text"> City <input name="city" type="text"> Street number <input name="streetno" type="number"> Street name <input name="streetname" type="text"> <br><br> Suburb <input name="suburb" type="text"> Postcode <input name="postcode" type="number"> If none of these apply to your accommodations, enter a typed location here <input type="text"> </fieldset> <fieldset> <legend>Previous lifestyle accommodations</legend> Previous &/or most recent job title <input name="job" type="text"> First time job seeker <input type="checkbox" name="check1" value="ftjb"> I'm a student <input 
type="checkbox" name="check2" value="ias"> Previous &/or most recent acedemic title <input name="school" type="text"> First time applying for a qualification <input type="checkbox" name="check3" value="ftafaq"> I have work experience <input type="checkbox" name="check4" value="ihwe"> </fieldset> <fieldset> <legend>Details of arrival</legend> Reason for arrival <input name="reason" type="text"> Date of arrival <input name="arrival" type="date"> Amount of stay expectancy <input type="checkbox" name="check3">Temporary <input type="checkbox" name="check4">Longterm </fieldset> <fieldset> <legend>Signiture</legend> <input name='signiture' type="text"> </fieldset> <input type="submit" name="submit" value="Submit"> </form> </body> </html> ``` Here is my PHP code: ``` <?php $firstname = $_POST['name']; $lastname = $_POST['lastname']; $age = $_POST['age']; $dob = $_POST['dob']; $country = $_POST['country']; $city = $_POST['city']; $suburb = $_POST['suburb']; $postcode = $_POST['postcode']; $streetno = $_POST['streetno']; $streetname = $_POST['streetname']; $suburb = $_POST['suburb']; $job = $_POST['job']; $school = $_POST['school']; $reason = $_POST['reason']; $arrival = $_POST['arrival']; $signiture = $_POST['signiture']; if (isset($_POST['submit'])) { if (empty($_POST[$firstname])) { echo '<p style="color: red; text-align: center">Your first name is required</p>'; } else { echo "wassaup"; } } ?> ```
Found solution! Looks, like this is some adb or Genymotion bug. To solve this, you must manually connect adb to running Genymotion Device In first, you need to know Androids internal IP. You can find it in window title of Genymotion device. Next: 1. cd to your Android SDK dir 2. cd to platform-tools 3. ./adb connect 192.168.57.102:5555 where 192.168.57.102 is my IP address of Genymotion Virtual Device. You must put here your own
260143
Tech Stack: Java 1.6, JAXB, Spring 3, JAX-RS (RESTEasy), XSD Hello, I am using Spring with JAX-RS to create RestFul Webservice. Everything is working fine except that the generated responses contain the setters info e.g. ``` { ... "setName": true, "setId": true, "setAddress": true, "setAge": true, } ``` I don't know what might be causing this? How can I turn this off? Adi **UPDATE 1:** The PersonRequest class is generated by the JAXB and contains all the javax.xml.bind.annotation.\* annotations. ``` @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "", propOrder = { "personResponse " }) @XmlRootElement(name = "PersonResponse ") public class PersonResponse { @XmlElement(name = "Name", required = true) protected String name; @XmlElement(name = "Id", required = true) protected String id; // and the setters and getters } ``` and the Resource looks like this: ``` @Component @Path("/person") public class PersonImpl implements Person { @Override @GET @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML, "application/x-amf" }) @Path("v1") public PersonResponse getPerson() { .... .... } } ``` \*\* UPDATE 2 \*\* This happens only when Content-Type is json, in case of Content Type as 'xml', the setters are not returned. If that helps.
Use a `row-fluid` to nest a 12-column layout in your `span9` ``` <div class="span9"> <div class="row-fluid"> <div class="span6"> </div> <div class="span6"> </div> </div> </div> ```
260579
I have a lazy load module which needs to expose providers, so I am using the `forRoot` convention and returning the following code: ``` @NgModule({ imports: [RouterModule.forChild([ {path: "", component: LazyComponent}, ])], declarations: [LazyComponent], }) export class LazyModule { static forRoot() { return { ngModule: LazyModule, providers: [provider] }; } } ``` The problem is when I invoke the forRoot in my app module the lazy load does not work anymore. ( I don't see the separate chunk in my console ) ``` @NgModule({ declarations: [ AppComponent, HelloComponent ], imports: [ BrowserModule, AppRoutingModule, LazyModule.forRoot() <======== this stops the lazy load module ], bootstrap: [AppComponent] }) export class AppModule { } ``` From what I learned it should only make the providers singleton, why it does not work?
When you import a `LazyModule` in your `AppModule` imports array, it is not "lazy" anymore. A lazy module should only be referenced in a dedicated `RoutingModule`. If I understood you correctly, you would like to share a service between your lazy modules? If so, remove `LazyModule` from `AppModule`, create a `SharedModule`, and move the service you would like to share into the `providers` array in `SharedModule`. Import `SharedModule` in your `AppModule` with `forRoot`, and import your `SharedModule` without `forRoot` in your lazy modules.
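A minimal sketch of that `SharedModule` (the `SharedService` import is a placeholder for whatever service you want to share; the `ModuleWithProviders<SharedModule>` generic needs a recent Angular — older versions use the non-generic form):

```typescript
import { NgModule, ModuleWithProviders } from '@angular/core';
import { SharedService } from './shared.service'; // placeholder service

@NgModule({})
export class SharedModule {
  // Called from AppModule only, so a single SharedService instance
  // is registered at the root injector.
  static forRoot(): ModuleWithProviders<SharedModule> {
    return {
      ngModule: SharedModule,
      providers: [SharedService],
    };
  }
}
```

Lazy modules then import `SharedModule` plainly (without `forRoot`), so they reuse the root-provided instance instead of creating their own.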
260644
Hoping someone can shed some light on my problem. I followed the tutorial found here <http://msdn.microsoft.com/en-us/library/ms171925(v=VS.100).aspx#Y3500> and cannot get this to work. My code is as follows: ``` namespace CityCollectionCSharp { public partial class frmSwitch : Form { public frmSwitch() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { // TODO: This line of code loads data into the 'newCityCollectionDataSet.ClientTable' table. You can move, or remove it, as needed. this.clientTableTableAdapter.Fill(this.newCityCollectionDataSet.ClientTable); // TODO: This line of code loads data into the 'newCityCollectionDataSet.PropertyInformation' table. You can move, or remove it, as needed. this.propertyInformationTableAdapter.Fill(this.newCityCollectionDataSet.PropertyInformation); } private void propertyInformationDataGridView_CellDoubleClick(object sender, DataGridViewCellEventArgs e) { System.Data.DataRowView SelectedRowView; newCityCollectionDataSet.PropertyInformationRow SelectedRow; SelectedRowView = (System.Data.DataRowView)propertyInformationBindingSource.Current; SelectedRow = (newCityCollectionDataSet.PropertyInformationRow)SelectedRowView.Row; frmSummary SummaryForm = new frmSummary(); SummaryForm.LoadCaseNumberKey(SelectedRow.CaseNumberKey); SummaryForm.Show(); } private void comboBox1_SelectedIndexChanged(object sender, EventArgs e) { propertyInformationBindingSource.Filter = "ClientKey ='" + comboBox1.SelectedValue + "'"; } } } ``` That is for the first form and now the second form: ``` namespace CityCollectionCSharp { public partial class frmSummary : Form { public frmSummary() { InitializeComponent(); } private void Form2_Load(object sender, EventArgs e) { // TODO: This line of code loads data into the 'newCityCollectionDataSet.PropertyInformation' table. You can move, or remove it, as needed. 
this.propertyInformationTableAdapter.Fill(this.newCityCollectionDataSet.PropertyInformation); } internal void LoadCaseNumberKey(String CaseNumber) { propertyInformationTableAdapter.FillByCaseNumberKey(newCityCollectionDataSet.PropertyInformation, CaseNumber); } } } ``` I have the query set up as follows in the propertyInformationTableAdapter: ``` SELECT CaseNumberKey, BRTNumber, ParcelNumber, Premises, ClientKey, ParcelNum, Registry, TaxAcctName, StreetCode, CoverDate, OrderDate, Assessment, TaxFrom, TaxTo, TaxOpen, WaterOpen, WaterAcct, WaterTo, WaterFrom, AssessedBeg, AssessedDim, SumNotes, Legal, TotalWater, TotalTax, Type, OPARec, OPADoc, OPADocNum, Recital, Num, Name, Direction, Unit, ProductKey, DateFinished, Finished, Paid, BD, BDPaid, Search, Exam FROM PropertyInformation WHERE (CaseNumberKey = @CaseNumberKey) ``` I cannot figure out for the life of me why this does not work as prescribed. When I click on a record it passes both records in the table and always has the first one in the boxes I have. I only know this as I left the bindingnavigator. Any help would be much appreciated.
`&nbsp;` is an HTML entity. When doing `.text()`, all HTML entities are decoded to their character values. Instead of comparing using the entity, compare using the actual raw character: ``` var x = td.text(); if (x == '\xa0') { // Non-breakable space is char 0xa0 (160 dec) x = ''; } ``` Or you can also create the character from the character code manually, in its JavaScript-escaped form: ``` var x = td.text(); if (x == String.fromCharCode(160)) { // Non-breakable space is char 160 x = ''; } ``` More information about `String.fromCharCode` is available here: [fromCharCode - MDC Doc Center](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/String/fromCharCode) More information about character codes for different charsets is available here: [Windows-1252 Charset](http://en.wikipedia.org/wiki/Windows-1252), [UTF-8 Charset](http://en.wikipedia.org/wiki/UTF-8)
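To see the comparison in isolation, a small runnable sketch (a plain string stands in for the `td.text()` value):

```javascript
// .text() decodes &nbsp; to the non-breaking space character U+00A0
// (decimal 160), which is not equal to an ordinary space.
var text = '\u00a0';

console.log(text === ' ');                      // false
console.log(text === '\xa0');                   // true
console.log(text === String.fromCharCode(160)); // true

// Normalize: treat a lone non-breaking space as an empty string
var normalized = (text === '\xa0') ? '' : text;
console.log(normalized.length);                 // 0
```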
261065
I use PostgreSQL 9.4.1 My query: ``` copy(select * from city) to 'C:\\temp\\city.csv' copy(select * from city) to E'C:\\temp\\city.csv' ``` > ERROR: relative path not allowed for COPY to file SQL state: 42602
As with [this case](http://www.postgresql.org/message-id/CFF47E56EA077241B1FFF390344B5FC10ACB1C0C@webmail.begavalley.nsw.gov.au), it seems likely that you are attempting to use `copy` from a computer other than the one which hosts your database. `copy` does I/O from the database host machine's local file system only. If you have access to that filesystem, you can adjust your attempt accordingly. Otherwise, you can use the [\copy](http://www.postgresql.org/docs/9.4/static/app-psql.html#APP-PSQL-META-COMMANDS-COPY) command in `psql`.
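For example, from `psql` on the client machine (`\copy` is a psql meta-command written on one line, and it writes to the *client's* filesystem):

```
\copy (SELECT * FROM city) TO 'C:/temp/city.csv' WITH CSV HEADER
```

Unlike server-side `COPY`, this works without superuser rights and without access to the database host's disks.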
261073
I am trying to over-ride Google Maps driving directions zoom. Yes, I have seen another question here on stackoverflow that is identical; however, that solution is not working for me. Here is where I call a new DirectionsRenderer, and I define preserveViewport: true; ``` var dr = new google.maps.DirectionsRenderer({ map: map, draggable: false, preserveViewport: true }); ``` Here is where I call the routing service, and I define setOptions preserveviewport:true; ``` ds.route({ origin: from, destination: to, travelMode: mode }, function(result, status) { if (status == google.maps.DirectionsStatus.OK) { fitBounds = true; dr.setOptions({ preserveViewport: true }); dr.setDirections(result); } }); ``` However, this is not overriding the zoom functionality of directions service. Thoughts?
I had the same issues initially with RoR on Windows. (Everyone suggested I move to Linux/UNIX.) Still, I managed to install it on Windows. Rather than installing the Heroku Toolbelt (which breaks the Ruby and Rails already installed), install the heroku gem along with the foreman gem: > gem install heroku > gem install foreman Then you can use it easily.
263074
``` \documentclass[]{article} \usepackage[utf8x]{inputenc} \usepackage[english]{babel} \usepackage[T1]{fontenc} \usepackage{amsmath,amsthm,amsfonts,amssymb} \usepackage{graphicx} \usepackage{floatrow} \usepackage{subfig} \usepackage{microtype} \usepackage{braket} \usepackage{physics} \usepackage{bm} \usepackage{scalefnt} \usepackage{asymptote} \usepackage{asypictureB} \usepackage{float} \usepackage{makecell} \usepackage[all,cmtip]{xy} \usepackage{thmtools,thm-restate} \usepackage[shortlabels]{enumitem} \usepackage{xcolor,colortbl} \makeatletter \def\l@subsubsection#1#2{} \makeatother \usepackage{tikz} \usetikzlibrary{calc} \usepackage{zref-savepos} \usepackage{tabu} \newcommand*\circled[1]{\tikz[baseline=(char.base)]{% <---- BEWARE \node[shape=circle,draw,inner sep=2pt] (char) {#1};}} \begin{document} \begin{table}[H] \centering \begin{tabular}{c|c|c} \circled{A} & \circled{B} & \circled{C} \\ \hline \cellcolor{red!25}{p$_1$} & \cellcolor{red!25}{p$_1$} & \cellcolor{red!25}{p$_1$} \\ \hline \cellcolor{red!25}{p$_2$}& \cellcolor{red!25}{p$_2$} & \cellcolor{red!25}{p$_2$} \\ \hline \cellcolor{red!25}{p$_3$} & \cellcolor{red!25}{p$_3$} & \cellcolor{red!25}{p$_3$} \end{tabular} \end{table} \end{document} ``` The above code creates the following table. [![enter image description here](https://i.stack.imgur.com/TbYWz.png)](https://i.stack.imgur.com/TbYWz.png) Is it possible to make the cell coloring not merge with the neighboring cells such that the text in the cell has the colored background but there is still some white spacing between the neighboring cells in both vertical and horizontal directions?
With `{NiceTabular}` of `nicematrix`. ``` \documentclass{article} \usepackage{nicematrix,tikz} \newcommand*\circled[1]{\tikz[baseline=(char.base)]{% <---- BEWARE \node[shape=circle,draw,inner sep=2pt] (char) {#1};}} \begin{document} \begin{NiceTabular}{ccc}[cell-space-limits=1mm] \CodeBefore \begin{tikzpicture} \fill [red!15] (2-|1) rectangle (5-|4) ; \begin{scope} [color = white, line width = 2 pt] \draw (2-|1) -- (2-|4) (3-|1) -- (3-|4) (4-|1) -- (4-|4) ; \draw (2-|2) -- (5-|2) (2-|3) -- (5-|3) ; \end{scope} \draw (2-|1) -- (2-|4) (3-|1) -- (3-|4) (4-|1) -- (4-|4) ; \draw (1-|2) -- (5-|2) (1-|3) -- (5-|3) ; \draw [color = white, line width = 2 pt] (1-|1) rectangle (5-|4) ; \end{tikzpicture} \Body \circled{A} & \circled{B} & \circled{C} \\ p$_1$ & p$_1$ & p$_1$ \\ p$_2$ & p$_2$ & p$_2$ \\ p$_3$ & p$_3$ & p$_3$ \end{NiceTabular} \end{document} ``` You need several compilations (because `nicematrix` uses PGF/Tikz nodes under the hood). [![Output of the above code](https://i.stack.imgur.com/FzYqE.png)](https://i.stack.imgur.com/FzYqE.png)
263326
I am very new to iOS development and am writing an app with multiple views, each view having a tableview. For every view, I need to read a separate JSON URL and then display the results. All the tutorials that I have been checking for this read the data in ViewController.m, but since I have a separate URL for each view, can I generalize the code and write it somewhere else? Also, I would like to put it in some place from where the data is loaded the most effectively, i.e. there is not much waiting period in my app when I click a button. I am using the below link for reference: <http://www.raywenderlich.com/5492/working-with-json-in-ios-5> Thanks!
I could solve my problem by making my filter bean SessionScoped. I also don't bind the currently selected rows to my backing bean anymore. I get the selected rows using: ```java public void tableSelection(AjaxBehaviorEvent event) { UIExtendedDataTable dataTable = (UIExtendedDataTable) event.getComponent(); for (Object selectionKey : dataTable.getSelection()) { // process each selected row key here } } ``` It could also be achieved using [rowKeyVar](https://stackoverflow.com/a/1982048/1207146) to get the correct row index.
263409
I want to align the red surface to the center of the page. How can I do that? ``` @Composable fun Screen() { Row( modifier = Modifier.fillMaxSize(), horizontalArrangement = Arrangement.Center, verticalAlignment = Alignment.CenterVertically ) { Surface( color = Color.Red, modifier = Modifier.size(100.dp, 50.dp) ){} Surface( color = Color.Blue, modifier = Modifier.size(100.dp, 50.dp) ) {} } } ``` View: [![enter image description here](https://i.stack.imgur.com/QiY5X.png)](https://i.stack.imgur.com/QiY5X.png) Want this: [![enter image description here](https://i.stack.imgur.com/nriSo.png)](https://i.stack.imgur.com/nriSo.png)
If your blue view has a static size, just add `Spacer` of the same size on the other side. ``` Row( modifier = Modifier.fillMaxSize(), horizontalArrangement = Arrangement.Center, verticalAlignment = Alignment.CenterVertically ) { val blueViewWidth = 100.dp Spacer(Modifier.width(blueViewWidth)) RedView() BlueView(Modifier.size(blueViewWidth, 50.dp)) } ``` If blue view size depends on content, you can use it instead of `Spacer` and apply `Modifier.alpha(0)` - this will be less performant than creating a custom layout, but should be perfectly fine, unless your view has a really huge layout to measure. ``` Row( modifier = Modifier.fillMaxSize(), horizontalArrangement = Arrangement.Center, verticalAlignment = Alignment.CenterVertically ) { BlueView(Modifier.alpha(0f)) RedView() BlueView() } ```
263554
I am trying to use `cdk-virtual-scroll-viewport` in my chat application. Sadly, it doesn't render anything. When I try to use the normal `*ngFor` it works fine, but when I use `*cdkVirtualFor` it does not display anything. Please refer to the code below for more details. ``` chatbox.component.ts--- <cdk-virtual-scroll-viewport itemSize="100" class="cdk"> <div *cdkVirtualFor="let item of chat_history" class="chat-data"> <ul> <li class="me">{{item.message}}</li> </ul> </div> </cdk-virtual-scroll-viewport> app.module.ts------ @NgModule({ declarations: [AppComponent, ChatboxComponent], imports: [BrowserModule, AppRoutingModule, HttpClientModule, ScrollingModule], providers: [{provide: APP_BASE_HREF, useValue: '/consumer/'}], bootstrap: [AppComponent], }) chatboxcomponent.css----- .form-container .chat-data { height: 100px; } .form-container .cdk { height: 500px; } ``` Please let me know what I am doing wrong. Trust me on this, but chat_history does have data. I have checked it.
One quick option is to apply [`numpy.flatnonzero`](https://numpy.org/doc/stable/reference/generated/numpy.flatnonzero.html) to each row: ``` import numpy as np df.apply(np.flatnonzero, axis=1) 0 [0, 1] 1 [0] 2 [1] 3 [0, 1, 2, 5, 7, 8] dtype: object ``` If you care about performance, here is a pure numpy option (caveat for this option is if the row doesn't have any non zero values, it will be ignored in the result. Choose the method that works for you depending on your need): ``` idx, idy = np.where(df != 0) np.split(idy, np.flatnonzero(np.diff(idx) != 0) + 1) [array([0, 1], dtype=int32), array([0], dtype=int32), array([1], dtype=int32), array([0, 1, 2, 5, 7, 8], dtype=int32)] ```
263572
I am trying build a dictionary of Expressions that have different input parameter types. I am trying to store the type of the parameter because later down the road I plan to use Reflection to discover a method on the type. Here is the code that creates the dictionary and a generic Add function I created to add entries to it: ``` public class LoadEntityQuery : IQuery<LoadEntityQueryResult> { public IDictionary<Type, Expression<Func<Type, bool>>> Entities { get; set; } public LoadEntityQuery() { Entities = new Dictionary<Type, Expression<Func<Type, bool>>>(); } public void Add<T>(Expression<Func<T, bool>> where = null) where T : Entity { Expression<Func<Type, bool>> _lambda = null; if (where != null) { ParameterExpression param = Expression.Parameter(typeof(T), where.Parameters[0].Name); var body = Expression.Invoke(where, param); _lambda = Expression.Lambda<Func<Type, bool>>(body, param); } Entities.Add(typeof(T), _lambda); } } ``` The body of the new method is created properly. The issue is when I try to create the new Lambda expression with the type from the expression being passed in, I receive this error: ParameterExpression of type 'TestNamespace.TestClass' cannot be used for delegate parameter of type 'System.Type' Does anybody have an idea as to what I can do in this situation? Like I said before, at some point later I am going to loop through this dictionary to do some reflective programming on each entry. If there is a better way to do this I am all ears. As an example of what I am trying to do, I store the expressions for Where clauses for POCO objects that need to be initialized: ``` LoadEntityQuery _query = new LoadEntityQuery(); _query.Add<PayrollLocation>(); _query.Add<PayrollGroupBU>(); _query.Add<PersonnelPosition>(t => t.DataSet == MasterDataSet); _query.Add<EmployeeStatus>(); _query.Add<PayrollGrade>(); ``` This list of Entities will be different for each app. 
The idea is to collect all the entities and Where clause for each and discover a certain method using reflection on each one. (e.g. PayrollLocation has a GetPayrollLocationsQuery() method, PayrollGroupBU has a GetPayrollGroupBUQuery() method...). The Add method is generic in order for me to make use of the lambda expression in the calling code. Thanks, Jason
Looking closely at your code, the expression you generate has some problems. See my explanation at the top of [this answer](https://stackoverflow.com/questions/5744764/linq-to-sql-throwing-a-stackoverflowexception/5751931#5751931) to explain one of them, it's the same issue here. You're creating a new lambda where the parameter instance you create here is not used in the body. The bigger problem is that your expressions are just wrong for what you appear to be trying to do. As far as I can tell, you are just trying to create a mapping from entity types to functions that take an entity of that type and returns a bool. `Type -> Expression<Func<TEntity, bool>>`. The expression you build just does not work. You should make the dictionary store non-generic lambdas that way you can store these functions easily without performing conversions or rebuilding the expressions. You will not be able to store them as generic lambdas here. Then cast to the generic lambda when you access them. I'd put this in a separate class to manage the casting and refactor your code to this: ``` // add all necessary error checking where needed and methods public class EntityPredicateDictionary { private Dictionary<Type, LambdaExpression> dict = new Dictionary<Type, LambdaExpression>(); public Expression<Func<TEntity, bool>> Predicate<TEntity>() where TEntity : Entity { return (Expression<Func<TEntity, bool>>)dict[typeof(TEntity)]; } public LambdaExpression Predicate(Type entityType) { return dict[entityType]; } internal void Add<TEntity>(Expression<Func<TEntity, bool>> predicate) where TEntity : Entity { dict.Add(typeof(TEntity), predicate); } } public class LoadEntityQuery : IQuery<LoadEntityQueryResult> { public EntityPredicateDictionary Entities { get; private set; } public LoadEntityQuery() { Entities = new EntityPredicateDictionary(); } public void Add<TEntity>(Expression<Func<TEntity, bool>> predicate = null) where TEntity : Entity { Entities.Add(predicate); } } // then to access the 
predicates LoadEntityQuery query = ...; var pred1 = query.Entities.Predicate<Entity1>(); var pred2 = query.Entities.Predicate(typeof(Entity2)); ```
263740
I have a web view which requests a web page; that page has some action events which may reply to the iOS app as HTML or JSON. How will the app know which response type was sent, so that the response can be handled within the app? I have tried reading static HTML content in the web view using its delegate; how can I handle it when the content is dynamic? Code logic: 1. On load of the controller, request the page with a URL in the web view; the user will interact 2. Check the response type: if JSON then step 4, else step 3 3. Load the web page with a different URL 4. Deserialize the JSON data and store it in the native DB
**Method 1** First, you need to set and handle the `UIWebView` delegate methods in your `UIViewController`. Then, in the `webView: shouldStartLoadWithRequest: navigationType:` method, use the following ``` - (BOOL)webView:(UIWebView *)webView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType { NSError *error; NSString *responseString = [NSString stringWithContentsOfURL:request.URL encoding:NSASCIIStringEncoding error:&error]; //Parse the string here to confirm if it's JSON or HTML //In case of JSON, stop the loading of UIWebview if(json) { return NO; } return YES; } ``` **Note:** This will take a performance hit in case of HTML content, as the response will be loaded twice: once in the `stringWithContentsOfURL` method and a second time when the web view loads. **Method 2** To avoid the double loading you can let the web view load, irrespective of the content type, and then get the loaded content in the `webViewDidFinishLoad` method. *For this you may need to perform some changes on the server end as well.* Suppose your HTML page with JSON is structured as: ``` <html> <body> <div id="json" style="display: none">{"jsonvalue":"{{result}}"}</div> </body> </html> ``` Then in `webViewDidFinishLoad`: ``` - (void)webViewDidFinishLoad:(UIWebView *)webView { NSString *res = [webView stringByEvaluatingJavaScriptFromString:@"document.getElementById('json').innerHTML"]; //Use the JSON string as required } ```
263880
```
import networkx as nx  #@UnresolvedImport
from networkx.algorithms import bipartite  #@UnresolvedImport
from operator import itemgetter
from random import choice

corpus = open('/home/abehl/Desktop/Corpus/songs.wx', 'r')

ALPHA = 1.5
EPSILON = 0.5

song_nodes = []
word_nodes = []
edges = zip(song_nodes, word_nodes)
B = nx.Graph(edges)

degX, degY = bipartite.degrees(B, word_nodes)
sortedSongNodesByDegree = sorted(degX.iteritems(), key=itemgetter(1))
print sortedSongNodesByDegree

song_nodes2 = []
word_nodes2 = []
Vc = list(set(word_nodes))
edges2 = zip(song_nodes2, word_nodes2)
C = nx.Graph(edges2)

for songDegreeTuple in sortedSongNodesByDegree:
    for i in range(songDegreeTuple[1]):
        connectedNodes = C.neighbors(songDegreeTuple[0])
        VcDash = [element for element in Vc if element not in connectedNodes]
        calculateBestNode(VcDash)

def calculateBestNode(VcDashsR):
    nodeToProbailityDict = {}
    for node in VcDashsR:
        degreeOfNode = bipartite(C, [node])[1][node]
        probabiltyForNode = (degreeOfNode ** ALPHA) + EPSILON
        nodeToProbailityDict[node] = probabiltyForNode
```

In the above Python program, the interpreter is throwing the following error, even though the function `calculateBestNode` is defined in the program. Am I missing something here?

> NameError: name 'calculateBestNode' is not defined

Pardon me for posting a large program here.
A Python program is executed from top to bottom, so you need to define the function before you use it. A common alternative is putting all the code that is automatically executed in a `main` function, and adding at the bottom of the file:

```
if __name__ == '__main__':
    main()
```

This has the additional advantage that you have now written a module that can be imported by others.
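A minimal sketch of this pattern (the function names are made up for illustration): because the `def` statements run before the call at the bottom, no `NameError` is raised.

```python
def calculate_best_node(nodes):
    # Defined before any code that calls it, so the name exists at call time.
    return max(nodes)

def main():
    # All automatically executed code lives here instead of at module level.
    return calculate_best_node([3, 1, 2])

if __name__ == '__main__':
    print(main())  # -> 3
```

If the call were placed at module level above the `def` statement, Python would raise the same `NameError` as in the question.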
263984
Everything should be fine, but it's not. No console errors, nothing. The result of my object is displayed correctly in the console, and my marker is set up correctly if I manually type in coordinates. However, if I pass the data to `LatLng` from my object, zero results.

This in the console returns "46.00,45.00":

```
var data = jQuery.parseJSON(data);
console.log(data[0].coords);
```

and this sets up my marker for Google Maps:

```
var latLng = new google.maps.LatLng(46.00, 45.00);
var marker = new google.maps.Marker({
    position: latLng,
    map: map
});
```

BUT! If I write it like this, nothing happens:

```
var data = jQuery.parseJSON(data);
var latLng = new google.maps.LatLng((data[0].coords));
var marker = new google.maps.Marker({
    position: latLng,
    map: map
});
```
You're just passing one argument with `data[0].coords` as a string, when you need to pass 2 arguments as numbers, as specified in the [documentation](https://developers.google.com/maps/documentation/javascript/reference#LatLng):

```
LatLng(lat:number, lng:number, noWrap?:boolean)
```

You'll need to split up the `coords` variable before:

```
var coordinates = data[0].coords.split(",");
var latLng = new google.maps.LatLng(coordinates[0], coordinates[1]);
```
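A quick sketch of just the parsing step (the `parseCoords` helper is invented for illustration; the Maps API itself is not needed to demonstrate it), converting the split parts to numbers explicitly:

```javascript
// Turn a "lat,lng" string such as "46.00,45.00" into numeric parts.
function parseCoords(coords) {
  var parts = coords.split(",");
  return { lat: Number(parts[0]), lng: Number(parts[1]) };
}

var c = parseCoords("46.00,45.00");
console.log(c.lat, c.lng); // 46 45
```

The two parsed numbers can then be passed as the two separate arguments that `google.maps.LatLng` expects.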
264066
I am building an Angular 2 application and bundling it with webpack. At the moment my application is still small, but the webpack task already takes around 10 seconds. Is it possible to optimize my webpack config or the TypeScript compilation options to improve the compilation and packaging duration?

This is the webpack config I use:

```
var webpack = require('webpack');
var LiveReloadPlugin = require('webpack-livereload-plugin');

module.exports = {
    entry: __dirname + '/assets/app/app.ts',
    output: {
        filename: 'myApp.bundle.js',
        path: __dirname + '/build/'
    },
    // Turn on sourcemaps
    devtool: 'source-map',
    resolve: {
        extensions: ['.ts', '.js']
    },
    plugins: [
        new LiveReloadPlugin({
            appendScriptTag: true
        }),
        // Fixes angular 2 warning
        new webpack.ContextReplacementPlugin(
            /angular(\\|\/)core(\\|\/)(esm(\\|\/)src|src)(\\|\/)linker/,
            __dirname
        )
    ],
    module: {
        rules: [{
                enforce: 'pre',
                test: /\.js$/,
                loader: "source-map-loader"
            },
            {
                enforce: 'pre',
                test: /\.tsx?$/,
                use: "ts-loader"
            }
        ]
    }
}
```

And the tsconfig:

```
{
    "compilerOptions": {
        "target": "ES5",
        "module": "commonjs",
        "moduleResolution": "node",
        "sourceMap": true,
        "pretty": true,
        "emitDecoratorMetadata": true,
        "experimentalDecorators": true,
        "noUnusedLocals": false,
        "removeComments": true,
        "skipLibCheck": true,
        "strictNullChecks": false,
        "baseUrl": "./src",
        "typeRoots": ["node_modules/@types"],
        "types": [
            "core-js",
            "systemjs"
        ],
        "outDir": "./build"
    },
    "exclude": [
        "node_modules"
    ]
}
```

**UPDATE** *(see my answer for the fixed webpack.config)*

I gave the DLL webpack plugin suggested by @jpwiddy a try, compiling Angular in a separate build, in order to rebuild only the application code during development and gain considerable compilation time. However, after inspecting the output JS, the file size is almost the same and there is still Angular code inside.
Here is the new webpack config file for the Angular sources:

```
var webpack = require('webpack');

module.exports = {
    entry: {
        angular: [
            '@angular/platform-browser',
            '@angular/platform-browser-dynamic',
            '@angular/core',
            '@angular/common',
            '@angular/compiler',
            '@angular/http',
            '@angular/router',
            '@angular/forms'
        ]
    },
    output: {
        filename: 'ng2.dll.js',
        path: __dirname + '/build/',
        library: 'ng2'
    },
    plugins: [
        // Fixes angular 2 warning
        new webpack.ContextReplacementPlugin(
            /angular(\\|\/)core(\\|\/)(esm(\\|\/)src|src)(\\|\/)linker/,
            __dirname
        ),
        new webpack.DllPlugin({
            name: 'ng2',
            path: __dirname + '/build/ng2.json'
        })
    ]
}
```

And the updated webpack config for the application:

```
var webpack = require('webpack');
var LiveReloadPlugin = require('webpack-livereload-plugin');

module.exports = {
    entry: __dirname + '/assets/app/app.ts',
    output: {
        filename: 'myApp.bundle.js',
        path: __dirname + '/build/'
    },
    // Turn on sourcemaps
    devtool: 'source-map',
    resolve: {
        extensions: ['.ts', '.js']
    },
    plugins: [
        new LiveReloadPlugin({
            appendScriptTag: true
        }),
        // Fixes angular 2 warning
        new webpack.ContextReplacementPlugin(
            /angular(\\|\/)core(\\|\/)(esm(\\|\/)src|src)(\\|\/)linker/,
            __dirname
        ),
        new webpack.DllReferencePlugin({
            context: __dirname + '/build/',
            manifest: require(__dirname + '/build/ng2.json')
        })
    ],
    module: {
        rules: [{
                enforce: 'pre',
                test: /\.js$/,
                loader: "source-map-loader"
            },
            {
                enforce: 'pre',
                test: /\.tsx?$/,
                use: "ts-loader"
            }
        ]
    }
}
```

Here is some of the Angular code I found in my application's JS output:

```
_TsEmitterVisitor.prototype.visitBuiltintType = function (type, ctx) {
    var typeStr;
    switch (type.name) {
        case __WEBPACK_IMPORTED_MODULE_2__output_ast__["R" /* BuiltinTypeName */].Bool:
            typeStr = 'boolean';
            break;
        case __WEBPACK_IMPORTED_MODULE_2__output_ast__["R" /* BuiltinTypeName */].Dynamic:
            typeStr = 'any';
            break;
        case __WEBPACK_IMPORTED_MODULE_2__output_ast__["R" /* BuiltinTypeName */].Function:
            typeStr = 'Function';
            break;
        case __WEBPACK_IMPORTED_MODULE_2__output_ast__["R" /* BuiltinTypeName */].Number:
            typeStr = 'number';
            break;
        case __WEBPACK_IMPORTED_MODULE_2__output_ast__["R" /* BuiltinTypeName */].Int:
            typeStr = 'number';
            break;
        case __WEBPACK_IMPORTED_MODULE_2__output_ast__["R" /* BuiltinTypeName */].String:
            typeStr = 'string';
            break;
        default:
            throw new Error("Unsupported builtin type " + type.name);
    }
    ctx.print(typeStr);
    return null;
};
```

Did I miss something in the new config that would prevent webpack from including the Angular sources in the output?

Thank you
One great way I've personally sped up the webpack build process is implementing DLLs within the build.

Webpack works by analyzing your code for `require`s and `import`s, then building a table of all of your module dependencies and links to where those files can be found.

The DLL plugin improves on this: when you register your dependencies with a DLL, then every time those dependencies change (which should be very infrequent), you build a DLL (made up of a JavaScript bundle and a JSON manifest file) that wraps all of those dependencies in a single package. That package is then referenced when pulling those dependencies into the app.

A quick example:

```
entry: {
    angular: [
        '@angular/platform-browser',
        '@angular/platform-browser-dynamic',
        '@angular/core',
        '@angular/common',
        '@angular/compiler',
        '@angular/http',
        '@angular/router',
        '@angular/forms'
    ],
    bs: [
        'bootstrap',
        'ng-bootstrap'
    ]
},
output: {
    filename: '[name].dll.js',
    path: outputPath,
    library: '[name]',
},
plugins: [
    new webpack.DllPlugin({
        name: '[name]',
        path: join(outputPath, '[name].json')
    })
]
```

... and then referenced like so:

```
{
    plugins: [
        new webpack.DllReferencePlugin({
            context: process.cwd(),
            manifest: require(join(outputPath, 'angular.json'))
        }),
        new webpack.DllReferencePlugin({
            context: process.cwd(),
            manifest: require(join(outputPath, 'bs.json'))
        }),
    ]
}
```
264212
I just set up SSH on my FreeNAS server. However, every time I connect I get kicked out right away. I get the following message:

```
Connection to (my server name) closed by remote host.
```

Does anybody know why this happens? This is the server log for the SSH service:

```
Oct 9 09:35:52 sshd[2389]: Did not receive identification string from 204.16.252.112
Oct 9 09:36:22 sshd[2390]: Did not receive identification string from 69.163.149.200
Oct 9 09:36:44 sshd[2391]: SSH: Server;Ltype: Version;Remote: 192.168.1.1-55235;Protocol: 2.0;Client: OpenSSH_5.1p1 Debian-5ubuntu1
Oct 9 09:36:54 sshd[2391]: Accepted password for "username" from 192.168.1.1 port 55235 ssh2
```
It could happen because:

* the server only accepts an SSH protocol version your client doesn't provide, or
* the server only allows public-key authentication (RSA, DSA) and you provided no key.

Try setting the verbose flag on your client (`ssh -v`); it should give you some hints. Otherwise, check the server log.
264373
I have the following class defined:

```
<TypeConverter(GetType(ExpandableObjectConverter))>
<DataContract()>
Public Class Vector3

    <DataMember()> Public Property X As Double
    <DataMember()> Public Property Y As Double
    <DataMember()> Public Property Z As Double

    Public Overrides Function ToString() As String
        Return String.Format("({0}, {1}, {2})", Format(X, "0.00"), Format(Y, "0.00"), Format(Z, "0.00"))
    End Function

End Class
```

Using the `DataContractJsonSerializer` I receive the following JSON as expected:

```
{
    "Vector": {
        "X": 1.23,
        "Y": 4.56,
        "Z": 7.89
    }
}
```

However, JSON.NET produces:

```
{
    "Vector": "(1.23, 4.56, 7.89)"
}
```

If I remove the `ExpandableObjectConverter` attribute from the class, JSON.NET produces the expected result (the same as `DataContractJsonSerializer`). Unfortunately, I need the `ExpandableObjectConverter` so that the class works with a property grid.

Is there any way to tell JSON.NET to ignore `ExpandableObjectConverter`s? I prefer to use JSON.NET instead of `DataContractJsonSerializer` because it is much easier to serialize enums to their string representations.
Although I appreciate Rivers' answer, I am really looking for a solution that ignores all expandable object converters automatically (like the `DataContractJsonSerializer` does), rather than building a custom `JsonConverter` for each offending class. I have found the following two solutions:

1. Use the built-in `DataContractJsonSerializer` instead (at the expense of some other conveniences of JSON.NET).
2. Use a custom `ExpandableObjectConverter` (see below).

Since the default `ExpandableObjectConverter` supports converting to/from string, JSON.NET serializes the class as a string. To counteract this I have created my own expandable object converter which does not allow conversions to/from string:

```
Imports System.ComponentModel

Public Class SerializableExpandableObjectConverter
    Inherits ExpandableObjectConverter

    Public Overrides Function CanConvertTo(context As System.ComponentModel.ITypeDescriptorContext, destinationType As System.Type) As Boolean
        If destinationType Is GetType(String) Then
            Return False
        Else
            Return MyBase.CanConvertTo(context, destinationType)
        End If
    End Function

    Public Overrides Function CanConvertFrom(context As System.ComponentModel.ITypeDescriptorContext, sourceType As System.Type) As Boolean
        If sourceType Is GetType(String) Then
            Return False
        Else
            Return MyBase.CanConvertFrom(context, sourceType)
        End If
    End Function

End Class
```

Applying the above converter works flawlessly with JSON.NET and with the property grid control!
266120
I am just tinkering with Lua after hearing that it is more versatile than Python, so I tried to make a countdown to one year, in the form DDD:HR:MN:SC. If anyone could give me an example it would be much appreciated!
You can use [`os.date`](https://www.lua.org/manual/5.3/manual.html#pdf-os.date) to retrieve the current date as a table, then just build the difference by subtracting component-wise like this:

```lua
local function print_remaining(target)
   local current = os.date("*t")
   print(string.format("%i years, %i months and %i days",
      target.year - current.year,
      target.month - current.month,
      target.day - current.day
   ))
end

local function countdown(target)
   while true do
      print_remaining(target)
      os.execute('sleep 1')
   end
end

countdown {year = 2019, month = 12, day = 25}
```

If you want it to be cooler, of course you'd have to adjust which components are shown depending on how much time is left.

---

Inspired by @csaar's answer, I changed a few things and ended up with this:

```lua
local function split(full, step, ...)
   if step then
      return math.floor(full % step), split(math.floor(full / step), ...)
   else
      return full
   end
end

local function countdown(target)
   local s, m, h, d = split(os.difftime(os.time(target), os.time()), 60, 60, 24)
   print(string.format("%i days, %i:%i:%i", d, h, m, s))
   if os.execute('sleep 1') then
      return countdown(target)
   end
end

countdown {year = 2019, month = 12, day = 25}
```

The `split` function is a bit more complex than it needs to be for this example, but I thought it'd be a nice chance to showcase how nicely some things can be expressed with variadic recursive functions in Lua.
266301
I am on a website. It has jQuery and is sending some requests via JavaScript out to a PHP page. Is there any way to see what data it is sending out from my computer and/or which URLs it is talking to? I am using Firefox and can install software if any is needed.

EDIT - I have downloaded Firebug and have the page loaded. Any idea which option I need to select?
Use the Firebug browser add-on; its Net panel will show background XHR requests, including the URLs they are sent to and the data they carry.
266320
I have created a consumable in-app purchase in iTunes Connect as shown in the following figure.

[![enter image description here](https://i.stack.imgur.com/msces.png)](https://i.stack.imgur.com/msces.png)

After that I created a sandbox user and I verified that app ID. When I try to check the products using the following code:

```
func requestProducts(forIds ids: Set<String>) {
    productRequest.cancel()
    productRequest = SKProductsRequest(productIdentifiers: ids)
    productRequest.delegate = self
    productRequest.start()
}

func productsRequest(_ request: SKProductsRequest, didReceive response: SKProductsResponse) {
    self.products = response.products
    print(products.count)
    if products.count == 0 {
        requestProducts(forIds: ["com.iapcourse.meal"])
    } else {
        delegate?.iapProductsLoaded()
    }
}
```

I receive zero products from the delegate method.
The status of your in-app purchase needs to be in the "Ready to Submit" stage before you can test in the sandbox. A common thing to miss is the preview image; for testing, you can upload anything you want so that the metadata is complete.

Also, you need to make sure you have the "Paid Applications" agreement signed - it's not very clear anywhere in App Store Connect that this is required, so it's sometimes overlooked.

A really good article that goes over a checklist of everything you need to configure is: [Configuring In-app Products is Hard](https://www.revenuecat.com/2018/10/11/configuring-in-app-products-is-hard)
267043
I am getting an error while trying to programmatically remove products from some categories.

Error:

> Fatal error: Uncaught TypeError: Argument 3 passed to Mage\_Catalog\_Model\_Resource\_Abstract::\_canUpdateAttribute() must be of the type array, null given, called in /xxx/public\_html/app/code/core/Mage/Eav/Model/Entity/Abstract.php on line 1225 and defined in /xxx/public\_html/app/code/core/Mage/Catalog/Model/Resource/Abstract.php:540
>
> Stack trace:
>
> #0 /xxx/public\_html/app/code/core/Mage/Eav/Model/Entity/Abstract.php(1225): Mage\_Catalog\_Model\_Resource\_Abstract->\_canUpdateAttribute(Object(Mage\_Catalog\_Model\_Resource\_Eav\_Attribute), '2019-01-14 00:0...', NULL)
>
> #1 /xxx/public\_html/app/code/core/Mage/Eav/Model/Entity/Abstract.php(1123): Mage\_Eav\_Model\_Entity\_Abstract->\_collectSaveData(Object(Mage\_Catalog\_Model\_Product))
>
> #2 /xxx/public\_html/app/code/core/Mage/Core/Model/Abstract.php(318): Mage\_Eav\_Model\_Entity\_Abstract->save(Object(Mage\_Catalog\_Model\_Product))
>
> #3 /home/admin/domains/gr in /xxx/public\_html/app/code/core/Mage/Catalog/Model/Resource/Abstract.php on line 540

And my code:

```
$products = Mage::getModel('catalog/category')->setStoreId($storeId)->load($specialCategoryId)
    ->getProductCollection()
    ->addAttributeToSelect(array('special_to_date', 'category_ids'))
    ->addAttributeToFilter('special_to_date', array('lteq' => date("Y-m-d")));

foreach ($products as $product) {
    $oldCategories = $product->getCategoryIds();
    $newCategories = array_diff($oldCategories, $specialCategoryIds);
    if ($oldCategories != $newCategories) {
        $_product = Mage::getModel('catalog/product')->load($product->getEntityId());
        $_product->setCategoryIds($newCategories);
        Mage::log('Product ID: ' . $product->getEntityId(), null, 'RemoveFromCategoryLog.log', true);
        try {
            $_product->save();
        } catch (Exception $e) {
            Mage::log($e->getMessage(), null, 'RemoveFromCategoryLog.log', true);
        }
    }
}
```

Loading the product did not solve it. And as there are lots of products, I am reluctant to use the category API.
**Using a Custom Module (Recommended)**

Create a simple module and add a helper class, then call it in your phtml file. You can use the category `CollectionFactory` class and select all attributes by using a star (`*`) symbol in the `addAttributeToSelect` method.

You can try this code example in your helper class:

```
protected $_categoryFactory;

public function __construct(
    // ...
    \Magento\Catalog\Model\ResourceModel\Category\CollectionFactory $collecionFactory,
) {
    // ...
    $this->_categoryFactory = $collecionFactory;
}

public function yourFunctionName()
{
    $catId = 5; // category id
    $collection = $this->_categoryFactory
        ->create()
        ->addAttributeToSelect('*')
        ->addAttributeToFilter('entity_id', ['eq' => $catId])
        ->setPageSize(1);
    $catObj = $collection->getFirstItem();
    $catData = $catObj->getData(); // dump this line to check all data
    // ...
    return $catObj->getPopularCategory();
}
```

Call the helper method in your phtml file:

```
$helper = $this->helper('{Vendor}\{Module}\Helper\Data');
$values = $helper->yourFunctionName();
```

**Using the Object Manager (Not Recommended)**

```
$objectManager = \Magento\Framework\App\ObjectManager::getInstance();
$catId = 5; // category id
$collection = $objectManager->create('Magento\Catalog\Model\ResourceModel\Category\CollectionFactory')
    ->create()
    ->addAttributeToSelect('*')
    ->addAttributeToFilter('entity_id', ['eq' => $catId])
    ->setPageSize(1);
$catObj = $collection->getFirstItem();
$catData = $catObj->getData(); // dump this line to check all data
echo $catObj->getPopularCategory();
```
267590
I'm trying to set my own **stopword list** for a MySQL (5.1.54) fulltext index, but I have run into configuration problems. I tried the following steps:

1. I set the system variable in */etc/mysql/my.cnf* by adding the line:

   ```
   ft_stopword_file = "/home/buli/stopwords.txt"
   ```
2. I created the file */home/buli/stopwords.txt* with words that should be ignored.
3. Now when I restart MySQL using *sudo service mysql restart*, there is an entry in */var/log/mysql/error.log* saying:

   ```
   /usr/sbin/mysqld: File '/home/buli/stopwords.txt' not found (Errcode: 13)
   111218 19:07:18 [Note] Event Scheduler: Loaded 0 events
   111218 19:07:18 [Note] /usr/sbin/mysqld: ready for connections.
   ```

Running *perror 13* translates it as a **permission denied** problem. The file, however, exists, and I even gave it full permissions:

```
$ ls -l /home/buli/stopwords.txt
-rwxrwxrwx 1 buli buli 6 2011-12-18 18:41 /home/buli/stopwords.txt
```

Could there be any other, MySQL-specific reason for this error (as the file permissions seem OK)?
Are you running MySQL on a distro that uses AppArmor, chroot, etc. to secure it? For AppArmor you have to update /etc/apparmor.d/usr.sbin.mysqld (or similar); for chroot you have to copy the file. The best thing is to put the stopword file below the data directory and give a relative path only.
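For the AppArmor case, a sketch of the change (the path comes from the question; the exact profile file name may differ by distro) is adding a read rule inside the mysqld profile block:

```
# In /etc/apparmor.d/usr.sbin.mysqld, inside the profile's rule block:
/home/buli/stopwords.txt r,
```

followed by reloading the profile, e.g. with something like `sudo service apparmor reload`, before restarting MySQL.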
268776
First of all, sorry, because I am so new at C#, and I decided to ask this question because I have been stuck on this for hours. I have a GUI that works with Google Cloud Speech services and performs a speech-to-text operation. I share with you the whole method that runs when a button is clicked:

```
private async Task<object> StreamingMicRecognizeAsync(int seconds)
{
    if (NAudio.Wave.WaveIn.DeviceCount < 1)
    {
        Console.WriteLine("No microphone!");
        return -1;
    }

    GoogleCredential googleCredential;
    using (Stream m = new FileStream(@"..\..\credentials.json", FileMode.Open))
        googleCredential = GoogleCredential.FromStream(m);

    var channel = new Grpc.Core.Channel(SpeechClient.DefaultEndpoint.Host,
        googleCredential.ToChannelCredentials());
    var speech = SpeechClient.Create(channel);
    var streamingCall = speech.StreamingRecognize();

    // Write the initial request with the config.
    await streamingCall.WriteAsync(
        new StreamingRecognizeRequest()
        {
            StreamingConfig = new StreamingRecognitionConfig()
            {
                Config = new RecognitionConfig()
                {
                    Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
                    SampleRateHertz = 48000,
                    LanguageCode = "es-ES",
                },
                InterimResults = true,
            }
        });

    // Read from the microphone and stream to API.
    object writeLock = new object();
    bool writeMore = true;
    var waveIn = new NAudio.Wave.WaveInEvent();
    waveIn.DeviceNumber = 0;
    waveIn.WaveFormat = new NAudio.Wave.WaveFormat(48000, 1);
    waveIn.DataAvailable +=
        (object sender, NAudio.Wave.WaveInEventArgs args) =>
        {
            lock (writeLock)
            {
                if (!writeMore) return;
                streamingCall.WriteAsync(
                    new StreamingRecognizeRequest()
                    {
                        AudioContent = Google.Protobuf.ByteString
                            .CopyFrom(args.Buffer, 0, args.BytesRecorded)
                    }).Wait();
            }
        };

    // Print responses as they arrive.
    Task printResponses = Task.Run(async () =>
    {
        while (await streamingCall.ResponseStream.MoveNext(default(CancellationToken)))
        {
            foreach (var result in streamingCall.ResponseStream.Current.Results)
            {
                foreach (var alternative in result.Alternatives)
                {
                    Console.WriteLine(alternative.Transcript);
                    //Textbox1.Text = alternative.Transcript;
                }
            }
        }
    });

    waveIn.StartRecording();
    Console.WriteLine("Speak now.");
    Result_Tone.Text = "Speak now:\n\n";
    await Task.Delay(TimeSpan.FromSeconds(seconds));

    // Stop recording and shut down.
    waveIn.StopRecording();
    lock (writeLock) writeMore = false;
    await streamingCall.WriteCompleteAsync();
    await printResponses;
    return 0;
}
```

My problem is that I want to update the content of the `Textbox1` control, but it doesn't work. It writes the output perfectly into the console with the line `Console.WriteLine(alternative.Transcript);`, but not into my textbox. If someone could help, I would appreciate it very much.
The problem is that you're using `Task.Run`, which means your code will be running on a thread-pool thread, not the UI thread. Instead of calling `Task.Run()`, just move that code into a separate async method:

```
async Task DisplayResponses(IAsyncEnumerator<StreamingRecognizeResponse> responses)
{
    while (await responses.MoveNext(default(CancellationToken)))
    {
        foreach (var result in responses.Current.Results)
        {
            foreach (var alternative in result.Alternatives)
            {
                Textbox1.Text = alternative.Transcript;
            }
        }
    }
}
```

Then call that method *directly* (without `Task.Run`) from code that's already on the UI thread (e.g. an event handler). The async machinery will make sure that after the `await` expression, you're back on the UI thread (the same *synchronization context*). So the assignment to the `Text` property will occur on the UI thread, and all should be well.

For example:

```
// This would be registered as the event handler for a button
async void HandleButtonClick(object sender, EventArgs e)
{
    var stream = client.StreamingRecognize();
    // Send the initial config request
    await stream.WriteAsync(...);
    // Presumably you want to send audio data...
    StartSendingAudioData(stream);
    await DisplayResponses(stream.ResponseStream);
}
```
269300
I have the following method, which takes GPS coordinates as strings and converts them to doubles with a maximum of 6 decimal places. Now I'm trying to randomize the decimal digits, starting from the first one.

```
public void randomizeCoordinate(String latString, String lonString) {
    double lat = Double.parseDouble(latString);
    double lon = Double.parseDouble(lonString);

    DecimalFormat df = new DecimalFormat("#.######");
    df.setRoundingMode(RoundingMode.HALF_EVEN);
    for (Number n : Arrays.asList(lat, lon)) {
        Double d = n.doubleValue();
        System.out.println(df.format(d));
    }
}
```

For example, if I have 2.34 I want the randomized value to be something like 2.493473, 2.294847, or 2.346758. The first decimal digit, which in this case is 3, should only change by a maximum of one, up or down, randomly. The following decimal digits can change to anything, randomly. What would be the best way of doing this?
```
Random rand = new Random();
double x = ...;
x = ((int) (x * 10) + rand.nextDouble() * 2 - 1) / 10.0;
```

Multiply by 10 and truncate to get the digits up to and including the first decimal, add a random number between -1 and 1, then scale back.

Note that your doubles will not have just 6 decimal digits; doubles are binary, not decimal, numbers. To show 6 decimals in the output you'll need to format the number using a format that requests 6 decimal digits.
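A runnable sketch of this approach (the class and method names are made up for illustration); for a positive input like 2.34, the result always stays within [2.2, 2.4), so the first decimal digit moves by at most one:

```java
import java.util.Random;

public class CoordinateRandomizer {

    // Keep everything up to and including the first decimal digit,
    // then shift by a random amount in [-1, 1) tenths, which changes
    // the first decimal by at most one and randomizes the rest.
    public static double randomize(double x, Random rand) {
        return ((int) (x * 10) + rand.nextDouble() * 2 - 1) / 10.0;
    }

    public static void main(String[] args) {
        Random rand = new Random();
        // Prints six-decimal output such as 2.293847 or 2.371204.
        System.out.printf("%.6f%n", randomize(2.34, rand));
    }
}
```

Formatting with `%.6f` (or the `DecimalFormat` from the question) produces the six-decimal string output.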
269791
I'm new to Razor and I'm getting this error when trying to loop through a list of objects.

[![enter image description here](https://i.stack.imgur.com/NytxU.png)](https://i.stack.imgur.com/NytxU.png)

This is my view:

```
@page
@model QuizModel
@{
    ViewData["Title"] = "Quiz Page";
}

<div class="text-center">
    <h1 class="display-4">Thanks for checking my first website out, @Model.Visitor.Name. Are you ready?</h1>
    <p>Let's see how much you know about me.</p>
</div>

<div class="text-center">
    <h1 class="display-4">Welcome</h1>
    <form method="post">
        @foreach (var question in @Model.QuestionList)
        {
            <label>@question.Query</label>
        }
        <button type="submit">Send</button>
    </form>
</div>
```

This is the .cs:

```
namespace myquiz.Pages
{
    public class QuizModel : PageModel
    {
        [ViewData]
        [BindProperty]
        public string Name { get; set; }

        [BindProperty]
        public Visitor Visitor { get; set; }

        public List<Question> QuestionList { get; set; }

        public void OnGet()
        {
            var quizService = new QuizService();
            ///QuestionList = new List<Question>();
            QuestionList = quizService.GetQuestions();
        }

        public void OnPost()
        {
            Name = Visitor.Name;
        }
    }
}
```

Here's the service:

```
public class QuizService
{
    public List<Question> GetQuestions()
    {
        return new List<Question>()
        {
            new Question()
            {
                Id = 1,
                Query = "What's my favourite band?",
                Option1 = "Beatles",
                Option2 = "Rolling Stones",
                Option3 = "Led Zeppelin",
                Answer = "Led Zeppelin"
            },
            new Question()
            {
                Id = 2,
                Query = "What's my favourite colour?",
                Option1 = "Pink",
                Option2 = "Yellow",
                Option3 = "Maroon",
                Answer = "Pink"
            },
        };
    }
}
```

I tried to initialise the list in the comment but it didn't work either :(

Thanks!
Why not implement the code like this?

```
app.post('/', function(req, res) {
    stub.PostWorkflowResults(
        {
            workflow_id: "my-custom-workflow",
            inputs: [
                {
                    data: {
                        image: {
                            url: req.body.input // guessing that your input is a url
                        }
                    }
                }
            ]
        },
        metadata,
        (err, response) => {
            if (err) {
                throw new Error(err);
            }

            if (response.status.code !== 10000) {
                throw new Error("Post workflow results failed, status: " + response.status.description);
            }

            // Since we have one input, one output will exist here.
            const result = response.results[0];

            // One output is present for each model in the workflow.
            for (const output of result.outputs) {
                console.log("Predicted concepts for model: " + output.model.name);
                for (const concept of output.data.concepts) {
                    console.log("\t" + concept.name + " " + concept.value);
                }
                console.log();
            }
        }
    );
});
```
270379
Here is the code; it returns "IGNORE HIM!" when the size is odd:

```
#include <iostream>
#include <set>

std::set<char> s;

int main() {
    char c;
    while (std::cin >> c) s.insert(c);
    std::cout << (s.size() & 1 ? "IGNORE HIM!" : "CHAT WITH HER!");
    return 0;
}
```
You've already noticed the pattern: `s.size() & 1` is true when the size is odd.

A bitwise AND with 1 (`n & 1`) sets all bits of the result to 0 except the last bit, which is kept as-is. The last bit is 1 exactly when the number is odd, e.g.:

```
  101100   // even
& 000001
= 000000   // false

  101101   // odd
& 000001
= 000001   // true
```
270507
I want to include the YAMM!3 CSS library in my Drupal 8 site. To that end, I wrote a custom module following the instructions in [this answer](https://drupal.stackexchange.com/a/224911/1441). Although that answer concerns a PHP library, I thought it would also work for a CSS library.

This is the `composer.json` from my custom module:

```
{
  "name": "drupal/my_yamm3",
  "type": "drupal-module",
  "require": {
    "geedmo/yamm3": "*"
  }
}
```

In the `composer.json` in the root of my Drupal codebase, I have this:

```
"extra": {
    ...
    "merge-plugin": {
        "include": [
            "core/composer.json",
            "modules/custom/my_yamm3/composer.json"
        ],
    ...
```

I updated my codebase with these commands:

```
$ composer update
$ composer dumpauto
```

And I noticed this line in composer's output:

```
> Drupal\Core\Composer\Composer::vendorTestCodeCleanup
 - Installing geedmo/yamm3 (1.1.0): Loading from cache
```

Which is confirmed by an examination of the filesystem:

```
$ ls vendor/geedmo/yamm3/
bower.json  composer.json  demo/  gulpfile.js  index.html  LICENSE.txt  package.json  README.md  yamm/
```

I've enabled the `my_yamm3` module on my site, and under `admin/config/development/performance` I've disabled CSS aggregation. However, when I load the page, I do not see the yamm libraries being loaded in the network tab in my browser.

Am I using the correct method to load a third-party CSS library? What am I doing wrong?

**EDIT**

osman asked for some confirmation of the state of my environment. In my Drupal project root, while I do have `/composer.json` and the directories `/drush/` and `/vendor/`, I don't have `/scripts/` nor `/web/`. I guess that means I am not using the `drupal-composer/drupal-project` project.

Regarding his recommended additions to `composer.json`, this is what I had:

```
"extra": {
    "installer-paths": {
        "core": ["type:drupal-core"],
        "modules/contrib/{$name}": ["type:drupal-module"],
        "profiles/contrib/{$name}": ["type:drupal-profile"],
        "themes/contrib/{$name}": ["type:drupal-theme"],
        "drush/contrib/{$name}": ["type:drupal-drush"],
        "modules/custom/{$name}": ["type:drupal-custom-module"],
        "themes/custom/{$name}": ["type:drupal-custom-theme"]
    }
},
```

So again it looks like I don't have the composer project?
**Fundamentals** To bind a CSS library into your Drupal installation, it should be defined and exposed via a \*.libraries.yaml file. For example (taken from core's classy theme): ``` book-navigation: version: VERSION css: component: css/components/book-navigation.css: {} ``` defines that there is a library called `book-navigation` that when included in a page will deliver the `css/components/book-navigation.css` css file. **Using composer with 3rd-party CSS libraries** Composer works easily and reliably for PHP dependencies, using the `merge-plugin` in the way you started working in the OP. But, CSS and Javascript libraries, especially when some preprocessing or building step is required, are a different story. This is an open and unsolved (to my knowledge and to the date) problem, you can read this issue to get an idea of what problems arise: * <https://www.drupal.org/project/drupal/issues/2873160> The main problem is, that when you import the library as a composer dependency, as you did, the library is installed inside the `vendor` directory **wherever** that is located. So, you cannot assume much inside your \*libraries.yaml file about the file path to the CSS/JS file(s) you want to use. 
**Alternative (A) without composer: using CDN / externally hosted library definitions**

For your case, given that you want to include this file:

```
https://raw.githubusercontent.com/geedmo/yamm3/master/yamm/yamm.css
```

you could define it inside your `*.libraries.yml` file as an externally hosted library (see [#external](https://www.drupal.org/docs/8/creating-custom-modules/adding-stylesheets-css-and-javascript-js-to-a-drupal-8-module#external)):

```
my_module.yamm3:
  remote: https://raw.githubusercontent.com/geedmo/yamm3/master/yamm/yamm.css
  version: VERSION
  license:
    name: MIT
    url: https://github.com/geedmo/yamm3/blob/master/LICENSE.txt
    gpl-compatible: true
  css:
    theme:
      https://raw.githubusercontent.com/geedmo/yamm3/master/yamm/yamm.css: { type: external }
```

N.B.: This approach will always deliver the latest `yamm.css` version.

**Alternative (B) without composer: keeping a local copy in your own module's folder**

If you place the CSS file inside your module's folder, e.g. `css/yamm.css`, the following definition in your `*.libraries.yml` file will let you use it:

```
my_module.yamm3:
  version: VERSION
  css:
    theme:
      css/yamm.css: {}
```

**Attaching a library to page(s)**

Finally, after you are done defining your library, you can start using it. This is documented extensively here (<https://www.drupal.org/docs/8/creating-custom-modules/adding-stylesheets-css-and-javascript-js-to-a-drupal-8-module>) and I would recommend reading that if you haven't already. Good luck!
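To round this off, one common way to attach a defined library is from a Twig template via `attach_library()`. A minimal sketch, assuming the library key `my_module.yamm3` from the definitions above (so the full library name is `my_module/my_module.yamm3`):

```
{# In any Twig template of your theme/module, e.g. page.html.twig #}
{{ attach_library('my_module/my_module.yamm3') }}
```

Alternatively, from PHP you can add it to a render array's `#attached` key; both mechanisms are covered on the documentation page linked above.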
270687
I need a little help writing a MySQL query. I want to retrieve data from 3 tables, but I want to retrieve the data from the 3rd table only if the count() value equals 1. Please see the query below:

```
SELECT count(orderdetails.orderId) as total, gadgets.*, orders.*
FROM orders
JOIN orderdetails ON orders.orderId = orderdetails.orderId
CASE total WHEN 1 THEN (JOIN gadgets ON gadgets.gadgetId = orders.gadgetId)
GROUP BY orders.orderId
ORDER BY orders.orderId DESC;
```

MySQL always gives me an error, and I couldn't find any solution on the internet.
Just add a simple condition to the join, and it will work (of course you have to make it a `LEFT JOIN`):

```
SELECT count(orderdetails.orderId) as total, gadgets.*, orders.*
FROM orders
JOIN orderdetails ON orders.orderId = orderdetails.orderId
LEFT JOIN gadgets ON gadgets.gadgetId = orders.gadgetId and total = 1 -- simple logic
GROUP BY orders.orderId
ORDER BY orders.orderId DESC;
```
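Depending on the MySQL version and mode, referencing the aggregate alias `total` inside a join condition may not be accepted; a more portable formulation counts the details in a derived table first, then joins `gadgets` only when that count is 1. A sketch of that approach, illustrated with a hypothetical minimal schema in SQLite via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE orders(orderId INTEGER, gadgetId INTEGER);
CREATE TABLE orderdetails(orderId INTEGER);
CREATE TABLE gadgets(gadgetId INTEGER, name TEXT);
INSERT INTO orders VALUES (1, 10), (2, 20);
INSERT INTO orderdetails VALUES (1), (2), (2);  -- order 1 has one detail, order 2 has two
INSERT INTO gadgets VALUES (10, 'phone'), (20, 'watch');
""")

# Count details per order in a derived table first, then join gadgets
# only for orders whose detail count is exactly 1.
rows = c.execute("""
SELECT o.orderId, t.total, g.name
FROM orders o
JOIN (SELECT orderId, COUNT(*) AS total
      FROM orderdetails
      GROUP BY orderId) t ON t.orderId = o.orderId
LEFT JOIN gadgets g ON g.gadgetId = o.gadgetId AND t.total = 1
ORDER BY o.orderId
""").fetchall()

print(rows)  # [(1, 1, 'phone'), (2, 2, None)]
```

Order 1 (exactly one detail row) gets its gadget; order 2 gets `NULL` for the gadget columns, which is the behavior the question describes.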
271079
I am trying to get around a silly firewall issue by chaining git's SSH access through a proxy SSH host.

`~/git-chain`:

```
#!/bin/bash
ssh hosticangetto ssh $*
# End
```

And then:

```
$ export GIT_SSH=~/git-chain
$ git clone ssh://git@somerepostore.com/somerepo.git
E, [2012-02-07T12:50:21.434913 #28359] ERROR -- : git command not recognized
fatal: The remote end hung up unexpectedly
```

The arguments being passed into my script are:

```
git@somerepostore.com git-upload-pack 'somerepo.git'
```

I've checked the keys from hosticangetto to the git repository store and everything is OK. I am not sure why this fails. I've done this sort of thing with Mercurial before using its `-e` switch.
Add these lines to your `~/.ssh/config` (or create that file with these lines if it doesn't already exist):

```
Host somerepostore.com
    ProxyCommand ssh -W %h:%p hosticangetto
```

Then unset `GIT_SSH`. This tells ssh to use hosticangetto as a proxy. It is better than your script because with the script, the packets are decrypted on hosticangetto and then re-encrypted for somerepostore.com on that intermediate host. With this ProxyCommand, the packets are encrypted for somerepostore.com on your local machine, and the already-encrypted packets are merely passed through hosticangetto.
271577
I have a list of dataframes like these:

```
library(plyr)
mt_list <- dlply(mtcars, .(cyl), data.frame)
names(mt_list) <- c("four", "six", "eight")
```

I want to create a nested LaTeX table using the stargazer package. By nested, I mean I want to combine the three dataframes within a single table and give each of the three dataframes its own title within the table. The LaTeX table should look like this (photoshopped image):

![enter image description here](https://i.stack.imgur.com/ZGBfC.jpg)

Is it possible to create a table that looks like this from within R? I'm specifically interested in using the stargazer package to do this.
Not exactly what you are looking for, but it should be a good start. Note that the titles have to be supplied in the same order as the list elements (cyl = 4, 6, 8):

```
stargazer(mt_list, type = 'text', summary = FALSE, title = c('Four', 'Six', 'Eight'))

## 
## Four
## =========================================================
## mpg cyl disp hp drat wt qsec vs am gear carb 
## ---------------------------------------------------------
## 22.800 4 108 93 3.850 2.320 18.610 1 1 4 1 
## 24.400 4 146.700 62 3.690 3.190 20 1 0 4 2 
## 22.800 4 140.800 95 3.920 3.150 22.900 1 0 4 2 
## 32.400 4 78.700 66 4.080 2.200 19.470 1 1 4 1 
## 30.400 4 75.700 52 4.930 1.615 18.520 1 1 4 2 
## 33.900 4 71.100 65 4.220 1.835 19.900 1 1 4 1 
## 21.500 4 120.100 97 3.700 2.465 20.010 1 0 3 1 
## 27.300 4 79 66 4.080 1.935 18.900 1 1 4 1 
## 26 4 120.300 91 4.430 2.140 16.700 0 1 5 2 
## 30.400 4 95.100 113 3.770 1.513 16.900 1 1 5 2 
## 21.400 4 121 109 4.110 2.780 18.600 1 1 4 2 
## ---------------------------------------------------------
## 
## Six
## =========================================================
## mpg cyl disp hp drat wt qsec vs am gear carb 
## ---------------------------------------------------------
## 21 6 160 110 3.900 2.620 16.460 0 1 4 4 
## 21 6 160 110 3.900 2.875 17.020 0 1 4 4 
## 21.400 6 258 110 3.080 3.215 19.440 1 0 3 1 
## 18.100 6 225 105 2.760 3.460 20.220 1 0 3 1 
## 19.200 6 167.600 123 3.920 3.440 18.300 1 0 4 4 
## 17.800 6 167.600 123 3.920 3.440 18.900 1 0 4 4 
## 19.700 6 145 175 3.620 2.770 15.500 0 1 5 6 
## ---------------------------------------------------------
## 
## Eight
## =========================================================
## mpg cyl disp hp drat wt qsec vs am gear carb 
## ---------------------------------------------------------
## 18.700 8 360 175 3.150 3.440 17.020 0 0 3 2 
## 14.300 8 360 245 3.210 3.570 15.840 0 0 3 4 
## 16.400 8 275.800 180 3.070 4.070 17.400 0 0 3 3 
## 17.300 8 275.800 180 3.070 3.730 17.600 0 0 3 3 
## 15.200 8 275.800 180 3.070 3.780 18 0 0 3 3 
## 10.400 8 472 205 2.930 5.250 17.980 0 0 3 4 
## 10.400 8 460 215 3 5.424 17.820 0 0 3 4 
## 14.700 8 440 230 3.230 5.345 17.420 0 0 3 4 
## 15.500 8 318 150 2.760 3.520 16.870 0 0 3 2 
## 15.200 8 304 150 3.150 3.435 17.300 0 0 3 2 
## 13.300 8 350 245 3.730 3.840 15.410 0 0 3 4 
## 19.200 8 400 175 3.080 3.845 17.050 0 0 3 2 
## 15.800 8 351 264 4.220 3.170 14.500 0 1 5 4 
## 15 8 301 335 3.540 3.570 14.600 0 1 5 8 
## ---------------------------------------------------------
```
271720
This is my XHTML code containing a datatable that uses row expansion (PrimeFaces 4.0, JSF Mojarra 2.2.4):

```
<p:dataTable id="myTable" value="#{myBean.lazyModel}" var="dd"
             rowKey="#{dd.hashCode()}" paginator="true"
             selection="#{myBean.myModel.selectedRecords}"
             rows="#{myBean.pageSize}" paginatorPosition="top"
             paginatorTemplate="{CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}"
             rowsPerPageTemplate="5,10,20,50,100" widgetVar="dataTable"
             currentPageReportTemplate="(Number of Records: {totalRecords})"
             lazy="true">
    <p:ajax event="rowToggle" listener="#{myBean.onRowToggle}" process="@this"/>
    <p:column>
        <p:rowToggler />
    </p:column>
    <p:column selectionMode="multiple" id="select" />
    <p:column id="cpn" headerText="#{messages['cpn']}" filterMatchMode="contains"
              sortBy="#{dd.cpn}" filterBy="#{dd.cpn}">
        <p:inputText id="cpnid" value="#{dd.cpn}" />
    </p:column>
    <p:column id="user" headerText="#{messages['user']}" filterMatchMode="contains"
              sortBy="#{dd.number}" filterBy="#{dd.number}">
        <p:inputText id="addid" value="#{dd.number}" />
    </p:column>
    :
    :
    <p:rowExpansion id="rowExpansion">
        <p:panelGrid>
            <p:row>
                <p:column>
                    <h:outputText value="#{messages['name']}" />
                </p:column>
                <p:column>
                    <p:inputText id="name" name="txtBox" value="#{dd.name}" />
                </p:column>
                <p:column>
                    <h:outputText value="#{messages['ageGroup']}" />
                </p:column>
                <p:column id="agecol">
                    <p:selectOneMenu id="agegrp" value="#{dd.agegrp}">
                        <f:selectItem itemLabel="21-25" itemValue="21-25" />
                        <f:selectItem itemLabel="26-30" itemValue="26-30" />
                    </p:selectOneMenu>
                </p:column>
            </p:row>
        </p:panelGrid>
    </p:rowExpansion>
</p:dataTable>
```

Now I expanded a row, entered a name, selected an age group, and collapsed the row. If I re-expand the same row, I can no longer see the values I entered. When I debugged the row collapse, the setters for the name and age-group fields were called with null parameters.
If I remove the ajax `rowToggle` event, then no request is sent to the server on row collapse. All the examples I have found show only static data on row expansion. Is there any way to process the data the user entered when the row collapses? Any help is highly appreciated.
I had the same problem. Do you use this datatable in a dialog? Try setting `dynamic="false"` on the parent dialog; a dynamic dialog may override your ajax request.
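For illustration, a minimal sketch of what that looks like in the markup (the dialog's `header` and `widgetVar` here are placeholders, not taken from the question):

```
<p:dialog header="Edit records" widgetVar="editDialog" dynamic="false">
    <!-- the p:dataTable with the rowExpansion goes here -->
</p:dialog>
```

With `dynamic="true"`, PrimeFaces loads the dialog content lazily through its own ajax request, which can interfere with the table's `rowToggle` ajax behavior.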
271895
The official display has a 5-pin header marked "GPIO". One pin is marked "5V" and appears to be a power output. However, it is not connected directly to the 5V rail of the board, which is available on the "PWR OUT" USB connector. Instead it is connected to the collector pin of Q2. What is the intended purpose of this pin, and why is it not connected directly to the 5V rail? What is the function of Q2, and where does this 5V supply actually come from? The voltage on this pin appears to be a few mV higher than that of the PWR OUT connector. Why? Two of the pins are marked SDA and SCL, and they are connected to the same I2C bus that is available on the DSI connector. Why are they duplicated? Finally there is the INT pin. This is not available on the DSI connector. What does it do? Edit: Also, why is the unit supplied with a four-pin DuPont cable?
The I2C ports are duplicated because older models do not have an I2C port on the DSI connector: > > The DSI connector on Model A/B boards does not have the I2C connections required to talk to the touchscreen controller and DSI controller. This can be worked around by using the additional set of jumpers provided with the display kit to wire up the I2C bus on the GPIO pins to the display controller board. > > > Using the wire jumpers, connect SCL/SDA on the GPIO header to the horizontal pins marked SCL/SDA on the display board. It is also recommended to power the Model A/B via the GPIO pins using the jumpers. > > > * <https://www.raspberrypi.org/documentation/hardware/display/> This explains why the I2C pins are duplicated and what the cable is for. The 5V GPIO pin is separated from the main 5V supply by this small circuit: [![Voltage detection circuit](https://i.stack.imgur.com/KA0y2.jpg)](https://i.stack.imgur.com/KA0y2.jpg) U5 is a voltage detector. When the board is powered it waits for a short amount of time before switching on Q3, which powers up the GPIO 5V line as well as the display backlight power circuit. I am still not sure why the GPIO 5V has this but the USB PWR OUT does not. The INT line is connected to PC7 of the ATTINY88 chip. This chip controls the backlight brightness and also seems to be connected to the touch screen controller, so it is probably passing through the touch interrupt. That doesn't make any sense at all, but I don't know what else it could be.
272044
I installed Proxmox 5.4 and this is how it set up my logical volumes:

```none
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                AmJwba-alii-Uqkw-XHAy-ka2g-EIeQ-l7jlw3
  LV Write Access        read/write
  LV Creation host, time proxmox, 2020-04-24 16:38:41 +0100
  LV Status              available
  # open                 2
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                Wd0A6l-qmKA-EUz1-oxzZ-nILJ-FGfe-xtEuji
  LV Write Access        read/write
  LV Creation host, time proxmox, 2020-04-24 16:38:41 +0100
  LV Status              available
  # open                 1
  LV Size                96.00 GiB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                e20Iqk-sRI2-KsGF-FW5b-KUtR-Pp5C-j6CJVe
  LV Write Access        read/write
  LV Creation host, time proxmox, 2020-04-24 16:38:42 +0100
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                794.79 GiB
  Allocated pool data    0.00%
  Allocated metadata     0.04%
  Current LE             203466
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4
```

I want to make use of the LV called "data", but I don't know how to create a filesystem on it, since there is no path associated. Is there another LVM command I need to map this to a device?
The `data` LV appears to be a [thinpool LV](https://pve.proxmox.com/wiki/Storage:_LVM_Thin). It's an additional layer between the VG and the final LV. It allows additional features (e.g. overprovisioning, or efficient copy-on-write where performance doesn't degrade as more snapshots are made). There's plenty of documentation available about it, for example the [lvmthin(7)](https://manpages.debian.org/lvm2/lvmthin.7) manpage describing the feature:

> Blocks in a standard lvm(8) Logical Volume (LV) are allocated when the LV is created, but **blocks in a thin provisioned LV are allocated as they are written**. Because of this, a thin provisioned LV is given a virtual size, and **can then be much larger than physically available storage**. The amount of physical storage provided for thin provisioned LVs can be increased later as the need arises.
>
> Blocks in a standard LV are allocated (during creation) from the Volume Group (VG), but blocks in a thin LV are allocated (during use) from a special "thin pool LV". The thin pool LV contains blocks of physical storage, and blocks in thin LVs just reference blocks in the thin pool LV.
>
> **A thin pool LV must be created before thin LVs can be created within it**. A thin pool LV is created by combining two standard LVs: a large data LV that will hold blocks for thin LVs, and a metadata LV that will hold metadata. The metadata tracks which data blocks belong to each thin LV.

It's kept hidden under the hood, but the large data LV is `data_tdata` and the metadata LV is `data_tmeta`. To display them you can use `lvs --all` or `lvdisplay --all` (which will show `Internal LV Name`, telling you it's not for direct use), but you can also forget about them; it's not really needed to know about this.

Anyway, to use them, there's also a lot of documentation available around.
For example at [Redhat](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_logical_volumes/assembly_thinly-provisioned-logical-volumes_configuring-and-managing-logical-volumes "RHEL8 - Chapter 12. Creating and managing thinly-provisioned logical volumes (thin volumes)"):

> The following command uses the `-T` option of the `lvcreate` command to create a thin volume named thinvolume in the thin pool `vg001/mythinpool`. Note that in this case you are specifying virtual size, and that you are specifying a virtual size for the volume that is greater than the pool that contains it.
>
> ```
> # lvcreate -V 1G -T vg001/mythinpool -n thinvolume
> ```

So in your case, to create a 100-terabyte thin-provisioned logical volume (to show off the overprovisioning feature):

```
lvcreate -V 100T -T pve/data -n bigthinvolume
```

It will then be usable like other LVs, with slight differences when it comes to advanced features like snapshots.

Be careful with overprovisioning: once data% (or, more rarely, meta%), as displayed with [`lvs`](https://manpages.debian.org/lvm2/lvs.8), reaches 100% on a thinpool, any write that causes a block allocation will fail, resulting in an I/O error in the layer above (the thin LV): you must increase the available real size before this ever happens.

On a Debian-based system, the package [`thin-provisioning-tools`](https://packages.debian.org/stable/thin-provisioning-tools) should probably be installed; it might be needed to handle some advanced thin-provisioning features, including at boot. I don't know how far this applies to Proxmox.
272402
I am fairly new to C++ programming, so please bear with me. I am writing a small application in Visual Studio that will be used to communicate with an FTDI module (UM232H high-speed USB module). FTDI provides the D2XX drivers for this module, readily available on their website. Right now the program I have is very simple. It calls a function called FT_Open (ftd2xx.h) simply to open the device and to check whether the device is connected to the computer. However, right now I keep getting `error LNK2019`, unresolved external symbol. I did read through the application note provided on the Visual Studio website, but I still cannot seem to resolve the error. I think I am making a silly mistake and would like some guidance from you guys, if you could help me out. I have provided my code as well as the header file (ftd2xx.h) that was provided on the FTDI website.

Main Program:

```
// ConsoleApplication2.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include "ftd2xx.h"
#include <iostream>

using namespace std;

FT_HANDLE ftHandle;
FT_STATUS ftStatus;

int main(){
    ftStatus = FT_Open(0, &ftHandle);
    if (ftStatus == FT_OK)
    {
        cout << "hello world";
        // FT_Open OK, use ftHandle to access device
    }
    else
    {
        // FT_Open failed
    }
    return 0;
}
```

ftd2xx.h:

```
#include "windows.h"
/*#include <stdarg.h>
#include <windef.h>
#include <winnt.h>
#include <winbase.h>*/

/*++

Copyright © 2001-2011 Future Technology Devices International Limited

THIS SOFTWARE IS PROVIDED BY FUTURE TECHNOLOGY DEVICES INTERNATIONAL LIMITED "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL FUTURE TECHNOLOGY DEVICES INTERNATIONAL LIMITED BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES LOSS OF USE, DATA, OR PROFITS OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. FTDI DRIVERS MAY BE USED ONLY IN CONJUNCTION WITH PRODUCTS BASED ON FTDI PARTS. FTDI DRIVERS MAY BE DISTRIBUTED IN ANY FORM AS LONG AS LICENSE INFORMATION IS NOT MODIFIED. IF A CUSTOM VENDOR ID AND/OR PRODUCT ID OR DESCRIPTION STRING ARE USED, IT IS THE RESPONSIBILITY OF THE PRODUCT MANUFACTURER TO MAINTAIN ANY CHANGES AND SUBSEQUENT WHQL RE-CERTIFICATION AS A RESULT OF MAKING THESE CHANGES. Module Name: ftd2xx.h Abstract: Native USB device driver for FTDI FT232x, FT245x, FT2232x and FT4232x devices FTD2XX library definitions Environment: kernel & user mode --*/ #ifndef FTD2XX_H #define FTD2XX_H // The following ifdef block is the standard way of creating macros // which make exporting from a DLL simpler. All files within this DLL // are compiled with the FTD2XX_EXPORTS symbol defined on the command line. // This symbol should not be defined on any project that uses this DLL. // This way any other project whose source files include this file see // FTD2XX_API functions as being imported from a DLL, whereas this DLL // sees symbols defined with this macro as being exported. 
#ifdef FTD2XX_EXPORTS #define FTD2XX_API __declspec(dllexport) #else #define FTD2XX_API __declspec(dllimport) #endif typedef PVOID FT_HANDLE; typedef ULONG FT_STATUS; // // Device status // enum { FT_OK, FT_INVALID_HANDLE, FT_DEVICE_NOT_FOUND, FT_DEVICE_NOT_OPENED, FT_IO_ERROR, FT_INSUFFICIENT_RESOURCES, FT_INVALID_PARAMETER, FT_INVALID_BAUD_RATE, FT_DEVICE_NOT_OPENED_FOR_ERASE, FT_DEVICE_NOT_OPENED_FOR_WRITE, FT_FAILED_TO_WRITE_DEVICE, FT_EEPROM_READ_FAILED, FT_EEPROM_WRITE_FAILED, FT_EEPROM_ERASE_FAILED, FT_EEPROM_NOT_PRESENT, FT_EEPROM_NOT_PROGRAMMED, FT_INVALID_ARGS, FT_NOT_SUPPORTED, FT_OTHER_ERROR, FT_DEVICE_LIST_NOT_READY, }; #define FT_SUCCESS(status) ((status) == FT_OK) // // FT_OpenEx Flags // #define FT_OPEN_BY_SERIAL_NUMBER 1 #define FT_OPEN_BY_DESCRIPTION 2 #define FT_OPEN_BY_LOCATION 4 // // FT_ListDevices Flags (used in conjunction with FT_OpenEx Flags // #define FT_LIST_NUMBER_ONLY 0x80000000 #define FT_LIST_BY_INDEX 0x40000000 #define FT_LIST_ALL 0x20000000 #define FT_LIST_MASK (FT_LIST_NUMBER_ONLY|FT_LIST_BY_INDEX|FT_LIST_ALL) // // Baud Rates // #define FT_BAUD_300 300 #define FT_BAUD_600 600 #define FT_BAUD_1200 1200 #define FT_BAUD_2400 2400 #define FT_BAUD_4800 4800 #define FT_BAUD_9600 9600 #define FT_BAUD_14400 14400 #define FT_BAUD_19200 19200 #define FT_BAUD_38400 38400 #define FT_BAUD_57600 57600 #define FT_BAUD_115200 115200 #define FT_BAUD_230400 230400 #define FT_BAUD_460800 460800 #define FT_BAUD_921600 921600 // // Word Lengths // #define FT_BITS_8 (UCHAR) 8 #define FT_BITS_7 (UCHAR) 7 // // Stop Bits // #define FT_STOP_BITS_1 (UCHAR) 0 #define FT_STOP_BITS_2 (UCHAR) 2 // // Parity // #define FT_PARITY_NONE (UCHAR) 0 #define FT_PARITY_ODD (UCHAR) 1 #define FT_PARITY_EVEN (UCHAR) 2 #define FT_PARITY_MARK (UCHAR) 3 #define FT_PARITY_SPACE (UCHAR) 4 // // Flow Control // #define FT_FLOW_NONE 0x0000 #define FT_FLOW_RTS_CTS 0x0100 #define FT_FLOW_DTR_DSR 0x0200 #define FT_FLOW_XON_XOFF 0x0400 // // Purge rx and tx buffers // #define 
FT_PURGE_RX 1 #define FT_PURGE_TX 2 // // Events // typedef void (*PFT_EVENT_HANDLER)(DWORD,DWORD); #define FT_EVENT_RXCHAR 1 #define FT_EVENT_MODEM_STATUS 2 #define FT_EVENT_LINE_STATUS 4 // // Timeouts // #define FT_DEFAULT_RX_TIMEOUT 300 #define FT_DEFAULT_TX_TIMEOUT 300 // // Device types // typedef ULONG FT_DEVICE; enum { FT_DEVICE_BM, FT_DEVICE_AM, FT_DEVICE_100AX, FT_DEVICE_UNKNOWN, FT_DEVICE_2232C, FT_DEVICE_232R, FT_DEVICE_2232H, FT_DEVICE_4232H, FT_DEVICE_232H, FT_DEVICE_X_SERIES }; // // Bit Modes // #define FT_BITMODE_RESET 0x00 #define FT_BITMODE_ASYNC_BITBANG 0x01 #define FT_BITMODE_MPSSE 0x02 #define FT_BITMODE_SYNC_BITBANG 0x04 #define FT_BITMODE_MCU_HOST 0x08 #define FT_BITMODE_FAST_SERIAL 0x10 #define FT_BITMODE_CBUS_BITBANG 0x20 #define FT_BITMODE_SYNC_FIFO 0x40 // // FT232R CBUS Options EEPROM values // #define FT_232R_CBUS_TXDEN 0x00 // Tx Data Enable #define FT_232R_CBUS_PWRON 0x01 // Power On #define FT_232R_CBUS_RXLED 0x02 // Rx LED #define FT_232R_CBUS_TXLED 0x03 // Tx LED #define FT_232R_CBUS_TXRXLED 0x04 // Tx and Rx LED #define FT_232R_CBUS_SLEEP 0x05 // Sleep #define FT_232R_CBUS_CLK48 0x06 // 48MHz clock #define FT_232R_CBUS_CLK24 0x07 // 24MHz clock #define FT_232R_CBUS_CLK12 0x08 // 12MHz clock #define FT_232R_CBUS_CLK6 0x09 // 6MHz clock #define FT_232R_CBUS_IOMODE 0x0A // IO Mode for CBUS bit-bang #define FT_232R_CBUS_BITBANG_WR 0x0B // Bit-bang write strobe #define FT_232R_CBUS_BITBANG_RD 0x0C // Bit-bang read strobe // // FT232H CBUS Options EEPROM values // #define FT_232H_CBUS_TRISTATE 0x00 // Tristate #define FT_232H_CBUS_TXLED 0x01 // Tx LED #define FT_232H_CBUS_RXLED 0x02 // Rx LED #define FT_232H_CBUS_TXRXLED 0x03 // Tx and Rx LED #define FT_232H_CBUS_PWREN 0x04 // Power Enable #define FT_232H_CBUS_SLEEP 0x05 // Sleep #define FT_232H_CBUS_DRIVE_0 0x06 // Drive pin to logic 0 #define FT_232H_CBUS_DRIVE_1 0x07 // Drive pin to logic 1 #define FT_232H_CBUS_IOMODE 0x08 // IO Mode for CBUS bit-bang #define FT_232H_CBUS_TXDEN 0x09 
// Tx Data Enable #define FT_232H_CBUS_CLK30 0x0A // 30MHz clock #define FT_232H_CBUS_CLK15 0x0B // 15MHz clock #define FT_232H_CBUS_CLK7_5 0x0C // 7.5MHz clock // // FT X Series CBUS Options EEPROM values // #define FT_X_SERIES_CBUS_TRISTATE 0x00 // Tristate #define FT_X_SERIES_CBUS_RXLED 0x01 // Tx LED #define FT_X_SERIES_CBUS_TXLED 0x02 // Rx LED #define FT_X_SERIES_CBUS_TXRXLED 0x03 // Tx and Rx LED #define FT_X_SERIES_CBUS_PWREN 0x04 // Power Enable #define FT_X_SERIES_CBUS_SLEEP 0x05 // Sleep #define FT_X_SERIES_CBUS_DRIVE_0 0x06 // Drive pin to logic 0 #define FT_X_SERIES_CBUS_DRIVE_1 0x07 // Drive pin to logic 1 #define FT_X_SERIES_CBUS_IOMODE 0x08 // IO Mode for CBUS bit-bang #define FT_X_SERIES_CBUS_TXDEN 0x09 // Tx Data Enable #define FT_X_SERIES_CBUS_CLK24 0x0A // 24MHz clock #define FT_X_SERIES_CBUS_CLK12 0x0B // 12MHz clock #define FT_X_SERIES_CBUS_CLK6 0x0C // 6MHz clock #define FT_X_SERIES_CBUS_BCD_CHARGER 0x0D // Battery charger detected #define FT_X_SERIES_CBUS_BCD_CHARGER_N 0x0E // Battery charger detected inverted #define FT_X_SERIES_CBUS_I2C_TXE 0x0F // I2C Tx empty #define FT_X_SERIES_CBUS_I2C_RXF 0x10 // I2C Rx full #define FT_X_SERIES_CBUS_VBUS_SENSE 0x11 // Detect VBUS #define FT_X_SERIES_CBUS_BITBANG_WR 0x12 // Bit-bang write strobe #define FT_X_SERIES_CBUS_BITBANG_RD 0x13 // Bit-bang read strobe #define FT_X_SERIES_CBUS_TIMESTAMP 0x14 // Toggle output when a USB SOF token is received #define FT_X_SERIES_CBUS_KEEP_AWAKE 0x15 // // Driver types #define FT_DRIVER_TYPE_D2XX 0 #define FT_DRIVER_TYPE_VCP 1 #ifdef __cplusplus extern "C" { #endif FTD2XX_API FT_STATUS WINAPI FT_Open( int deviceNumber, FT_HANDLE *pHandle ); ``` --Rest of the of the header file was omitted because of the limit to the number of characters I could write. Sorry for the large amount of code I provided here, I just wanted to be sure that I provided everything that I possibly could. Let me know if I would need to provide the rest of the header file.
Maybe it is a bit late for an answer. I got the same error with an old, outdated ftd2xx.lib file. After adding the new ftd2xx.lib, ftd2xx.dll and ftd2xx.h files, then cleaning and rebuilding the project, the problem was solved.
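For completeness: LNK2019 on `FT_Open` typically means the FTDI import library never reached the linker at all. Besides listing `ftd2xx.lib` under Project Properties → Linker → Input → Additional Dependencies, MSVC also accepts an in-source directive; a minimal sketch (MSVC-specific, and it assumes `ftd2xx.lib` sits on the linker's library search path):

```
// MSVC-only linker directive: pull in FTDI's import library so that
// FT_Open and friends resolve at link time.
#pragma comment(lib, "ftd2xx.lib")
```

Either way, the matching `ftd2xx.dll` must also be reachable at run time (e.g. next to the .exe).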
273305
I get a syntax error in the following code:

```
if value[0] == "ta" or "su":
    num_var = len(value)
    i = 0
    while value[i][0] != "-" and i <= num_var:
        if i == 0 and value[0][0].isdigit():
            f3["var_%s" %i] = VARFD[[value[0].split("/")[1]]
        else:
            f3["var_%s" %i] = VARFD[[value[0]]
        f4["val_%s" %i] = "T"
        i += 1
```

It claims that the syntax error is on the line that starts with `else:`. What's wrong with it?
Gii actually checks every table to see if there are join tables (see `ModelCode::isRelationTable()` in gii/generators/model). It detects a table as a join table if: * The table has 2 columns * Both columns are foreign keys * The foreign keys point to different tables Gii then creates a many-to-many relationship between the participating models.
273676
Dears, I am in a very pressing situation :-( I started to learn JavaScript some months ago, and I have written some pages for the amusement of my invalid niece for her birthday this Friday. The separate pages are OK, but I feel calamitous that I am still unable to set up PHP includes. I have two earlier websites with PHP code that run very well, so I copied the code from those into my new index.php. Of course, I changed the name of the folder and the names of the files. My two websites have a menu and submenus; my new website has only a menu without submenus, but every file has its own folder. I quickly translated it to English and made it shorter, but the principle is the same. I tried:

```
<?php
$x = isset($_GET['x']) ? $_GET['x'] : 'hello';
if (isset($x) && preg_match('/^[a-zA-Z0-9_-]+$/', $x)) {
    if (file_exists ("../pokus/".$x.".php")) {
        include "../pokus/".$x.".php";
    } else {
        include (__DIR__)."/404.php";
    }
}
?>
```

but allegedly the files "was not found on this server." Naturally, all of them are uploaded to my server. My HTML code:

```
<ul>
    <li>
        <a href="/hello/"<?php echo $x == "hello" ? 'class="choice"' : '';?>>
            <img src = "pictures/hello.jpg" width = "320" height = "240">
        </a>
    </li>
    <li>
        <a href="/folder1/file1/" <?php echo $x == '../file1' ? 'class="choice"' : '';?>>
            <img src = "pictures/pict1-menu.jpg" width = "320" height = "240">
        </a>
    </li>
    <li>
        <a href="/folder2/file2/" <?php echo $x == '../file2' ? 'class="choice"' : '';?>>
            <img src = "pictures/pict2-menu.jpg" width = "320" height = "240">
        </a>
    </li>
    <li>
        <a href="/folder3/file3/" <?php echo $x == '../file3' ? 'class="choice"' : '';?>>
            <img src = "pictures/pict3-menu.jpg" width = "320" height = "240">
        </a>
    </li>
</ul>
```

In the folder called pokus I have index.php and hello.php (= some sentences for her birthday). Then I have a subfolder folder1 with file1.php inside, a subfolder folder2 with file2.php inside, etc. I am forced to use many subfolders because each subfolder contains many pictures A1.jpg, A2.jpg ... D4.jpg, used in a JS for loop, and therefore I cannot mix them together. I am very astonished that I cannot load even hello.php, which sits right beside index.php. Apache says that the file "was not found on this server," yet hello.php is loaded immediately when index.php opens! Why is Apache so paranoid? Here is a live illustration: <http://pokus.zlatberry.cz/> Can anyone devote some time to help me, please? Many thanks in advance!
1. If your app reuses the manufacturers and colors data in other places, it makes sense to put these two pieces of data in the global store (redux). Otherwise, don't store them in the local state of your component.
2. Use promises: `Promise.all([getManufactures(), getColors()]).then(([manufactures, colors]) => getCars(manufactures, colors))`
3. Separate the logic of calling the API (`getCars`); use event handlers such as `onColorChange` and `onManufactureChange` to call that logic.
4. Use local state to handle the data while filtering, as you may not use this data in any other places.
273824
I have a Date\_time field with yyyy-mm-dd hh:mm:ss format in my database, and I have stored 8 days of data in it. Now I want the data at every 15-minute interval. What is the solution for it? Please help me.

My code is:

```
<%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<%@ page import="java.sql.*" %>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Fetching data from the database</title>
</head>
<body>
<table border="2">
<tr>
    <th>Inv_id</th>
    <th>Inv_phase1_V</th>
    <th>Inv_phase1_A</th>
    <th>Inv_phase1_kW</th>
</tr>
<%
try {
    Class.forName("com.mysql.jdbc.Driver");
    String url="jdbc:mysql://localhost:3306/solar";
    String username="root";
    String password="root";
    Connection conn=DriverManager.getConnection(url,username,password);
    String query="select Inv_id,Inv_phase1_V,Inv_phase1_A,Inv_phase1_kW from inverter_detail";
    Statement stmt=conn.createStatement();
    ResultSet rs=stmt.executeQuery(query);
    while(rs.next())
    {
        String Inverter_id = rs.getString("Inv_id");
        Double voltage = rs.getDouble("Inv_phase1_V");
        Double ampere = rs.getDouble("Inv_phase1_A");
        Double kiloWatt = rs.getDouble("Inv_phase1_kW");
%>
<tr>
    <td class="cotainer"><%=Inverter_id%></td>
    <td><%=voltage%></td>
    <td><%=ampere%></td>
    <td><%=kiloWatt%></td>
</tr>
<%
    }
%>
<%
    rs.close();
    stmt.close();
    conn.close();
}
catch(Exception e)
{
    e.printStackTrace();
}
%>
</table>
</body>
</html>
```

Now I want these values at every 15-minute interval. What can I do? I have no real experience with JavaScript or jQuery.
Finally found : ``` exec('/usr/local/bin/convert '.$source_img.' -font '.$font_location.' -pointsize 14 -draw "gravity south fill black text 0,12 \'some text\' fill white text 1,11 \'some text\' " '.$output_img); ```
273991
I have a table with the following columns: name - course1 - course2 - course3. Two rows look like this:

```
John - physics - math - art
Sara - math - chemistry - psychology
```

Now John has been expelled from the math class, and I want to replace "math" with "none" in his row. When I look for a solution, I find things like this:

```
UPDATE tableName SET `course1` = 'none' WHERE `name`='John' AND `course1`='math';
```

That would be useful if I knew the column where 'math' was recorded for John, but the word can be under any column. What I need is something like this: sql\_query = "find the row where `name`='John', then find the column where we have the word 'math', and only there replace 'math' with 'none'". Can you kindly help me with this?
In this case, I think there is no other way besides evaluating each column, like this: ``` update my_table set course1 = if(course1 = 'math', 'none', course1), course2 = if(course2 = 'math', 'none', course2), course3 = if(course3 = 'math', 'none', course3) where name = 'John'; ```
274251
After updating chromedriver to version 2.46 my tests fail to initialize. I get a message like this:

```
Starting ChromeDriver 2.46.628402 (536cd7adbad73a3783fdc2cab92ab2ba7ec361e1) on port 44269
Only local connections are allowed.
Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
Failed to invoke configuration method com.personal.CustomTest.initTests not created: Chrome version must be between 71 and 75
(Driver info: chromedriver=2.46.628402,platform=Windows NT 10.0.16299 x86_64) (WARNING: The server did not provide any stacktrace information)
Command duration or timeout: 1.58 seconds
Build info: version: '2.53.1'
```

It clearly says that my browser version is not valid. But I am using Chrome 72.0.3626.119, which is between 71 and 75. The Selenium version is 2.53.1, and I am running the tests through a console command with the help of TestNG. Any idea? Every suggestion I found was about changing the Selenium version, but I can't do that.
This error message...

```
Starting ChromeDriver 2.46.628402 (536cd7adbad73a3783fdc2cab92ab2ba7ec361e1) on port 44269
Only local connections are allowed.
Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
Failed to invoke configuration method com.personal.CustomTest.initTests not created: Chrome version must be between 71 and 75
```

...implies that **ChromeDriver v2.46** is not compatible with the *Chrome Browser* version which is actually being accessed by your program/webdriver. Your main issue is the **incompatibility** between the versions of the binaries you are using:

* You are using *chromedriver=2.46*
* The release notes of [*chromedriver=2.46*](https://chromedriver.storage.googleapis.com/2.46/notes.txt) clearly mention the following:

> Supports **Chrome v71-73**

* Though you mentioned you are using *Chrome 72.0.3626.119*, possibly there are multiple instances of *Chrome Browser* installed on your system, and your program by default is accessing a *Chrome Browser* whose version is not between **`v71.x`** and **`v75.x`**

---

Solution
--------

* Keep *JDK* upgraded to recent levels, e.g. [JDK 8u201](https://www.oracle.com/technetwork/java/javase/downloads/index.html).
* Uninstall all the instances of *Chrome Browser* (you can opt to use [Revo Uninstaller](https://www.revouninstaller.com/revo_uninstaller_free_download.html)).
* Upgrade *ChromeDriver* to the current [ChromeDriver v2.46](https://sites.google.com/a/chromium.org/chromedriver/downloads) level.
* Keep the *Chrome* version between ***Chrome v71-73*** levels ([as per the ChromeDriver v2.46 release notes](https://chromedriver.storage.googleapis.com/2.46/notes.txt)).
274353
*Sorry for being such a novice.* For this question I used the C language, with the headers stdio.h and stdlib.h.

---

Question
--------

So a question is asking me to:

1. Open a text file named 'numbers.txt' in read mode. This text file has 6 integers in it.
2. Read the 6 integers from that text file using a loop.
3. Calculate and display the total and average of those 6 integers.

The text file 'numbers.txt' holds the integers: 5, 10, 15, 20, 25.

Here's my code:

```
FILE *n;
n = fopen("numbers.txt", "r");
int a, num, sum = 0;
float avg;
for (a = 0; a < 6; a++) {
    fscanf(n, "%d", &num);
    sum = sum + num;
}
avg = sum / (a - 1);
printf("Sum = %d\nAverage = %.2f\n\n", sum, avg);
fclose(n);
```

Another variation of the question is that I need to use a while loop to read the integers in the text file. Here's my code for that:

```
FILE *n;
n = fopen("numbers.txt", "r");
int a = 0, num, sum = 0;
float avg;
while (fscanf(n, "%d", &num) != EOF) {
    fscanf(n, "%d", &num);
    sum = sum + num;
    a++;
}
avg = sum / a;
printf("Sum = %d\nAverage = %.2f\n\n", sum, avg);
fclose(n);
```

---

Problem
-------

When I run each of the above programs, I expect this output:

```
Sum = 75
Average = 15.00
```

However I get this instead (for the first code):

```
Sum = 100
Average = 20.00
```

And this (for the second code):

```
Sum = 55
Average = 18.00
```

How am I able to get the correct output from both of these programs? *Again I apologise for how basic this question is. Nonetheless, any help would be appreciated.*
In your word counting code, you are using the variable `content`, which holds all of the user's inputted lines as a list of lists. In your letter counting code, you use the variable `line`, which is just the last line the user entered. Since that line must be "." to leave the loop, it never has any letters, so you always return 0. Apply your letter counting technique to each word of the word list instead, like:

```
letter = sum(c.isalpha() for word in words_list for c in word)
```
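A minimal runnable sketch of that fix (the contents of `words_list` here are hypothetical, just to show the counting):

```python
# Stand-in for the flattened word list the program would have built.
words_list = ["hello", "world", "good-bye"]

# Count only alphabetic characters across every word.
letters = sum(c.isalpha() for word in words_list for c in word)
print(letters)  # 17 ("good-bye" contributes 7; the hyphen is not a letter)
```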
274403
I would like to read a text file into an array of strings using System.IO.File.ReadAllLines. However, ReadAllLines strips out some odd characters in the file that I would like to keep, such as chr(187). I've tried some different encoding options, but that doesn't help and I don't see an option for "no encoding." I can use FileOpen and LineInput to read the file without modification, but this is quite a bit slower. Using FileSystemObject also works properly, but I would rather not use that. What is the best way to read a text file into an array of strings without modification in .net?
There's no such concept as "no encoding". You *must* find out the right encoding, otherwise you can't possibly interpret the data correctly. When you say "chr(187)" what Unicode character do you mean? Some encodings you might want to try: * Encoding.Default - the system default encoding * Encoding.GetEncoding(28591) - ISO-Latin-1 * Encoding.UTF8 - very common in modern files
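To see why the choice matters for a byte value like 187, here is a small illustration (sketched in Python purely to demonstrate the decoding differences; the principle carries over directly to the .NET `Encoding` classes):

```python
raw = bytes([187])  # the single byte 0xBB

# ISO-Latin-1 maps every byte straight to the Unicode code point
# with the same numeric value, so 187 becomes U+00BB ('»').
print(raw.decode("latin-1"))

# UTF-8 treats a lone 0xBB as invalid (it is a continuation byte),
# so a strict decoder raises an error instead of guessing.
try:
    raw.decode("utf-8")
except UnicodeDecodeError:
    print("not valid UTF-8 on its own")
```

This is why reading the file with the wrong encoding can silently drop or mangle such characters.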
275535
I want to build a form using a block module in `Drupal 8`. I know how to build forms in `Drupal` 7, but the process seems to be different in Drupal 8. I would appreciate help from anyone who has worked with Drupal 8 custom forms as blocks.
Your question is very vague, as I don't know how much you already know about modules, forms and blocks in Drupal 8. So here is a small guide on what to do; further information on how to do things in detail would be overkill for this answer. **1. Create a new module and enable it** Look here: [Naming and placing your Drupal 8 module](https://www.drupal.org/node/2116781). Basically you create the module folder and the module info yml file to let Drupal know about the module. Then you enable it using drush or the admin area in Drupal. **2. Create the form** Look here: [Introduction to Form API](https://www.drupal.org/node/2117411). Under `your_module/src/Form` you create the form. More details in the link above. **3. Create the block and render the form** Look here: [Create a custom block](https://www.drupal.org/node/2101565). Under `your_module/src/Plugin/Block/` you create the block which will render the form. The idea is basically (code updated with suggestion from Henrik): ``` $builtForm = \Drupal::formBuilder()->getForm('Drupal\your_module\Form\YourForm'); $renderArray['form'] = $builtForm; return $renderArray; ``` Note: You don't need to wrap the `$builtForm` in the `$renderArray`; you can return just the `$builtForm` and be fine. I just personally like to do it that way, because often I need to add something else to the final render array, like some markup, cache settings or a library etc. **4. Place the block** Place the block in the desired region(s). Done.
275778
[![It align now](https://i.stack.imgur.com/epmU3.png)](https://i.stack.imgur.com/epmU3.png)[![enter image description here](https://i.stack.imgur.com/XIRn1.png)](https://i.stack.imgur.com/XIRn1.png)CSS: this is supposed to be to the left of the first navigation item, which is the clothing menu ``` #nav { width:95%; font-family: Tahoma; font: bold; color: #00FFFF; float: none; } .form{ height:50px; position: absolute; text-align:center; font-size:16px; font-family:Candara; color:#00FFFF; background-color: transparent; min-width: 160px; float:left-wards; } .dropbtn { background-color: transparent; color: #00FFFF; padding: 16px; font-size: 16px; border: none; cursor: pointer; } .dropdown { position: relative; display: inline-block; } .dropdown-content { display: none; position: absolute; background-color: transparent; min-width: 160px; box-shadow: 0px 8px 16px 0px rgba(0,0,0,0.2); z-index:1000; } .dropdown-content a { color: #00FFFF; padding: 6px 16px; text-decoration: none; display: block; z-index:1000; } .dropdown-content a:hover {background-color: transparent} .dropdown:hover .dropdown-content { display: block; z-index:1000; } .dropdown:hover .dropbtn { background-color: transparent; } ``` NAV MENU: this is the main nav menu. The male/female radio buttons are supposed to be inline with the navigation, but that is not the case here ``` <nav> <div id="nav" align="center"> <div style="display:inline-block"> <script src="https://code.jquery.com/jquery-3.1.1.min.js"></script> <form class="form"> <input type="radio" id="male" name="gender" value="male"> Male <input type="radio" id="female" name="gender" value="female"> Female </form> </div> <div class = "dropdown" width = "10"> <div class = "dropdown" width = "10"> <button class="dropbtn">Clothing</button> <div class="dropdown-content"> <a id="formal" href="#formal">Formal</a></br> <a id="maleFormal" style="display:none" href="formalM.html">Male Formal</a> <a id="femaleFormal" style="display:none" href="formalF.html">Female Formal</a> <a 
id="shirt" href="#shirt">Shirt</a></br> <a id="maleShirt" style="display:none" href="shirtM.html">Male Shirt</a> <a id="femaleShirt" style="display:none" href="shirtF.html">Female Shirt</a> <a id="jeans" href="#jeans">Jeans</a></br> <a id="maleJeans" style="display:none" href="jeansM.html">Pants</a> <a id="femaleJeans" style="display:none" href="jeansF.html">Jeans</a> <a id="shortandskirt" href="#shortandskirt">Short and Skirt</a></br> <a id="maleshortandskirt" style="display:none" href="shorts.html">Shorts</a> <a id="femaleshortandskirt" style="display:none" href="skirts.html">Skirts</a> </div> </div> <script> $(document).ready(function(){ $("#male").click(function(){ $("#formal").hide(); $("#maleFormal").show(); $("#femaleFormal").hide(); }); $("#female").click(function(){ $("#formal").hide(); $("#maleFormal").hide(); $("#femaleFormal").show(); }); }); </script> <script> $(document).ready(function(){ $("#male").click(function(){ $("#shirt").hide(); $("#maleShirt").show(); $("#femaleShirt").hide(); }); $("#female").click(function(){ $("#shirt").hide(); $("#maleShirt").hide(); $("#femaleShirt").show(); }); }); </script> <script> $(document).ready(function(){ $("#male").click(function(){ $("#jeans").hide(); $("#maleJeans").show(); $("#femaleJeans").hide(); }); $("#female").click(function(){ $("#jeans").hide(); $("#maleJeans").hide(); $("#femaleJeans").show(); }); }); </script> <script> $(document).ready(function(){ $("#male").click(function(){ $("#shortandskirt").hide(); $("#maleshortandskirt").show(); $("#femaleshortandskirt").hide(); }); $("#female").click(function(){ $("#shortandskirt").hide(); $("#maleshortandskirt").hide(); $("#femaleshortandskirt").show(); }); }); </script> </div> <div class = "dropdown" width = "10"> <button class="dropbtn">Accesory</button> <div class="dropdown-content"> <a href="#ring">Ring</a></br> <a href="#necklace">Necklace</a></br> <a href="#pendant">Pendant</a></br> <a href="#bracelet">Bracelet</a></br> <a href="#eye glasses">Eye 
Glasses</a></br> <a href="#sun glasses">Sun Glasses</a></br> <a href="#fashion glasses">Fashion Glasses</a></br> </div> </div> <div class = "dropdown" width = "10"> <button class="dropbtn">Shoe</button> <div class="dropdown-content"> <a href="#rubber shoes">Rubber Shoes</a></br> <a href="#running shoe">Running Shoe</a></br> <a href="#formal shoe">Formal Shoe</a></br> <a href="#flat shoes">Flat Shoes</a></br> <a href="#high heels">High Heels</a></br> <a href="#sandals">Sandals</a></br> </div> </div> <div class = "dropdown" width = "10"> <button class="dropbtn">Bags</button> <div class="dropdown-content"> <a href="#sling bags">Sling bags</a></br> <a href="#backpack">Backpack</a></br> <a href="#office Bags">Office Bags</a></br> <a href="#fahion bags">Fashion bags</a></br> <a href="#gym bag">Gym Bag</a></br> </div> </div> <div class = "dropdown" width = "10"> <button class="dropbtn">Watches</button> <div class="dropdown-content"> <a href="#rolex">Rolex</a></br> <a href="#swatch">Swatch</a></br> <a href="#timex">Timex</a></br> </div> </div> <div class = "dropdown" width = "10"> <button class="dropbtn">Kids</button> <div class="dropdown-content"> <a href="#clothing">Clothing</a></br> <a href="#accesory">Accesory</a></br> <a href="#shoe">Shoe</a></br> <a href="#bags">Bags</a></br> <a href="#watch">Watch</a></br> </div> </div> <div class = "dropdown" width = "10"> <button class="dropbtn">Toys</button> <div class="dropdown-content"> <a href="#cars">Cars</a></br> <a href="#guns">Guns</a></br> <a href="#kitchen sets">kitchen sets</a></br> <a href="#doll">Doll</a></br> <a href="#doll house">Doll House</a></br> <a href="#stuff toys">Stuff Toy</a></br> </div> </div> <div class = "dropdown" width = "10"> <button class="dropbtn">Unisex</button> <div class="dropdown-content"> <a href="#clothing">Clothing</a></br> <a href="#accesory">Accesory</a></br> <a href="#shoe">Shoe</a></br> <a href="#bags">Bags</a></br> <a href="#watch">Watch</a></br> </div> </div> <div class = "dropdown" 
width = "10"> <button class="dropbtn">Brands</button> <div class="dropdown-content"> <a href="#nike">Nike</a></br> <a href="#guess">Guess</a></br> <a href="#humane">Humane</a></br> <a href="#prada">Prada</a></br> <a href="#LV">LV</a></br> </div> </div> <div class = "dropdown" width = "10"> <button class="dropbtn">Deals</button> <div class="dropdown-content"> <a href="#50% discounts">50% Discounts</a></br> <a href="#60% discounts">60% Discounts</a></br> <a href="#70% discounts">70% Discounts</a></br> <a href="#free shipping">Free Shipping</a></br> <a href="#coupons">Coupons</a></br> </div> </div> </div> </nav> ``` This is the result [![Nav menu](https://i.stack.imgur.com/a47l1.png)](https://i.stack.imgur.com/a47l1.png)
I would use an array variant to keep it simple. What you need to create is a multivalue field, rather than a string with semicolons in it. ``` Dim checkListValues as Variant 'turn this into a variant array checkListValues = split("") 'add the values currently selected and remove the blank checkListValues = FullTrim(arrayappend(checkListValues, doc.CheckListInitiator)) 'add the value you want to add checkListValues = ArrayAppend(checkListValues, "Allotment Approval attached") 'return this list to the document doc.CheckListInitiator = checkListValues ``` There are other ways to do this, with proper arrays and so on, but for me, this is the simplest way in LotusScript. It's also VERY simple in @Formula language. ``` @Setfield("CheckListInitiator"; CheckListInitiator : "Allotment Approval attached") ```
276270
I want to back up some filesystem folders from a client to an external server. My goal is to have a secure backup on the external server even if the client gets compromised/hacked in any way. I am considering the following techniques (priority in that order):

* rsync
* AWS S3
* SFTP
* HTTPS

AFAIK rsync and SFTP transmit over SSH, and therefore I need to put some sort of SSH key on the client. If an attacker gets access to the client, he could also read the SSH key and connect to the external backup server. This is what I want to prevent.

The AWS S3 permission model (<https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html>) has a WRITE permission which "Allows grantee to create, overwrite, and delete any object in the bucket". So a compromised client could delete all of my backups. Not suited to this scenario.

With HTTPS there are of course also credentials stored on the client, but I would create some sort of API on the external system only allowing POST/upload of data. So an attacker could "only" upload "bulk" data onto the external server, and not e.g. delete my stored backups over SSH. But performing backups via HTTPS sounds like the wrong tool for the job.

Am I missing something obvious? Any recommendations are highly appreciated.
The way that I handle this is to move control of the data to the *backups* destination. For the purposes of this description, *production* is your production (main) machine and *backups* is the backup server. This does require that you have a *backups* server rather than just a data repository, though. * Remove ssh equivalence from between *production* and *backups* unless you protect the keys with a password. (This removes any opportunity for someone or something to get from one to the other.) DO NOT use the same password on both systems! * Set up `rsyncd` on *production* with a read-everything but read-only configuration protected by username/password. Consider denying any access to `~/.ssh` directories - particularly root's * Create a regular `rsync` job on *backups* to pull data from the *production* server. I couple this with `rsnapshot`, and actually to save space I run the entire set of backups on top of [S3QL](http://www.rath.org/s3ql-docs/) (hung off local disk rather than S3 storage). This will stop a break-in on *production* getting to your *backups* data. It will also stop a break-in on *backups* overwriting your *production* system. The downside of this as described is that `rsyncd` only delivers data in-the-clear. There are two options to handle this, as I wouldn't recommend clear text data transfer for any trans-Internet communication: * `stunnel`, which can be configured to present an SSL socket on behalf of `rsyncd` * a point-to-point VPN tunnel using something such as WireGuard, OpenVPN or IPsec. I should point out that although this solution is Linux-centric, I do have `rsyncd` running on Windows systems, too, successfully backing up user data. The caveat here is that it's not easy to back up data from open files without using a VSS shadow snapshot, and `rsyncd` doesn't know how to create those.
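For reference, a read-only `rsyncd` module along the lines described above might look roughly like this (the module name, user, address and paths are placeholders, not a drop-in config):

```
# /etc/rsyncd.conf on *production* (sketch only; adjust to your environment)
[backup-src]
    path = /
    read only = yes             # pull-only: clients can never write back
    uid = root
    auth users = backupuser     # password lives in the secrets file below
    secrets file = /etc/rsyncd.secrets
    hosts allow = 192.0.2.10    # the backups server only
    exclude = /root/.ssh/ /home/*/.ssh/
```

The *backups* server then pulls with something like `rsync -a backupuser@production::backup-src/ /srv/backups/production/`.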
276275
I have a dialog where users allocate product on an order to various available lots of product. ![enter image description here](https://i.stack.imgur.com/2lGkX.jpg) So basically if there are 100 units on a line item of an order, the user would have to allocate this quantity out to the available lots of product in a warehouse. This part of the design will work well. The part I am having trouble designing is the returns. Consider that weeks later, some product may be returned for one reason or another. I would like the user to be able to go back to this allocation dialog and input the return in some way. The return would have to be specific to a certain lot of product, so I have considered putting an extra column of inputs labeled Returns next to the Quantities column. Here's the issue, however. Will the user assume that any amount entered in the Returns column will have to be removed from the original Quantity column manually? Or will they assume that they should enter the Return amount and leave the originally entered quantity alone? I could design it to behave either way, but I am trying to think of ways to make sure that they expect it to behave the way that it actually does behave. Perhaps automatically subtracting from the quantity column when they enter an amount into the returns column (this would probably cause other problems, however...)? I thought of making the quantity column read-only after the requested pickup date has passed, but am worried about the case where they are editing the order quantities after the pickup date due to a mis-ship or other exception. What do you think? Any ideas on making this UI better?
Keep in mind that the user interface for returns doesn't have to be a modified version of the interface you show above. Maybe a specialized returns interface would be a better way to go?
276394
Python 3 Consider the following two code samples. Two questions: 1: What is the difference between the two approaches? For what benefit would I implement A versus B? 2: In sample A, is the self variable prefix required? Sample A: ``` class DoSomething(): self.dispatcher = { 'SaveToS3': self.savetos3, 'SendWebHook': self.webhook, 'AddToQueue': self.addtoqueue, 'SendSms': self.sendsms, 'SendEmail': self.sendemail, } def __init__(self): pass ``` Sample B: ``` class DoSomething(): def __init__(self): self.dispatcher = { 'SaveToS3': self.savetos3, 'SendWebHook': self.webhook, 'AddToQueue': self.addtoqueue, 'SendSms': self.sendsms, 'SendEmail': self.sendemail, } ```
1. In Sample A, `dispatcher` is shared among all instances of the class, and so modifying it will modify `self.dispatcher` for every `DoSomething`. With Sample B, on the other hand, each instance gets a separate `self.dispatcher`, and modifying one will leave all of the others alone. Sample A also lets you access `DoSomething.dispatcher` without involving any instances, while this won't work in B. 2. `self.` is actually not allowed at "class level," as there is no "`self`" object defined there.
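The sharing difference in point 1 is easy to demonstrate with a stripped-down sketch (plain lists instead of the dispatcher, purely for illustration):

```python
class Shared:
    items = []                # one list, shared by every instance (Sample A style)

class PerInstance:
    def __init__(self):
        self.items = []       # a fresh list for each instance (Sample B style)

a, b = Shared(), Shared()
a.items.append("x")
print(b.items)                # ['x'] -- b sees a's modification

c, d = PerInstance(), PerInstance()
c.items.append("x")
print(d.items)                # [] -- d is unaffected
```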
276676
Now I have this:

```
$data = array();
$query = mysql_query("SELECT * FROM users WHERE username = '{$username}'");
while ($row = mysql_fetch_row($query)) {
    $data[] = $row;
}
```

Now `$data[0] = array('Joe','Sally','3 5');` and `$data[1] = array('Joe','alice','30 65');`, accessed as `$data[0][0] = 'Joe'`, `$data[1][0] = 'Joe'`, `$data[1][1] = 'alice'`, and so on. How can I send this 2D array from PHP and receive it in Android using JSONObject or JSONArray?
Try this:

```
$data = array();
$query = mysql_query("SELECT * FROM users WHERE username = '{$username}'");
while ($row = mysql_fetch_row($query)) {
    $data[] = $row;
}
$json = json_encode(array("userlist" => $data));
echo $json;
```
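The payload this produces is a JSON object holding an array of arrays. To illustrate its shape (sketched in Python just to show the structure; on Android the same indices map onto `getJSONArray`/`getString` calls):

```python
import json

# What json_encode(array("userlist" => $data)) emits for the sample rows:
payload = '{"userlist": [["Joe", "Sally", "3 5"], ["Joe", "alice", "30 65"]]}'

data = json.loads(payload)
rows = data["userlist"]   # a JSON array of rows
print(rows[0][0])         # 'Joe'
print(rows[1][1])         # 'alice'
```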
277012
I have a csv file with each row containing a list of adjectives. For example, the first 2 rows are as follows:

```
["happy","sad","colorful"]
["horrible","sad","cheerful","happy"]
```

I want to extract all the data from this file to get a list containing each adjective only once. (Here, it would be the following list:

```
["happy","sad","colorful","horrible","cheerful"]
```

) I am doing this using Python:

```
import csv

with open('adj.csv', 'rb') as f:
    reader = csv.reader(f)
    adj_list = list(reader)

filtered_list = []
for l in adj_list:
    if l not in filtered_list:  # was `new_list`, which is never defined
        filtered_list.append(l)
```
Supposing that "memory is not important" and that a one-liner is what you are looking for:

```
from itertools import chain
from csv import reader
print(list(set(chain(*reader(open('file.csv'))))))
```

having 'file.csv' content like this:

```
happy, sad, colorful
horrible, sad, cheerful, happy
```

**OUTPUT:** ['horrible', ' colorful', ' sad', ' cheerful', ' happy', 'happy']

You can remove the `list()` part if you don't mind receiving a *Python set* instead of a list.
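One caveat: a `set` does not preserve order, and as the output above shows, entries like `' sad'` and `'sad'` stay distinct because of the space after each comma in the file. A sketch of a variant that strips whitespace and keeps first-seen order (the `rows` value below stands in for what `csv.reader` would yield for the sample file):

```python
rows = [["happy", " sad", " colorful"],
        ["horrible", " sad", " cheerful", " happy"]]  # as csv.reader yields them

seen = []
for row in rows:
    for adj in row:
        adj = adj.strip()     # drop the space left after each comma
        if adj not in seen:
            seen.append(adj)

print(seen)  # ['happy', 'sad', 'colorful', 'horrible', 'cheerful']
```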
277116
I'm trying to make a `RecurrenceTable` with conditionals in Mathematica, and the recursive stuff is working right, but it won't evaluate it completely. ``` In:= RecurrenceTable[{x[n] == If[Mod[n, 2] == 0, x[n - 1], y[n - 1]], y[n] == If[Mod[n, 2] == 0, R x[n - 1] (1 - x[n - 1]), y[n - 1]], x[1] == x0, y[1] == 0}, {x, y}, {n, 1, 10}] Out:= {{0.25, 0.}, {x[1], 3 (1 - x[1]) x[1]}, {y[2], y[2]}, {x[3], 3 (1 - x[3]) x[3]}, {y[4], y[4]}, {x[5], 3 (1 - x[5]) x[5]}, {y[6], y[6]}, {x[7], 3 (1 - x[7]) x[7]}, {y[8], y[8]}, {x[9], 3 (1 - x[9]) x[9]}} ``` These are the right results, but I need it to be in numeric form, i.e. `{{0.25, 0.}, {0.25, 0.5625} ...` Is there a way to do this? Thanks!
Typically, you should use [`Piecewise`](http://reference.wolfram.com/mathematica/ref/Piecewise.html) for mathematical functions, and reserve `If` for programming flow. You can convert many `If` statements using [`PiecewiseExpand`](http://reference.wolfram.com/mathematica/ref/PiecewiseExpand.html): ``` If[Mod[n, 2] == 0, x[n - 1], y[n - 1]] // PiecewiseExpand If[Mod[n, 2] == 0, r*x[n - 1] (1 - x[n - 1]), y[n - 1]] // PiecewiseExpand ``` The final code may look something like this: ``` r = 3; x0 = 0.25; RecurrenceTable[ {x[n] == Piecewise[{{x[n - 1], Mod[n, 2] == 0}}, y[n - 1]], y[n] == Piecewise[{{r*x[n - 1] (1 - x[n - 1]), Mod[n, 2] == 0}}, y[n - 1]], x[1] == x0, y[1] == 0}, {x, y}, {n, 10} ] ``` ``` {{0.25, 0.}, {0.25, 0.5625}, {0.5625, 0.5625}, {0.5625, 0.738281}, {0.738281, 0.738281}, {0.738281, 0.579666}, {0.579666, 0.579666}, {0.579666, 0.73096}, {0.73096, 0.73096}, {0.73096, 0.589973}} ``` A couple of related points: 1. It is best not to use capital letters for your symbol names, as these may conflict with built-in functions. 2. You may consider `Divisible[n, 2]` in place of `Mod[n, 2] == 0` if you wish.
277117
I'm new to Python programming and I'm having trouble selecting an option. I have created a menu, for example: Instructions Catering Packages add When the user selects i, c, a or p, the corresponding menu will come up. However, if the user selects 'p' before 'a', then I need to prompt them to select 'a' first.

```
INSTRUCTIONS = "I"
CATERING = "C"
PACKAGES = "P"

def menu():
    userInput = True
    while userInput != False:
        print("Instructions Catering Packages")
        userInput = input(">>>")
        if userInput == INSTRUCTIONS:
            instructions()
        elif userInput == CATERING:
            Catering()
        elif userInput == PACKAGES:
            Packages()
        else:
            print("Error")
```

Thank you
One way to do this, is to make the SwitchViewController a custom container controller, and add either the table view or map view as its child. Here is an example that I've used before that has the same set up as your storyboard (my ContainerViewController is your SwitchViewController, and my controllers with the identifiers "InitialVC" and "substituteVC" would be your table view and map view). This is code I have in the ContainerViewController, ``` - (void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; UIViewController *initial = [self.storyboard instantiateViewControllerWithIdentifier:@"InitialVC"]; [self addChildViewController:initial]; [self.view addSubview:initial.view]; self.currentController = initial; [self constrainViewEqual:self.currentController]; } -(void)switchToNewView { UIViewController *sub = [self.storyboard instantiateViewControllerWithIdentifier:@"SubstituteVC"]; [self addChildViewController:sub]; sub.view.frame = self.view.bounds; [self moveToNewController:sub]; } -(void)moveToNewController:(UIViewController *) newController { [self.currentController willMoveToParentViewController:nil]; [self transitionFromViewController:self.currentController toViewController:newController duration:.6 options:UIViewAnimationOptionTransitionCrossDissolve animations:^{} completion:^(BOOL finished) { [self.currentController removeFromParentViewController]; [newController didMoveToParentViewController:self]; self.currentController = newController; [self constrainViewEqual:self.currentController]; }]; } ``` constrainViewEqual is a category method that I use to set up the layout constraints for the views. 
It looks like this, ``` -(void)constrainViewEqual:(UIViewController *) vc { [vc.view setTranslatesAutoresizingMaskIntoConstraints:NO]; NSLayoutConstraint *con1 = [NSLayoutConstraint constraintWithItem:self.view attribute:NSLayoutAttributeCenterX relatedBy:0 toItem:vc.view attribute:NSLayoutAttributeCenterX multiplier:1 constant:0]; NSLayoutConstraint *con2 = [NSLayoutConstraint constraintWithItem:self.view attribute:NSLayoutAttributeCenterY relatedBy:0 toItem:vc.view attribute:NSLayoutAttributeCenterY multiplier:1 constant:0]; NSLayoutConstraint *con3 = [NSLayoutConstraint constraintWithItem:self.view attribute:NSLayoutAttributeWidth relatedBy:0 toItem:vc.view attribute:NSLayoutAttributeWidth multiplier:1 constant:0]; NSLayoutConstraint *con4 = [NSLayoutConstraint constraintWithItem:self.view attribute:NSLayoutAttributeHeight relatedBy:0 toItem:vc.view attribute:NSLayoutAttributeHeight multiplier:1 constant:0]; NSArray *constraints = @[con1,con2,con3,con4]; [self.view addConstraints:constraints]; } ``` I call switchToNewView from a button in the initial view controller (the one you have labeled as ViewController) like this, ``` -(IBAction)doStuff:(id)sender { ContainerController *cc = (ContainerController *)self.childViewControllers[0]; [cc switchToNewView]; } ```
277292
I am developing a REST API using Spring Boot. I have a controller which accepts POST requests.

> <http://localhost:8085/carride/end-ride>

In the above request I want to access the parameter `ride_transection_id` for finding a particular transaction object, along with some other values as well. Basically I have 3 ways to do that.

**1.** I can use `@PathVariable`:

```
@RequestMapping(value = "/end-ride", method = RequestMethod.POST)
public ResponseEntity<?> endRide(@PathVariable("ride_transection_id") long ride_transection_id, @RequestBody SomeDTORequest someDTORequest) {
    //find transaction using path variable
}
```

**2.** I can use `@RequestParam`:

```
@RequestMapping(value = "/end-ride", method = RequestMethod.POST)
public @ResponseBody item getitem(@RequestParam("ride_transection_id") long ride_transection_id, @RequestBody SomeDTORequest someDTORequest) {
    //find transaction using request parameter
}
```

**3.** I can use a DTO object `SomeDTORequest` and accept `ride_transection_id` in it along with the other values as well:

```
@RequestMapping(value = "/end-ride", method = RequestMethod.POST)
public ResponseEntity<?> endRide(@RequestBody SomeDTORequest someDTORequest) {
    //find transaction using someDTORequest.getID()
}
```

I am a little bit confused. I just want to ask: which is the safest and right way to access the `ride_transection_id`? Thanks
You can use any of them, but each is designed for a certain use.

Path variable: used when you need to access an entity by a certain field. For example, if I want to access an order and that order is identified by an id, then to access it I need the following request: `GET /order/{id}`

Request parameter: used when you want to send a specific variable or flag to a certain method. For example `GET /orders?is_shipped=true` will get all shipped orders, or you may need the orders at a certain page: `GET /orders?page=1`

Request body: used when you need to update the entity via a PUT or PATCH request, since you send the entity's JSON representation through the request body. For example `PUT /orders/{id}` with body `{"title": "order_1"}` will update the order with id {id} with the new title.

[Spring data rest](https://spring.io/guides/gs/accessing-data-rest/) [See also](https://hellokoding.com/restful-api-example-with-spring-boot-spring-data-rest-and-mysql/)
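Framework aside, the distinction is really about where the value travels in the request. A framework-neutral sketch (Python standard library, with a hypothetical URL, just to show which part of the request each style uses):

```python
from urllib.parse import urlparse, parse_qs

url = "http://localhost:8085/carride/end-ride/42?is_shipped=true"
parts = urlparse(url)

# Path variable: embedded in the path itself (the trailing "42" here).
ride_id = parts.path.rsplit("/", 1)[-1]
print(ride_id)                  # '42'

# Request parameter: lives after the "?" in the query string.
params = parse_qs(parts.query)
print(params["is_shipped"][0])  # 'true'

# Request body: not part of the URL at all; it travels in the POST payload.
```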
277420
I just installed the Moblin operating system. It uses GRUB 2. On my Ubuntu 8.04, GRUB 0.97 was being used, in which I was using the **default saved** option comfortably. I found that with GRUB 2 I should not edit /boot/grub/menu.lst directly (*but I did* :)) because my Moblin does not contain any **/etc/default/grub**, where they say I should make the modification I want. So what I did is the following, which did not work:

```
default=saved
timeout=1
#splashimage=(hd0,0)/boot/grub/splash.xpm.gz
#hiddenmenu
#silent

title Moblin (2.6.31.5-10.1.moblin2-netbook)
    root (hd0,0)
    kernel /boot/vmlinuz-2.6.31.5-10.1.moblin2-netbook ro root=/dev/sda1 vga=current
    savedefault=1

title Pathetic Windows
    rootnoverify (hd0,1)
    chainloader +1
    savedefault=0
```

By doing so I should automatically switch between Moblin and Windows at each boot, but it's not working. Almost all the troubleshooting guides on the internet say that I should enable the **DEFAULT=save** option in **/etc/default/grub**, but I am unable to find this file. Any idea what else I should do? Thanks a lot

**Update:** I used the equals sign because by default my menu.lst had an entry **default=0**. However, **default 0** is also working fine. Moreover, the **menu.lst** I have is actually a symbolic link to **./grub.conf**. I have also noticed that the **grub-install** and **grub-set-default** commands are not working.
The correct syntax for enabling `savedefault` in `/etc/default/grub` (GRUB 2) is with the lines: ``` GRUB_DEFAULT=saved GRUB_SAVEDEFAULT=true ``` and *not* with the line `DEFAULT=save` that the enquirer suggested. After adding the two lines above to `/etc/default/grub`, one has to update GRUB's configuration, as a superuser, with the following commands: ``` grub-mkconfig update-grub ``` At least on Lubuntu 12.10, `grub-mkconfig` is not needed (as it, without options, dumps the config file to stdout). The only needed command is `update-grub` which actually invokes `grub-mkconfig` with the proper `-o file` option.
277941
I have a .NET Core project with a multi-stage YAML pipeline set up in Azure DevOps. We have a bunch of unit tests that we execute at pipeline run time - everything is fine. However, we would like to dig deeper into our code coverage. So we have configured our task like this ``` - task: DotNetCoreCLI@2 displayName: Run UnitTests enabled: true inputs: command: test projects: '**/PM.UnitTests.csproj' arguments: '--configuration $(buildConfiguration) --collect "Code coverage"' ``` The result is this [![enter image description here](https://i.stack.imgur.com/B4pRh.png)](https://i.stack.imgur.com/B4pRh.png) We can now see this in the pipeline results: [![enter image description here](https://i.stack.imgur.com/uYgyO.png)](https://i.stack.imgur.com/uYgyO.png) This .coverage file can be downloaded and analyzed in e.g. Visual Studio. However, we would really like to be able to see the results directly in Azure Pipelines. We have a TypeScript project where we do this. The result is this: [![enter image description here](https://i.stack.imgur.com/x1PDi.png)](https://i.stack.imgur.com/x1PDi.png) Sadly, it is not at all apparent to me how to apply this to a .NET Core project. **Q1: Is it possible to have the same experience for .NET Core projects ... and how?** Additionally, we would like to be able to apply filtering on which parts of the code are used to calculate the code coverage percentage. **Q2: Is it correctly understood that this is done using a *.runsettings* file?** Thank you :-) /Jesper For reference, this is my complete YAML pipeline in my test .NET Core solution. 
The solution is super simple - a .net core class library and a .net core test class library [![enter image description here](https://i.stack.imgur.com/WvmTW.png)](https://i.stack.imgur.com/WvmTW.png) ``` pool: vmImage: vs2017-win2016 steps: - task: DotNetCoreCLI@2 inputs: command: 'restore' feedsToUse: 'select' vstsFeed: 'd12c137f-dac4-4ea7-bc39-59bd2b784537' - task: DotNetCoreCLI@2 displayName: Build inputs: projects: '**/*.csproj' arguments: --configuration $(BuildConfiguration) --no-restore - task: DotNetCoreCLI@2 displayName: Run UnitTests inputs: command: test projects: '**/Test.Coverage/Test.Coverage.csproj' arguments: '--configuration $(BuildConfiguration) --collect:"XPlat Code Coverage" ' - script: 'dotnet tool install -g dotnet-reportgenerator-globaltool ' displayName: 'Install dotnet-reportgenerator-globaltool' - script: 'reportgenerator -reports:$(Agent.TempDirectory)/**/coverage.cobertura.xml -targetdir:$(build.sourcesdirectory) -reporttypes:"Cobertura"' displayName: 'Executes reportgenerator' - task: PublishCodeCoverageResults@1 displayName: 'Publish code coverage from $(build.sourcesdirectory)/Cobertura.xml' inputs: codeCoverageTool: Cobertura summaryFileLocation: '$(build.sourcesdirectory)/Cobertura.xml' ```
You could use the [dotnet-reportgenerator-globaltool](https://www.nuget.org/packages/dotnet-reportgenerator-globaltool/) package to generate the HTML Code Coverage Report. Here is an example: ``` - task: DotNetCoreCLI@2 displayName: Test inputs: command: test projects: '$(Parameters.TestProjects)' arguments: '--configuration $(BuildConfiguration) --collect:"XPlat Code Coverage" ' - script: 'dotnet tool install -g dotnet-reportgenerator-globaltool ' displayName: 'Command Line Script' - script: 'reportgenerator -reports:$(Agent.TempDirectory)/**/coverage.cobertura.xml -targetdir:$(build.sourcesdirectory) -reporttypes:"Cobertura"' displayName: 'Command Line Script' - task: PublishCodeCoverageResults@1 displayName: 'Publish code coverage from $(build.sourcesdirectory)/Cobertura.xml' inputs: codeCoverageTool: Cobertura summaryFileLocation: '$(build.sourcesdirectory)/Cobertura.xml' ``` Result: [![enter image description here](https://i.stack.imgur.com/xq6H6.png)](https://i.stack.imgur.com/xq6H6.png)
279313
I'm not exactly sure how to ask this question really, and I'm no where close to finding an answer, so I hope someone can help me. I'm writing a Python app that connects to a remote host and receives back byte data, which I unpack using Python's built-in struct module. My problem is with the strings, as they include multiple character encodings. Here is an example of such a string: "^LThis is an example ^Gstring with multiple ^Jcharacter encodings" Where the different encoding starts and ends is marked using special escape chars: * ^L - Latin1 * ^E - Central Europe * ^T - Turkish * ^B - Baltic * ^J - Japanese * ^C - Cyrillic * ^G - Greek And so on... I need a way to convert this sort of string into Unicode, but I'm really not sure how to do it. I've read up on Python's codecs and string.encode/decode, but I'm none the wiser really. I should mention as well, that I have no control over how the strings are outputted by the host. I hope someone can help me with how to get started on this.
There's no built-in functionality for decoding a string like this, since it is really its own custom codec. You simply need to split up the string on those control characters and decode it accordingly. Here's a (very slow) example of such a function that handles latin1 and shift-JIS: ``` latin1 = "latin-1" japanese = "Shift-JIS" control_l = "\x0c" control_j = "\n" encodingMap = { control_l: latin1, control_j: japanese} def funkyDecode(s, initialCodec=latin1): output = u"" accum = "" currentCodec = initialCodec for ch in s: if ch in encodingMap: output += accum.decode(currentCodec) currentCodec = encodingMap[ch] accum = "" else: accum += ch output += accum.decode(currentCodec) return output ``` A faster version might use str.split, or regular expressions. (Also, as you can see in this example, "^J" is the control character for "newline", so your input data is going to have some interesting restrictions.)
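Since the original answer is written for Python 2, here is a rough Python 3 adaptation of the same idea that uses `re.split` to cut the byte string at the control characters in one pass (only the ^L and ^J escapes are mapped here; the codec names are standard Python codecs, and extending the map is just more dictionary entries):

```python
import re

# Control character -> codec name (subset of the escapes in the question).
ENCODING_MAP = {
    b"\x0c": "latin-1",    # ^L
    b"\x0a": "shift_jis",  # ^J (also the newline byte, as the answer notes)
}

def funky_decode(data: bytes, initial_codec: str = "latin-1") -> str:
    """Decode bytes whose encoding switches at embedded control characters."""
    # Split on any control character, keeping the separators in the result
    # thanks to the capturing group.
    pattern = b"(" + b"|".join(re.escape(c) for c in ENCODING_MAP) + b")"
    codec = initial_codec
    pieces = []
    for part in re.split(pattern, data):
        if part in ENCODING_MAP:
            codec = ENCODING_MAP[part]  # switch codec from this point on
        elif part:
            pieces.append(part.decode(codec))
    return "".join(pieces)

print(funky_decode(b"abc\n\x82\xa0"))  # 'abc' plus one Shift-JIS character
```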
279803
I have a list of hostnames / IP addresses, and my script takes each item from the text file and stores them in the `nodes` variable as a list. I want to ping each host and output the results to a text file. I can do it with a single host, but am having trouble understanding how to iterate through the list. I have looked at other posts on Stack Overflow, but most of the posts are using the OS module, which has been deprecated. **My code:** ``` #!/usr/local/bin/python3.6 import argparse import subprocess parser = argparse.ArgumentParser(description="Reads a file and pings hosts by line.") parser.add_argument("filename") args = parser.parse_args() # Opens a text file that has the list of IP addresses or hostnames and puts #them into a list. with open(args.filename) as f: lines = f.readlines() nodes = [x.strip() for x in lines] # Opens the ping program ping = subprocess.run( ["ping", "-c 1", nodes[0]], stdout=subprocess.PIPE, stderr=subprocess.PIPE ) # Captures stdout and puts into a text file. with open('output.txt', 'w') as f: print(ping.stdout.decode(), file=f) f.close() ```
You can iterate directly through your list of nodes like so: ``` with open(args.filename) as f: lines = f.readlines() nodes = [x.strip() for x in lines] with open('output.txt', 'w') as f: for node in nodes: # Opens the ping program ping = subprocess.run( ["ping", "-c 1", node], stdout=subprocess.PIPE, stderr=subprocess.PIPE ) # Captures stdout and puts into a text file. print(ping.stdout.decode(), file=f) ``` Note that you can also directly iterate over your input file, which is said to be more 'Pythonic' than using `readlines()`: ``` with open(args.filename,'r') as infile, open('output.txt', 'w') as outfile: for line in infile: node = line.strip() # Opens the ping program ping = subprocess.run( ["ping", "-c 1", node], stdout=subprocess.PIPE, stderr=subprocess.PIPE ) # Captures stdout and puts into a text file. print(ping.stdout.decode(), file=outfile) ``` Note that this is untested, but I can't see any obvious errors.
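One possible refinement, if you also want to record whether each ping succeeded, is to check the `returncode` of each `subprocess.run` call. A minimal sketch (the function name `run_and_log` is just an illustration, and it uses `echo` as a harmless, portable stand-in for `ping`):

```python
import subprocess
import io

def run_and_log(cmd, outfile):
    """Run a command and write its stdout plus an OK/FAILED marker."""
    result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    status = "OK" if result.returncode == 0 else "FAILED"
    outfile.write(f"{' '.join(cmd)}: {status}\n")
    outfile.write(result.stdout.decode())

# Demonstration with a harmless command instead of ping:
buf = io.StringIO()
run_and_log(["echo", "hello"], buf)
print(buf.getvalue())
```

In the loop from the answer you would call `run_and_log(["ping", "-c", "1", node], f)` for each node instead.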
280131
net core web api application. I have configured swagger for my web api app. I am doing authentication and authorization from swagger and I do not have webapp or SPA. Now I want to do authorization based on groups. When I saw JWT token I saw hasgroups: true rather than group ids. This is changed If more than 5 groups are associated with user. Please correct me If my understanding is wrong. So I have now hasgroups: true. So to get groups I need to call graph api. Once I get groups from graph API I need to create policies. This is my understanding and please correct me If I am on wrong track. Now I have my below web api app. Startup.cs ``` public Startup(IConfiguration configuration) { Configuration = configuration; azureActiveDirectoryOptions = Configuration.GetSection("AzureAd").Get<AzureActiveDirectoryOptions>(); swaggerUIOptions = Configuration.GetSection("Swagger").Get<SwaggerUIOptions>(); } public IConfiguration Configuration { get; } public void ConfigureServices(IServiceCollection services) { services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>(); services .AddAuthentication(o => { o.DefaultScheme = JwtBearerDefaults.AuthenticationScheme; }) .AddJwtBearer(o => { o.Authority = azureActiveDirectoryOptions.Authority; o.TokenValidationParameters = new TokenValidationParameters { ValidAudiences = new List<string> { azureActiveDirectoryOptions.AppIdUri, azureActiveDirectoryOptions.ClientId }, }; }); services.AddMvc(options => { var policy = new AuthorizationPolicyBuilder() .RequireAuthenticatedUser() .Build(); options.Filters.Add(new AuthorizeFilter(policy)); }) .SetCompatibilityVersion(CompatibilityVersion.Version_2_1); services.AddSwaggerGen(c => { c.SwaggerDoc("v1", new Info { Title = "My API", Version = "v1" }); c.AddSecurityDefinition("oauth2", new OAuth2Scheme { Type = "oauth2", Flow = "implicit", AuthorizationUrl = swaggerUIOptions.AuthorizationUrl, TokenUrl = swaggerUIOptions.TokenUrl }); c.AddSecurityRequirement(new Dictionary<string, 
IEnumerable<string>> { { "oauth2", new[] { "readAccess", "writeAccess" } } }); }); } // This method gets called by the runtime. Use this method to configure the HTTP request pipeline. public void Configure(IApplicationBuilder app, IHostingEnvironment env) { if (env.IsDevelopment()) { app.UseDeveloperExceptionPage(); } else { app.UseHsts(); } app.UseHttpsRedirection(); app.UseSwagger(); app.UseSwaggerUI(c => { c.OAuthClientId(swaggerUIOptions.ClientId); c.OAuthClientSecret(swaggerUIOptions.ClientSecret); c.OAuthRealm(azureActiveDirectoryOptions.ClientId); c.OAuthAppName("Swagger"); c.OAuthAdditionalQueryStringParams(new { resource = azureActiveDirectoryOptions.ClientId }); c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1"); }); app.UseAuthentication(); app.UseMvc(); } } ``` I have API as below. ``` [Authorize] [Route("api/[controller]")] [ApiController] public class ValuesController : ControllerBase { private IHttpContextAccessor _httpContextAccessor; public ValuesController(IHttpContextAccessor httpContextAccessor) { _httpContextAccessor = httpContextAccessor; } // GET api/values [HttpGet] public ActionResult<string> Get() { string owner = (User.FindFirst(ClaimTypes.Name))?.Value; var accessToken = _httpContextAccessor.HttpContext.Request.Headers["Authorization"]; return owner; } } ``` Now After log in I can hit to API. Now I want to have something like Authorize(admin/user) based on the groups I want to control authorization. Now I am having trouble, where I should call graph api and get groups. Can some one help me to understand this? Any help would be appreciated. Thanks
* `Ctrl`+`H` * Find what: `^(\d+)\.(\d+)\.` * Replace with: `$1-$2` * **CHECK** *Wrap around* * **CHECK** *Regular expression* * **UNCHECK** `. matches newline` * `Replace all` **Explanation:** ``` ^ # beginning of line (\d+) # group 1, 1 or more digits \. # a dot (\d+) # group 2, 1 or more digits \. # a dot ``` **Screen capture (before):** [![enter image description here](https://i.stack.imgur.com/WCnBG.png)](https://i.stack.imgur.com/WCnBG.png) **Screen capture (after):** [![enter image description here](https://i.stack.imgur.com/zbzlx.png)](https://i.stack.imgur.com/zbzlx.png)
280335
For many days, maybe months now, one of my servers has been crashing, almost every day. Sometimes more than once a day. That is worrying me a lot. My log at /var/log/messages is full of lines like these below: ``` Oct 8 13:36:25 host kernel: Firewall: *TCP_IN Blocked* IN=eth1 OUT= MAC=00:30:48:63:3b:5d:00:1b:0d:ec:8e:40:08:00 SRC=93.150.204.152 DST=00.000.000.000 LEN=60 TOS=0x00 PREC=0x00 TTL=40 ID=33286 DF PROTO=TCP SPT=4957 DPT=23 WINDOW=5840 RES=0x00 SYN URGP=0 Oct 8 13:36:25 host kernel: Firewall: *TCP_IN Blocked* IN=eth1 OUT= MAC=00:30:48:63:3b:5d:00:1b:0d:ec:8e:40:08:00 SRC=93.150.204.152 DST=00.000.000.000 LEN=60 TOS=0x00 PREC=0x00 TTL=40 ID=14135 DF PROTO=TCP SPT=4959 DPT=23 WINDOW=5840 RES=0x00 SYN URGP=0 Oct 8 13:36:25 host kernel: Firewall: *TCP_IN Blocked* IN=eth1 OUT= MAC=00:30:48:63:3b:5d:00:1b:0d:ec:8e:40:08:00 SRC=93.150.204.152 DST=00.000.000.000 LEN=60 TOS=0x00 PREC=0x00 TTL=40 ID=63643 DF PROTO=TCP SPT=4958 DPT=23 WINDOW=5840 RES=0x00 SYN URGP=0 Oct 8 13:36:26 host kernel: Firewall: *TCP_IN Blocked* IN=eth1 OUT= MAC=00:30:48:63:3b:5d:00:1b:0d:ec:8e:40:08:00 SRC=93.150.204.152 DST=00.000.000.000 LEN=60 TOS=0x00 PREC=0x00 TTL=40 ID=4301 DF PROTO=TCP SPT=4960 DPT=23 WINDOW=5840 RES=0x00 SYN URGP=0 Oct 8 13:39:10 host kernel: Firewall: *UDP_IN Blocked* IN=eth1 OUT= MAC=00:30:48:63:3b:5d:00:1b:0d:ec:8e:40:08:00 SRC=218.30.22.82 DST=00.000.000.000 LEN=404 TOS=0x00 PREC=0x00 TTL=116 ID=34607 PROTO=UDP SPT=1271 DPT=1434 LEN=384 Oct 8 13:40:14 host kernel: Firewall: *TCP_IN Blocked* IN=eth1 OUT= MAC=00:30:48:63:3b:5d:00:1b:0d:ec:8e:40:08:00 SRC=119.152.144.40 DST=00.000.000.000 LEN=56 TOS=0x00 PREC=0x00 TTL=49 ID=23737 DF PROTO=TCP SPT=2435 DPT=23 WINDOW=5808 RES=0x00 SYN URGP=0 ``` Note that I replaced my server's IP with 00.000.000.000. I always get a lot of log messages about brute-force attacks and failed login attempts... Can someone give me some idea about what to do to solve this problem? I already have CSF and DDoS Deflate installed. 
But they are not solving the problem. My server runs CentOS with Apache 2.
It looks like someone is trying to connect via the telnet (DPT=23) and SQL (DPT=1434) ports; these REALLY should not be exposed to the internet. I would completely filter them at the firewall. That should at least clean up your logs. If the server keeps crashing after that, you can try to see if it is something else.
280942
I have a weird problem with this method. It's called when I edit an EditTextPreference, but only when I change the value, and on a MultiSelectListPreference only the first time I change it. This is my fragment code. ``` public class PrefFragment extends PreferenceFragment implements OnSharedPreferenceChangeListener { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setRetainInstance(true); addPreferencesFromResource(R.xml.my_preferences); } @Override public void onSharedPreferenceChanged(SharedPreferences sharedPreferences, String key) { // DO STUFF } @Override public boolean onPreferenceTreeClick(final PreferenceScreen preferenceScreen, final Preference preference) { // TODO Auto-generated method stub super.onPreferenceTreeClick(preferenceScreen, preference); // DO STUFF } @Override public void onResume() { super.onResume(); // Set up a listener whenever a key changes getPreferenceManager().getSharedPreferences().registerOnSharedPreferenceChangeListener(this); } @Override public void onPause() { super.onPause(); // Unregister the listener whenever a key changes getPreferenceManager().getSharedPreferences().unregisterOnSharedPreferenceChangeListener(this); } } ``` and this is my my\_preferences.xml file ``` <PreferenceCategory android:title="Title1" > <EditTextPreference android:dialogMessage="@string/pref_pers_1_msg" android:dialogTitle="@string/pref_pers_1" android:key="@string/pref_pers_1_key" android:title="@string/pref_pers_1" /> <EditTextPreference android:dialogMessage="@string/pref_pers_2_msg" android:dialogTitle="@string/pref_pers_2" android:key="@string/pref_pers_2_key" android:title="@string/pref_pers_2" /> <EditTextPreference android:dialogMessage="@string/pref_pers_3_msg" android:dialogTitle="@string/pref_pers_3" android:key="@string/pref_pers_3_key" android:title="@string/pref_pers_3" /> </PreferenceCategory> <PreferenceCategory android:title="Title2" > <MultiSelectListPreference android:dialogTitle="title_dialog" android:entries="@array/array" android:entryValues="@array/array_elements" android:key="pref_pers_4" android:title="Title2" /> </PreferenceCategory> ```
Move `unregisterOnSharedPreferenceChangeListener` from `onPause()` to `onStop()`.
280974
How do I set the color of the text of a label? ``` myLabel.setText("Text Color: Red"); myLabel.??? ``` Can I have two separate colors in one label? For example here: the `"Text Color:"` should be black and the `"Red"` should be red.
You can set the color of a JLabel by altering the foreground category: ``` JLabel title = new JLabel("I love stackoverflow!", JLabel.CENTER); title.setForeground(Color.white); ``` As far as I know, the simplest way to create the two-color label you want is to simply make two labels, and make sure they get placed next to each other in the proper order.
281186
I followed the commands mentioned on this page... <https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html> The Elastic service is started successfully, but I do not see an external IP: ``` # /usr/local/bin/kubectl --kubeconfig="wzone2.yaml" get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 10m quickstart-es ClusterIP 10.245.97.209 <none> 9200/TCP 3m11s quickstart-es-discovery ClusterIP None <none> 9300/TCP 3m11s ``` I tried the port-forwarding command but that did not help: `kubectl port-forward service/quickstart-es 9200` How do I connect to this Elasticsearch server?
ClusterIP services are only available from inside the cluster. To make it visible from the outside you would need to change it to LoadBalancer type, and have an implementation of that available (read: be running on a cloud provider or use MetalLB).
281621
I have a photo gallery on my site with 1M photos in it. There are 2 search tables associated with it. Table #1 contains a list of words used in the photos. Table #2 contains a list of what words match up with what photos. Table #2 is 7M rows. I am testing partitioning this 7M row table because I have another set of tables with 120,000,000 rows. Queries against the 120M row wordmatch table below, with or without a join again the wordlist table below, take multiple seconds to run. I am trying to perform a join between these 2 tables and MySQL 5.6 EXPLAIN PARTITIONS shows it is using all the partitions. How can I redo this query to make this correctly use only a single partition? The 2 tables: ``` CREATE TABLE wordlist ( word_text varchar(50) NOT NULL DEFAULT '', word_id mediumint(8) unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY (word_text), KEY word_id (word_id) ) ENGINE=InnoDB CREATE TABLE wordmatch ( pic_id int(11) unsigned NOT NULL DEFAULT '0', word_id mediumint(8) unsigned NOT NULL DEFAULT '0', title_match tinyint(1) NOT NULL DEFAULT '0', PRIMARY KEY (word_id,pic_id,title_match), KEY pic_id (pic_id) ) ENGINE=InnoDB /*!50100 PARTITION BY HASH (word_id) PARTITIONS 11 */; ``` SQL query I am performing: ``` EXPLAIN PARTITIONS SELECT m.pic_id FROM wordlist w, wordmatch m WHERE w.word_text LIKE 'bacon' AND m.word_id = w.word_id +----+-------------+-------+-----------------------------------+-------+-----------------+---------+---------+----------------------------+------+-------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+-----------------------------------+-------+-----------------+---------+---------+----------------------------+------+-------------+ | 1 | SIMPLE | w | NULL | range | PRIMARY,word_id | PRIMARY | 52 | NULL | 1 | Using where | | 1 | SIMPLE | m | p0,p1,p2,p3,p4,p5,p6,p7,p8,p9,p10 | ref | PRIMARY | PRIMARY | 3 | w.word_id | 34 | Using index | 
+----+-------------+-------+-----------------------------------+-------+-----------------+---------+---------+----------------------------+------+-------------+ ``` The join produces a query that uses all partitions. If I retrieve the word\_id # first and go straight against the wordmatch table, everything is ok: ``` EXPLAIN PARTITIONS SELECT m.pic_id FROM wordmatch m WHERE m.word_id = 219657; +----+-------------+-------+------------+------+---------------+---------+---------+-------+-------+-------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+------------+------+---------------+---------+---------+-------+-------+-------------+ | 1 | SIMPLE | m | p9 | ref | PRIMARY | PRIMARY | 3 | const | 18220 | Using index | +----+-------------+-------+------------+------+---------------+---------+---------+-------+-------+-------------+ ``` How do I get this to work correctly? I prefer not to split this into multiple queries if possible. You may have noticed I am using LIKE above. People will often search on bacon% to get plurals of words, etc. Example: ``` SELECT m.pic_id FROM wordlist w, wordmatch m WHERE w.word_text LIKE 'bacon%' AND m.word_id = w.word_id ``` I realize this wildcard search may result in 2 or more partitions being selected. This is probably ok, although if there is a way to change the partitioning to prevent that, I welcome any tips. **Edit #1:** Added details as my original question was confusing. I was testing my 7M row table first before doing my 120M row table. 
**Edit #2:** Resolution to my overall issue: My performance issues seem to be resolved as I partitioned my 120M row table into 101 partitions per this post: [MySQL performance: partitions](https://stackoverflow.com/questions/16721772/mysql-performance-multiple-tables-vs-index-on-single-table-and-partitions) I do not know if MySQL is going against all the partitions at runtime - Ollie Jones says it does not in the comments below and EXPLAIN PARTITIONS is incorrect - but it is fast now so I am happy.
To get your query working with efficient indexing is probably a good idea before you dive into the partitioning project. Here's your query refactored to use `JOIN`: ``` SELECT m.pic_id FROM wordlist w JOIN wordmatch m ON w.word_id = m.word_id WHERE w.word_text LIKE 'bacon%' ``` This query can use a compound index on `wordlist (word_text, word_id)`. It will random-access the index for the first matching `word_text`, and then scan the index retrieving the `word_id` values until it gets to the last matching `word_text`. It can also use your existing primary key on `wordmatch (word_id, pic_id)`. It speeds up your query because the database engine can satisfy your query directly from the index without having to bat the hard drive back and forth to the table itself. So, give those indexes a try. Your large table, the `wordmatch` table, should work fairly well without partitioning. It's more common to partition tables that contain lots of content (like the text of articles) than it is to partition this kind of fixed-row-size join table. Notice that your `EXPLAIN` announces it will look at all the partitions because `EXPLAIN` can't tell which partition (or partitions) your `w.word_text LIKE 'bacon%'` WHERE-clause will need to examine. `EXPLAIN` isn't as dumb as a box of hammers, but it is close. MySQL won't examine the partitions it doesn't need to, but it doesn't know which partitions are involved until runtime. Have you considered using FULLTEXT search? It might simplify what you're doing.
281756
this must be possible via PowerShell as it can be done from Central Administration. I have a task to modify just the public URL of an existing alternate access mapping (AAM). ``` PS U:\> $aam = Get-SPAlternateURL -WebApplication http://mysite PS U:\> $aam IncomingUrl Zone PublicUrl ----------- ---- --------- http://default.mysite Default http://default.mysite http://mysite Internet http://mysite ``` So with the above AAM, in the Internet zone, I need to change the public URL to say `http://othermysite` If I run ``` PS U:\> Set-SPAlternateURL -Identity http://mysite -Zone Internet -Url http://othermysite ``` I end up with both the public URL and Internal URL changed ``` PS U:\> $aam = Get-SPAlternateURL -WebApplication http://othermysite PS U:\> $aam IncomingUrl Zone PublicUrl ----------- ---- --------- http://default.mysite Default http://default.mysite http://othermysite Internet http://othermysite ``` As well, Set-SPAlternateURL does not provide an argument -Internal like the New-SPAlternateURL cmdlet. If I run ``` PS U:\> New-SPAlternateURL -WebApplication http://default.mysite -Zone Internet -Url http://mysite -Internal ``` I get the correct aam, but I have two in the list of Internet zone aam now. ``` PS U:\> $aam = Get-SPAlternateURL -WebApplication http://default.mysite PS U:\> $aam IncomingUrl Zone PublicUrl ----------- ---- --------- http://default.mysite Default http://default.mysite http://othermysite Internet http://othermysite http://mysite Internet http://othermysite ``` If I try to remove the aam ``` http://othermysite Internet http://othermysite ``` both aam for the Internet zone are removed. So how can I get the following accomplished via PowerShell, or is this only possible via the GUI in Central Administration? ``` IncomingUrl Zone PublicUrl ----------- ---- --------- http://default.mysite Default http://default.mysite http://mysite Internet http://othermysite ```
You need to do this in three steps as the IncomingURL and PublicURL are equal. Start by removing what you have on the Internet zone (1) and build your URL structure starting with the PublicURL (2). When that's done, set a new URL for the IncomingURL (3). ``` #(1) Remove-SPAlternateURL -WebApplication http://mysite -Zone "Internet" #(2) New-SPAlternateURL http://othermysite -Zone "Internet" #(3) New-SPAlternateURL http://mysite -Zone "Internet" -internal ``` Now you have what you wish for: ``` IncomingUrl Zone PublicUrl ----------- ---- --------- http://mysite Internet http://othermysite ``` From this setting, you are able to set the PublicURL since it differs from the IncomingURL: ``` Get-SPAlternateURL "http://othermysite" | Set-SPAlternateURL "http://newothermysite" ``` And the outcome will be ``` IncomingUrl Zone PublicUrl ----------- ---- --------- http://mysite Internet http://newothermysite ``` Reference: [Remove-SPAlternateUrl](http://technet.microsoft.com/en-us/library/ff607587.aspx), [New-SPAlternateUrl](http://technet.microsoft.com/en-us/library/ff607632.aspx) and [Managing SharePoint 2013 with PowerShell: Working with Alternate Access Mappings](http://www.petri.co.il/manage-alternate-access-mapping-in-sharepoint-2013-powershell.htm)
282619
[HyperSpace](http://codepen.io/noahblon/pen/GKflw) is neat demo that (I think) only uses css. But when I copy the html and css to my directory, [it doesn't work](http://www.phillipsenn.com/LR/WinnerWinnerChickenDinner/HyperSpace/HyperSpace.htm). Q: What am I missing?
The CodePen demo is set to apply [-prefix-free](http://leaverou.github.io/prefixfree/), which basically adds the prefixes when necessary. ![Screenshot of codepen using -prefix-free](https://i.stack.imgur.com/Pmq98.png) Either you add -prefix-free to your project (which I don't recommend) or add the prefixes when necessary.
282897
I am trying to build a query to get data per hour for a particular day (or today). I have a device data table: ``` # Name Type 1 : idPrimary -- int(11) 2 : inputDate -- varchar(32) 3 : input1 -- varchar(11) 4 : input2 -- varchar(11) ``` And in inputDate my data is in this format: ``` SELECT inputDate FROM `deviceData` WHERE `inputDate` BETWEEN '2015-05-29 16:30:07' AND '2015-05-29 21:30:07' ORDER BY id ``` Showing rows 0 - 24 (3304 total, Query took 0.3487 seconds.) ``` 2015-05-29 16:30:09 2015-05-29 16:30:20 2015-05-29 16:30:25 2015-05-29 16:30:41 2015-05-29 16:30:46 2015-05-29 16:30:51 2015-05-29 16:30:56 2015-05-29 16:31:01 2015-05-29 16:31:07 2015-05-29 16:31:49 2015-05-29 16:31:54 2015-05-29 16:32:00 2015-05-29 16:32:10 2015-05-29 16:32:15 ``` Now I want to take only one row per hour; the minutes and seconds aren't important. For example: ``` 2015-05-29 5:*:* 2015-05-29 6:*:* 2015-05-29 7:*:* 2015-05-29 8:*:* ``` I don't know the best way to do this. I tried a regex but couldn't make it work. Thanks
Well, this is hideous, but try this query :) ``` select * from (select substr(inputDate, 1, 13) as hours, input1 from deviceData order by inputDate desc) t GROUP BY hours; ``` I might have messed up the sorting direction, though.
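The "one row per hour" idea can also be checked quickly against an in-memory SQLite database from Python (MySQL syntax differs slightly, and in strict SQL modes you would pick the row with an aggregate such as MIN rather than rely on a bare grouped column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deviceData (inputDate TEXT, input1 TEXT)")
conn.executemany(
    "INSERT INTO deviceData VALUES (?, ?)",
    [
        ("2015-05-29 16:30:09", "a"),
        ("2015-05-29 16:31:07", "b"),
        ("2015-05-29 17:02:00", "c"),
    ],
)

# Group on the first 13 characters (YYYY-MM-DD HH) and keep the earliest
# reading from each hour; with a single MIN, SQLite returns the other
# columns from that same row.
rows = conn.execute(
    """
    SELECT substr(inputDate, 1, 13) AS hour, MIN(inputDate), input1
    FROM deviceData
    GROUP BY hour
    ORDER BY hour
    """
).fetchall()
print(rows)
```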
284465
I am a Maya user and I am currently writing an auto-rig tool. I created different classes for each major task of the tool (e.g. Class\_UI, Class\_Arms\_Rig, etc.). The problem I have is that I can't call a method from "Class\_Joints" (the class that will generate every needed joint) with my "Class\_UI". Here is the code: First the Class\_UI ``` import sys sys.path.append('G:\\3D2\\Script\\Auto_Rig') import Class_Joints import Class_Arms import maya.cmds as mc class Window_UI(object): # Initializing global variables def __init__(self): # Getting access to the different modules self.Arms = Class_Arms.Arms_Rig() self.Joints = Class_Joints.Gen_Joints() # Create Ui self.create_UI() # Creating the UI def create_UI(self): # Create window self.UI = mc.window(title='Auto-Rig Tool', w=(300), h=(350)) # Main layout self.mainLayout = mc.menuBarLayout() ### Joints Option ### # Create Joints Button self.createJointsButton = mc.button(label='Create Joints', command=self.Joints.gen_arms_joints) Window_UI() mc.showWindow() ``` Then the Class\_Joints: ``` import maya.cmds as mc class Gen_Joints: # Creating arm Jnts and the list of it def gen_arms_joints(self): self.shoulderJnt = mc.joint(absolute=True, position=[5,8,0], n='L_Shoulder_Jnt') self.elbowJnt = mc.joint(absolute=True, position=[10,8,-1.5], n='L_Elbow_Jnt') self.wristJnt = mc.joint(absolute=True, position=[15,8,0], n='L_Wrist_Jnt') self.handcupJnt = mc.joint(absolute=True, position=[18,8,0], n='L_HandCup_Jnt') self.jntList = mc.ls(self.shoulderJnt, self.elbowJnt, self.wristJnt, self.handcupJnt) ``` When I run the `Class_UI` code, the button within the UI is supposed to run the `gen_arms_joints` method within `Class_Joints`. But I get this error message: `# Error: gen_arms_joints() takes exactly 1 argument (2 given) #` I know that self is an implicit argument here, but I do not know how to avoid this error. Thank you all for your time. :D Cordially, Luca.
Two things I would recommend you do. I don't use Maya, but I have built apps with several different GUI toolkits. 1. In every GUI toolkit I've used, the first argument of a button callback is a reference to self, and then 1 or 2 more arguments are usually passed in. Some toolkits pass a reference to the button itself, while others pass an argument that holds event details. My guess is this is what is happening: when you click the button, it passes in an "event" object that holds details about what was clicked. 2. To find out what is actually passed, change your function signature to this and see what is logged: ``` def gen_arms_joints(self, mystery_second_arg): print(type(mystery_second_arg), mystery_second_arg) self.shoulderJnt = mc.joint(absolute=True, position=[5,8,0], n='L_Shoulder_Jnt') self.elbowJnt = mc.joint(absolute=True, position=[10,8,-1.5], n='L_Elbow_Jnt') self.wristJnt = mc.joint(absolute=True, position=[15,8,0], n='L_Wrist_Jnt') self.handcupJnt = mc.joint(absolute=True, position=[18,8,0], n='L_HandCup_Jnt') self.jntList = mc.ls(self.shoulderJnt, self.elbowJnt, self.wristJnt, self.handcupJnt) ```
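Once the extra argument is identified, the usual fix is simply to let the method absorb it. A minimal pure-Python sketch of two common patterns (the class body and return value here are placeholders for illustration, not Maya API; in Maya, a button command typically receives an extra boolean):

```python
import functools

class GenJoints:
    def gen_arms_joints(self, *args):
        # *args swallows whatever extra value the GUI passes to the callback,
        # so the method works both when called directly and from a button.
        return "joints created"

g = GenJoints()

# Pattern 1: the *args signature absorbs the extra argument.
print(g.gen_arms_joints(True))   # simulating the GUI's extra argument
print(g.gen_arms_joints())       # a plain direct call still works

# Pattern 2: bind the callback up front with functools.partial, useful when
# you also want to pre-fill your own parameters (e.g. a side flag).
callback = functools.partial(g.gen_arms_joints)
print(callback(False))
```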
284548
I am interested in creating new rows in SAS that are conditional on specific variables. Suppose these are the names of the columns: RecordID, ItemName, ItemCount, ItemX, ItemY, ItemZ. ItemX, ItemY, and ItemZ are numeric variables that can be equal to or greater than 0. When ItemX, ItemY, and/or ItemZ holds a value greater than 0, I want a new row to be created where ItemName contains the variable name (ItemX, ItemY, ItemZ) and ItemCount contains the numeric value. For example, if RecordID=1 has ItemX=5 and ItemY=10, then ItemName=ItemX and ItemCount=5 will appear in a new row, and ItemName=ItemY and ItemCount=10 gets its own row. How should I go about this in SAS? Thanks.
Pretty easy:

```
data want(keep=ItemName ItemCount);
    set have;
    array items[3] itemx itemy itemz;
    format ItemName $32. ItemCount best.;
    do i=1 to 3;
        if items[i] > 0 then do;
            ItemName  = vname(items[i]);
            ItemCount = items[i];
            output;
        end;
    end;
run;
```

Create an array of the item variables. Loop over that array. Check your condition. Use the `vname()` function to get the variable name and the `output` statement to create the output record.
284825
```
import pandas as pd
dfs = pd.HDFStore('xxxxx.h5')
```

throws this error: `"ImportError: HDFStore requires PyTables, "No module named tables" problem importing"` I tried to install PyTables, which requires Cython. I have Cython 0.21 installed, but it throws an error stating that Cython should be greater than 0.13. Here is the log that I am getting:

```
".. ERROR:: You need Cython 0.13 or greater to compile PyTables!
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /private/tmp/pip_build_root/tables
Storing debug log for failure in /Users/nikhilsahai/Library/Logs/pip.log
Nikhils-MacBook-Pro:~ nikhilsahai$ sudo pip install cython
Requirement already satisfied (use --upgrade to upgrade): cython in /Library/Python/2.7/site-packages/Cython-0.21-py2.7-macosx-10.9-intel.egg
Cleaning up..."
```

Please guide me on how to solve this issue.
Simply updating `pytables` with:

```
pip install --upgrade tables
```

worked for me.
284927
[This answer](https://aviation.stackexchange.com/a/47132) made me wonder if engines are stopped automatically after an emergency landing as soon as one of the exit doors is opened.

> (...) if the engine is still running, you're about to run forward into the area where you're at risk of being sucked into the running engine.

I'm not sure how high the risk would be of being sucked in or blasted away by the jet blast. I guess a lot of emergency landings are because of engine failure, so the engines aren't running anymore anyway. Maybe as an added extra: how long does it take for an engine to spin down to a safe RPM (so that it won't suck in people or blast them away)?
There is no automatic shutdown. However, shutting down the engines is part of the evacuation checklist done by the flight crew.

It has happened before that the controls link was severed and an engine could not be shut down, such as on [Qantas Flight 32](https://en.wikipedia.org/wiki/Qantas_Flight_32):

> Upon landing, the crew were unable to shut down the No. 1 engine, which had to be doused by emergency crews until flameout was achieved.

Normally the captain would then instruct the cabin crew not to use that side of the plane. The same thing happens when there is a fire on one side; in that case the captain would also steer the plane so that the [good side is upwind](https://aviation.stackexchange.com/q/37104/14897). Aircraft are tested to ensure all occupants can evacuate with 50% of the emergency exits not working (glass half-full: 50% of the doors working). See: [How are evacuation tests made as realistic as possible?](https://aviation.stackexchange.com/q/21160/14897)

As for how long it takes for the engine to stop producing thrust: about 5 seconds. If there is little or no thrust, there is no suction to worry about. Also note that figure is for a spool-down from full thrust (not from idle thrust, which is what is to be expected after landing).

[![enter image description here](https://i.stack.imgur.com/ZMZLO.png)](https://i.stack.imgur.com/ZMZLO.png)

(Source: Jet Transport Performance Methods)
285159
I have this (the filename is fun2.sh):

```
USAGE="Usage: -f [file name] -c [column] -v [value] ."

while getopts ":f:c:v: " OPTIONS; do
    case $OPTIONS in
        f ) file=$OPTARG;;
        c ) column=$OPTARG;;
        v ) value=$OPTARG;;
        h ) echo $USAGE;;
        \? ) echo $USAGE
             exit 1;;
        * ) echo $USAGE
            exit 1;;
    esac
done
```

I want to echo the $USAGE if they fail to put a parameter in -f or -c or -v. I have tried putting in a

```
" ") echo $USAGE
     exit 1;;
```

but that didn't work. I also tried

```
if [ $file || $column || $value == "" ]
echo $USAGE
```

but again it didn't work. Any help would be appreciated.

EDIT

What worked for me:

```
if [ "$file" == "" ] ; then
    echo $USAGE
elif [ "$column" == "" ]; then
    echo $USAGE
elif [ "$value" == "" ]; then
    echo $USAGE
else
    show_column
    check_temp
    file_move
    write_file
fi
```
You can't do this inside the `getopts` loop. Check the values of the variables after the loop and print `$USAGE` if they are empty or the values are wrong (not an integer, for example).
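A minimal sketch of that post-loop check (variable names follow the question; `-z` is true when a variable is empty or unset):

```shell
USAGE="Usage: -f [file name] -c [column] -v [value]"

# Succeeds only when all three required options were supplied.
check_required() {
    if [ -z "$file" ] || [ -z "$column" ] || [ -z "$value" ]; then
        echo "$USAGE" >&2
        return 1
    fi
}

# Example: simulate a run where -v was omitted
file="fun2.sh"
column="3"
value=""
check_required || echo "missing required option"
```

In the real script this would run right after the `while getopts ...; done` loop, typically followed by `exit 1` instead of the demo message.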
285288
I'm writing a utility Mac OS X app that basically acts as a web server, accepting incoming HTTP requests (think of it as a mock REST API server). I want to be able to support HTTPS, but ideally I'd like to remove the requirement for my users to purchase their own SSL certificates.

I've been thinking a little about how I might achieve this. Let's say I register a domain called `myapp.com`. I then purchase an SSL cert for `myserver.myapp.com` that is signed by a registered CA. I ship my app with those SSL cert details embedded within it. All my users have to do is update their `/etc/hosts` file to point `myserver.myapp.com` to whatever IP address my app is installed and running on. In fact, by far the most common scenario would be my app running on the same machine as the client, so I'm considering updating the main DNS entry for `myserver.myapp.com` to point to `127.0.0.1`, and most users wouldn't have to change anything.

So, that's the basic theory. What have I missed that would make this an unworkable plan? A couple of things that have crossed my mind:

* I could use a self-signed cert. However, many SSL clients barf (or throw up warnings) if the cert doesn't have a valid CA chain. I'm happy to pay the money for a real cert to alleviate this inconvenience for my users.
* I would be embedding the private key for my SSL cert into my app. In theory, someone could extract that and use it to impersonate my app. I guess my reaction is "so what?" My app is a small productivity app; it isn't an e-commerce site. It doesn't collect sensitive info. It literally just simulates web server responses so devs can test their apps.

Any advice/feedback would be greatly appreciated. Thanks.
This won't work - but for nontechnical reasons. Distributing an SSL certificate to your users along with its associated key will violate the issuance terms of your SSL certificate provider, and they will revoke the certificate when they discover what you have done. (This happened, for example, [when Pivotal tried to offer SSL service for developers through sslip.io](https://blog.pivotal.io/labs/labs/sslip-io-a-valid-ssl-certificate-for-every-ip-address).) This will, of course, cause your application to stop working. If your users have administrative access to their machines, they can [create and trust their own self-signed CA using Keychain Access](https://support.apple.com/kb/PH20132?locale=en_US). Once they have done so, they could create a certificate (again, using Keychain Access) and insert that into your application to enable SSL.
285926
**Background on the machine I'm having a problem with:**

The machine was inherited and appears to be circa 2003 (there's a date stamp on the power supply which leads me to this conclusion). I've got it set up as a Skype terminal for my 2-year-old to keep in touch with her grandparents and other members of the family - which everyone loves.

It has a DFI CM33-TL/G ATX (identified using SiSoft Sandra) motherboard hosting an Intel Celeron 1.3 GHz CPU, 768 MB PC133 SDRAM, a D-LINK WDA-2320 54G Wi-Fi network card and a generic USB 2.0 expansion board based on the NEC uPD720102 chipset containing three external and one internal USB sockets. It's also hosting a 1.44 MB floppy drive on FDD0, a new 80 GB Western Digital hard drive running as master on IDE0 and a Panasonic DVD+/-RW running as master on IDE1. All this is sitting in a slimline case running off a Macron Power MPT-135 135 W Flex power supply.

The motherboard is running a version of Award BIOS 05/24/2002-601T-686B-6A6LID4AC-00. Could this be updated? If so, from where? I've raked through the manufacturer's website, but I can't find any hint of downloads for either drivers *or* BIOS updates.

The hard disk is freshly formatted and built with Windows XP Professional/Service Pack 3 and is up to date with all current patches. In addition to Windows XP, the only other software it's running is Skype 4.1 (4.2 hangs the whole machine as soon as it starts up, requiring a hard boot to recover). It's got a Daytek MV150 15" touch screen hooked up to the on board VGA and COM1 sockets with the most current drivers from the Daytek website and the most current version of ELO-Touchsystems drivers for the touch component. The webcam is a Logitech Webcam C200 with the latest drivers from the Logitech website.

**The problem:**

If I hook any devices to the USB 2.0 sockets, it hangs the whole machine, and I have to hard boot it to get it back up.
If I have any devices attached to the USB 2.0 sockets when I boot up, it hangs before Windows gets to the login prompt and I have to hard boot it to recover.

**Workarounds found:**

I can plug the same devices into the on board USB 1.0 sockets and everything works fine, albeit at reduced performance. I've tried three different kinds of USB thumb drives, three different makes/models of webcams and my iPhone, all with the same effect. They're recognized and don't hang the machine when I hook them to the USB 1.0 ports, but if I hook them to the USB 2.0 ports, the machine hangs within a couple of seconds of recognizing the devices were connected.

**Attempted solutions:**

I've seen suggestions that this could be a power problem - that the PSU just doesn't have the power to drive these ports. While I'm doubtful this is the problem [after all, the motherboard has the same standard connector regardless of the PSU power], I tried disabling all the on board devices that I'm not using - on board LAN, the second COM port, the AGP connector, etc. - through the BIOS, in what I'm sure is a futile attempt to reduce the power consumption... I also modified the ACPI and power management settings. It didn't have any noticeable effect, although it didn't do any harm either. Could the wattage of the PSU really cause this problem? If it can, is there anything I need to be aware of when replacing it, or do I just need to make sure it's got a higher power rating than the current one? My interpretation was that the power only affected the number of drives you could hook up to the power connectors. Is that right?

I've installed the USB card in another machine, and it works without any issues, so it's not a problem with the USB card itself, and Windows says the card is installed and working correctly... right up until I connect a device to it.
The only thing I haven't done, which I only just thought of while writing this essay, is trying the USB 2.0 card in a different PCI slot, or re-ordering the Wi-Fi and USB cards in the slots... although I'm not sure if this will make any difference - does anyone have any experience that would suggest this might work?

**Other thoughts/questions:**

Perhaps this is an incompatibility between the USB 2.0 card and the BIOS; would re-flashing the BIOS with a newer version help? Do I need to be able to identify the manufacturer of the motherboard in order to find a BIOS edition specific to this motherboard, or will *any* version of Award BIOS function in its place?

**Question:**

Does anyone have any ideas that could help me get my USB 2.0 devices hooked up to this machine?
I had all manner of problems on an older machine trying to use my iPod over USB through a cheap USB 2.0 PCI card I had. In the end, I chucked it and bought a more expensive one, and all the problems went away. I'd suggest trying that. I can't imagine that a USB 2.0 PCI card will cost much these days, but do what I did and buy the next one up from the cheapest one you can get.
285959
I want to have custom colors according to the mouse events (mouse enter, exit, pressed, etc.). To accomplish this, I wrote the code below. It is fine for everything except the mouse pressed event, which does nothing. It only works if I override the color in the `UIManager` like this: `UIManager.put("Button.select", Color.red);`. The problem with the `UIManager` is that it will change the color for all my buttons. Can anyone tell me what I might be doing wrong, or what is the best approach to accomplish what I'm trying to do?

My code:

```
final JButton btnSave = new JButton("Save");
btnSave.setForeground(new Color(0, 135, 200).brighter());
btnSave.setHorizontalTextPosition(SwingConstants.CENTER);
btnSave.setBorder(null);
btnSave.setBackground(new Color(3, 59, 90));
btnSave.addMouseListener(new MouseListener() {
    @Override
    public void mouseReleased(MouseEvent e) {
        btnSave.setBackground(new Color(3, 59, 90));
    }

    @Override
    public void mousePressed(MouseEvent e) {
        // Not working :(
        btnSave.setBackground(Color.pink);
    }

    @Override
    public void mouseExited(MouseEvent e) {
        btnSave.setBackground(new Color(3, 59, 90));
    }

    @Override
    public void mouseEntered(MouseEvent e) {
        btnSave.setBackground(new Color(3, 59, 90).brighter());
    }

    @Override
    public void mouseClicked(MouseEvent e) {
        btnSave.setBackground(new Color(3, 59, 90).brighter());
    }
});
```

Edit 1: Instead of `MouseListener`, I'm now using `ChangeListener` and `ButtonModel` as suggested by mKorbel. With this code I'm still not observing any change on mouse press in the button, except when I press and drag outside the button. Any thoughts?

```
btnSave.getModel().addChangeListener(new ChangeListener() {
    @Override
    public void stateChanged(ChangeEvent e) {
        ButtonModel model = (ButtonModel) e.getSource();
        if (model.isRollover()) {
            btnSave.setBackground(new Color(3, 59, 90).brighter());
        } else if (model.isPressed()) {
            btnSave.setBackground(Color.BLACK);
        } else {
            btnSave.setBackground(new Color(3, 59, 90));
        }
    }
});
```
The problem is caused by the fact that a JButton has its content area filled by default, and that the Metal L&F will automatically fill it with its own internally chosen color when the button is pressed. The best thing to do is to extend JButton to create your own button, disable content-area filling, and paint the background of the button yourself. Here is a small demo of that (not sure it works on other L&Fs; actually, I'm pretty sure it does not):

```
import java.awt.Color;
import java.awt.Graphics;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingConstants;
import javax.swing.SwingUtilities;

public class TestButton {

    class MyButton extends JButton {

        private Color hoverBackgroundColor;
        private Color pressedBackgroundColor;

        public MyButton() {
            this(null);
        }

        public MyButton(String text) {
            super(text);
            super.setContentAreaFilled(false);
        }

        @Override
        protected void paintComponent(Graphics g) {
            if (getModel().isPressed()) {
                g.setColor(pressedBackgroundColor);
            } else if (getModel().isRollover()) {
                g.setColor(hoverBackgroundColor);
            } else {
                g.setColor(getBackground());
            }
            g.fillRect(0, 0, getWidth(), getHeight());
            super.paintComponent(g);
        }

        @Override
        public void setContentAreaFilled(boolean b) {
        }

        public Color getHoverBackgroundColor() {
            return hoverBackgroundColor;
        }

        public void setHoverBackgroundColor(Color hoverBackgroundColor) {
            this.hoverBackgroundColor = hoverBackgroundColor;
        }

        public Color getPressedBackgroundColor() {
            return pressedBackgroundColor;
        }

        public void setPressedBackgroundColor(Color pressedBackgroundColor) {
            this.pressedBackgroundColor = pressedBackgroundColor;
        }
    }

    protected void createAndShowGUI() {
        JFrame frame = new JFrame("Test button");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        final MyButton btnSave = new MyButton("Save");
        btnSave.setForeground(new Color(0, 135, 200).brighter());
        btnSave.setHorizontalTextPosition(SwingConstants.CENTER);
        btnSave.setBorder(null);
        btnSave.setBackground(new Color(3, 59, 90));
        btnSave.setHoverBackgroundColor(new Color(3, 59, 90).brighter());
        btnSave.setPressedBackgroundColor(Color.PINK);
        frame.add(btnSave);
        frame.setSize(200, 200);
        frame.setVisible(true);
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            @Override
            public void run() {
                new TestButton().createAndShowGUI();
            }
        });
    }
}
```
286384
This is the code I use to connect to an untrusted server, but I always get this error. I put the code in the `using` statement but it is not working - it returns an empty string. I have also looked at earlier links about this issue, but their suggestions did not work either.

```
private String requestAndResponse(String url)
{
    string responseValue = string.Empty;

    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = httpMethod.ToString();
    HttpWebResponse response = null;

    // for untrusted servers
    System.Net.ServicePointManager.ServerCertificateValidationCallback =
        delegate { return true; };

    try
    {
        using (response = (HttpWebResponse)request.GetResponse())
        {
            if (response.StatusCode != HttpStatusCode.OK)
            {
                throw new ApplicationException("error code " + response.StatusCode.ToString());
            }
        }

        // process the response stream .. (json, html, etc..)
        Encoding enc = System.Text.Encoding.GetEncoding(1252);
        StreamReader loResponseStream = new StreamReader(response.GetResponseStream(), enc);
        responseValue = loResponseStream.ReadToEnd();
        loResponseStream.Close();
        response.Close();
    }
    catch (Exception ex)
    {
        throw ex;
    }

    return responseValue;
}
```
The first `using` block disposes your response as soon as the block ends, so by the time you call `response.GetResponseStream()` the response has already been disposed. Move the code that processes the response stream inside the `using` statement (before the block closes), and it will work.
286513
I'm using <https://trezor.io/> to send signed transactions. I have successfully sent transactions to:

* the Truffle development network
* Ropsten

Right now I'm using a private local go-ethereum node. The invocation of the signed transaction is exactly the same code as for Truffle and Ropsten, yet I get `Invalid sender` when sending the transaction.

While researching this, I found that this error is produced by not having the same chainId and networkId. I checked my configuration and how I ran the geth node, and the chainId is the same as the networkId. I have specified chain id 10 in the genesis.json of my geth node:

```
"config": {
    "chainId": 10,
    "homesteadBlock": 0,
    "eip150Block": 0,
    "eip150Hash": "0x0000000000000000000000000000000000000000000000000000000000000000",
    "eip155Block": 0,
    "eip158Block": 0,
    "byzantiumBlock": 0,
    "clique": {
        "period": 15,
        "epoch": 30000
    }
}
```

and I ran the node with networkId 10:

```
geth --datadir node1/ --syncmode 'full' --port 30311 --rpc --rpcport 8545 --rpcaddr '192.168.1.244' --rpccorsdomain="*" --ws --wsaddr "192.168.1.244" --wsorigins "http://192.168.1.182" --wsport 8546 --wsapi 'personal,db,eth,net,web3,txpool,miner' --rpcapi 'personal,db,eth,net,web3,txpool,miner' --bootnodes 'enode://8235e42bec82ad8944dcf65b57d25b7a970d6e94f35961a188b2dfd306c6964f2d00d078e3bf1d9ccc6664112669d7ea9c04aa45a8ab9113aa8fe8a04b088f80@127.0.0.1:30310' --networkid 10 --gasprice '1' -unlock 'd770217581e0ca1265c88c9faaff81f5038b129f' --password node1/password.txt --mine console
```

Any ideas why this could happen? I'm using geth 1.8 and web3 1.0-beta33. I think it is something about the configuration of geth, because, as I said, I have sent transactions to Truffle dev and Ropsten with the same code. Here is how I send the transactions (source code): <https://github.com/ethereum/web3.js/issues/1669>
Below, I use `cut` to bin the data and then `table` to count the instances in each bin:

```
library(magrittr)  # for the %>% pipe

data.frame(cut(q4$prop, breaks = c(0, 0.1, 0.2, 0.3)) %>% table)
```

produces

```
#           . Freq
# 1   (0,0.1]  341
# 2 (0.1,0.2]   13
# 3 (0.2,0.3]    2
```
286763
Been fighting with this on and off for 48 hours now; I'm still getting undefined reference errors when attempting to link a dynamic library with its dependency - despite all exports existing, and the library being found successfully.

Scenario:

* libmemory (C++) - exports functions with `extern "C"`
* libstring (C) - exports functions, imports from libmemory

libmemory builds successfully:

```
$ g++ -shared -fPIC -o ./builds/libmemory.so ...$(OBJECTS)...
```

libstring compiles successfully, but fails to link:

```
$ gcc -shared -fPIC -o ./builds/libstring.so ...$(OBJECTS)... -L./builds -lmemory
./temp/libstring/string.o: In function `STR_duplicate':
string.c:(.text+0x1cb): undefined reference to `MEM_priv_alloc'
./temp/libstring/string.o: In function `STR_duplicate_replace':
string.c:(.text+0x2a0): undefined reference to `MEM_priv_free'
string.c:(.text+0x2bf): undefined reference to `MEM_priv_alloc'
/usr/bin/ld: ./builds/libstring.so: hidden symbol `MEM_priv_free' isn't defined
/usr/bin/ld: final link failed: Bad value
collect2: error: ld returned 1 exit status
```

Verifying that libmemory exports its symbols, and that the library itself is found, by passing `-v` to gcc:

```
...
attempt to open ./builds/libmemory.so succeeded
-lmemory (./builds/libmemory.so)
...

$ nm -gC ./builds/libmemory.so | grep MEM_
0000000000009178 T MEM_exit
0000000000009343 T MEM_init
00000000000093e9 T MEM_print_leaks
00000000000095be T MEM_priv_alloc
000000000000971d T MEM_priv_free
00000000000099c1 T MEM_priv_realloc
0000000000009d26 T MEM_set_callback_leak
0000000000009d3f T MEM_set_callback_noleak

$ objdump -T ./builds/libmemory.so | grep MEM_
0000000000009d3f g DF .text 0000000000000019 Base MEM_set_callback_noleak
00000000000093e9 g DF .text 00000000000001d5 Base MEM_print_leaks
0000000000009d26 g DF .text 0000000000000019 Base MEM_set_callback_leak
00000000000099c1 g DF .text 0000000000000365 Base MEM_priv_realloc
0000000000009343 g DF .text 00000000000000a6 Base MEM_init
00000000000095be g DF .text 000000000000015f Base MEM_priv_alloc
000000000000971d g DF .text 00000000000002a4 Base MEM_priv_free
0000000000009178 g DF .text 00000000000000a7 Base MEM_exit

$ readelf -Ws ./builds/libmemory.so | grep MEM_
    49: 0000000000009d3f    25 FUNC GLOBAL DEFAULT 11 MEM_set_callback_noleak
    95: 00000000000093e9   469 FUNC GLOBAL DEFAULT 11 MEM_print_leaks
    99: 0000000000009d26    25 FUNC GLOBAL DEFAULT 11 MEM_set_callback_leak
   118: 00000000000099c1   869 FUNC GLOBAL DEFAULT 11 MEM_priv_realloc
   126: 0000000000009343   166 FUNC GLOBAL DEFAULT 11 MEM_init
   145: 00000000000095be   351 FUNC GLOBAL DEFAULT 11 MEM_priv_alloc
   192: 000000000000971d   676 FUNC GLOBAL DEFAULT 11 MEM_priv_free
   272: 0000000000009178   167 FUNC GLOBAL DEFAULT 11 MEM_exit
   103: 0000000000009343   166 FUNC GLOBAL DEFAULT 11 MEM_init
   108: 0000000000009178   167 FUNC GLOBAL DEFAULT 11 MEM_exit
   148: 0000000000009d3f    25 FUNC GLOBAL DEFAULT 11 MEM_set_callback_noleak
   202: 00000000000095be   351 FUNC GLOBAL DEFAULT 11 MEM_priv_alloc
   267: 000000000000971d   676 FUNC GLOBAL DEFAULT 11 MEM_priv_free
   342: 0000000000009d26    25 FUNC GLOBAL DEFAULT 11 MEM_set_callback_leak
   346: 00000000000099c1   869 FUNC GLOBAL DEFAULT 11 MEM_priv_realloc
   366: 00000000000093e9   469 FUNC GLOBAL DEFAULT 11 MEM_print_leaks
```

Is there something horribly simple I'm missing? All the other questions related to this have simple answers such as link-library order and the paths used - but I've already verified those are in place and working as expected. Tinkering with `-fvisibility` led to no changes either. The same result occurs whether using clang or gcc.

`Linux 3.16.0-38-generic`, `gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3)`
I don't think you need a custom directive to fire the `$http` call to fetch the results whenever your input value changes. Instead you can do as below:

```
<input type="text" ng-model="header.search"
       typeahead-on-select="searchClicked($item)"
       uib-typeahead="state as state.data.name for state in suggestions($viewValue) | filter:$viewValue | limitTo:8"
       typeahead-min-length="0" placeholder="Søg...">
```

In your JS controller you can write the function `$scope.suggestions()` to fetch the new results as the query is typed:

```
var app = angular.module('plunker', ['ui.bootstrap']);

app.factory('dataProviderService', ['$http', function($http) {
  var factory = {};
  factory.getCities = function(input) {
    // I've used the Google geocoding API to fetch cities; your own service call goes here.
    return $http.get('//maps.googleapis.com/maps/api/geocode/json', {
      params: {
        address: input,
        sensor: false
      }
    }).then(function(response) {
      // on success, the results can be further processed as required
      return response.data.results.map(function(item) {
        return item.formatted_address;
      });
    });
  };
  return factory;
}]);

app.controller('TypeaheadCtrl', ['$scope', '$log', '$http', 'dataProviderService',
  function($scope, $log, $http, dataProviderService) {
    $scope.suggestions = function(viewValue) {
      // you can call your own service via a factory
      return dataProviderService.getCities(viewValue);
    };
  }
]);
```

Here is the [DEMO](https://plnkr.co/edit/4Z027v3meFwbfRPlFzDV?p=preview) of a working sample of the above approach; hope this helps you. With this approach, no matter how fast you type, you will always fetch fresh results instantly.
286819
There is a RESTful method that returns a List of Menu objects:

```
public ResponseEntity<List<Menu>> getMenus() { .. }
```

But I don't know how to retrieve that list with `RestTemplate`, since there is no way to write a class literal for `ResponseEntity<List<Menu>>`:

```
ResponseEntity<List<Menu>> response = restTemplate
    .exchange("http://127.0.0.1:8080/elcor/api/users/1/menus",
              HttpMethod.GET, entity, ResponseEntity<List<Menu>>.getClass());
```
Try using `ParameterizedTypeReference`:

```
ResponseEntity<List<Menu>> response = restTemplate
    .exchange("URI", HttpMethod.GET, entity,
              new ParameterizedTypeReference<List<Menu>>() { });
```
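The reason a plain `Class` object can't express `List<Menu>` is type erasure; `ParameterizedTypeReference` works by requiring an anonymous subclass, because a subclass's generic supertype *is* retained at runtime. A minimal sketch of that mechanism without Spring (the `TypeRef` class below is a hypothetical stand-in for `ParameterizedTypeReference`):

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.List;

public class Main {
    // Hypothetical stand-in for Spring's ParameterizedTypeReference:
    // an anonymous subclass keeps its generic superclass, so the full
    // List<String> type survives erasure and can be read reflectively.
    abstract static class TypeRef<T> {
        final Type type;
        TypeRef() {
            this.type = ((ParameterizedType) getClass().getGenericSuperclass())
                    .getActualTypeArguments()[0];
        }
    }

    public static void main(String[] args) {
        Type captured = new TypeRef<List<String>>() {}.type;
        System.out.println(captured);
    }
}
```

This is why the `{ }` at the end of `new ParameterizedTypeReference<List<Menu>>() { }` matters: it creates the anonymous subclass that carries the type information.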
286887
I am making a search engine for songs using tags, and I am having trouble building the SQL query that will list all the songs that match the tags. The database looks like this: <http://i.imgur.com/5zmfAz8.png>

Songs have many Tags through an intermediate table (SongTags). Let's have a population as an example:

**Tags**: Electro, Instrumental, Energetic, Melancholic, Vocal, Rock

**Songs**: SongA (Electro, Melancholic, Vocal), SongB (Instrumental, Melancholic, Rock), SongC (Energetic, Vocal)

---

The search should return the songs that contain ALL the tags requested.

**Search1:** "Vocal" returns: SongA, SongC

**Search2:** "Vocal", "Energetic" returns: SongC

**Search3:** "Vocal", "Energetic", "Electro" returns: nothing

---

I see how to do a search on one tag, but not on multiple tags. For example, for Search1, I know that this would work:

```
SELECT * FROM "songs"
INNER JOIN "song_tags" ON "song_tags"."song_id" = "songs"."id"
INNER JOIN "tags" ON "tags"."id" = "song_tags"."tag_id"
WHERE "tags"."name" = 'Vocal'
```

But then I have no idea how I could execute Search2, because I need the song to contain both "Vocal" and "Energetic".

Edit: I am using PostgreSQL.
To understand why your code didn't work, you should understand what the `flashvars` parameter is and how it works. Adobe describes it [here](https://helpx.adobe.com/flash/kb/pass-variables-swfs-flashvars.html), for example:

> The FlashVars parameter of the HTML <OBJECT> tag sends variables into the top level of a SWF file when it loads in a web browser. The <OBJECT> tag is used to add SWF files to HTML pages. The <EMBED> tag can also be used, but is older and now obsolete.

So here we can understand that those variables are loaded when the SWF is loaded, and that's why, even if you've changed the `flashvars` parameter, it will do nothing, absolutely nothing, to the already loaded SWF, which would have to be loaded again for the new variables to apply.

So to do that, take this simple example:

**HTML:**

```
<div id='swf_container'>
    <embed id='swf_object' src='swf.swf' flashvars='id=1' />
</div>
```

**JavaScript:**

```
// change the flashvars attribute
var swf_object = document.getElementById('swf_object');
swf_object.setAttribute('flashvars', 'id=2');

var swf_container = document.getElementById('swf_container');
var inner_html = swf_container.innerHTML;

// reload the swf object
swf_container.innerHTML = '';
swf_container.innerHTML = inner_html;
```

This manner is, of course, working, but maybe it's not a good idea to reload the SWF object every time we need it to do something, and that's why we have [`ExternalInterface`](http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/external/ExternalInterface.html) to communicate between the SWF and JavaScript.

---

So in the case where you have access to the ActionScript code used to create that SWF, you can use `ExternalInterface` to call any function in your SWF when it's already loaded. For that, take this example:

**ActionScript:**

```
if (ExternalInterface.available) {
    // registers an AS function to be called from JS
    ExternalInterface.addCallback('from_JS_to_AS', from_JS);
}

function from_JS(id:int) : void {
    // use the id sent by JS
}
```

**JavaScript:**

```
var swf_object = document.getElementById('swf_object');
swf_object.from_JS_to_AS(1234);
```

... and don't forget to use [`swfobject`](https://code.google.com/p/swfobject/) to avoid some browser compatibility problems and to be sure that you establish the communication between your ActionScript side and the JavaScript one...

Hope that can help.
288351
`make_shared` allocates a single block for the object and the reference counter, so there is an obvious performance benefit to using this technique. I made a simple experiment in VS2012, looking for 'evidence':

```
std::shared_ptr<Test> sp2 = std::make_shared<Test>();
std::shared_ptr<Test> sp(new Test()); // Test is a simple class with an int 'm_value' member
```

When debugging, I got something like this in the locals view (some lines are deleted):

```
- sp2 shared_ptr {m_value=0 } [make_shared] std::shared_ptr<Test>
  + _Ptr 0x01208dec {m_value=0 } Test *
  + _Rep 0x01208de0 make_shared std::_Ref_count_base *
- sp shared_ptr {m_value=0 } [default] std::shared_ptr<Test>
  + _Ptr 0x01203c50 {m_value=0 } Test *
  + _Rep 0x01208d90 default std::_Ref_count_base *
```

It seems that **sp2** has its ref counter allocated at **0x01208de0**, and then at **0x01208dec** there is the Test object; the locations are very close to each other. In the second version we have **0x01208d90** for the ref counter and **0x01203c50** for the object; those locations are quite distant. Is this the expected output? Do I understand this correctly?
If you read [cppreference's page for `make_shared`](http://en.cppreference.com/w/cpp/memory/shared_ptr/make_shared), they say:

> This function allocates memory for the `T` object and for the `shared_ptr`'s control block with a single memory allocation. In contrast, the declaration `std::shared_ptr<T> p(new T(Args...))` performs two memory allocations, which may incur unnecessary overhead.

So this is the intended behaviour, and you interpreted it properly. And of course it makes sense; how could the `shared_ptr` control the allocation of an object you already allocated? With `make_shared`, you leave it in charge of allocating the object, so it can allocate space for your object wherever it wants, which is right beside the counter.

*Addendum*: As Pete Becker noted in the comments, §20.7.2.2.6/6 of the Standard says that implementations are encouraged, but not required, to perform only one allocation. So this behaviour you observed should not be relied upon, though it's safe to say you have nothing to lose and something to gain if you always use `make_shared`.
288377
I am a beginner, and my question is not homework. I want to write C++ code on Linux to copy a large (40 GiB) binary file. My code has to meet the following conditions:

* the speed should be the same as copying with the OS
* the amount of RAM should be assumed to be 4 GiB

Currently, I use:

> sendfile (write\_fd, read\_fd, &offset, stat\_buf.st\_size);

Does `sendfile` satisfy the mentioned conditions? How can I evaluate the speed of copying by my code versus the speed of copying by the OS?
The best answer is that you should try it yourself and run some benchmarks. However, to give you a hint, I quote the man page for [sendfile()](http://man7.org/linux/man-pages/man2/sendfile.2.html): > > Because this copying is done within the kernel, sendfile() is more efficient than the combination of read(2) and write(2), which would require transferring data to and from user space. > > > This should give you a pretty good idea. One caveat: before Linux 2.6.33 the output descriptor had to refer to a socket; on 2.6.33 and later it can be any file, so file-to-file copies like yours work.
288568
This may turn out to be a bit embarrassing, but here it is. It is probably not true that the ascending and descending central series (\*) of a nilpotent group have the same terms. (Or at least one of MacLane-Birkhoff, Rotman and Jacobson would have mentioned it.) However, I have been unable to find an example where they are different. I thought I had a sketch of a proof that they are always equal, but there is a gap, of the kind where you feel it is not patchable. I've proved it for a few nilpotent groups (the dihedral group of the square, any group of order p^3, the Heisenberg groups of dimension 3 and 4 over any ring -- I think the argument extends to any dimension), and checked a few more exotic examples in the excellent [Group Properties Wiki](http://groupprops.subwiki.org/wiki/Main_Page). So, > > What is the simplest (preferably finite) nilpotent group such that its a.c.s. and d.c.s. are different? > > > and > > Do the a.c.s. and d.c.s. coincide in some interesting general case? > > > (\*) For completeness, the ascending central series of a group G is defined by Z\_0 = 1, Z\_{i+1} = the pullback of Z(G/Z\_i(G)) along the projection, and the descending central series by G\_0 = G, G\_{i+1} = [G,G\_i]. The group G is nilpotent iff, for some m, Z\_m = G or G\_m = 1. It turns out that if such an m exists it is the same for both.
It's false even if $m=2$. Try the product of a group of order $2$ and a dihedral group of order $8$. The center has order $4$ and the quotient by the center is abelian. The commutator subgroup has order $2$.
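Spelling the example out in the question's notation, with $G = C_2 \times D_8$ ($D_8$ the dihedral group of order 8, so $|G| = 16$):

```latex
% Ascending central series:
Z_1 = Z(G) = C_2 \times Z(D_8) \cong C_2 \times C_2, \qquad |Z_1| = 4
% G/Z_1 has order 4, hence is abelian, so:
Z_2 = G
% Descending central series:
G_1 = [G,G] = 1 \times [D_8, D_8] \cong C_2, \qquad |G_1| = 2
% G_1 \le Z(G), so:
G_2 = [G, G_1] = 1
% Both series terminate after two steps (m = 2), yet
% |Z_1| = 4 \neq 2 = |G_1|, so the terms differ.
```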
288594
I want to fill an array with values. The **keys** of this array should be **readable dates** in the format 'YEAR-MONTH-DAY'. Starting point is '2010-5-25'. The process should abort on the current date. Obviously, all dates should be valid dates. I thought about doing this loop. But it seems that PHP is not able to check the condition of more than one in a 'for' loop. It does not give me any warnings or errors, though. ``` for ($d = 25, $m = 5, $y = 2010, $this_day = date('j'), $this_month = date('n'), $this_year = date('Y'); ($y <= $this_year) && ($m <= $this_month) && ($d <= $this_day); $d++) { $values[$y.'-'.$m.'-'.$d] = 0; //fill array $d++; if(!checkdate($m, $d, $y)){ $d = 1; $m++; if($m > 12) { $m = 1; $y++; } } } ``` Doing this with nested loops would be rather painful. One solution would be to use integer times as keys and then convert them later in another loop into the readable dates. Is there a more efficient way?
You can simply use `strtotime()`. Example: ``` $values = array(); $oldDate = strtotime('2010-05-25'); while ($oldDate <= time()) { $values[date('Y-m-d', $oldDate)] = 'Your value'; $oldDate = strtotime('+1 day', $oldDate); //Other codes } ``` Note: stepping with `strtotime('+1 day', $oldDate)` instead of adding a flat `86400` seconds avoids skipping or repeating a date when the loop crosses a DST transition.
289100
I learned that numbers can be converted into strings with `sprintf`: ``` int main() { int n; char s[32]; printf("n="); scanf("%d",&n); sprintf(s,"%d",n); //HERE puts(s); return 0; } ``` Is it possible to convert a string into a number with a similar command, without checking if each character is a number?
Yes. You can use the [`strtol`](http://linux.die.net/man/3/strtol) function. ``` long int strtol(const char * restrict nptr, char ** restrict endptr, int base); ``` `strtol` converts the initial portion of the string pointed to by `nptr` to a `long int` representation. It is better not to use `atoi`: it tells you nothing when it is unable to convert a string to an integer, unlike `strtol`, which lets you check via `endptr` whether the conversion succeeded. If no conversion could be performed, zero is returned. Suggested reading: [correct usage of strtol](https://stackoverflow.com/a/14176593/2455888) Example: ``` char *end; char *str = "test"; long int result = strtol(str, &end, 10); if (end == str || *end != '\0') printf("Could not convert '%s' to long and leftover string is: '%s'\n", str, end); else printf("Converted string is: %ld\n", result); ```
289562
I have a Layout with a searchbar on the top, a listview and then a bottom navigation view. The problem is, the last item of the listview hides behind the bottom navigation view. Here is the code of the layout: ``` <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:background="@color/white"> <RelativeLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_alignParentStart="true" android:layout_alignParentTop="true" android:id="@+id/rel1" > <RelativeLayout android:id="@+id/relLayout1" android:layout_width="match_parent" android:layout_height="wrap_content"> <include layout="@layout/snippet_searchbar" /> </RelativeLayout> <ListView android:id="@+id/listView" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_below="@+id/relLayout1" android:scrollbars="vertical"> </ListView> <android.support.v7.widget.RecyclerView android:id="@+id/recyclerView" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_below="@+id/relLayout1" /> </RelativeLayout> <android.support.design.widget.FloatingActionButton android:id="@+id/fab_group" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentBottom="true" android:layout_alignParentEnd="true" android:layout_gravity="bottom|end" android:layout_marginBottom="27dp" android:layout_marginEnd="27dp" android:src="@drawable/ic_next" /> <include layout="@layout/layout_bottom_navigation_view" /> </RelativeLayout> ``` For bottom navigation view: ``` <merge xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto"> <RelativeLayout android:layout_width="match_parent" android:layout_height="50dp" android:layout_alignParentBottom="true"> <com.ittianyu.bottomnavigationviewex.BottomNavigationViewEx 
android:layout_width="match_parent" android:layout_height="match_parent" android:id="@+id/bottomNavViewBar" android:background="@drawable/white_grey_border_bottom" app:menu="@menu/bottom_navigation_menu" app:itemIconTint="@color/secondaryLightColor" > </com.ittianyu.bottomnavigationviewex.BottomNavigationViewEx> </RelativeLayout> </merge> ``` How can I resolve this issue so that the bottom navigation view appears below the listview and the last item of the listview with scrollbars appears above the bottom navigation even when scrolled to the bottom.
The Python 3.7 environment does not yet have support for Google Cloud Endpoints, unfortunately. I would recommend you stick with the flex environment for the time being.
289716
> > **Possible Duplicate:** > > [escape method is not supported by view page in admin area or backend of site in zend framework?](https://stackoverflow.com/questions/11135434/escape-method-is-not-supported-by-view-page-in-admin-area-or-backend-of-site-in) > > > I have installed zend framework properly on my localhost and got the zend framework default index page. Now I am using escape method in view page and run the page then it shows following error:- Fatal error: Call to undefined function escape()..... I am not getting the solution to remove this error, I have set the virtual host and create site structure using Zend\_Tool command like- zf create project myproject. A directory structure I have got in proper way and when I run my site on localhost, it shows a zend default index page. So I assume that zend has installed properly on my system. A directory structure created by zend\_tool have a blank library folder and there is no bin folder is showing in created directory structure of myproject folder. I am using latest version of zend framework 1.11.11 I am very confused that why escape() method is not working I am using following code in controller:- ``` $this->view->assign('username', 'Username'); $this->view->assign('password', 'Password'); $this->view->assign('rememberMe', 'Remember Me'); ``` I am calling escape method in view page as following:- ``` <td id="userlogin" align="left" width="30%"><?php echo escape($this->username);?>:</td> ``` In my controller I am not including any file. It is just a simple controller file like this:- ``` <?php class IndexController extends Zend_Controller_Action { public function init() { /* Initialize action controller here */ } public function indexAction() { // action body $request = $this->getRequest(); $this->view->assign('username', 'Username'); $this->view->assign('password', 'Password'); $this->view->assign('rememberMe', 'Remember Me'); } } ``` Where I have made error please let me know.
`escape()` is not a php function. If you want to use ZF's view helper Zend\_View\_Helper\_Escape, you need to call it on the view object: `<?php echo $this->escape($this->username);?>`
289912
I'm asked to implement a program that generates a random number of jelly beans in a jar, prompt the user to make a guess on how many jelly beans are in the jar, and count how many times the user tried to guess before getting it right. That's my problem right there -- getting the program to count how many times the user inputted a guess. Here's my code: ``` import java.util.Scanner; import java.util.Random; public class JellyBeanGame { public static void main(String[] args) { int numOfJellyBeans = 0; //Number of jellybeans in jar int guess = 0; //The user's guess Random generator = new Random(); Scanner scan = new Scanner (System.in); //randomly generate the number of jellybeans in jar numOfJellyBeans = generator.nextInt(999)+1; System.out.println("There are between 1 and 1000 jellybeans in the jar,"); do { System.out.print("Enter your guess: ");//prompt user to quess and read in guess = scan.nextInt(); if(guess < numOfJellyBeans) //if the quess is wrong display message { System.out.println("Too low."); } else if(guess > numOfJellyBeans); { System.out.println("Too High."); } else { System.out.println("You got it"); // display message saying guess is correct } } while (guess != numOfJellyBeans); } ``` }
Have a counter variable that you increment on each iteration of the while loop. Something like this: ``` int num_guesses = 0; do { System.out.print("Enter your guess: ");//prompt user to guess and read in guess = scan.nextInt(); num_guesses++; // increment the number of guesses if(guess < numOfJellyBeans) //if the guess is wrong display message { System.out.println("Too low."); } else if(guess > numOfJellyBeans) { System.out.println("Too High."); } else { System.out.println("You got it"); // display message saying guess is correct System.out.println("It took you " + num_guesses + " guesses!"); // display message with number of guesses } } while (guess != numOfJellyBeans); ``` Note also that your original code has a stray semicolon after `else if(guess > numOfJellyBeans);` — that empty statement detaches the following block and leaves the final `else` without a matching `if`, so the code above removes it.
290076
I am using this simple method of finding a user in the current domain, that works for all users that 'exist' but I can't find any way to determine if the user does not exist. ``` string userLDAP = @"MYDOMAIN/username"; string path = "WinNT://" + userLDAP ; DirectoryEntry root = new DirectoryEntry(path, null, null, AuthenticationTypes.Secure); ``` Other than letting an exception be thrown, how can I use a directory entry to determine if a user does not exist? ``` if (root.Properties != null) if (root.Properties["objectSid"] != null) //// EXCEPTION HERE if (root.Properties["objectSid"][0] != null) ```
It's better to use DirectorySearcher for this purpose... ``` string userName = "TargetUserName"; using (DirectorySearcher searcher = new DirectorySearcher("GC://yourdomain.com")) { searcher.Filter = string.Format("(&(objectClass=user)(sAMAccountName={0}))", userName); using (SearchResultCollection results = searcher.FindAll()) { if (results.Count > 0) Debug.WriteLine("Found User"); } } ``` This sample will search an entire forest including child domains. If you want to target only a single domain use "LDAP://mydomain.com" instead of "GC://mydomain.com". You can also supply searcher.SearchRoot with a DirectoryEntry to use as the root of a search (i.e. a specific OU or domain). Don't forget most of the AD stuff is IDisposable, so dispose properly as shown above.
290158
I wan't to limit the number of append that happen in Jquery whenever I click the button ``` function standardRoom() { var flag = 0 if ($('select#selectBoxStandard option').length > 1 ) { flag++ $('#selectBoxStandard').find("option:nth-last-child(-n+" + $('#selectBoxStandard').val() + ")").remove(); if (flag <= 1) { $("#roomDetail ul").append('<li><strong>Standard Room - Regular Online Rate</strong> </li>').append('<li class="pull-right"><h4 style="color:darkorange">PHP {{$availableRooms[0]['nightRate']}}</h4></li>').append('<li>Number of night(s): {{$n_nights}} </li>').append('<li>Number of person(s): </li>').append('<li class="hr">Number of room(s): </li>').append(flag); } }else { alert("No more rooms"); } } ``` I tried using if statement based on what I find here. But it's not working, it just keeps on appending.
Every time you call the standardRoom function, the `flag` variable is reset to 0, so it will always keep adding the elements. Make sure you store that variable somewhere more global where you don't need to reset it: ``` var flag = 0; function standardRoom() { if ($('select#selectBoxStandard option').length > 1 ) { flag++ $('#selectBoxStandard').find("option:nth-last-child(-n+" + $('#selectBoxStandard').val() + ")").remove(); if (flag <= 1) { $("#roomDetail ul").append('<li><strong>Standard Room - Regular Online Rate</strong> </li>').append('<li class="pull-right"><h4 style="color:darkorange">PHP {{$availableRooms[0]['nightRate']}}</h4></li>').append('<li>Number of night(s): {{$n_nights}} </li>').append('<li>Number of person(s): </li>').append('<li class="hr">Number of room(s): </li>').append(flag); } } else { alert("No more rooms"); } } ```
290484
I want to use [jquery autocomplete](http://jqueryui.com/autocomplete/) jquery plugin, I test below code but nothing happend, I check, the file added correctly and there is not js error. ``` function imodbdev_form_alter(&$form,$form_state,$form_id){ if($form_id='movie_node_form'){ drupal_add_library('system', 'drupal.autocomplete'); drupal_add_js(drupal_get_path('module', 'imodbdev') . '/js/jquery.livequery.js', array('scope' => 'footer')); drupal_add_js(drupal_get_path('module', 'imodbdev') . '/js/imodbdev.js', array('scope' => 'footer')); } } ``` and `js/imodbdev.js` file ``` (function($) { Drupal.behaviors.imodbdev = { attach: function(context) { var availableTags = [ "ActionScript", "AppleScript", "Asp", "BASIC", "C", "C++", "Clojure", "COBOL", "ColdFusion", "Erlang", "Fortran", "Groovy", "Haskell", "Java", "JavaScript", "Lisp", "Perl", "PHP", "Python", "Ruby", "Scala", "Scheme" ]; $( "#edit-title" ).autocomplete({ source: availableTags }); } } })(jQuery); ``` I had install `jquery update` module and set jquery version to 1.8 . where is the problem? **update** after sometimes I type in textfield to test , I saw `TypeError: c.curCSS is not a function` in firebug, I search this error in google and suggestion in upgrade jquery ui to 1.9.3 but drupal not has version higher than 1.8.
Yes, jQuery UI is available in Drupal core. To include libraries like that you need to call the [drupal\_add\_library()](https://api.drupal.org/api/drupal/includes!common.inc/function/drupal_add_library/7) function to load the required js & css. For example: ``` drupal_add_library('system', 'drupal.autocomplete'); ``` So where you are adding your javascript, add that first. For example: ``` // Add the jQuery UI autocomplete library. drupal_add_library('system', 'drupal.autocomplete'); // Load the custom js for this module. drupal_add_js(drupal_get_path('module', 'MODULE_NAME') . '/js/SCRIPT_FILE.js'); ``` The caps-lock parts need to be changed to suit your module. As for where you would put this, I can't say unless I know where you are putting your code. It could be in your theme or in a custom module. If it is in your theme then the drupal\_add\_js() part would be drupal\_add\_js(drupal\_get\_path('theme', 'THEME\_NAME') . '/js/SCRIPT\_FILE.js'); I'm also assuming your scripts are in a js sub-directory. You also need to make sure that any custom js you have is inside a Drupal behavior (or $(document).ready() at the very least). Otherwise your javascript will run too early in the page load, before the text field is even on the page, so it will do nothing. It will be like this: ``` (function ($) { Drupal.behaviors.yourThemeAutocomplete = { attach: function (context, settings) { // Your code goes here. } }; })(jQuery); ``` For more info see <https://drupal.org/node/756722#behaviors> (the whole page is useful info)
290790
I'm learning to code in React from [Tyler Mcginnis' React course](https://tylermcginnis.com/courses/react/) (which I strongly recommend btw) and I decided to develop my own project, a university administration website, which can be displayed in different languages. So far, I have developed the Login page, please **note that I'm using Babel**: **Login.js** ``` import React from 'react' import PropTypes from 'prop-types' import { languagedata } from './Languages' import languagesdata from '../languagesdata.json' function LanguagesNav ({ selected, onUpdateLanguage}) { const languages = ['EU', 'ES', 'EN'] return ( <div > <h1 className='center-text header-lg'> GAUR 2.0 </h1> <ul className='flex-center'> {languages.map((language) => ( <li key={language}> <button className='btn-clear nav-link' style={language === selected ? { color: 'rgb(187, 46, 31)' } : null } onClick={() => onUpdateLanguage(language)}> {language} </button> </li> ))} </ul> </div> ) } LanguagesNav.propTypes = { selected: PropTypes.string.isRequired, onUpdateLanguage: PropTypes.func.isRequired } export default class Login extends React.Component { constructor(props) { super(props) this.state = { selectedLanguage: 'EU' } this.updateLanguage = this.updateLanguage.bind(this) } componentDidMount () { this.updateLanguage(this.state.selectedLanguage) } updateLanguage (selectedLanguage) { this.setState({ selectedLanguage }) } render() { const { selectedLanguage } = this.state return ( <React.Fragment> <LanguagesNav selected={selectedLanguage} onUpdateLanguage={this.updateLanguage} /> <form className='column player'> <label htmlFor='username' className='player-label'> //Just testing whether JSON can be displayed { languagesdata.data } </label> <div className='row player-inputs'> <input type='text' id='username' className='input-light' placeholder='Erabiltzailea' autoComplete='off' /> </div> <div className='row player-inputs'> <input type='password' id='username' className='input-light' placeholder='Pasahitza' 
autoComplete='off' /> </div> <div className='row player-inputs'> <button className='btn dark-btn' type='submit' > Sartu </button> </div> </form> </React.Fragment> ) } } ``` [![website login interface](https://i.stack.imgur.com/C0zrY.png)](https://i.stack.imgur.com/C0zrY.png) I would like to display the website in the selected language based on the information stored in a local JSON file: ``` [ { "EU": { "welcome": "Sartu GAUR 2.0ra", "login": "Sartu" }, "ES": { "welcome": "Entra a GAUR 2.0", "login": "Entrar" }, "EN":{ "welcome": "Log into GAUR 2.0", "login": "Log in" } } ] ``` I've tried several solutions that I've found on the Internet, but none of them worked for me. I've tried importing a Javascript file that contains a JSON data (see below) and importing a JSON file itself (see JSON above). ``` const languagedata = { "data": { "languages": [ { "euskara": { welcome: "Sartu GAUR 2.0ra", login: "Sartu" }, "espaΓ±ol": { welcome: "Entra a GAUR 2.0", login: "Entrar" }, "english":{ welcome: "Log into GAUR 2.0", login: "Log in" } } ] } }; ``` Thus, I have 2 questions: * What would be the best way to do what I'm trying to do? * How can I display the JSON data in Login.js?
You can also get several documents by their id using: ``` Firestore.instance.collection('tournaments').where(FieldPath.documentId, whereIn: myIdList).snapshots() ``` where `myIdList` is a `List`. Note the single `.where` rather than the cascade `..where`: a cascade would discard the query and call `snapshots()` on the unfiltered collection. Enjoy!