text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
Service Tutorial 6 (C#) - Retrieving State and Displaying it Using an XML Transform6\CSharp This tutorial teaches you how to: - Get the State of Another Service. - Use an XSLT to Present Service State. - Use XSLT as Embedded Resource. - Use the DSS XSLT Template. See Also: Prerequisites This tutorial uses the service written in Service Tutorial 5 (C#) - Subscribing. The service you will create is referred to as ServiceTutorial: Get the State of Another Service In ServiceTutorial6Types.cs, add the properties Clock and InitialTicks to the ServiceTutorial6State class. These properties will be used to demonstrate how to get the state of another service. private string _clock; [DataMember] public string Clock { get { return _clock; } set { _clock = value; } } private int _initialTicks; [DataMember] public int InitialTicks { get { return _initialTicks; } set { _initialTicks = value; } } In the file, ServiceTutorial6.cs, remove the line that calls _clockPort.Subscribe in the Start method and replace it with the following code. This will cause the iterator method OnStartup to run concurrently with the execution of the service. SpawnIterator(OnStartup); Now write the OnStartup method. The first thing that this method does is send a Get message to the ServiceTutorial4 partner service. To send this message, call the Get method on the _clockPort field. This constructs and sends a Get message to the service. The iterator then returns a Choice task using yield return, where the two choices available are the two possible response message types defined by the Get message of ServiceTutorial4. In general, the Get message of a service returns the state of that service or a Fault. Here, the state of the service is stored in a local variable state in the success path and the fault is logged in the error path. private IEnumerator<ITask> OnStartup() { rst4.Get get; yield return _clockPort.Get(GetRequestType.Instance, out get); rst4.ServiceTutorial4State state = get.ResponsePort; if (state == null) { LogError("Unable to Get state from ServiceTutorial4", (Fault)get.ResponsePort); } If the success path was followed, state will not be null. In that case, this code creates a new ServiceTutorial6State for the current state and populates the InitialTicks property with the current value of Ticks in the state of the ServiceTutorial4 service. The code also looks up the partner definition for the Clock partner and sets the Clock property to be the service address of the partner. A replace message is then sent to the main port of this service to set the new state values. else //); } Having gotten the state of the ServiceTutorial4 service, the OnStartup method concludes by sending a Subscribe message to the ServiceTutorial4 service and logging a fault. rst4.Subscribe subscribe; yield return _clockPort.Subscribe(_clockNotify, out subscribe); Fault fault = subscribe.ResponsePort; if (fault != null) { LogError("Unable to subscribe to ServiceTutorial4", fault); } } Step 2: Use an XSLT to Present Service State In Service Tutorial 1 (C#) - Creating a Service, you learned how to display the XML representation of a service state in a web browser. That state can be transformed to HTML (or text, or another XML representation) to better present it to the user. This process requires only three steps: - Write an Extensible Stylesheet Language Transformations (XSLT) file describing how to transform the state - this is out of the scope of these tutorials, has information on writing XSLTs and an example XSLT is presented in the appendix of this tutorial. - Add the XSLT as an embedded resource of the service assembly. - Provide a link to that XSLT in the serialized XML representation of the state. First we’ll show you how to use the Mount Service to reference an XSLT stylesheet in the store\transforms sub-directory of the DSS installation directory. In the file, ServiceTutorial6.cs, declare a string type called _transform and assign to it the relative path to the XSLT file through the Mount service. string _transform = "/mountpoint/store/transforms/ServiceTutorial6.xslt"; In the method, HttpGetHandler, provide the path of the XSLT to the HttpResponseType constructor. httpGet.ResponsePort.Post(new HttpResponseType(System.Net.HttpStatusCode.OK, _state, _transform)); Copy the ServiceTutorial6.xslt file to the store\transforms directory then build and run the service. You will see a page like the following. Step 3: Use XSLT as Embedded Resource Copying the XSLT file to the store\transforms directory isn't always appropriate. It adds a level of complexity to deploying a service. DSS supports loading resources embedded in an assembly. To embed the XSLT file, select it in the Solution Explorer in Microsoft Visual Studio. In the Properties window, change the Build Action field from Content to Embedded Resource. The resource is given a name that is a concatenation of the default project namespace and the name of the file. In this case, Robotics.ServiceTutorial6.ServiceTutorial6.xslt. In the file, ServiceTutorial6.cs, add an EmbeddedResource attribute with the path to the transform. [EmbeddedResource("Robotics.ServiceTutorial6.ServiceTutorial6.xslt")] string _transform = null; The value of the _transform field will be populated on the service start at runtime. There is also an easier way to attach the embedded XSLT resource to the state in cases where you want to use the default Get and HttpGet handlers. In this case, instead of defining a null transform field and using the [EmbeddedResource] attribute you can do the following: - Remove the _transform field and the EmbeddedResource attribute on top of it. - Annotate the state field with the [ServiceState] attribute providing the embedded XSLT file to use: // Declare the service state and also the XSLT Transform for displaying it [ServiceState(StateTransform = "RoboticsServiceTutorial6.ServiceTutorial6.xslt")] private ServiceTutorial6State _state = new ServiceTutorial6State(); - Remove the Get and HttpGet handlers. This will tell the base class to implicitly handle the Get and HttpGet messages using the default handlers for both, using the specified XSL Transformation for HttpGet responses. Note that the Get and HttpGet operations should already be in the service's operations port for them to be handled implicitly. Step 4: Use the DSS XSLT Template You may have noticed that the transformed state view above does not have the common layout as in the other DSS services. DSS uses a common XSLT template for all DSS services to give them a consistent look. To learn how to apply this template on your services see the Tutorial for Using Default DSS XSLT Template. Summary In this tutorial, you learned how to: Appendix A: ServiceTutorial6.xslt <?xml version="1.0" encoding="UTF-8" ?> <xsl:stylesheet <xsl:output <xsl:template <html> <head> <title>Service Tutorial 6</title> <link rel="stylesheet" type="text/css" href="/resources/dss/Microsoft.Dss.Runtime.Home.Styles.Common.css" /> </head> <body style="margin:10px"> <h1>Service Tutorial 6</h1> <table border="1"> <tr class="odd"> <th colspan="2">Service State</th> </tr> <tr class="even"> <th>Clock:</th> <td> <xsl:value-of </td> </tr> <tr class="odd"> <th>Initial Tick Count:</th> <td> <xsl:value-of </td> </tr> <tr class="even"> <th>Current Tick Count:</th> <td> <xsl:value-of </td> </tr> </table> </body> </html> </xsl:template> </xsl:stylesheet> Appendix B: Alternate pattern for request/response in DSSP iterator handlers In the OnStartup() method above it uses an alternate way to handle DSS request/responses in the iterator which is a little more concise and allows handling only those response messages that are of interest for us rather than having to define anonymous methods for all of the response types. The OnStartup() method above is equivalent to the following code. You can compare the two yield return parts to see the difference. For more information on CCR Iterators see CCR Iterators. private IEnumerator<ITask> OnStartup() { rst4.ServiceTutorial4State state = null; yield return Arbiter.Choice( _clockPort.Get(GetRequestType.Instance), delegate(rst4.ServiceTutorial4State response) { state = response; }, delegate(Fault fault) { LogError(null, "Unable to Get state from ServiceTutorial4", fault); } );); } yield return Arbiter.Choice( _clockPort.Subscribe(_clockNotify), delegate(SubscribeResponseType response) { }, delegate(Fault fault) { LogError(null, "Unable to subscribe to ServiceTutorial4", fault); } ); }
https://msdn.microsoft.com/en-us/library/bb483058.aspx
CC-MAIN-2015-18
en
refinedweb
On Sun, Oct 18, 2009 at 6:45 PM, sebb <sebbaz@gmail.com> wrote: > On 18/10/2009, bayard@apache.org <bayard@apache.org> wrote: >> Author: bayard >> Date: Sun Oct 18 20:14:30 2009 >> New Revision: 826514 >> >> URL: >> Log: >> Sebb pointed out that the implementation for LANG-507 was not thread safe. Rewriting to pass parameters in to the constructor, but doing so in an experimental way - comments very much desired on whether this makes for a nice API or not >> >> Modified: >> commons/proper/lang/trunk/src/java/org/apache/commons/lang/text/translate/UnicodeUnescaper.java >> commons/proper/lang/trunk/src/test/org/apache/commons/lang/text/translate/UnicodeUnescaperTest.java >> >> Modified: commons/proper/lang/trunk/src/java/org/apache/commons/lang/text/translate/UnicodeUnescaper.java >> URL: >> ============================================================================== >> --- commons/proper/lang/trunk/src/java/org/apache/commons/lang/text/translate/UnicodeUnescaper.java (original) >> +++ commons/proper/lang/trunk/src/java/org/apache/commons/lang/text/translate/UnicodeUnescaper.java Sun Oct 18 20:14:30 2009 >> @@ -19,6 +19,9 @@ >> import java.io.IOException; >> import java.io.Writer; >> >> +import java.util.EnumSet; >> +import java.util.Arrays; >> + >> /** >> * Translates escaped unicode values of the form \\u+\d\d\d\d back to >> * unicode. >> @@ -26,13 +29,18 @@ >> */ >> public class UnicodeUnescaper extends CharSequenceTranslator { >> >> - private boolean escapingPlus = false; >> + public static enum PARAM { escapePlus }; >> + >> + private EnumSet<PARAM> params; > > This is not final, so its value is not necessarily published to other > threads - i.e. the class is still not thread-safe. I guess no reason not to use private final - though EnumSet itself isn't synchronized and I don't see how there is a current thread safe problem (I'm not aware of any way a constructor can be called twice on the same object :) , and the only method call used is contains() which I'll naively assume is thread safe as it would seem to be read-only. Hen --------------------------------------------------------------------- To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org For additional commands, e-mail: dev-help@commons.apache.org
http://mail-archives.apache.org/mod_mbox/commons-dev/200910.mbox/%3C31cc37360910182127j29a9122al7aa35eba1fcc7a92@mail.gmail.com%3E
CC-MAIN-2015-18
en
refinedweb
We will start by reviewing simple harmonic motion (SHM) as it contains many of the important concepts that we will meet in waves. The most general form of the equation for a simple harmonic oscillator (SHO) including damping and driving forces can be written:$$ m\frac{\partial^2 \psi}{\partial t^2} = -s\psi -b \frac{\partial \psi}{\partial t} + F_0 \cos \omega t, $$ where $\psi$ is the displacement, $m$ is the mass of the oscillator, $s$ is the constant giving the restoring force (e.g. a spring constant) sometimes known as the stiffness, $b$ is the damping coefficient or resistance and $F_0$ gives the magnitude of the driving force (which has angular frequency $\omega$). If we set the damping and driving coefficients to zero, we recover the original, SHM equation:$$m\frac{\partial^2 \psi}{\partial t^2} = -s \psi,$$ which can be shown to be solved using sinusoidal motion: $$ \psi = A \sin \omega_0 t + B \cos \omega_0 t = C \cos \left(\omega t + \phi\right), $$ where $\omega_0 = \sqrt{s/m}$, $\phi$ is a constant phase and $A = -C\sin \phi$ and $B = C \cos \phi$. As you have seen in both PHAS1245 (Mathematical Methods I) and PHAS1247 (Classical Mechanics), we can use De Moivre's theorem to write the sinusoidal terms as a complex exponential ($e^{i\theta} = \cos \theta + i \sin \theta$ where $i = \sqrt{-1}$), giving: $$ \psi = Re\left[A e^{i(\omega_0 t + \phi)}\right] = Re\left[D e^{i\omega_0 t}\right] $$ where $D = Ae^{i\phi}$. We often call the argument of a trigonometric function the phase, and $\phi$ is often called the phase difference when there is more than one oscillation; it represents the offset in the phase at $t=0$. Now that we have the representation of the oscillation in terms of the complex exponential (or the sum of a $\cos$ and $\sin$) we can see that there is an immediate link with circular motion: the amplitude of the oscillation is just the projection of circular motion onto the $x$-axis (or any other axis that is chosen). (Note that from now on I will not write the need to take the real part of $\psi$ explicitly, but will assume it.) All these ideas are illustrated in the figure below. from IPython.display import Image Image(filename='SHMCircularMotion.png',width=300) #Size is in pixels We can also illustrate this more dynamically, with the python code below. The blue line shows the phasor at $t=0$ (with the angle relative to the x-axis being $\phi$. The red line shows the phasor at a later time $t$, with the real part shown as a dashed red line along the x-axis. phi = 30.0*pi/180. # Phase in degrees rad = 1.0 x = (0,rad*cos(phi)) y = (0,rad*sin(phi)) axis('scaled') axis([-1.1,1.1,-1.1,1.1]) plot(x,y) # Add arc to indicate phi theta = linspace(0,phi,1000) plot(0.5*rad*cos(theta),0.5*rad*sin(theta),'b-') # Label text(0.5, 0.18, r"$\phi$", fontsize=20, color="blue") # Now draw full circle theta = linspace(0,2.0*pi,1000) plot(rad*cos(theta),rad*sin(theta),'g-') # Plot the phasor omega = 2.0 t = 2.5 psix = (0,rad*cos(omega*t+phi)) psiy = (0,rad*sin(omega*t+phi)) plot(psix,psiy,'r-') # Projection on the x-axis plot(psix,(0.,0.),'r--') # Label text(0.4,-0.2, r"$\psi$", fontsize=20, color="red") # Arc showing omega t theta = linspace(phi,omega*t+phi,1000) plot(0.6*rad*cos(theta),0.6*rad*sin(theta),'r:') # Label text(-0.5, 0.6, r"$\omega t$", fontsize=20, color="red") <matplotlib.text.Text at 0x110c412d0> The representation of a simple harmonic oscillator at a single point in time in the complex plane (e.g. figure above) is often called a phasor diagram, and the arrow representing the amplitude and phase of the oscillator relative to a fixed time or phase is called a phasor; note that for a phasor we just need the arrow, not the associated circle. We will see later that we can combine two oscillations using phasors (or using complex arithmetic - the two are completely equivalent). A simple way to think about phase differences is to consider the velocity and acceleration of the oscillator:\begin{eqnarray} \psi &=& A e^{i(\omega_0 t + \phi)}\\ \dot{\psi} &=& \frac{\partial \psi}{\partial t} = i\omega_{0} A e^{i(\omega_0 t + \phi)} = i\omega_{0} \psi\\ \ddot{\psi} &=& \frac{\partial^2 \psi}{\partial t^2} = -\omega_{0}^2 A e^{i(\omega_0 t + \phi)} = -\omega_{0}^2 \psi \end{eqnarray} Notice that both of these quantities vary harmonically and with the same frequency as the oscillation, though with different amplitudes; more importantly, however, note that the velocity is a factor of $i$ different to the displacement, while the acceleration is a factor of $-1$ different. These are easier to understand when we write them in complex exponential form: $i = e^{i\pi/2}$ and $-1 = e^{i\pi}$. So we can write:\begin{eqnarray} \psi &=& A e^{i(\omega_0 t + \phi)}\\ \dot{\psi} &=& \omega A e^{i(\omega_0 t + \phi + \pi/2)} = \omega_{0} \psi e^{i\pi/2}\\ \ddot{\psi} &=& \omega^2 A e^{i(\omega_0 t + \phi + \pi)} = \omega_{0}^2 \psi e^{i\pi} \end{eqnarray} We see that the velocity leads the displacement by a phase factor of $\pi/2$ and the acceleration leads the velocity by a phase factor of $\pi/2$. This phase relationship is shown in the next figure. from IPython.display import Image Image(filename='SHMDispVelAcc.png',width=300) #Size is in pixels The simple harmonic oscillator, as with all mechanical systems, has two forms of energy: kinetic and potential. If we have $W$ as the total energy and $T$ as kinetic and $V$ as potential, we can write:\begin{eqnarray} T &=& \frac{1}{2} m\dot{\psi}^2\\ V &=& \frac{1}{2} s\psi^2\\ W &=& T + V \end{eqnarray} The form of the potential energy follows directly from the form of the force (and is why the motion is called harmonic). As there is no form of dissipation in the motion, we can assert that the total energy does not change with time, just exchanging between potential (at a maximum when the displacement is at a maximum and the velocity is at a minimum) and kinetic (at a maximum when the displacement is at a minimum). They are out of phase - as we expect from the phase differences between displacement and velocity. As the total energy is constant, we can write:\begin{eqnarray} \frac{d W}{d t} = \frac{d T}{d t} + \frac{d V}{dt} &=& 0\\ \Rightarrow m\dot{\psi}\ddot{\psi} + s\psi\dot{\psi} &=&0\\ \Rightarrow m\ddot{\psi} &=& -s\psi \end{eqnarray} which is of course just the original equation for simple harmonic motion. Restoring the damping term changes the equation and its solution. We will use $\gamma = b/2m$ in the equations below. These solutions were derived in PHAS1247 and are discussed in many textbooks, so I will not derive them here. We find:\begin{eqnarray} m\frac{\partial^2 \psi}{\partial t^2} &=& -s \psi -b \frac{\partial \psi}{\partial t}\\ \psi &=& \begin{cases} A e^{-\gamma t}e^{i\omega t} &\gamma < \omega_0\\ \hskip 0.5cm\omega = \sqrt{\omega^2_0 - \gamma^2} &\, \\ A e^{-\mu_+ t} + B e^{-\mu_-t} & \gamma > \omega_0\\ \hskip 0.5cm\mu_\pm = \gamma\mp \sqrt{\gamma^2 - \omega_0^2} &\, \\ A(1+\omega_0 t)e^{-\omega_0 t} & \gamma = \omega_0 \end{cases} \end{eqnarray} The cases for damped harmonic motion correspond to light (or underdamped, $\gamma < \omega_0$), heavy (or overdamped, $\gamma > \omega_0$) and critical ($\gamma=\omega_0$) damping respectively. The total energy is \emph{not} conserved in this case (contrast the undamped oscillator) as the damping force opposes the motion at all times. We write the energy as before, and find the change with time:\begin{eqnarray} W &=& \frac{1}{2}m\dot{\psi}^2 + \frac{1}{2}s\psi^2\\ \frac{dW}{dt} &=& \frac{d W}{d \dot{\psi}}\frac{d \dot{\psi}}{d t} + \frac{d W}{d \psi}\frac{d \psi}{d t} = (m\ddot{\psi} + s\psi)\dot{\psi}\\ &=& -b\dot{\psi}^2 \end{eqnarray} where the last line comes from using the SHM equation for damped motion. Notice that the change in energy is always less than zero (unless $\dot{\psi}=0$) and so the total energy of the system decreases with time (as we would expect). We illustrate this behaviour in the interactive figure below: test the effect of changing b and k (and re-run the cell to render the figure). t = linspace(0,10.0,1000) m = 1.0 k = 2.0 b = 5.0 omega0 = sqrt(k/m) gamma = b/(2.0*m) print "omega0 and gamma are ",omega0,gamma A = 1.0 if gamma<omega0 : print "Light damping" omega = sqrt(omega0*omega0 - gamma*gamma) plot(t,A*exp(-gamma*t)*cos(omega*t)) elif gamma>omega0 : print "Heavy damping" mum = gamma + sqrt(gamma*gamma - omega0*omega0) mup = gamma - sqrt(gamma*gamma - omega0*omega0) B = 1.0 plot(t,A*exp(-mum*t) + B*exp(-mup*t),label='sum') plot(t,A*exp(-mum*t),'r--',label=r"$\mu_{-}$") plot(t,B*exp(-mup*t),'g--',label=r"$\mu_{+}$") legend() omega0 and gamma are 1.41421356237 2.5 Heavy damping Now we will introduce a driving term to the system. As we have a harmonic oscillator, it will make sense to use a harmonic driving term (though we could, for instance, use impulses at regular or irregular intervals). Note that the driving term will have an angular frequency $\omega$ which is \emph{different} to the natural frequency of the system, $\omega_0 = \sqrt{s/m}$. Also remember that $\omega = 2\pi \nu$ where $\nu$ is the frequency. We will retain the damping term to give a damped, driven oscillator (again derived in PHAS1247 and textbooks).\begin{eqnarray} m\frac{\partial^2 \psi}{\partial t^2} &=& -s \psi -b \frac{\partial \psi}{\partial t} + F_0 \cos \omega t\\ \psi &=& A\cos (\omega t + \phi)\\ A &=& \frac{F_0}{m} \left( \frac{1}{(\omega_0^2 - \omega^2)^2 + 4\gamma^2 \omega^2}\right)^{\frac{1}{2}}\\ \tan \phi &=& \frac{-2\gamma\omega}{\omega_0^2 - \omega^2} \end{eqnarray} The response of the system (including both the phase and the amplitude) depends strongly on the frequency; we can consider three regimes (though there isn't really time to do this in depth). The behaviour of $A$ and $\phi$ are plotted below; you should try changing the values of b (damping) and s (stiffness) to see the effects. In particular, you may need to increase or decrease F0 to see the graph on a sensible scale. (Remember to run the cell by pressing the play button in the toolbar above.) # Set up parameters of oscillator m = 1.0 k = 2.0 b = 0.2 gamma = b/(2.0*m) # Find resonant frequency omega0 = sqrt(k/m) print "omega0 and gamma are ",omega0,gamma F0 = 1.0 # Create array of values of omega from omega0-1 to omega0+1 omega = linspace(omega0-1.0,omega0+1.0,200) # Calculate phi phi = arctan(-2.0*gamma*omega/(omega0*omega0 - omega*omega)) # arctan in NumPy is defined to lie between -pi/2 and pi/2 so we correct # by -pi to give physical reasonableness phi[100:200] -=pi # Calculate amplitude fac = (omega0*omega0 - omega*omega)**2 + 4*gamma*gamma*omega*omega A = F0/(sqrt(fac)*m) plot(omega,phi,label='phi') plot(omega,A,label='A') legend() omega0 and gamma are 1.41421356237 0.1 <matplotlib.legend.Legend at 0x110cba8d0> Let's think a little about the power absorbed by the oscillator; to do that, we must consider the work done against the drag. If the displacement changes from $\psi$ to $\psi + \Delta \psi$ then the work done is $-F_d \Delta\psi$, where $F_d = -b\dot{\psi}$ is the work done against the drag. If that takes a time $\Delta t$ then the rate of work is $-F_d (\Delta \psi/\Delta t)$ which tends to $-F_d \dot{\psi}$ as $\Delta t \rightarrow 0$. So the instantaneous power adsorption becomes: \begin{eqnarray} P &=& -F_d \dot{\psi} = b\dot{\psi}^2 \end{eqnarray} We are not actually interested in the instantaneous power, but the time averaged power (often written $\langle P\rangle$), which for a harmonic force can be shown to be: \begin{eqnarray} \langle P \rangle &=& b\langle \dot{\psi}^2 \rangle = \frac{1}{2}b\omega^2A^2\ &=& \frac{F_0^2}{m^2}\frac{m\gamma\omega^2}{(\omega_0^2 - \omega^2)^2 + 4\gamma^2\omega^2} \end{eqnarray} where we have substituted $b = 2m\gamma$. This has a maximum value of $F^2_0/(4m\gamma)$ when $\omega = \omega_0$. Note that this is inversely proportional to the resistive force. Finally we introduce the idea of impedance which is a measure of the resistance to the motion of the oscillator. It is defined as the amplitude of the driving force divided by the complex amplitude of the velocity, $Z(\omega) = F_0/\dot{\psi} = b + i(m\omega - s/\omega)$. At resonance, $Z(\omega) = b$; in the resonance region, it only departs very slightly from this value. Away from resonance, it includes a phase lag or lead, which reflects the relation of velocity to the driving force. We will encounter impedance again with waves. We plot below the power absorbed, imaginary part of the impedance and the magnitude of the impedance, using the system given above. Note how the impedance has a minimum at the point that the maximum power is absorbed (and is purely real at that point). Away from that, the power is mainly dissipated (by the impedance). # We use the values of b, m, k, omega and A defined above power = 0.5*b*omega*omega*A*A impi = m*omega - k/omega Z = sqrt(b*b+impi*impi) plot(omega,power,label='Power') plot(omega,impi,label='Imaginary part of impedance') plot(omega,Z,label=r'$\vert Z\vert$') legend(loc=4) <matplotlib.legend.Legend at 0x110eac990> Note that so far we have discussed a steady state solution: there is a transient behaviour at the beginning of the oscillation which takes the form of a damped oscillator added to a solution of the homogenous equation. So a real system will respond at its natural frequency, but damping will cause that solution to die off, while the driven oscillation persists into the steady state:\begin{equation} \psi(t) = A_s \cos(\omega t +\phi) + A_t e^{-\gamma t} \cos(\omega_0 t + \phi_t) \end{equation} where the subscripts s and t stand for steady state and transient respectively. For lightly damped system near resonance we can approximate and find that $A_t \simeq A_s$ and $\phi_t \simeq \phi_s + \pi$, giving: \begin{equation} \psi(t) = A_s \left(\cos(\omega t +\phi) - e^{-\gamma t} \cos(\omega_0 t + \phi_s)\right) \end{equation} We plot an example of this below, with the total response of the system over time in blue, and the transient in green: notice how the initial response of the system is at its natural frequency, $\omega_0$, and how this dies down and leads to the steady-state, driven response at $\omega$. You should try changing the value of the driving frequency omegaD (though you will also have to change the amplitude of the driving force, F0, and the initial response, F1). t = linspace(0,50.0,1000) # Driving frequency omegaD = 7.0 # Amplitude of driving (F0) and response (F1) F0 = 10.0 F1 = 0.10 fac = (omega0*omega0 - omegaD*omegaD)**2 + 4*gamma*gamma*omegaD*omegaD A = F0/(sqrt(fac)*m) # Define transient transient = 2*(F1/m)*exp(-gamma*t)*cos(omega0*t+pi/2) steady = A*cos(omegaD*t+pi/2) plot(t,transient+steady) plot(t,transient) #plot(t,A*cos(omegaD*t+pi/2)+2*(F1/m)*exp(-gamma*t)*cos(omega0*t+pi/2)) #plot(t,2*(F1/m)*exp(-gamma*t)*cos(omega0*t+pi/2)) [<matplotlib.lines.Line2D at 0x10da39e50>] Instead of combining a single driving force with a damped response, what happens if we have an oscillator with two driving forces ? The full equation looks like this:$$ \begin{equation} m\frac{\partial^2 \psi}{\partial t^2} = -s \psi -b \frac{\partial \psi}{\partial t} + F_1 \cos (\omega_1 t + \phi_1) + F_2 \cos (\omega_2 t + \phi_2), \end{equation} $$ We know that the SHO equation is linear, so we can try solving this problem by summing the steady state solutions arising from each driving force independently. We will consider three specific cases (where it is easier to understand the behaviour) with the general case more complex: In all cases, we assume that we can write the displacement as a superposition of the responses to the individual oscillations: $$ \begin{equation} \psi = A_1 \cos(\omega_1 t + \phi_1) + A_2 \cos(\omega_2 t + \phi_2) \end{equation} $$ For simplicity, we will set $\phi_1 = 0$ and set $\phi_2 = \phi$. Then we can perform the following manipulations, using the complex exponential form for simplicity: $$ \begin{eqnarray} \psi &=& Ae^{i\omega t} + Ae^{i(\omega t + \phi)}\\ &=& Ae^{i\omega t}\left(1 + e^{i\phi}\right)\\ &=& Ae^{i\omega t}\left(e^{i\phi/2}e^{-i\phi/2} + e^{i\phi/2}e^{i\phi/2}\right)\\ &=& Ae^{i\omega t}e^{i\phi/2}\left(e^{-i\phi/2} + e^{i\phi/2}\right)\\ &=& 2Ae^{i(\omega t+\phi/2)}\cos(\phi/2) = 2A\cos(\phi/2)e^{i(\omega t+\phi/2)} \end{eqnarray} $$ where the identification of $\cos (\phi/2)$ follows from De Moivre's theorem. So the resultant oscillation has magnitude $2A\cos(\phi/2)$ and phase $\phi/2$. The same result can be found using trigonometry and a phasor diagram from IPython.display import Image Image(filename='PhasorSameFreqAmp.png',width=200) We can write a solution as before, by using superposition:$$ \begin{equation} \psi = A_1 \cos(\omega t + \phi_1) + A_2 \cos(\omega t + \phi_2) = A \cos(\omega t + \theta) \end{equation} $$ This is illustrated in a phasor diagram below. from IPython.display import Image Image(filename='Phasors.png',width=200) This is just an oscillation with frequency $\omega$ but a total amplitude and phase which depend on the amplitudes and phases of the two driving forces. If, for instance, the phase difference is $\phi_1 The phase is also found using trigonometry; by projecting onto thereal and imaginary axes, we can write: $$ \begin{equation} \tan \theta = \frac{A_1\sin \phi_1 + A_2\sin \phi_2}{A_1 \cos \phi_1 + A_2 \cos \phi_2} \end{equation} $$ We can get the same result using complex notation. Start by finding that, for two general complex numbers $z_1$ and $z_2$: $$ \begin{eqnarray} \left| z_1 + z_2\right|^2 &=& (z_1+z_2)(z_1 + z_2)^\star = (z_1+z_2)(z^\star_1+z^\star_2)\\ &=& |z_1|^2 + |z_2|^2 + (z_1z^\star_2 + z^\star_1z_2)\\ &=& |z_1|^2 + |z_2|^2 +2\mathrm{Re}(z_1z^\star_2) \end{eqnarray} $$ Now we have $z_1 = A_1 e^{i(\omega t +\phi_1)}$ and $z_2 = A_2 e^{i(\omega t +\phi_2)}$ with $A_1$ and $A_2$ real. This gives: $$ \begin{eqnarray} \left| z_1 + z_2\right|^2 &=& |z_1|^2 + |z_2|^2 +2\Re(z_1z^\star_2)\\ &=& A_1^2 + A_2^2 + 2\mathrm{Re}(A_1 e^{i(\omega t +\phi_1)}A_2 e^{-i(\omega t +\phi_2)})\\ &=& A_1^2 + A_2^2 + 2\mathrm{Re}(A_1A_2 e^{i(\phi_1 - \phi_2)})\\ &=& A_1^2 + A_2^2 + 2A_1A_2 \cos(\phi_2 - \phi_1) \end{eqnarray} $$ while we can find the phase from: $$ \begin{eqnarray} \mathrm{arg}(z_1+z_2) &=& \tan^{-1} \left[ \frac{\mathrm{Im}(z_1+z_2)}{\mathrm{Re}(z_1+z_2)} \right]\\ \Rightarrow \theta &=& \tan^{-1} \left[\frac{A_1\sin \phi_1 + A_2\sin \phi_2}{A_1 \cos \phi_1 + A_2 \cos \phi_2}\right] \end{eqnarray} $$ For this case, we use the principle of superposition to write: $$ \begin{equation} \psi = A \cos \omega_1 t + A \cos \omega_2 t \end{equation} $$ Note that we have assumed that the common phase can be set to zero for simplicity. But we can rearrange this using a standard trigonometrical formula for the sum of two cosines: $$ \begin{eqnarray} \psi &=& A \left(\cos \omega_1 t + \cos \omega_2 t\right)\\ &=& 2A \cos\left(\frac{\omega_1 + \omega_2}{2} t\right) \cos\left(\frac{\omega_1 - \omega_2}{2} t\right) \end{eqnarray} $$ Now let's define two \emph{new} angular frequencies, and rewrite the solution: $$ \begin{eqnarray} \omega &=& \frac{1}{2}\left(\omega_1 + \omega_2\right)\\ \Delta \omega &=& \frac{1}{2}\left(\omega_1 - \omega_2\right)\\ \psi &=& 2A\cos (\omega t) \cos (\Delta\omega t) \end{eqnarray} $$ If the two angular frequencies are relatively close, then the form of the resulting oscillation is rather simple. There is an oscillation at the average frequency (the term $\cos \omega t$) whose amplitude is \emph{modulated} by a slow oscillation at the difference frequency (the term $\cos \Delta\omega t$ - also known as the envelope). This is a phenomenon known as \emph{beats}, and is illustrated in the figure below. It is important to understand that, while the angular frequency of the modulation is $\Delta \omega = \frac{1}{2}|\omega_1 - \omega_2|$, the angular frequency at which \emph{peaks} of activity occur is $|\omega_1 - \omega_2|$ (or equivalently zeroes of activity). The number of minima per second is $\Delta \omega/\pi$. The perceived effect (say for sound) will be of a sound at the average angular frequency with its amplitude varying according to the envelope. fig = figure(figsize=[12,3]) # Add subplots: number in y, x, index number ax = fig.add_subplot(121,autoscale_on=False,xlim=(0,65),ylim=(-2,2)) ax2 = fig.add_subplot(122,autoscale_on=False,xlim=(0,65),ylim=(-2,2)) t = linspace(0.0,65.0,1000) o1 = 1.3 o2 = 1.0e206690>] The result of two driving forces on one SHO with different frequencies but the same amplitude. Left: two forces shown together. Right: the resultant motion with the envelope superimposed in dashed lines. The original frequencies are $\omega = 1.0$ and $\omega=1.2 s^{-1}$; on ce how the peaks coincide at regular intervals, and this leads to the maxima in the envelope function. The figure above shows an illustration of exactly this behaviour for the frequencies $\omega_1 = 1.2 s^{-1}$ and $\omega_2 = 1.0 s^{-1}$ which gives a resulting motion at frequency $\omega = 1.1 s^{-1}$ modulated by an envelope with frequency $\Delta \omega = 0.1 s^{-1}$. The resulting motion will show a true periodicity if the ratio of the frequencies can be written as a ratio of integers (i.e. $\omega_1/\omega_2 = n_1/n_2$). The motion from two frequencies which are almost non-periodic is shown in the figure below (I used 25/3 and 7.1 here but we could have made it properly non-periodic if we'd tried harder). Note that if there is a phase difference between the two oscillations then beats still result, but the phase difference will appear in both sum and difference phases. We can write $\cos(\omega_1 t + \phi_1) + \cos(\omega_2 t + \phi_2) = 2\cos(\omega t + \phi)\cos(\Delta \omega t + \Delta \phi)$ where $\omega$ and $\Delta \omega$ have the same meaning as before and $\phi = (\phi_1 + \phi_2)/2$ and $\Delta \phi = (\phi_1 - \phi_2)/2$. fig = figure(figsize=[12,3]) # Add subplots: number in y, x, index number xmax = 15.0 ax = fig.add_subplot(121,autoscale_on=False,xlim=(0,xmax),ylim=(-2,2)) ax2 = fig.add_subplot(122,autoscale_on=False,xlim=(0,xmax),ylim=(-2,2)) t = linspace(0.0,xmax,1000) o1 = 25./3 o2 = 7.1d231810>] If the amplitudes and frequencies all differ, then the phasor diagram must be dynamic as illustrated below. from IPython.display import Image Image(filename='PhasorsDiffFreq2.png',width=200) #Size is in pixels The tip of the resultant will trace out a shape in time called a cycloid. For instance, the path followed by the tip of the resultant vector in the complex plane for different amplitudes and frequencies differing by a factor of two is plotted below - you should try playing with the parameters, and seeing what effects they have. # Define figure size to get square fig = figure(figsize=[5,5]) # The range of t may need to change with angular frequencies t = linspace(0,10*pi,200) # Original frequencies: 1.2 and 0.6 o1 = 1.2 o2 = 0.6 # Original amplitudes: 1.5 and 1.3 A1 = 1.5 A2 = 1.3 # Calculate x and y projections of the two oscillations when added x = A1*cos(o1*t) + A2*cos(o2*t) y = A1*sin(o1*t) + A2*sin(o2*t) # Plot path of the tip of the vector plot(x,y) [<matplotlib.lines.Line2D at 0x10cd57c10>]
http://nbviewer.ipython.org/github/davidbowler/PHAS1224/blob/master/WavesChapter2.ipynb
CC-MAIN-2015-18
en
refinedweb
Price elasticity is a measure of the responsiveness of demand or supply of a good or service to changes in price. The price elasticity of demand measures the ratio of the proportionate change in quantity demanded to the proportionate change of the price . The price elasticity of supply is analogous. Demand for a good is said to be "price elastic" if the elasticity measure is greater than one in absolute value and "inelastic" if less than one. The cross-price elasticity of demand measures the proportionate change in the quantity demanded of one good to the proportionate price change of another good. A positive cross-price elasticity indicates that the goods are substitutes for one another; a negative cross price elasticity indicates the goods are complements. M. Poirot wishes to sell a bond that has a face value of $1,000. The bond bears an interest rate of 9% with bond interest payable semiannually. Six years ago, $980 was paid for the bond. At least a 12% return (yield) on the investment is desired. A. The semiannual bond interest payment that M. Poirot received is _________ B. The minimum selling price must
http://www.chegg.com/homework-help/definitions/price-elasticity-12
CC-MAIN-2015-18
en
refinedweb
IRC log of lld on 2010-09-09 Timestamps are in UTC. 13:56:25 [RRSAgent] RRSAgent has joined #lld 13:56:25 [RRSAgent] logging to 13:56:31 [bernard] will go through boston 13:56:33 [TomB] rrsagent, bookmark 13:56:33 [RRSAgent] See 13:56:45 [TomB] zakim, this will be lld 13:56:45 [Zakim] ok, TomB, I see INC_LLDXG()10:00AM already started 13:57:13 [Zakim] +??P14 13:57:23 [TomB] zakim, ??P14 is me 13:57:23 [Zakim] +TomB; got it 13:57:33 [fsasaki] fsasaki has joined #lld 13:57:44 [TomB] zakim, who is on the call? 13:57:44 [Zakim] On the phone I see [IPcaller], TomB 13:57:54 [Zakim] + +1.614.764.aaaa 13:58:11 [Zakim] + +33.1.53.79.aabb 13:58:18 [TomB] zakim, aaaa is Jeff_ 13:58:18 [Zakim] +Jeff_; got it 13:58:21 [marma] hi, I'm following this only via IRC (Martin Malmsten from KB, Stockholm) 13:58:22 [emma] zakim, aabb is me 13:58:22 [Zakim] +emma; got it 13:58:31 [Zakim] +??P24 13:58:42 [Zakim] +??P21 13:58:49 [TomB] zakim, ??P24 is Bernard 13:58:49 [Zakim] +Bernard; got it 13:59:07 [TomB] zakim, IPcaller is Gordon 13:59:07 [Zakim] +Gordon; got it 13:59:13 [TomB] zakim, who is on the call? 13:59:13 [Zakim] On the phone I see Gordon, TomB, Jeff_, emma, Bernard, Felix (muted) 13:59:14 [kcoyle] kcoyle has joined #lld 13:59:32 [TomB] Agenda: 13:59:42 [Andras] Andras has joined #lld 13:59:48 [TomB] Regrets: Jodi, Anette, Joachim, Monica, Kim 14:00:12 [Zakim] +??P25 14:00:35 [TomB] 14:00:51 [TomB] rrsagent, please make record public 14:00:59 [TomB] Meeting: LLD XG 14:01:02 [Zakim] +??P29 14:01:02 [TomB] Chair: Tom 14:01:02 [whalb] whalb has joined #lld 14:01:03 [Zakim] + +1.361.279.aacc 14:01:07 [TomB] Scribe: Bernard 14:01:15 [TomB] scribenick: bernard 14:01:20 [TomB] zakim, aacc is Alexander 14:01:20 [Zakim] +Alexander; got it 14:01:28 [Andras] zakim, P29 is Andras 14:01:30 [mzrcia] mzrcia has joined #LLD 14:01:31 [Zakim] sorry, Andras, I do not recognize a party named 'P29' 14:01:39 [ww] is the bristol dialin broken? 14:01:42 [Andras] zakim, ??P29 is Andras 14:01:42 [Zakim] +Andras; got it 14:01:50 [Zakim] + +43.316.876.aadd 14:01:51 [Zakim] +[LC] 14:01:52 [TomB] Regrets+: Antoine 14:01:55 [whalb] zakim, aadd is me 14:01:56 [Zakim] +whalb; got it 14:02:06 [kefo] kefo has joined #lld 14:02:16 [TomB] zakim, LC is kefo 14:02:16 [Zakim] +kefo; got it 14:02:31 [Zakim] + +1.330.655.aaee 14:02:36 [michaelp] michaelp has joined #lld 14:03:20 [ww] Does anyone know if +44 117 370 6152 still works? 14:03:28 [Zakim] +Jonathan_Rees 14:03:41 [ww] I get Allison Smith's voice telling me the number I have dialed is not in service 14:03:55 [jar] jar has joined #lld 14:04:04 [ww] Allison Smith is the "voice of Asterisk" so I'm guessing it comes from the W3C voice bridge not the carrier 14:04:08 [bernard] seems that only Boston bridge is working 14:04:25 [Andras] as usual :-) 14:04:47 [TomB] ww, are you dialing Boston: +1-617-761-6200, Nice: +33.4.26.46.79.03 , Bristol: +44.203.318.0479 - some of the numbers are new 14:05:08 [ww] on the 020 number, Allison Smith tells me "all circuits are busy now 14:05:12 [Zakim] + +1.423.463.aaff 14:05:17 [ww] can't easily dial internationally at the moment :( 14:05:30 [Zakim] +Jeff_.a 14:05:36 [ww] TomB: fwiw +4420 is London not Bristol 14:05:37 [Zakim] + +49.4.aagg 14:05:40 [rsinger] rsinger has joined #lld 14:05:53 [ww] I think... 14:06:04 [jneubert] zakim, aagg is jneubert 14:06:07 [rsinger] Zakim, mute me 14:06:11 [Zakim] +jneubert; got it 14:06:11 [ww] actually, usually 0207/0208 are london. 0203 might well be elsewhere 14:06:12 [Zakim] sorry, rsinger, I do not know which phone connection belongs to you 14:06:35 [michaelp] zakim, Michael is really me 14:06:35 [Zakim] +michaelp; got it 14:07:00 [rsinger] Zakim, mute me 14:07:00 [Zakim] rsinger should now be muted 14:07:04 [bernard] Admin 14:07:08 [bernard] RESOLUTION: accept previous telecon minutes 14:07:31 [bernard] next meetings 14:08:35 [jeff_] true 14:08:42 [bernard] Tom : will circulate more informatin about F2F 14:09:22 [bernard] Tom : will need improvisation about phone and projector ... 14:09:25 [LarsG] LarsG has joined #lld 14:09:39 [bernard] ... agenda is on the wiki 14:09:55 [emma] draft agenda for F2F : 14:10:39 [bernard] ACTION; all add their attendance on the wiki 14:10:50 [bernard] ACTION: all add their attendance on the wiki 14:11:10 [TomB] ACTION: Potential attendees in Pittsburgh to use wiki page to indicate whether they are attending or not at [recorded in ] 14:11:15 [TomB] --continues 14:11:28 [Zakim] + +49.613.692.aahh 14:11:40 [bernard] ACTION: Potential attendees in Pittsburgh to use wiki page to indicate whether they are attending or not at 14:12:13 [bernard] Tom : informal meeting in Cologne 14:12:26 [bernard] Informal meeting of XG members at SWBIB 2010 on the morning of 1 December - contact Joachim Neubert 14:12:42 [bernard] Use cases and case studies - update 14:13:56 [bernard] Tom: next week pick two use cases for detailed discussion 14:14:38 [emma] ACTION: By Monday Sept. 6 members should add to list of email lists at [recorded in ] 14:14:43 [emma] --DONE 14:14:56 [emma] ACTION: Group to comment on email text at by Monday, Sept. 6 [recorded in ] 14:14:58 [emma] --DONE 14:15:09 [emma] ACTION: Everyone to elaborate on topics in the wiki [recorded in ] 14:15:10 [bernard] 4. RDA 14:15:12 [emma] --CONTINUE 14:15:32 [emma] ACTION: Gordon will prepare something on RDA, FRBR etc. for discussion in Sept. agenda [recorded in ] 14:15:36 [emma] --DONE 14:15:39 [bernard] -- To be discussed: 14:16:08 [RRSAgent] I have made the request to generate emma 14:16:11 [gneher] gneher has joined #lld 14:18:12 [bernard] GordonD commenting 14:19:34 [TomB] Within IFLA, new technical group for namespaces, coordination of standards in SW framwork, etc - will report to new core activity that resurrects something abandoned a few years ago - concept of universal bibliographic control. One or two months. 14:20:43 [Zakim] +??P20 14:21:05 [bernard] GordonD: no formal connection between FRBR and RDA activities 14:21:16 [TomB] ... special interest group for SW issues. Producing mappings beyond IFLA, e.g. for RDA, remains semi-formal - no formal orgn looking at mappings with RDA. Only joint membership in activiities. 14:22:02 [bernard] Publication of standards in RDF not synchronized 14:22:08 [emma] s/activiities/activities 14:22:34 [TomB] ... Trying to convey complexity of issues - if we are to get these standards to work together effectively. 14:23:14 [bernard] GordonD : Issues section gathers issues discussed in both groups 14:23:45 [bernard] GordonD: Constrained versus unconstrained properties and classes is the main issue ... 14:24:01 [matolat] matolat has joined #lld 14:25:03 [bernard] no specification of constraints in RDA ... 14:25:13 [Zakim] +michaelp.a 14:25:33 [emma] pressure from non-lib community for unconstrained properties but RDA, JSC and IFLA groups are convinced that constrained versions ar necessary 14:25:47 [kcoyle] q+ 14:25:52 [emma] recent IFLA meeting consider publishing unconstrained versions, linked with constrained ones. 14:25:59 [michaelp] q+ to talk about constraints 14:26:05 [matolat] zakim michaelp.a is matolat 14:26:08 [TomB] Gordon: IFLA persuaded, JSC coming around - have indicated they will not block unconstrained properties. 14:26:13 [michaelp] zakim, unmute me 14:26:13 [Zakim] michaelp should no longer be muted 14:27:18 [bernard] kcoyle: another issue has to do with the way constraints are expressed and how they are enforced in applications 14:27:36 [bernard] kcoyle: not sure the RDA method actually works 14:28:08 [bernard] GordonD: constraints are not enforced on applications 14:28:40 [bernard] GordonD: constraints are useful to check legacy data 14:29:12 [bernard] kcoyle: there is a sort a contradiction there 14:29:31 [TomB] Gordon: They are constraints on _inferences_. Powerful if you consider there may be billions of triples eventually. Records can be "filled in" using inferences. 14:30:00 [bernard] GordonD: I won't call them "unformal" constraints 14:30:24 [AlexanderH] AlexanderH has joined #lld 14:30:47 [TomB] ack Michaelp 14:30:47 [Zakim] michaelp, you wanted to talk about constraints 14:31:03 [bernard] GordonD: We need more technical investigation 14:31:24 [kcoyle] hearing loud typing 14:31:31 [bernard] Michaelp : there is a bit of confusion about constraints and validation 14:32:03 [bernard] Michaelp: In OWL the constraints are enabling, not restricting 14:32:41 [bernard] Michaelp: There are different of constraints like in XML schemas 14:33:30 [bernard] GordonD: Indeed a big deal of confusion in this area 14:33:49 [bernard] zakim, mute me 14:33:49 [Zakim] Bernard should now be muted 14:34:08 [jeff_] q+ 14:34:15 [bernard] GordonD: RDA wants to be validated as a schema 14:35:18 [bernard] GordonD: but also enabling as Michaelp has pointed 14:35:25 [TomB] ack jeff_ 14:36:25 [kcoyle] q+ 14:36:34 [TomB] ack kcoyle 14:36:45 [bernard] Jeff_: The same model can be used either as a schema, either as enabling depending on the use case 14:37:11 [bernard] kcoyle: we should clarify which constraints we are speaking about 14:38:00 [bernard] GordonD: There are a lot of value constraints in RDA 14:38:04 [emma] Karen : RDA primary constraints are about constraining a property to a FRBR entity, not the values 14:39:55 [bernard] GordonD: RDA should keep its own versions of FRBR constraints ... 14:40:18 [bernard] Tom : move on to next topics 14:40:37 [bernard] Tom : port this discussion to F2F 14:40:52 [jeff_] +1 14:41:01 [mzrcia] +1 14:41:02 [AlexanderH] +1 14:41:31 [bernard] GordonD : next topic is "Application profiles or OWL ontologies" 14:42:22 [LarsG] * Zakim please mute me 14:42:39 [emma] GordonD : ISBD elements have a sequence + indicate if mandatory or not, so need for application profile 14:43:25 [bernard] GordonD: What LLD can bring is clarification of when using application profiles and when using OWL 14:43:53 [bernard] next topic : Use of published properties and classes 14:45:42 [bernard] GordonD: How can library comunity be made aware of and re-use widely used classes and properties? 14:46:12 [emma] q+ to suggest the need to encourage the use of widespread standards 14:46:35 [bernard] Need to acknowledge overlap between libray and "open Web" classes and properties 14:46:45 [TomB] q+ to suggest additional attention to "alignment" - mappings and "identity links" between vocabularies 14:46:52 [TomB] ack emma 14:46:53 [Zakim] emma, you wanted to suggest the need to encourage the use of widespread standards 14:47:12 [bernard] emma: Agreed with GordonD ... 14:47:23 [bernard] it's very much an issue of trust ... 14:47:45 [mzrcia] FRSAD intent to use skos semantic relationships instead of repeating to define them 14:47:47 [bernard] LLD should encourage library community not to be shy about this use 14:48:15 [mzrcia] q+ 14:48:25 [TomB] ack TomB 14:48:25 [Zakim] TomB, you wanted to suggest additional attention to "alignment" - mappings and "identity links" between vocabularies 14:48:37 [bernard] emma: library community not used t o multiplicity of representations 14:48:53 [jeff_] Gordon, you should add OWL to the list of DC, FOAF, SKOS at 14:49:17 [GordonD] Jeff, will do 14:49:25 [TomB] ack mzrcia 14:50:23 [bernard] marcia : We have already actions on declaring relationships using "sameAs" or "equivalent classes" 14:50:44 [mzrcia] using SKOS secmantic relationships 14:51:04 [mzrcia] such as broader, narrower... between themas 14:51:22 [kcoyle] q+ 14:51:30 [bernard] GordonD: The library community should not be childish in saying "not invented here" 14:52:23 [bernard] GordonD: Elements coming from the library models will be useful elsewhere also 14:52:38 [GordonD] bernard: churlish, not childish 14:52:47 [bernard] sorry :)) 14:53:01 [mzrcia] I was also into the discussion of reusing/borwwring the assotiative relationships between concepts/topics defined by the Getty vocabulary 14:53:33 [Zakim] -Felix 14:54:37 [bernard] GordonD: How can LLD can have action on milestones in this area? 14:55:23 [bernard] GordonD : Library linked-data and legacy records ... 14:55:45 [bernard] potential to generate billions of instances ... 14:56:36 [bernard] 14:57:03 [emma] GordonD : Need from LLD community to encourage freeing the data 14:57:09 [bernard] It's a political issue (How can libraries be encouraged to "free" their records?) 14:57:34 [bernard] GordonD : Technical infrastructure ... 14:58:01 [jeff_] q+ 14:58:22 [emma] Library system vendors will only change if customers ask for it 14:58:25 [TomB] ack kcoyle 14:58:32 [bernard] How are vendors to see the ROI? 14:58:42 [bernard] Duplication and identifiers 14:58:58 [RRSAgent] I have made the request to generate emma 14:59:16 [bernard] GordonD : identifying different records for the same thing is also a big issue .. 15:00:19 [bernard] ... Libray culture ... 15:00:41 [bernard] ... sociological barriers to address ... 15:00:43 [emma] s/Libray/Library 15:01:09 [jar] q? 15:01:11 [TomB] ack jeff_ 15:01:17 [rsinger] GordonD++ #awesome 15:01:40 [emma] +1, Thanks GordonD ! 15:02:00 [mzrcia] +1, Gordon! 15:02:21 [LarsG] +1 GordonD, Great! 15:02:23 [TomB] +1 great summary of the issues! 15:02:36 [GordonD] All, thanks! 15:02:51 [bernard] TomB : a lot of congratulations to Gordon! 15:03:00 [Zakim] -kcoyle 15:03:01 [Zakim] -whalb 15:03:03 [Zakim] -jeff_ 15:03:04 [Zakim] -Jonathan_Rees 15:03:06 [Zakim] -Andras 15:03:07 [Zakim] -michaelp.a 15:03:09 [Zakim] -Alexander 15:03:11 [Zakim] -GordonD 15:03:12 [Zakim] -jneubert 15:03:13 [gneher] gneher has left #lld 15:03:14 [Zakim] -michaelp 15:03:16 [Zakim] -LarsG 15:03:17 [bernard] zakim, list attendees 15:03:18 [Zakim] -kefo 15:03:20 [Zakim] -rsinger 15:03:22 [Zakim] -marcia 15:03:23 [matolat] bye 15:03:24 [Zakim] As of this point the attendees have been TomB, +1.614.764.aaaa, +33.1.53.79.aabb, emma, Felix, Bernard, GordonD, kcoyle, +1.361.279.aacc, Alexander, Andras, +43.316.876.aadd, 15:03:31 :03:32 [emma] zakim, unmute bernard 15:03:34 [michaelp] michaelp has left #lld 15:03:40 [Zakim] -gneher 15:03:41 [Zakim] Bernard should no longer be muted 15:04:12 [TomB] rrsagent, please draft minutes 15:04:12 [RRSAgent] I have made the request to generate TomB 15:04:22 [emma] [adjourned] 15:08:46 [TomB] zakim, bye 15:08:46 [Zakim] leaving. As of this point the attendees were TomB, +1.614.764.aaaa, +33.1.53.79.aabb, emma, Felix, Bernard, GordonD, kcoyle, +1.361.279.aacc, Alexander, Andras, +43.316.876.aadd, 15:08:46 [Zakim] Zakim has left #lld 15:08:49 :08:52 [TomB] rrsagent, bye 15:08:52 [RRSAgent] I see 7 open action items saved in : 15:08:52 [RRSAgent] ACTION: all add their attendance on the wiki [1] 15:08:52 [RRSAgent] recorded in 15:08:52 [RRSAgent] ACTION: Potential attendees in Pittsburgh to use wiki page to indicate whether they are attending or not at [recorded in ] [2] 15:08:52 [RRSAgent] recorded in 15:08:52 [RRSAgent] ACTION: Potential attendees in Pittsburgh to use wiki page to indicate whether they are attending or not at [3] 15:08:52 [RRSAgent] recorded in 15:08:52 [RRSAgent] ACTION: By Monday Sept. 6 members should add to list of email lists at [recorded in ] [4] 15:08:52 [RRSAgent] recorded in 15:08:52 [RRSAgent] ACTION: Group to comment on email text at by Monday, Sept. 6 [recorded in ] [5] 15:08:52 [RRSAgent] recorded in 15:08:52 [RRSAgent] ACTION: Everyone to elaborate on topics in the wiki [recorded in ] [6] 15:08:52 [RRSAgent] recorded in 15:08:52 [RRSAgent] ACTION: Gordon will prepare something on RDA, FRBR etc. for discussion in Sept. agenda [recorded in ] [7] 15:08:52 [RRSAgent] recorded in
http://www.w3.org/2010/09/09-lld-irc
CC-MAIN-2015-18
en
refinedweb
How to vary the color along a stoke Hi all, I am trying to vary the color along a path2D from a lookup table. Has anyone else tried this or can give me some advice on this? Joey Hey, I've been looking into this using the Paint interface, however I have found that It is not suitable to solve my problem and its not realy working. The first attempt that I have tried at this has been to set an array of points, each with a color, I then use a gradientPaint to interpolate the color between the two points, the code for this is below. import java.awt.BasicStroke; import java.awt.BorderLayout; import java.awt.Color; import java.awt.GradientPaint; import java.awt.Graphics2D; import java.awt.geom.Path2D; import java.awt.geom.Point2D; import java.awt.image.BufferedImage; import java.util.Vector; import javax.swing.ImageIcon; import javax.swing.JFrame; import javax.swing.JLabel; public class StrokeTesting { public static void main(String input[]) { //Create an image and get Graphics2D BufferedImage img = new BufferedImage(600, 600, BufferedImage.TYPE_INT_ARGB); Graphics2D g = img.createGraphics(); //Vectors to hold points+colors Vector Vector //Add Points points.add(new Point2D.Double(100,100)); points.add(new Point2D.Double(200,500)); points.add(new Point2D.Double(500,500)); points.add(new Point2D.Double(100,400)); points.add(new Point2D.Double(90,200)); //Set Colors colors.add(Color.WHITE); colors.add(Color.YELLOW); colors.add(Color.green); colors.add(Color.RED); colors.add(Color.BLUE); //Set Stroke g.setStroke(new BasicStroke(10, BasicStroke.CAP_ROUND, BasicStroke.JOIN_ROUND)); Path2D.Double path; for (int i = 0; i < points.size() - 1; i++) { path = new Path2D.Double(); Point2D.Double p1 = points.get(i); Point2D.Double p2 = points.get(i + 1); Color c1 = colors.get(i); Color c2 = colors.get(i + 1); path.moveTo(p1.x, p1.y); path.lineTo(p2.x, p2.y); g.setPaint(new GradientPaint(p1, c1, p2, c2)); g.draw(path); } ImageIcon imageHolder = new ImageIcon(img); JLabel lab = new JLabel(imageHolder); JFrame f = new JFrame("Testing"); f.setSize(800,600); f.getContentPane().setLayout(new BorderLayout()); f.getContentPane().add(lab, BorderLayout.CENTER); f.setVisible(true); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } } This is not a perfect solution as the points can overllap and look kinda funny.Also it does not work that well if you want to vary the color between two points. What I would like to do would be to pass a function an array of colors and an array of points. The program would then draw this and vary the color along the path baised on the colors. I'm basicly trying to draw a rainbow colored path. That's a good attempt drawing smaller line segments with a GradientPaint. If this is your attempt at using the Paint interface, then I think you missed the mark. It sounds like you'll have to create classes that implement the Paint and PaintContext interfaces. Given my understanding of how Java2D's gradients require user-space shapes to define where on the screen the gradient starts and stops, your Paint implementation will likely require a copy of the path you're trying to draw so that it can fill a Raster object with colors that follow your path. The downside of Java's Paint interface is that you'd have to create new instances for different paths, and performance will suffer quickly as you add more geometry. The goal of your PaintContext class would be to receive a rectangle defining a grid of pixels, create a Raster object to hold the pixels, then map that copy of your geometry over the pixel grid, and color the pixels as desired. It seems that Java just creates boundaries and fills them with your pixels, so handling lines thicker than 1 pixel would be complicated. Here are some links I came across, they're difficult to find, and I hope they help.-... The path2d is basically a vector. You may want to paint it with some color. Painting basically takes place when you draw() or fill() the path. Take a look at the java.awt.Paint interface. Cheers, Mik
https://www.java.net/node/698680
CC-MAIN-2015-18
en
refinedweb
New Vulnerability Affects All Browsers 945 Jimmy writes "Secunia is reported about a new vulnerability, which affects all browsers. It allows a malicious web site to "hi-jack" pop-up windows, which could have been opened by e.g. a your bank or an online shop. Here is a demonstration of the vulnerability" Sniff, our little browser's all grown up... (Score:2, Insightful) Thank goodness we've found our first vulnerability in Firefox. Now we can move from the myth that free software is impervious to exploits, and into the reality that vulnerabilities are acknowleged and patched faster in most free software projects. Gentlemen, synchronize your watches. Will the Firefox team have a fix out before Microsoft even admits it's a bug? Re:Sniff, our little browser's all grown up... (Score:5, Insightful) Uh, who was saying that? This sounds scary (Score:5, Funny) Re:This sounds scary (Score:3, Interesting) "Firefox prevented this site from openning 619 popup windows. Click here for options" Is this Windows only or something? Re:This sounds scary (Score:5, Funny) Lynx support (Score:4, Funny) Safari vulnerable if 'pop-up-blocking' is off (Score:4, Informative) Re:Sniff, our little browser's all grown up... (Score:5, Insightful) Re:Sniff, our little browser's all grown up... (Score:3, Insightful) Re:Sniff, our little browser's all grown up... (Score:5, Informative) BTW Javascript has nothing to do with Java except the name. Not the first Firefox vulnerability (Score:5, Informative) As far as I can tell the problem is fixed in the latest Opera beta so they might be able to get it into a proper release pretty soon too. Re:Sniff, our little browser's all grown up... (Score:4, Funny) Re:Sniff, our little browser's all grown up... (Score:4, Insightful) Following their instructions on a barely-patched winXP, neither Firefox 1.0 (with or without popup blocking) or IE were vulnerable. Other people below have reported that it worked for them though, so since the browser (firefox at least) is the same, and it did/didn't work between them, how could it be a bug in firefox? Very strange indeed. NEVERMIND. (Score:4, Informative) So it works in Firefox per the instructions below for me, but Does not work in IE Still??? Crazy!!" Re:Sniff, our little browser's all grown up... (Score:3, Insightful) First? There have been plenty of other FireFox vulnerabilities in the past, however they have all been fixed extremely quickly once discovered (i.e. within a day or 2). All software has security holes in it, get over it - the difference is that the Mozilla Foundation have a habit of fixing them as soon as they find out about them whereas Microsoft have a habit of waiting for many months before bothering to fix them even if they are being active I don't get it (Score:2, Informative) I refreshed the page, and tried the link that said 'Without Pop-up Blocker'. It opened up the Citibank website, but it did not hijack my Citibank popup window. Same thing happened to me under IE6 (except I did not get the dialog when I clicked on the 'With Pop-up Blocker' link). Maybe it works under certain circumstances, but I couldn't reproduce it. Re:I don't get it (Score:2, Informative) Re:I don't get it (Score:5, Informative) And the exploit worked just 'fine' on my firefox 1.0. Re:I don't get it (Score:5, Informative) Re:I don't get it (Score:5, Informative) I hope this helps the vast masses of smart You know you've found a good exploit... (Score:4, Funny) Re:I don't get it (Score:4, Informative) The exploit worked for me on Firefox 1.0 on Windows 98 SE with pop-up blocking turned off, but the exploit didn't work for me when pop-up blocking was turned on. I think I've solved it. (Score:5, Informative) Middle-click to open citibank page in new tab YOU WILL NOT BE VULNERABLE. Left click and allow citibank page to open in new window YOU WILL BE VULNERABLE. At least, that's the behaviour I see on this box. Re:I think I've solved it. (Score:3, Interesting) Re:I don't get it (Score:4, Informative) Re:I don't get it (Score:5, Informative) under about:config, I have dom.disable_window_open_feature.location set to true. So every window must show the location (and because of it, I immediately could see the webpage I was at was not citibank.com). IT DOES WORK! (Score:5, Informative) Ok, here we go for those just playing." 5) See that it does work. That is all. Not all browsers (Score:2) Anyone else have a build of firefox that wasn't really fooled? All your typos... (Score:4, Funny) And in other news, Slashdot is reported all about a new grammatical error in the headlines. Reporting anyone? Re:All your typos... (Score:4, Funny) Not quite hijacking (Score:3, Interesting) But if I used the link from Secunia [secunia.com] to access Citybank, the Popup is then hijacked. So it seems like you need to access (click on a link to) your trusted site via an untrusted site to get hijacked? Here's how it works (Score:5, Insightful) So the attacker doesn't need you to click on anything, they just need you to have their site open -- with the timer going -- in another window. Also, the attacker needs to know in advance what name the victim site's pop-up is referenced by. A dynamically generated name could possibly defeat this attack, though the attacker could always crawl the DOM for a handle to the pop-up. Re:Here's how it works (Score:3, Insightful) I doubt it. If any browser allows you to look at the DOM of a page from a different site, that is a far greater security hole than what they are demonstrating. Re:Here's how it works (Score:3, Insightful) Evil site A helpfully offers a link that opens Good site B. If a user clicks the link and opens Good site B, Evil site A waits for the user to open a predictably named popup from Good site B, then reaches down through the DOM (using code on Evil site A) and alters the URL of the popup, bouncing you to their Evil popup. Big whoop -- this is permitted by Javascript's security model, you know -- the parent window "owns" the child window, thus it can access it and do weird things. Re:A quick workaround for FF 1.0 (Score:3, Informative) I would like to disable JavaScript entirely, but unfortunately that breaks too many pages. no problem here... (Score:4, Informative) Re:no problem here... (Score:4, Interesting) We haven't heard from any Konqueror users yet (and the modem in my Linux box is broken so I can't check it myself). Is the immunity a khtml thing or was it Apple? Re:no problem here... (Score:5, Informative) Re:no problem here... (Score:4, Funny) Re:no problem here... (Score:5, Informative) Happy. (Score:2) Well, that's one alert I'm safe from. Whew. Demo don't work (Score:3, Funny) It's called "Slashdotted" (Score:3, Funny) Re:It's called "Slashdotted" (Score:3, Funny) wait for 10,304,345 hits in the next five minutes as people post "x" in vulnerable "!X" is clear . . server goes down Profit! Safari test (Score:5, Informative) When I turned off the pop-up blocking feature, then when I tried the test, I did see a pop-up from the Secunia site instead of the Citibank text. Now that's a problem. Clearly, this is just another reason to block pop-up windows. Re:Safari test (Score:4, Insightful) I can confirm this works when the "Block Pop-up Windows" in the Safari menu is disabled, but not when the Blocking option is enabled. Rather than just a "me too", I went through the demonstration in reverse order of the previous poster (and was careful to refresh and follow the appropriate links) so I don't think this behavior is due to caching issues. While I do hope there will be a fix for this soon, IMHO, the more appropos fix is that secure sites should not EVER rely on popups. Works for me (Score:3, Informative) Re:Works for me (Score:3, Funny) Security through server meltdown? not irider (Score:3, Informative) All browsers?!? (Score:4, Funny) Re:All browsers?!? (Score:5, Funny) I just don't believe it. Anything -- even an exploit -- working in all browsers would be unprecedented! Lynx appears to be unaffected. Nyeh (Score:4, Informative) Of course it's a bug (Score:5, Insightful) Site A should be able to create and interact with a window named "popup". Site B should be able to create and interact with a window named "popup". This should happen without either site interfering, blocking or overwriting the other. They should simply be invisible to each other, existing in completely seperate little worlds. Re:Of course it's a bug (Score:5, Insightful) Re:Of course it's a bug (Score:3, Informative) Traditionally, windows weren't private to sites, but this is just a variation of the "cross-frame scripting" bugs that have been patched over time. Re:Of course it's a bug (Score:3, Insightful) Traditionally, windows weren't private to sites, but this is just a variation of the "cross-frame scripting" bugs that have been patched over time. A stupifyingly dumb design decision in the first place. The above poster's namespace comment is dead on, and there is obviously no choice but to implement per-site namespace properly. This design bug, however, is the fault of _all_ of us, for not reviewing the des Re:Of course it's a bug (Score:3, Informative) I did find this: Referring to windows and frames [netscape.com] from the Netscape JavaScript handbook. It says nothing about window names being private. So, pin this one on Netscape, and the lack of any formal open standard for what happens in a browser outside of the document. Not so bad... (Score:3) Using Opera 7.54 (Score:3, Informative) the link for disabled popup blockers doesnt open a popup when i have my popup blocker enabled (actually its just Proxomitron with custom filters) When I disable proxomitrion, it does what it says (opens the Secunia site instead of the citibank site) And with proxomitron disabled, the first method (for people running popup blockers) still does the same as it did the first time. Re:Using Opera 7.54 (Score:3) jack pot (Score:4, Funny) Re:jack pot (Score:3, Funny) Wow, did you get an email from Yassir Arafat's widow too? I'm still waiting for my cash transfer. Once again, why needless use of Javascript is BAD! (Score:4, Insightful) If web masters would stop NEEDLESSLY using Javascript to do things like open new windows, and would use it ONLY when there is no way using HTML to accomplish the same goal, then people would not need to have Javascript active all the time, and the impact of exploits like this would be greatly reduced. If, instead of using <a href="#" onclick="foo"> or <a href="javascript(foo)"> type constructs, web designers would use <a target="_blank" href="something.html" onclick="javascript(stuff)"> type constructs, then if the user HAS Javascript active, then the web master can micromanage the newly created window. If not, then the user STILL gets a new window, just not one that the web master can remove all the chrome from. Seriously - when was the last time you heard of an exploit that used straight HTML? All of the recent exploits in ALL browsers, IE included, have been in either Javascript or Active-X, not in the core HTML rendering. There is a REASON for that. Re:Once again, why needless use of Javascript is B (Score:5, Insightful) Example: Sites that pop up their "main" window from their "entry tunnel." Exactly what justification do you have for thinking I still need to view your entry tunnel? Example: (as mentioned,) sites that use Javascript to open windows. Granted, this practice came around before Opera/Mozilla introduced us to the wonders of tabbed browsing, but what's the point of pulling up a "diversionary" window and forcing the user to close it? Afraid they might not understand the concept of the "back" button? Example: using flash/java/shockwave/etc to perform functions that could be handled in HTML, especially now that we have DHTML. I have trouble with understanding the argument "we will be more successful if we deny access to some percentage of the population." etc etc etc.IMHO, this is a symptom of the problem where people assume "everyone else thinks / acts / behaves in the same way I do." Re:Once again, why needless use of Javascript is B (Score:3, Informative) Re:Once again, why needless use of Javascript is B (Score:5, Informative) Yup. Check out Ian Hickson's "Sending XHTML as text/html Considered Harmful" [hixie.ch] for a quick primer on what most sites that do XHTML are doing wrong. Check out Evan Goer's list of "X-Philes" [goer.org] for a list of the very few sites which get it right, and his purge of sites from that list [goer.org] for an indication of how easy it is to go wrong even after you've initially gotten it right. As for HTML generally not producing good markup and being "too loose", I hate to break it to you but XHTML 1.0 and HTML 4.01 are element-for-element identical; the only difference between the two is that one is an SGML application and one is an XML application. And when you serve XHTML 1.0 as "text/html" (e.g., when you do XHTML the way ESPN and others do) you don't gain any of the strictness benefits of XML. And the only thing XHTML 1.1 does on top of that is deprecate a couple more things and add modularization and ruby support, so I'm really not sure where all the "good markup" would come from in a transition to XHTML. Plus there's no reason to believe that serving XHTML 1.1 as "text/html" is conformant, so if you use 1.1 you either break the spec or you shut out IE. Likewise, switching to an XHTML DOCTYPE and using XML syntax doesn't magically confer accessibility on a page; it's just as easy to write a horrid, bloated, table-based images-for-everything page in XHTML as it is in HTML 4.01. I suspect that you're making a common mistake among people who've just discovered web standards: you're confusing XHTML with good markup and best practices (check out Molly Holzschlag on what standards are and aren't [molly.com]). Anyway, it's quite possible to write beautiful, clean, accessible, semantically rich HTML 4.01 with separation of content from presentation; after all, it's got the same set of tags and attributes as XHTML 1.0, so if you can do it in one you can do it in the other just as easily. And when you consider that serving valid, well-formed XHTML according to the spec can be a nightmare at times, it's no surprise that even "gurus" of the standards world (e.g., Mark [diveintomark.org] Pilgrim [diveintomark.org], Anne [annevankesteren.nl] van [annevankesteren.nl] Kesteren [annevankesteren.nl]) have gone back to or recommended sticking with HTML 4.01 unless you really need one of the features gained by an XML-based HTML. And lest you continue to think I'm some sort of skeptic or enemey of web standards, well, every site I've built in the past three years (basically, since I discovered there was such a thing as a "web standard") has been valid, accessible, and CSS-based. I just know from experience that valid markup and stylesheets are one part of the equation, and there are an awful lot of those "best practices" that aren't ever published in a spec from the W3C or anyone else. Re:Once again, why needless use of Javascript is B (Score:4, Informative) With scripting, you can make iFrames draggable, closeable and behave and look just like regular windows but they are, in essence, windows within a window and are tied closely to the current browser. There are reasons to have popups like, for example, color or date pickers (with a calendar). It is actually much easier to build a draggable DIV than a draggable iFrame but the draggable DIV doesn't show up on top of certain HTML elements and hence becomes useless (even with an infinitely high z-index). By the way, you can get draggable iFrames to work in both MSIE and Mozilla. I just bought my iMac for testing but I'm pretty sure I can get it to work in the mac versions too as they all have the necessary language and DHTML components. All I can say though is that JavaScript and DHTML are definitely vendor dependant, and I don't care if you are mozilla or Apple or Microsoft, they ALL have quirks and bugs that go outside of the specifications. In many ways, my high speed photoshop-style image scripting program (for use on web servers) was easier to write in C# than trying to figure out how to make things work across every browser out there! Anyways, programmer alert. I wouldn't depend on popups working in the future if your app depends on it. Make sure to use iFrames or have a non popup dependant way of doing the same thing! Re:Once again, why needless use of Javascript is B (Score:3, Insightful) Yup. It further demonstrates why any financial institution that requires you to enable javascript in order to use their website should be deemed incompetent. Re:Once again, why needless use of Javascript is B (Score:3, Informative) OK, let's try something easier. I've got a table with many rows where each row contains two sets of radio buttons. When one of the radio buttons in the first set is selected, you shouldn't select an answer in the second set. Thus, I use Javascript to disable the second set of radio buttons when that particular option is chosen. Care to tell me how to do that using regular HTML? Re:Once again, why needless use of Javascript is B (Score:3) There you go. You've just shown your ignorance. For simple web pages I would agree, but this vulnerablility is for, and demonstrated in, a web application. As other posters have pointed out, you cannot get some features of an application without using Javascript. So, until the world starts using something like Webstart and downloadable, secure thick clients via the web, the browser is all that we have. Perhaps th Re:Once again, why needless use of Javascript is B (Score:4, Insightful) Just what I want.. a user posting 300 times before realizing that, yes, they must fill out the form. Think about something like Yahoo mail. I can go into a new message and if I forget to put in a To:, it will still post to the server and come back and say that I'm a moron. With JS verification, I would know instantly. Obviously client-side verification shouldn't be used for passwords, but checking that a form is at least completely filled out is very helpful, both as a designer and a web user. Client side verification is practically instant and does not burden the server with incomplete requests. Of course, client side verification does not exempt you from having to perform server side verification. Re:Once again, why needless use of Javascript is B (Score:3, Informative) This excellent article on ALA [alistapart.com] should answer any pending questions on the issue. BTW, the target attribute of anchors was dropped between XHTML 1.1 Transitional and XHTML 1 Re:Once again, why needless use of Javascript is B (Score:5, Informative) 1. 'target' is certainly part of standard html. Just because it isn't defined initially by the A tag doesn't mean the A tag can't use it. 2. From- PS. Hey mods, if you don't know about a subject, don't mark a post 'informative' just because there's a link in it. Re:Once again, why needless use of Javascript is B (Score:3, Informative) In strict, frames and target= are depricated Re:Once again, why needless use of Javascript is B (Score:3, Informative) The "target" attribute still exists in the Transitional and Frameset versions of HTML 4.01 and XHTML 1.0. XHTML 1.1 does not have a Transitional or a Frameset version; however, it is a modularization of XHTML which means that the same functionality can be easily re-introduced. For example, Jacques Distler has produced a page using the "target" attribute [utexas.edu] which is valid against an extended XHTML 1.1 DTD. This is one of the major selling points of XML-based markup and ha Re:Once again, why needless use of Javascript is B (Score:3, Insightful) Some little JavaScript projects I have done: Bugzilla #273699 (Score:3, Informative) Mozilla/Firefox Workaround (Score:5, Informative) 1. Enter about:config in the Location Bar. 2. Enter dom.disable_window_open_feature.location in the filter field. 3. Right-click (Ctrl+click on Mac OS) the preference option and choose Toggle (the value should change to true). This issue is already being worked on bug 273699 [mozilla.org] (copy link location, paste) filed a few hours ago. As a side note, being able to see the bug fixing progress unfold is one of the many reasons why i love open source. I am able to learn so much from just seeing the process take place from start to finish, how it is reported, test cases created, problems that arise, insights into other parts of the system, who the people involved are, reviews, patches, etc. Re:Mozilla/Firefox Workaround (Score:5, Informative) From the page: "Note that, although the attack site can inject its own content, it cannot change the URL appearing in the Location Bar. Firefox and Mozilla have the ability to deny access to the Location Bar so all pop-up windows always have it." Re:Mozilla/Firefox Workaround (Score:5, Insightful) In general, it's always going to be possible if you are browsing sketchy and secure sites at the same time that the sketchy site might pop up some deceptive window, and if you are confused, and can't see the URL bar, you might think it came from the secure site, with or without this specific injection issue. Which is why this workaround out to be default behavior anyway (I HATE sites that try to hide my location bar and navigation toolbar, those bastards). Anyway, the point is, yes the issue should be fixed, but if you applied the workaround, it makes the exploit essentially worthless to an adversary. Results for Slackware 10, Konqueror, Mozilla (Score:3, Informative) Slackware 10, Konqueror, and Mozilla 1.7.3. Results with Konqueror: the popup did NOT point back at Secunia, it pointed at Citibank. Perhaps this is because I have Konqueror configured to open new windows in tabs and have "smart" popup blocking enabled. Would someone try and confirm this? If it is the issue, then we can block the vulnerability in Konqueror, at least. In Mozilla, the popup trick worked. Bad Mozilla! FYI Re:UPDATE: Slackware 10, Konqueror, Mozilla 1.7.3 (Score:3, Interesting) In Javascript, if (and only if) your web page opens a new window, it "owns" that window. In other words, you have access to the whole DOM in that window. You can step through the document object, alter things, and so forth. This is how things are supposed to work; it's what enables us to open new windows and interact with the user. For example, ma Firefox 1.0 (Score:3, Interesting) If I middleclick on the test page and *force* firefox to open the site in a new tab, the exploit fails. I don't know enough to now if this is a limitation in the exploit or in how they've written the exploit, but it's odd and interesting in my opinion there is a simple fix for this (Score:3, Interesting) Well, why not make a new rule in javascript that would disallow any javascript code to access any popups that aren't a direct child of the current instance of the browser. Basically what i mean is to have each window in it's own namespace and have the child window share said namespace. (I think one would have to not allow grandparents to access it either though). so basically if two seperate windows open a window with target="name" then 2 windows are opened one for each instance and they have nothing to do with each other. proxy As of right now... (Score:4, Funny) And this is a version of Firefox I installed approximately two weeks ago. Vulnerability? For dyslexic octopii, maybe (Score:3, Interesting) This strikes me as about as dangerous as the post-SP2 "Warning! If you copy and paste shit files from the net and click a few boxes, YOU COULD GET SPYWARE!". For the record, I just nuked and reinstalled XP-Sp2 + hotfixes a few days ago (for once, not because it was fucked up, but my new raid0 array), so I have cherry IE6 and unextensioned-FireFox 1. I tried several variations of the convoluted instructions, and could get no explicitly dangerous behavior. Mozilla didn't bat an eye, and IE once popped up a box saying "The script is trying to close this window, do you want to let it?" If I let it, then it opened the Citibank site in the window again. Oooh, scary. I'm sure there may be some actual, dangerous vulnerability here somewhere. But I've gotten better instructions from the japanese ASUS site, translated through google. just say no to javascript (Score:3, Interesting) For firefox or opera just turn it on when you absolutely need it and never forget to turn it off right away when you are done. For IE make use of the security zones to implement javascript whitelisting. That's what I do because with firefox and opera I often don't remember to turn it off again until I start getting annoying popups or worse. Seems like more than half of these vulnerabilities that keep popping up make use of javascript. That last one with the online banking passwords was pretty scary and made me very glad that I browse with javascript off. backwards on Firefox 1.0? (Score:3, Insightful) Mixed risk (Score:3, Informative) But I ran the tests, and here are my results: Mac OSX 10.3.6 Safari 1.2.4 (v125.12) - Not affected according to test. FireFox 1.0 (G4 optimized build) - Affected according to test Camino 0.8.2+ - Affected according to test All browsers have pop-up blocking enabled, and some sort of ad filtering (Pith Helmet, Ad Block, etc). Your mileage WILL vary. So... (Score:3, Funny) Re:It doesn't affect Safari (Score:5, Informative) After you have clicked on the link, you have to refresh the Secunia page, then it will work. It's kinda strange, but I guess it is a vulnerability. Kinda like walking back and forth through a bad neighborhood while counting your cash. NarratorDan Re:It doesn't affect Safari (Score:3, Insightful) If so, then it's not "jumping through hoops", which makes Safari as vulnerable as any other browser. Re:Doesn't work for me (Score:5, Informative) In Internet Explorer I pressed "With popup-blocker" (Google Toolbar) and up came Citibank, then I pressed the Fraudulent E-Mail button, and up came CitiBanks popupwindow, first when I closed the popupwindow the "This was hijacked" window appeared (as if triggered by the window.onclose function) but that does not strike me as a gigantic security-hole. Of course the issue in itself is scary, but I'm confident the Mozilla team will have a patch out in no time. This should probably serve as a reminder to webmasters out there, that if you want users to trust content you provide in popup-windows eg. for creditcard payments, you should provide the address-bar, and if the creditcard processing takes place on another server, explain to the customer before he clicks "pay by creditcard" why the window will load from another server. Re:Doesn't work for me (Score:3, Informative) I did this, and Firefox 1.0 (linux) was vunerable. The site wasn't clear that the first site wasn't the vunerability, but links from a genuine site can be made vunerable. Of course, Re:Doesn't work for me (Score:3, Informative) I'm also confident that this will be fixed soon but it's also not really a big issue for me because I do mostly tabbed browsing. It is very rarely that I open a new site in a seperate window anymore. Re:Doesn't work for me (Score:5, Insightful) I disagree. I think they have their moments. Such as displaying incidental information without interrupting the flow of something you're already doing (say, a help link in a wizard-style sequence of pages) like everything else, popups are a tool which can be used or misused. Unfortunately they're mostly misused. Re:I call bullshit!! (Score:5, Informative) 1) Send out a phishing expedition, asking people to log into their BofA account to update their account information. Make it look real official, and include a link that goes to "". The new window takes them to the real site, encrypted and everything. 2) Customers login and check their mailing address, or whatever. 3) Some percentage of them will leave their windows open for more than 10 minutes, at which point BofA sends their standard pop-up window warning about account inactivity and logout. 4) Hijack the pop-up window and do Something Nefarious, like initiate a funds transfer. Now, this isn't a perfect example. But there are an untold number of different sites out there who use pop-ups for perfectly reasonable applications, and it would be trivial for some phisher to get people to go to those sites using his link. The best thing to do is, for those sites who use pop-ups to communicate with their visitors, use some nonstandard form for naming those windows. Use the person's username, a random string, a DES hash with the first two characters of the day of the week as the salt and the time the page is first loaded as the string, whatever (no, don't use "whatever", that's just a figure of speech)' Another clue for webmasters (Score:3, Insightful) It's incredibly sad that pretty much every bank I've ever used doesn't think I might like to know that I'm really talking to their server when I use their web interface. Re:Firefox 1.0 seems fine (Score:3, Interesting) Re:Vulnerability? (Score:4, Insightful) Has happened before. Users may still have to click something, but they could easily be tricked into doing that. Most users aren't constantly vigilant and observant. If the compromised banner ad opened another window that looked like Citibank's site whilst you were using Citibank's site, you could fall for it - especially since Citibank does use pop-ups.
http://it.slashdot.org/story/04/12/09/0053205/new-vulnerability-affects-all-browsers?sbsrc=thisday
CC-MAIN-2015-18
en
refinedweb
Feedback Getting Started Discussions Site operation discussions Recent Posts (new topic) Departments Courses Research Papers Design Docs Quotations Genealogical Diagrams Archives The ECMAScript group has created some public resources for the Edition 4 design and specification process. We've released a public export of the group wiki and started a public mailing list for discussion related to ECMAScript Edition 4. Keep in mind this is an ongoing process, and we've preferred to release information early to integrate community feedback into the process. As a result the documents are not in their final state, and none of these documents should be considered authoritative yet. You'll find most of the activity, and the most up-to-date material, in the proposals section. There's a fair bit of information under discussion and clarification as well. Here's a small sample of proposals that members of LtU might find particularly interesting: So... prefix notation coming to ECMAScript 5? :-) You're giving away the plan! does anyone do standalone programming with ecma/java-script? i've used it quite a bit recently inside browsers and i was wondering about using it "standalone" as "just" a programming language. i know there's rhino, but that seems to be intended for embedding. In Windows you can just double click .js files and they will run with Windows Scripting Host. It's a very nice way to write batch files. You can compile JScript with .net to make a standalone executable. I've seen two largish projects in the while that did backend business logic with JavaScript. The idea was that it would be easier to modify for (non-existent) different customers than Java and that a lower level of developer would be able to do so. Said lower-level developers were of course available from the Three-Letter-Company-Global-Services-Branch that built the projects in the first place. The JavaScript in question was triggered from servlets, included callouts to Java for data access (via JDBC and SQL), and produced XML which would be munged by XSLT for presentation (which of course included client-side JavaScript as well). The architecture serves mainly as a useful reminder of just how wrong DSL backers can possibly be, and why JSP isn't as bad as you thought. I find PostScript an interesting language. While I haven't programmed in it really, what I see looks interesting (though, for me, I doubt I'd feel (personally) "comfortable" with it; for me it would be more an interesting toy). How many people have used PostScript as a (more) general programming language? Experiences? People who haven't really looked at PostScript as a language, give it a look. Completely off-topic for this thread Right. So why did you post it here? If you really want you can open a new thread, or revisit one of the older htreadas about Postscript (there are at least a couple of them). Right. So why did you post it here? If you really want you can open a new thread, or revisit one of the older htreadas about Postscript (there are at least a couple of them). I didn't/don't expect too large a response, so didn't think it worthwhile to start a whole new thread; however, I certainly would start one if there did seem to be strong interest. Further, it isn't as off-topic as I said; it does relate to using an application-specific language as a general purpose one. Searching now, there don't seem to be any threads with postscript as a topic (according to the LtU search box), there is maybe one or two where this might somewhat fit in better. However, if you like I can start a new thread. We have a DSL department which contains quite a lot of stuff about "application-specific language". re Postrcript here a couple of links I dug out for you: #1, #2 (link should now point here). But I can't reply to either of those threads and the intent is postscript as a general purpose language. Anyway, I'll post it as a forum topic soon as that will resolve the issue. Couldn't care less about the rest -- program transformation and code generation takes care of it. But real tail calls are a must. Just transform and generate away all functions. These enhancements are promising, although I'm not sure that classes are appropriate for a delegation language ;-) In my experimental language, I've added some metaprogramming features ( meta- not macro- ) like currying, concatenation ( f(x) + g(y) yields fg(x,y)={ f(x); g(y) }; ) and runtime inlining : Here is a JS-ized example of a Power function :) }) } For Power(14) it yields something like function(x){ return ((x^2 * x)^2 * x)^2 } The metaprogramming stuff does VM bytecode transformations. Here "fusion" triggers the transformations and "infusion" tags the functional variables to merge. Usually nonlocal variables cannot be optimised or tranformed significantly by the bytecode compiler.) }) } Well, you may think this is totally off topic but actually, it is not that much as my language is both syntaxically and semantically close to JavaScript. I think that kind of metaprogramming could be rematively easily added to VM based languages like JS with many cool uses. It is quite efficient as you don't have the burden of elaborating ASTs manually and running a slow parser or compiler, bytecode transformations are relatively easy to implement and it can be syntax and type safe. For a JIT enabled VM, it necessitates keeping the VM bytecode of compiled functions. Some fallbacks are also needed for non inline-able foregin functions. Maybe automatic AST tranformations are possible for AST based implementations, like the KDE JS engine. Maybe one of the PL gurus here could hint me a language with similar metaprogramming features ? Is it related to optimising functional languages compilers ? ( BTW, is this post beyond the defined off-topic&rant level ? ) is this post beyond the defined off-topic&rant level IT seems fine to me. It is slightly off topic for the current thread, but not extremely so (and you can start a separate thread if you want, of course).
http://lambda-the-ultimate.org/node/1543
CC-MAIN-2015-18
en
refinedweb
chop ficaro wrote: public class JPanel1 extends JFrame chop ficaro wrote:JPanel JPanel = new JPanel(); Rob Prime wrote: chop ficaro wrote:JPanel JPanel = new JPanel(); I really suggest you name your variables differently. Never use a class name as variable name. It makes your code confusing to read; when I first glanced over your code I thought I saw a few static calls: JPanel.setLayout(layout); JPanel.add(mainJPanel2(),BorderLayout.NORTH); Anyway, I see one problem with your BackgroundPanel. It has no children, so its size will be 0x0. Try giving it a preferred size. chop ficaro wrote:tyvm
http://www.coderanch.com/t/508226/GUI/java/BackgroundPanel-java-BorderLayout-NORTH
CC-MAIN-2015-18
en
refinedweb
Flex: NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered. To install the port: cd /usr/ports/www/py-flexget/ && make install cleanTo add the package: pkg install www/py-flexget cd /usr/ports/www/py-flexget/ && make install clean pkg install www/py-flexget No options to configure python:2 Affects: users of www/py-flexget Author: lioux@FreeBSD.org Reason: Database schema changes. Please run: $ sqlite3 db-config.sqlite "ALTER TABLE thetvdb_favorites ADD series_id VARCHAR;" $ sqlite3 db-config.sqlite "ALTER TABLE imdb_movies ADD updated DateTime;" $ sqlite3 db-config.sqlite "ALTER TABLE imdb_movies ADD mpaa_rating VARCHAR;" inside flexget configuration directory (~/.flexget) for each sqlite database you might have. Replace "db-config.sqlite" with the appropriate name for your sqlite database file. Reason: metainfo_series is no longer a builtin. This should only affect you if you aren't using one of the series plugins (series, all_series, thetvdb_favorites, or series_premiere.) If you need to enable metainfo_series manually for a feed it can be done like so: metainfo_series: yes Number of commits found: 93 www/py-flexget: should be working with updated dateutil Submitted by: Chip Marshall (via email) www/py-flexget: mark as broken for now - Does not run with python dateutil 2.2 www/py-flexget: update to 1.2.172 www/py-flexget: update to 1.2.149 www/py-flexget: update to 1.2.137-flexget: update to 1.2.54 www/py-flexget: update to 1.2.27 www/py-flexget: update to 1.2.24 www/py-flexget: update to 1.1.170 - Update to 1.1.170 - Remove duplicated setuptools build dependency www/py-flexget: update to 1.1.167 - Update to 1.1.167 - Use auto plist www/py-flexget: update to 1.1.165 - Update to 1.1.165 - Do not use auto plist due to a bug with egg info www/py-flexget: update to 1.1.164 www/py-flexget: update to 1.1.156 www/py-flexget: update to 1.1.155 - Update to 1.1.155 - Use CHEESESHOP as master site www/py-flexget: update to 1.1.148 www/py-flexget: update to 1.1.146 - Update to 1.1.146 www/py-flexget: update to 1.1.144 - Update to 1.1.144 www/py-flexget: update to 1.1.140 - Update to 1.1.140 www/py-flexget: update to 1.1.138 - Update to 1.1.138 www/py-flexget: update to 1.1.136 - Update to 1.1.136 www/py-flexget: update to 1.1.131 - Update to 1.1.131 www/py-flexget: update to 1.1.124 - Update to 1.1.124 - Use PYDISTUTILS_AUTOPLIST www/py-flexget: update to 1.1.123 - Update to 1.1.123 www/py-flexget: update to 1.1.122 - Update to 1.1.122 - Reorder sites - Enable staging Add NO_STAGE all over the place in preparation for the staging support (cat: www) www/py-flexget: update to 1.1.121 - Update to 1.1.121 www/py-flexget: update to 1.1.119 - Update to 1.1.119 www/py-flexget: update to 1.1.115 - Update to 1.1.115 www/py-flexget: update to 1.1.105 - Update to 1.1.105 www/py-flexget: update to 1.1.102 - Update to 1.1.102 www/py-flexget: update to 1.1.98 - Update to 1.1.98 www/py-flexget: update to 1.1.95 - Update to 1.1.95 www/py-flexget: update to 1.1.93 - Update to 1.1.93 www/py-flexget: update to 1.1.91 - Update to 1.1.91 www/py-flexget: update to 1.1.85 - Update to 1.1.85 www/py-flexget: update to 1.1.82 - Update to 1.1.82 www/py-flexget: update to 1.1.81 - Update to 1.1.81 www/py-flexget: update to 1.1.80 - Update to 1.1.80 www/py-flexget: update to 1.1.76 - Update to 1.1.76 www/py-flexget: fix regression-test target - Fix regression-test target, TEST_DEPENDS needs RUN_DEPENDS as well www/py-flexget: update to 1.1.73 - Update to 1.1.73 www/py-flexget: Update to 1.1.45 - Update to 1.1.45 - Update pkg-plist - Remove oddball MASTER_SITES - nose is not a BUILD_DEPENDS, remove it - Refactor, update and re-order RUN_DEPENDS - Add TEST_DEPENDS and regression-test: target - Whitespace alignment Upgrade guide: Approved by: wg (maintainer) www/py-flexget: take maintainership - Take maintainership PR: ports/179366 Approved by: maintainer (timeout) - Update to 1.1.28 - Chase py-requests update [1] PR: ports/178227 [1] Submitted by: Jan Beich <jbeich@tormail.org> [1] Approved by: culot / jpaetzel (mentors, implicit), maintainer (timeout) - update to 1.0.3273 - bump PORTEPOCH PR: 175109 Submitted by: Jan Beich <jbeich@tormail.org> Approved by: maintainer timeout (1 month+) - remove bogus setuptools dependency. This port actually using bundled ``paver'' build tool, that's initiated by distutils's setup.py. - bump PORTREVISION because of dependency change - trim Makefile header - try to avoid PYTHON_SITELIBDIR in *_DEPENDS - limit python version to 2.x only per official docs - use ``yes'' in USE_PYDISTUTILS - whitespace fix in pkg-descr PR: 174320 Submitted by: rm (myself) Approved by: maintainer timeout (2 weeks) - 1.0r2315 - Unbreak: fix plist and add missed dependency on py-nose Submitted by: Ruslan Mahmatkhanov <cvs-src@yandex.ru> - Mark BROKEN: incomplete plist Reported by: pointyhat o Update to 1.0r2288 o Remove RUN_DEPENDS on devel/py-setuptools since it is unnecessary o Update to 1.0r2283 o Fix BROKEN: {BUILD,RUN}_DEPENDS on devel/py-setuptools [1] Submitted by: pointyhat [1] - Mark BROKEN: does not build from paver.setuputils import setup, find_package_data, find_packages ImportError: cannot import name find_packages Reported by: pointyhat Update to 1.0r2278 - 1.0.r1819 Update to 1.0.r1805 Update to 1.0.r1794 Update to 1.0.r1683 Update to 1.0.r1662 Update to 1.0.r1660 Update to 1.0.r1643 Update to 1.0.r1624 Update to 1.0.r1589 Update to 1.0.r1581 Update to 1.0.r1565 Update to 1.0.r1548 Update to 1.0.r1519 Update to 1.0.r1518 Update to 1.0.r1514 Update to 1.0.r1513 Update to 1.0.r1509 Update to 1.0.r1508 Update to 1.0.r1503 Update to 1.0.r1499 Update to 1.0.r1495 Update to 1.0.r1465 Update to 1.0.r1445 Update to 1.0.r1359 Update to 1.0.r1354 Update to 1.0.r1333 Update to 1.0.r1305 Feature safe: yes Add LICENSE* information Update to 1.0.r1283 Update to 1.0.r1278 Update to 1.0.r1262 Update to 1.0.r1226 Update to 1.0.r1197 Correct date on copyright notice: 2008 -> 2010 New port py-flexget version 1.0.r1153: Program to automate downloading from different sources Feature safe: yes Servers and bandwidth provided byNew York Internet, SuperNews, and RootBSD 9 vulnerabilities affecting 28 ports have been reported in the past 14 days * - modified, not new All vulnerabilities
http://www.freshports.org/www/py-flexget/
CC-MAIN-2015-18
en
refinedweb
12 September 2012 23:59 [Source: ICIS news] LONDON (ICIS)--European methyl di-p-phenylene isocyanate (MDI) contract prices have largely rolled over into September because of fairly balanced market conditions and a large proportion of quarterly contract business, market players said on Wednesday. MDI price ranges in September were assessed steady at €2,030-2,150/tonne ($2,603-2,756/tonne) FD (free delivered) NWE (northwest Europe) for crude MDI and €2,130-2,200/tonne FD NWE for pure MDI, according to ICIS. Numbers below the range were also heard on the buy-side, but this was not widely confirmed, while selective price increases of €10-50/tonne were heard for MDI in September in a few cases, althouh this was not seen to reflect the general market trend. MDI manufacturers stressed the underlying need to increase prices to recoup the higher benzene costs in recent months, but the main focus of their upward price initiative was on October contracts rather than September - with the re-negotiation of quarterly, as well as monthly business. Buyers, however, were strongly resisting any upward price movement in September, stating that demand was insufficient to support an increase. Looking to October and the fourth quarter, buyers said they had not yet started price discussions, but added they were equally as resistant to any possible price increases in view of demand, which they consider to be fragile for economic reasons, and because of the onset of low seasonal demand in the downstream construction sector. MDI consumption varies, depending on source. Sellers maintain that demand is better in September than it was in August, as players restock after the summer holidays. Buyers, by contrast, said consumption is not as good as expected for this time of year, which they attribute to soft macroeconomic conditions. The MDI market is sufficiently covered, although a few suppliers have low stocks because of maintenance or upstream supply limitations. Despite this, buyers said they had not experienced any supply problems. In manufacturing news, Dow Chemical’s MDI plant in ?xml:namespace> BASF’s MDI facility in Antwerp, Belguim, since last week has been running at reduced operating rates for maintenance reasons and this is expected to last for around six weeks. The MDI output at the site has also been further restricted because of some upstream supply constraints. This MDI plant has the capacity to produce up to 560,000 tonnes/year, according to ICIS Plants and Projects database. Market sources have said the maintenance at the Antwerp MDI facility may also include a nameplate capacity increase of around 100,000 tonnes/year, although the new capacity is not likely to be utilised until the middle of next year, depending on demand and profitability. The information on capacity expansion, however, was not officially confirmed at
http://www.icis.com/Articles/2012/09/12/9594997/europe-sept-mdi-contracts-largely-roll-over-in-balanced-market.html
CC-MAIN-2015-18
en
refinedweb
deCast - Dynamic Xml with C# 4.0 - Posted: Jan 26, 2009 at 3:32 AM - 16,01 nice but broken. There's too much magic going on. The XML GI syntax (and still more the ns'd GI syntax) does not match up with C# method name syntax. In particular C# method names are far more restricted than XML (namespace) names, they cannot express them. Now adding magic names such as Nodes, Parent, etc. will only compound the problem. What happens if a document actually happens to have an element called Parent or Nodes? So based on a quick review, this approach does not appear sound, as much as I love syntactic sugar. Now the idea to abstract over attribute and element names is quite sweet. I like the idea and this is also the approach that RDF/XML takes. It just needs to be an explicit abstraction taken by the client not built into the API as a default, thus projecting that assumption onto all existing XML. As a way to illustrate how dynamic is useful this is not so bad - except it, in my oppinion, teaches an unsound practice for dealing with XML; at least it is not general-purpose. One has to know a priori that such and such name collisions (Parent, Nodes?) will not occur (there goes extensibility) and that cannot express such and such names (XML name syntax, including names in popular XML dialects such as XSLT, where you can have: xslt:value-of). Any chance you could post the sample implementation of this code please? Thanks - this look good. The screencast goes over your fun little XML DSL but doesn't ACTUALLY talk about how to take advantage of any of C# 4.0's dynamic objects. Very neat, but now I have to search online for documentation on DynamicObject and methods like GetMember, assuming those are part of the CLR or DLR and not something you wrote. Could you please post the code from the episode so people can take a look at the actual code to figure out how to take advantage of these new features? DynamicObject reference: A blog post that shows example implementations of TryInvokeMember, TryGetMember, TrySetMember, and more: Remove this comment Remove this threadclose
http://channel9.msdn.com/Blogs/RobBagby/deCast-Dynamic-Xml-with-C-40?format=flash
CC-MAIN-2015-18
en
refinedweb
Making Smart robot - Login or register to post comments - by soni991 - Collected by one user Hi all, My plans are as follow: I want to make my RC car into fully automated version. Top view of car: Front view of car: Back view of car: Bottom view of car: The function involves (I give it as Version 1.0) 1) Steer itself throughout the house without bumping into any object. 2) when power goes down then take itself to the powerport for charging. also it should check the battery power charge if it is sufficient and not overcharge the battery. 3) Goes on sleepmode after every half an hr of operation and then powerbackon after 1 hr ( that is changeable) . these 3 basic function i need follow. I have RC car with 2WD. The steering wheels in front are of angular movement(it moves only to certain angle) : What gadgets i have available : Picture: List: 1) MCU = Mega 2560.(not in picture) 2) HC-SR04 1pc 3) US-100 1pc 4) SD memmory card using SPI protocol 2Pcs 5)TCRT5000 2PC 6) Microphone 1pc 7) Light Sensor 2pc 8) Audio amplifier 1pc 9) Accelrometer 1pc 10) Bluetooth 1pcs 11) USD to TTL 2pc 12) NRF 24L01 3pc 13) 10Pin to 6pin AVRisp 2Pcs 14) logo sensor 1pc 15) CP 2102 USB to TTL 1pc (not in picture) 16) lot of jumper wires and solderless board. not in picture 17) HC-SR501. More parts are coming so i would update those list. So I am good to go with parts, how to start building smart robot with clean code. Please can anyone tell me how do i configure the steering wheel of this car on arduino? i have put all the pics here: any one suggest any new parts please let me know so i can purchase it and use on this project. 1) First thing i need is suggestion about what parts i should include for the basic function in Version 1. 2) Second the Programing Code written from scratch with everyone contribution. Hope i can have feedback from taleted people online. overlap Conditioning If i am using the feed from ultrasonic "Lultra" to detect the distance of an object and if i write the code as below. Would it over lap the conditioning ? if (Lultra <=40){fast speed} if (Lultra <=20){stop} would the <=40 line would overlap the speed in <=20 when the sensor read the distance 10 1) how do i setup the if command that the condition remain in between range: here the range is 40 to 20 & another is 20 to 0 fast and stop speed respectively. As you've written it, both As you've written it, both the {fast speed} and {stop} sections will be executed when Lultra is <= 20 (Meaning that it, depending on implementation, will likely not move, but it's best to only execute one). The simplest change would be to first check if Lultra is <= 20, and if not, check if it is <= 40. Thus: if (Lultra <= 20 {stop} else { if (Lultra <= 40) {fast speed} } Although it's not in the question you asked, I notice this means that with an Lultra value > 40, neither the stop nor fast speed sections will be executed. Is this intentional? Thanks for Help! Hi Janvar, Thanks for the code, i have another related question. if i am reading from two ultrasonic which is next to each other in certain angle which act like an imaginary cone shape path and hence any object come in between the path would be avoided. this is an example: i have two Lultrasonic named Lultra & Rultra, D2:P2 & D1:P1 are the command for the motor conected to wheels. if (Lultra >= 41 and Rultra >= 41) { digitalWrite(D2 , HIGH); analogWrite(P2, 255); digitalWrite(D1 , HIGH); analogWrite(P1, 255); } My question is would this command only follwed if the two condition matched ( like the car with sensor move to a flat wall) at given moment, would it also be follwed if one is >=40 and other is not. Yes, both operands must Yes, both operands must evaluate to true for the conditional section to be executed. As The Benjaneer says, you'll want '&&' instead of 'and'. If you wanted it to execute if either of the operands are true (i.e. if Lultra or Rultra or both at once are >= 41), the symbol for logical OR is '||'. I think you need to do I think you need to do this: if (Lultra > 40 && Rultra > 40) { digitalWrite(D2 , HIGH); analogWrite(P2, 255); digitalWrite(D1 , HIGH); analogWrite(P1, 255);} Then if Both values are greater than 40 (or you could do >=41 but by habit i did > 40) it will run the digtialWrite code. If one is above 40 but the other isnt then it wont run it. ( '&&' means and, whereas '&' means bitwise adding, also i don't think 'and' is an operator in arduino but i could be wrong...) Refernce Page for you: A Rolling average Function I was just following some comment on Shout box and thought this might be useful: void RollDistance() // The Rolling Averge Calculator { Total = Total - IRAverage[ThisRead]; // Minus the oldest read IRAverage[ThisRead] = analogRead(IRPin); // Replaces the oldest Read Total = Total + IRAverage[ThisRead]; // Adds the New Read to the total ThisRead ++; // Increment the Reading number if (ThisRead >= 5) // Loops the Reading Number { ThisRead = 0; } float volts = (Total / 5)*0.0048828125; // Converts it back into a voltage, volts (5) / Analog steps (1024) Distance = 65 * pow(volts, -1.1); // Volts to the power of -1.1, times 65 gives a rough distance in cm } I use it with an Sharp IR, but it should be similar, this way it just add's the lastest reading, saves timing looping 5 times before deciding what to do. Also, If you use this, you need to Prime it: Call this function 5 times(or how ever many readings your averaging) from the setup loop to make the average have data (cheers birdmun for reminding me to add this) average... Take a given number of readings and add them to a total as you go. When you are done, divide by the total number of readings. Changes Well i had changes the structure of the RC car and practically using the minature one for beigining . I want to take the reading of my ultrasonic. i want to take the average reading of 10-15 counts at once , how do i do it in my below code. at moment reading for my Rultra i am not using, also i didnt setup my another motor D1, #include <Ultrasonic.h> #define TRIGGER_PIN 24 #define ECHO_PIN 25 #define TRIGGER 22 #define ECHO 23 Ultrasonic leftultrasonic(TRIGGER_PIN, ECHO_PIN); Ultrasonic rightultrasonic(TRIGGER, ECHO); int D2 = 42; int P2 = 44; int D1 = 38; int P1 = 40; void setup() { pinMode(D2,OUTPUT); pinMode(P2,OUTPUT); pinMode(D1,OUTPUT); pinMode(P1,OUTPUT); Serial.begin(9600); } void loop() { // left ultrasonic float Lultra; long Lmicrosec = leftultrasonic.timing(); Lultra = leftultrasonic.convert(Lmicrosec, Ultrasonic::CM); Serial.print(" LCM: "); Serial.print(Lultra); // right ultrasonic float Rultra; long Rmicrosec = rightultrasonic.timing(); Rultra = rightultrasonic.convert(Rmicrosec, Ultrasonic::CM); Serial.print(" RCM: "); Serial.println(Rultra); delay(100); if (Lultra <=40 or Lultra >=20) { int val = ((Lultra*3.875)+100); digitalWrite(D2 , HIGH);analogWrite(P2, val); } } trouble writing code for the steering hi all i am having trouble writing code for the steering. the code is below: #include <Ultrasonic.h> #define TRIGGER_PIN 5 #define ECHO_PIN 12 #define TRIGGER 10 #define ECHO 11 int D1 = 6; int P1 = 7; int contactA; // contact A is the sensor which turns signal to '0' when the wheel is all the way to left. int contactB; // contact B is the sensor which turns signal to '0' when the wheel is all the way to left. int D2 = 8; int P2 = 9; Ultrasonic front(TRIGGER_PIN, ECHO_PIN); Ultrasonic rear(TRIGGER, ECHO); void setup() { Serial.begin(9600); pinMode(2,INPUT); pinMode(3,INPUT); digitalWrite(2,HIGH); //<-- this turns the internal pull-up on digitalWrite(3,HIGH); //<-- this turns the internal pull-up on pinMode(13,OUTPUT); digitalWrite(13,LOW); pinMode(D2 , OUTPUT);//this is for the front motor pinMode(P2 , OUTPUT);//this is for the front motor PWM pinMode(D1 , OUTPUT);//this is for the rear motor pinMode(P1 , OUTPUT);//this is for the rear motor PWM } void loop() { // steering angle detection contactA=digitalRead(2); //yellow contactB=digitalRead(3); //blue Serial.print(contactA); Serial.print(" A "); Serial.print(contactB); Serial.print(" B "); {if (contactA == 1 and contactB ==0) { digitalWrite(D2,LOW); analogWrite(P2,0); } if (contactA == 0 and contactB ==1) { digitalWrite(D2,HIGH); analogWrite(P2,0); } } //front distance calculation float cmMsec, inMsec; long microsec = front.timing(); { cmMsec = front.convert(microsec, Ultrasonic::CM); } Serial.print(", CM: "); Serial.print(cmMsec); // rear distance calculation float RcmMsec, RinMsec; long Rmicrosec = rear.timing(); { RcmMsec = rear.convert(Rmicrosec, Ultrasonic::CM); } Serial.print(", RCM: "); Serial.print(RcmMsec); /* this is for the movement upon detection logic*/ if (cmMsec >= 45) // if ultra sonic see distance greater then 45cm go forward { int value(70); digitalWrite(D1,HIGH); analogWrite(P1, value); } if (cmMsec < 25) // if the Front Ultrasonic detects distance less then 25cm then Brakes. { if (RcmMsec > 25) // if the back distance is more then 25cm then go back { int value(70); { digitalWrite(D1,LOW); analogWrite(P1, value); } } if (RcmMsec < 10) //if the back distance is less then 10cm then stop { analogWrite(P1 , 0); if ( //here i want to write command for the steering wheel so it can steer to left with steering detection and then go ahead) } Is this the bit of code you Is this the bit of code you are having trouble with? Can you describe the problem? Just based on a quick look, this code will detect if the steering swtiches have detected that the steering wheel is all the way left or all the way right. I assume D2 sets the direction for the motor to go, and P2 sets the speed. You have analogWrite(P2,0), which will stop the motor. Is that what you want?
http://letsmakerobots.com/node/33172
CC-MAIN-2015-18
en
refinedweb
Initially only a few TBs of space is needed for a storage solution (<10TBs). My initial plan is to have a NAS with other servers mounting the data stored on the NAS device, pushing and pulling data from it. Because this will be a small deployment the cost of a SAN can't be justified, and growth expectancy is unknown as of yet. So it will be NAS to being with, but it needs to be expandable. I see a few problems with this design though; Firstly; SMB/CIFS is not a good with multiple servers having the same data store mounted, NFS seems like a better option here. Although, as far as I know it's my only option. It would be a native Linux deployment, so are any better protocols available to me other than NFS, or is that my only choice here? Secondly; As the NAS device gets short of either I/O capacity or space, which ever comes first, another will have to be added (and this process will repeat). How can I drop another NAS onto the network and extending the existing storage share (as far as the view from the other servers is concerned) to include this addition storage space? What is the "NAS equivalent" of adding another storage host in a SAN, and expanding the file system across it? (As far as I know this isn't possible, but I'm asking in case I'm wrong!). Presumably what I have described in the scenario above, once the first NAS is at capacity, is the basic requirement for a SAN, is this correct? Would a more scalable approach be to add another NAS, and have the servers mount both storage shares, and have the application support the use of multiple storage spaces, rather then trying to implement an ever growing storage space? SMB/CIFS is perfectly fine with multiple servers having the same data store mounted. It's a plenty concurrent file system that, despite being slower and higher-latency than NFS, will deal just fine with a good number of concurrent connections. The bigger concern may be user access, since all traffic to your CIFS mount will go across as the user that authenticated the mount, in contrast to NFS. In general, I do consider NFS the more robust solution for server-server file sharing. If you're planning for expandability in a single namespace, it's much cheaper to scale up than scale out. Most vendors will provide some kind of SAS-based disk array solution. These can typically be daisy-chained, and you can run a single coherent filesystem across them using LVM or a similar volume manager (keeping in mind that if one disk shelf fails it will trash your entire volume, so you probably want multiple paths to your storage). This is probably the most cost-effective solution for you, but there's a ton of options. Your choice of filesystem does matter, so keep that in mind. I'm a fan of ZFS on Solaris 11, but your choices are more diverse if you're dedicated to using a free software solution. If you're dead-set on scaling out, there's a number of parallel filesystems like Gluster and Ceph out there, but they're at varying degrees of maturity and compatibility and I wouldn't recommend them for general-purpose file sharing at this point. If you use iSCSI you can turn your NAS into a SAN. After that use it with cLVM and ocfs to concurrently mount it on your systems. cLVM will give you the flexibility to expand at will. By posting your answer, you agree to the privacy policy and terms of service. asked 3 years ago viewed 309 times active
http://serverfault.com/questions/357401/how-to-get-san-like-flexibility-from-a-nas
CC-MAIN-2015-18
en
refinedweb
24 August 2010 18:12 [Source: ICIS news] LONDON (ICIS)--Shell Chemicals has successfully completed a turnaround and debottlenecking of an ethylene cracker at its Nanhai petrochemicals joint venture in ?xml:namespace> Ethylene capacity has now increased to 950,000 tonnes/year from 800,000 tonnes/year, with total petrochemical production at the plant rising to 2.7m tonnes from 2.3m tonnes, Shell Chemicals said. “The decision to increase capacity at Nanhai supports the Shell strategy to grow selectively and to continue to remain a leader in the expanding Asian petrochemicals market,” said executive vice president Ben van Beurden. Nanhai is a joint venture with China National Offshore Oil Company (CNOOC). It produces ethylene, propylene, butadiene, polyethylene, polypropylene, monoethylene glycol, styrene monomer, propylene oxide, polyols and propylene glycol. The turnaround took place in March and April this year, Shell Chemicals said in a statement. For more on Shell visit ICIS company
http://www.icis.com/Articles/2010/08/24/9387926/shell-completes-turnaround-raises-capacity-at-cnooc-venture.html
CC-MAIN-2015-18
en
refinedweb
Like this, just why? Bad enough every word in all my open documents are suggested before the obvious tag/attribute suggestion, but this one is extreme. jps wrote:This is caused by a plugin, it's not the default behavior import sublime, sublime_plugin class InhibitWordsListener(sublime_plugin.EventListener): def on_query_completions(self, view, prefix, locations): return ([], sublime.INHIBIT_WORD_COMPLETIONS | sublime.INHIBIT_EXPLICIT_COMPLETIONS) Return to General Discussion Users browsing this forum: mlf and 21 guests
http://www.sublimetext.com/forum/viewtopic.php?p=32241
CC-MAIN-2015-18
en
refinedweb
User talk:Duncan.britton -: I did not know about this namespaces tool. Should I remove all of the templates from that category that I created? Yeah,. -- Pytony (talk)
http://www.funtoo.org/index.php?title=Linux_Containers&oldid=5815
CC-MAIN-2015-18
en
refinedweb
23 March 2011 17:28 [Source: ICIS news] HOUSTON (ICIS)--Brazil will receive around 200m litres (53m gal) of anhydrous ethanol, mostly from the US, over the next few weeks, market sources said on Wednesday, adding that the cargoes will help the country avoid a potential supply shortage. Market sources said half of the shipments were headed to ?xml:namespace> The other half would go the northeast region, which produces the remaining 10%. “US imports should start arriving in 15 days…our inventories are critically low,” a centre-south producer said. Brazilian ethanol prices soared to record highs in March, lifted by tight supply and unexpected strong demand, particularly for hydrous ethanol. Hydrous ethanol prices were assessed on Wednesday at Brazilian reais (R) 1,950/m³ ($4.45/gal), up by 35% in the last four weeks. The product now costs more than twice the R940/m³ assessed in the same week of 2010. Anhydrous ethanol was offered on Wednesday at an unprecedented R2,000/m³, up by 42% from a range of R1,400-1,410/m³ in the week ended 23 February. “Never seen anything like it before,” a seller in Another source said the spike in recent weeks had surprised even industry veterans, who had expected hydrous ethanol demand to soften because the product was less competitive than gasoline in The arrival of US imports in The last time The The shipments of But the Brazilian ethanol supply is expected to remain tight until late April, when supply from the next centre-south sugarcane harvest will begin to enter the market. The area's sugarcane harvest runs from April to November-December. Centre-south mills in 2010-2011 crushed 556.2m tonnes of sugarcane, an increase of 2.63% compared with the 2009-2010 harvest, according to sugarcane industry association Unica. Ethanol production rose by 7% to 25.3bn litres in the same period, the association said. Unica expects sugarcane production in 2011-2012 to be flat compared with the previous crop. ($1 = R1
http://www.icis.com/Articles/2011/03/23/9446559/us-ethanol-flows-into-brazil-amid-supply-shortage-fears.html
CC-MAIN-2015-18
en
refinedweb
This example shows how to send a string from an Arduino® Uno board to the MATLAB® command line and create all the necessary files in your custom library using MATLAB and C++. Files for this example are located in your Arduino support package installation folder in \toolbox\matlab\hardware\supportpackages\arduinoio\arduinoioexamples. Create a folder package to contain all the files for your custom library, and add it to the MATLAB path. For this example: Add a folder named +arduinoioaddons in your working folder. In +arduinoioaddons, add a +ExampleAddon subfolder to contain your MATLAB class file. For example C:\Work. In the +ExampleAddon subfolder, add a src folder to contain your C++ header files. For this example, create a C++ header file named HelloWorld.h, and save it in the +arduinoioaddons/+ExampleAddon/srcfolder. This file wraps methods to expose to the Arduino library. Include header files, including LibraryBase.h and any other third-party header file that the add-on library depends on. #include "LibraryBase.h" Create an add-on class that inherits from the LibraryBase class, which defines all the necessary interfaces. In the constructor, define the library name, and register the library to the server. class HelloWorld : public LibraryBase { public: HelloWorld(MWArduinoClass& a) { libName = "ExampleAddon/HelloWorld"; a.registerLibrary(this); } The custom class and library names must have this format: shield(vendor)/device(library) Determine the command calls to issue from MATLAB. Override the command handler, and create a switch case for each command that the add-on executes on the Arduino device: public: void commandHandler(byte cmdID, byte* inputs, unsigned int payload_size) { switch (cmdID){ case 0x01:{ byte val [13] = "Hello World"; sendResponseMsg(cmdID, val, 13); break; } default:{ // Do nothing } } } }; The command IDs must match up with the operations that you add to the MATLAB add-on library. For more information, see Command Handler. (Optional) Use debugPrint to pass additional messages from the Arduino device to the MATLAB command line. The MATLAB add-on wrapper class that defines your library must inherit from matlabshared.addon.LibraryBase. The matlabshared.addon.LibraryBase class defines several constant properties that you must override in your MATLAB class. The class also contains internal utility functions that enable you to send and retrieve data from the server running on the Arduino board. Create a MATLAB class, and define the command ID for each command that is sent to the server on the board. classdef HelloWorld < matlabshared.addon.LibraryBase properties(Access = private, Constant = true) READ_COMMAND = hex2dec('01') end ... end Override constant properties in the class to specify the location of source header files. classdef HelloWorld < matlabshared.addon.LibraryBase ... properties(Access = protected, Constant = true) LibraryName = 'ExampleAddon/HelloWorld' DependentLibraries = {} LibraryHeaderFiles = {} CppHeaderFile = fullfile(arduinoio.FilePath(mfilename('fullpath')), 'src', 'HelloWorld.h') CppClassName = 'HelloWorld' end ... end Define the class constructor, set the methods to call the class constructor, and set the parent property. classdef HelloWorld < matlabshared.addon.LibraryBase ... methods function obj = HelloWorld(parentObj) obj.Parent = parentObj; end ... end end Always assign the first input argument to obj.Parent. The support package auto-detects the class only if you have redefined all the properties. If you do not need a value, leave the field empty. Define the method to read the data back. classdef HelloWorld < matlabshared.addon.LibraryBase ... methods ... function out = read(obj) cmdID = obj.READ_COMMAND; inputs = []; output = sendCommand(obj, obj.LibraryName, cmdID, inputs); out = char(output'); end end end For help on using MATLAB programming language, see . To register your add-on library, add the working folder that contains +arduinoioaddons to the MATLAB path: addpath C:\Work Once you add the folder to the path, you should see the files listed in the MATLAB Current Folder browser. Make sure the ExampleAddon/HelloWorld library is available. listArduinoLibraries listArduinoLibraries ans = 'Adafruit/MotorShieldV2' 'I2C' 'SPI' 'Servo' 'ExampleAddon/HelloWorld' Tip If you do not see your add-on library in the list, see Custom Arduino Library Issues for more information. This example shows how to return data from the Arduino library commandHandler to the MATLAB command line. Create an arduino object and include the new library. Set ForceBuildOn to true to reprogram the board. arduinoObj = arduino('COM3', 'Uno', 'Libraries', 'ExampleAddon/HelloWorld', 'ForceBuildOn', true); Arduino devices reuse cached code if the specified library matches a library name in the source code. Reprogramming forces the device to newly download the header file, ensuring current and accurate information. Create an add-on object using the ExampleAddon library. dev = addon(arduinoObj,'ExampleAddon/HelloWorld'); Execute the command on the server, and read the data back into MATLAB. read (dev) ans = Hello World
https://la.mathworks.com/help/supportpkg/arduinoio/ug/create-code-to-write-a-string.html
CC-MAIN-2021-43
en
refinedweb
Issue #1140 API call to URLs that do not exist cause 500s on RHEL 6 Description When making a call to a URL that does not exist, Pulp should respond with a 404. However, with RHEL 6 (and Django 1.4), the middleware is attempting to render a nonexistent 404 template and therefore returns a 500. curl -X GET -k -u admin:admin "" <.15 (Red Hat) Server at localhost Port 443</address> </body></html> Traceback: Jul 15 10:26:44 mgmt4 pulp: django.request:ERROR: (14267-91680) Internal Server Error: /pulp/api/v2/doesnotexist/ Jul 15 10:26:44 mgmt4 pulp: django.request:ERROR: (14267-91680) Traceback (most recent call last): Jul 15 10:26:44 mgmt4 pulp: django.request:ERROR: (14267-91680) File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 148, in get_response Jul 15 10:26:44 mgmt4 pulp: django.request:ERROR: (14267-91680) response = callback(request, **param_dict) Jul 15 10:26:44 mgmt4 pulp: django.request:ERROR: (14267-91680) File "/usr/lib/python2.6/site-packages/django/utils/decorators.py", line 91, in _wrapped_view Jul 15 10:26:44 mgmt4 pulp: django.request:ERROR: (14267-91680) response = view_func(request, *args, **kwargs) Jul 15 10:26:44 mgmt4 pulp: django.request:ERROR: (14267-91680) File "/usr/lib/python2.6/site-packages/django/views/defaults.py", line 20, in page_not_found Jul 15 10:26:44 mgmt4 pulp: django.request:ERROR: (14267-91680) t = loader.get_template(template_name) # You need to create a 404.html template. Jul 15 10:26:44 mgmt4 pulp: django.request:ERROR: (14267-91680) File "/usr/lib/python2.6/site-packages/django/template/loader.py", line 145, in get_template Jul 15 10:26:44 mgmt4 pulp: django.request:ERROR: (14267-91680) template, origin = find_template(template_name) Jul 15 10:26:44 mgmt4 pulp: django.request:ERROR: (14267-91680) File "/usr/lib/python2.6/site-packages/django/template/loader.py", line 138, in find_template Jul 15 10:26:44 mgmt4 pulp: django.request:ERROR: (14267-91680) raise TemplateDoesNotExist(name) Jul 15 10:26:44 mgmt4 pulp: django.request:ERROR: (14267-91680) TemplateDoesNotExist: 404.html This does not affect 404s that are raised as a MissingResource. Related issues Associated revisions Revision 996e4366 View on GitHub Adds a 404 handler for Django which responds with JSON closes #1140 History #1 Updated by bmbouter over 6 years ago We originally did not experience this exception because we were using DEBUG=True but when we set that to DEBUG=False by fixing issue #910 we now have to provide a 404.html file and probably a 500.html in the template root. Setting the template root is our case is a bit tricky. There are two default template loaders in Django 1.4+, 'django.template.loaders.filesystem.Loader' and 'django.template.loaders.app_directories.Loader'. The loader 'django.template.loaders.app_directories.Loader' looks for a templates directory in each INSTALLED_APP listed in settings.py but we've avoided making one of those because we don't really need one and I don't feel this is a good enough reason to make one so that loader won't benefit us. The other loader 'django.template.loaders.filesystem.Loader' looks for absolute paths as specified by the TEMPLATE_DIRS setting and the absolute path won't be the same on all platforms. One fix would be to create a "templates" directory in the webservices directory and have the full path used by TEMPLATE_DIRS be created dynamically as follows: import os DIRNAME = os.path.dirname(__file__) TEMPLATE_DIRS = ( os.path.join(DIRNAME, "templates"), ) Then we could put an empty 404.html and 500.html in that directory and I think it would resolve the issue. Another option is to catch these errors and add the correct behavior in the pulp exception handling middleware directly. I like that solution slightly better because we could set the content_type of the return to json which is more correct since it's from the API. We need to be careful here though that the following are true: - 404's don't turn into 500's by the time the middleware is returned - 500 errors are truly the stacktrace of the actual error, not just Django's inability to find the 500.html file - we'll probably need to give some thought to how the json structure is of a 500 error and a 404 This isn't a clear-cut implementation outline but two possible paths. This probably needs to be done with some care. #2 Updated by bmbouter over 6 years ago - Status changed from NEW to ASSIGNED - Assignee set to bmbouter #3 Updated by jortel@redhat.com about 6 years ago - Triaged changed from No to Yes #4 Updated by bmbouter about 6 years ago I was looking into a solution that adjusted the middleware when I learned that a 404 error does not even flow through the middleware so we can't fix it there. If the url does not match anything in urls.py it is handled by the default page_not_found handler directly. The 500 exception this bug shows is in that handler attempting to form the 404 response. The solution I'm putting together involves us As such we can write our own 404 handler and not even make a template at all. This also will correctly set the response status code, content type, and contain an empty JSON body. :-) We identify that Django should use our handler by setting handler404 in urls.py. I tested it and it works well. I also investigated if 500 errors would be an issue given that we don't have a 500.html, and I don't think they are because our middleware correctly serializes them into a reasonable json representation like the following: { "exception": [ "IOError: asdf\n" ], "traceback": [ " File \"/usr/lib/python2.7/site-packages/django/core/handlers/base.py\", line 109, in get_response\n response = callback(request, *callback_args, **callback_kwargs)\n", " File \"/usr/lib/python2.7/site-packages/django/views/generic/base.py\", line 48, in view\n return self.dispatch(request, *args, **kwargs)\n", " File \"/usr/lib/python2.7/site-packages/django/views/generic/base.py\", line 69, in dispatch\n return handler(request, *args, **kwargs)\n", " File \"/home/bmbouter/Documents/pulp/server/pulp/server/webservices/views/decorators.py\", line 237, in _auth_decorator\n return _verify_auth(self, operation, super_user_only, method, *args, **kwargs)\n", " File \"/home/bmbouter/Documents/pulp/server/pulp/server/webservices/views/decorators.py\", line 191, in _verify_auth\n value = method(self, *args, **kwargs)\n", " File \"/home/bmbouter/Documents/pulp/server/pulp/server/webservices/views/util.py\", line 110, in wrapper\n return func(*args, **kwargs)\n", " File \"/home/bmbouter/Documents/pulp/server/pulp/server/webservices/views/repositories.py\", line 171, in post\n raise IOError('asdf')\n" ], "_href": "/pulp/api/v2/repositories/", "error_message": "asdf", "http_request_method": "POST", "http_status": 500 } I need to go write some tests for this now and put in my PR. #5 Updated by bmbouter about 6 years ago - Status changed from ASSIGNED to POST PR available at: #6 Updated by bmbouter about 6 years ago - Status changed from POST to MODIFIED - % Done changed from 0 to 100 Applied in changeset pulp|996e4366742db143b366c6cccab7231ce0da4904. #7 Updated by dkliban@redhat.com about 6 years ago - Status changed from MODIFIED to 5 #8 Updated by amacdona@redhat.com almost 6 years ago - Status changed from 5 to CLOSED - CURRENTRELEASE #9 Updated by bmbouter almost 6 years ago - Related to Issue #1485: TemplateDoesNotExist exception for '500.html' raised on v3 API endpoint added #10 Updated by bmbouter almost 6 years ago - Related to Issue #1486: TemplateDoesNotExist exception for '404.html' raised on v3 API endpoint added #11 Updated by bmbouter over 2 years ago - Tags Pulp 2 added Please register to edit this issue Also available in: Atom PDF Adds a 404 handler for Django which responds with JSON closes #1140
https://pulp.plan.io/issues/1140
CC-MAIN-2021-43
en
refinedweb
Week 10 - Input Devices A 10bit-ADC based on AttinyX4 with I2C connection! Contents - Contents - Principle - Design - Programming the board Principle In this week’s assignment I will try to develop a board that is able to read data from a 5V analog sensor and that could send data over I2C to a Raspberry Pi. This component would be mainly used to read an analog pressure sensor that I will use in my final project, as part of the monitoring part of it. Now, the intention within the final project would be for it to be connected to a Raspberry Pi and therefore, there are some considerations about the Pi and it’s voltage levels have to be taken into account: - The Raspberry Pi does not have ADC and can output either 5V or 3.3V power supply, although the I2C should be done with 3.3V reference - The Sensor needs a 5V input, and outputs signal at the same maximum level - The microcontroller can work at both 5V or 3.3V, but if powered at 3.3V, the maximum ADC reading should be the same Before going into the design phase, I will detail below some learnings about how to perform the voltage level changes in these situations. Changing Voltage Levels Whenever we are working with different type of sensors and microcontrollers, it is very common to find that we would need to perform some voltage level adaptations for them not to kill eachother. Since I am not an expert (not even close to it) in electronics, I found this explanatory article in Hackaday that talks about how to make voltage changes properly. For now, I will focus on the Step-Down level-shifter, but more coming for the bi-directional level shifters in future assignments and for my final project. A Step down level shifter, in the shape of a voltage divider is nothing else than two-stages of resistors connected in series that, merely because of the ratio between them, they are able to make a level shift in voltage. Having read the document above, a voltage divider it’s not the optimal solution for fast rise times, but in my case I will not be reading nor powering the sensor in a in a high speed way. The ratio between the voltages can very simply be calculated as follows: Also, another useful solution that I will be using in this assignment will be a 5-3.3V Voltage Regulator such as this one, available in the Fablab. Design Having considered the above, the AttinyX4 should be dealing with either one of the two voltage levels: 3.3V or 5V, and for this time being, the decision (thanks to the recommendations of Guillem and Victor) will be to power the Attiny with 3.3V, having lowered down the power input itself and the sensor signal output. The only thing to mention is that there won’t be any step up in this solution, so the whole board has to be fed by a 5V power supply and so then the sensor. Therefore, the I2C bus that arrives to the board will be weird in its own, since the Vcc will be 5V, but SCL and SDA lines will be done with 3.3V that the Pi can understand. The whole schematic (in Kicad) can be seen below: I included in the design the following: - A 4 male header for the analog sensor connection, with Vcc at 5V, GND and two ADC channels (with a voltage divider) - An ISP for the chip programming - A Grove I2C connector - An Attiny84 (was meant to be Attiny44, but for some reason I took the 84 instead by mistake) - A green LED as an indicator - A 20MHz external resonator - Some jumpers to make the routing easier For the Grove and the 4 male header, I added in Kicad my own footprints. For this, I copied the Fab library and saved it in the new Kicad format: a directory with .pretty extension and the different modules with * .kicad_mod extension: FabOscar.pretty: - 1x04SMD.kicad_mod - I2C_Grove.kicad_mod - … The footprint for the I2C Grove: And the 1x04 SMD Male Header: Also, taking into account the available resistors in the lab, I added the following resistors to the voltage divider, giving me a maximum input voltage for the ADC of 3,19V. This means that if we power the chip at 3.3V we could still be able to read the voltage, but if we want a precise measurement, we should also connect the AREF PIN (the reference for the voltage measurement) to the same output (therefore existing three voltage divider stages): Routing done in Kicad, I exported it as an SVG and created the PNG files from Iknscape: And the PNG files for the traces: And the cuts: The board is milled in a Modela MDX-20 and the traces are generated with fab.modules (the modela is directly hooked up to the computer and the fab.modules works just fine for it): And below a picture of the board soldered with all the components on it: Programming the board After using the multimeter to check for continuity and shorts (explained in previous assignments) I will test the board via software. For this and in general, I will be using again Platformio, but since Platformio is based on a series of boards in a * .json files, there is a bit of confusion and I had to figure out how to properly set the Clocking speeds. Note: For more information about platflormio, visit their website and/or my Electronics Design Week For this board, the wiring with all the stuff goes like this (note that the Raspberry Pi here is for the assignment on Interface): Initial tests and setup Firstly, I programmed the board to do nothing than blinking the led, and check that everything is OK. This code below works just fine, blinking the LED, but not at the desired speed: #include <Arduino.h> #define LED 7 int _delay = 100; void setup() { pinMode(LED, OUTPUT); } void loop() { digitalWrite(LED, LOW); // turn the LED on (HIGH is the voltage level) delay(_delay); digitalWrite(LED, HIGH); delay(_delay); } Three things have to be taking into account: - That the fuse set to read the external resonator is correct (lfuse). I firstly used the MakeFile with this line of code to flash the fuses: avrdude -p t84 -P usb -c usbtiny -U lfuse:w:0x5E:m And then verified that Platformio is not modifying the fuses in the programming phase. For this I used AVR Fuses (thanking again Guillem for the help), I used AVR Fuses to check, which is a very explanatory way to view the fuses in a connected chip: Here, we can see a list of Fuses and they can be verified, read or programmed. Doing the verification before and after the programming with Platformio, doesn’t make the fuses change in this verification. - That the code is compiled to use a clock speed of 20MHz. This is done via avr-g++ (compiler) and avr-gcc via the compiler message: -DF_CPU=20000000L. In Platformio, the default is to use the 8MHz as specified in the AVR chips Platformio documentation. If we want to compile it, the only way for now is to create a custom board (more on how to do it here) and use a * .json file like in the custom boards directory: { "build": { "core": "arduino", "extra_flags": "-DARDUINO_AVR_ATTINYX4 -DF_CPU=20000000L", "f_cpu": "20000000L", "mcu": "attiny84", "variant": "tiny14" }, "frameworks": [ "arduino" ], "name": "Generic ATTiny84", "upload": { "extra_flags": "-e", "maximum_ram_size": 512, "maximum_size": 8192, "protocol": "usbtiny" }, "url": "", "vendor": "Generic ATTiny" } Now, we found that the compiler message f_cpu is not being sent properly, but I sent -DF_CPU=20000000L in the extra_flags part. - That there is no CLK divider. Here, I think things get tricky becase when we flash initially the chip, we send the lfuse:0x5e: avrdude -p t84 -P usb -c usbtiny -U lfuse:w:0x5E:m Which if we check with AVR Fuses, is setting the CLOCK divider to 8 internally (the lfuse is 0x5e below): Now, we can either do any of these two things: either use this fuse instead for the external clock without clock divider: avrdude -p t84 -P usb -c usbtiny -U lfuse:w:0xDE:m Or in the code, set the fuse itself with this: void setup(){ // // set clock divider to /1 // CLKPR = (1 << CLKPCE); CLKPR = (0 << CLKPS3) | (0 << CLKPS2) | (0 << CLKPS1) | (0 << CLKPS0); } (From Neil’s code). Reading the sensor Now, we can go through reading the sensor itself, simply by using the analogRead function in Arduino and setting up the analogReference(EXTERNAL) for the reading. I have the sensor calibration at hand, so I include it in the result, and since I don’t have any way to communicate via Serial to the board, I make the green LED blink differently depending on the pressure readings: #include <Arduino.h> #define LED 7 #define SENSOR 1 int _delay = 100; float Va = 5; int readSensor = 0; float Vs = 0; float pressure = 0; void setup() { pinMode(LED, OUTPUT); analogReference(EXTERNAL); // // set clock divider to /1 // CLKPR = (1 << CLKPCE); CLKPR = (0 << CLKPS3) | (0 << CLKPS2) | (0 << CLKPS1) | (0 << CLKPS0); } void loop() { readSensor = analogRead(SENSOR); Vs = ((float) readSensor + 0.5 ) / 1024.0 * 5.0; pressure = (Vs*687.8/Va - 18.77)*1000/1e5; // in bar _delay = 1000-6000*(pressure-1); digitalWrite(LED, LOW); // turn the LED on (HIGH is the voltage level) delay(_delay); digitalWrite(LED, HIGH); delay(_delay); } Finally, check the networking week to see the result of the sensor reading!
http://fab.academany.org/2018/labs/barcelona/students/oscar-gonzalezfernandez/2018/04/08/Week-10-Input-Devices.html
CC-MAIN-2021-43
en
refinedweb
re_comp(3) [netbsd man page] RE_COMP(3) BSD Library Functions Manual RE_COMP(3) NAME re_comp, re_exec -- regular expression handler LIBRARY Compatibility Library (libcompat, -lcompat) SYNOPSIS #include <re_comp.h> char * re_comp(const char *s); int re_exec(const char *s); DESCRIPTION This interface is made obsolete by regex(3). It is available from the compatibility library, libcompat. The re_comp() function compiles a string into an internal form suitable for pattern matching. The re_exec() function checks the argument string against the last string passed to re_comp(). The re_comp() function returns 0 if the string s was compiled regu- lar) HISTORY The re_comp() and re_exec() functions appeared in 4.0BSD. BSD June 4, 1993 BSD -1 if the compiled regular expression is invalid (indicating an internal error)..10 26 Feb 1997 re_comp(3C)
https://www.unix.com/man-page/netbsd/3/RE_COMP/
CC-MAIN-2021-43
en
refinedweb
Python Connector Libraries for Microsoft Teams Data Connectivity. Integrate Microsoft Teams with popular Python tools like Pandas, SQLAlchemy, Dash & petl. Easy-to-use Python Database API (DB-API) Modules connect Microsoft Teams data with Python and any Python-based applications. Features - Compatible with MS Graph API v1.1/beta - Powerful metadata querying enables SQL-like access to non-database sources - Push down query optimization pushes SQL operations down to the server whenever possible, increasing performance - Client-side query execution engine, supports SQL-92 operations that are not available server-side - Connect to live Microsoft Teams Microsoft Teams with bi-directional access. - Write SQL, get Microsoft Teams data. Access Microsoft Teams through standard Python Database Connectivity. - Integration with popular Python tools like Pandas, SQLAlchemy, Dash & petl. - Simple command-line based data exploration of Microsoft Teams Groups, Teams, Channels, Messages, and more! - Full Unicode support for data, parameter, & metadata. CData Python Connectors in Action! Watch the video overview for a first hand-look at the powerful data integration capabilities included in the CData Python Connectors.WATCH THE PYTHON CONNECTOR VIDEO OVERVIEW Python Connectivity with Microsoft Teams Full-featured and consistent SQL access to any supported data source through Python - Universal Python Microsoft Teams Connectivity Easily connect to Microsoft Teams Microsoft Teams Microsoft Teams Connector includes a library of 50 plus functions that can manipulate column values into the desired result. Popular examples include Regex, JSON, and XML processing functions. - Collaborative Query Processing Our Python Connector enhances the capabilities of Microsoft Teams with additional client-side processing, when needed, to enable analytic summaries of data such as SUM, AVG, MAX, MIN, etc. - Easily Customizable and Configurable The data model exposed by our Microsoft Teams Microsoft Teams with Python CData Python Connectors leverage the Database API (DB-API) interface to make it easy to work with Microsoft Teams from a wide range of standard Python data tools. Connecting to and working with your data in Python follows a basic pattern, regardless of data source: - Configure the connection properties to Microsoft Teams - Query Microsoft Teams to retrieve or update data - Connect your Microsoft Teams data with Python data tools. Connecting to Microsoft Teams in Python To connect to your data from Python, import the extension and create a connection: import cdata.microsoftteams as mod conn = mod.connect("User=user@domain.com; Password=password;") #Create cursor and iterate over results cur = conn.cursor() cur.execute("SELECT * FROM Groups") rs = cur.fetchall() for row in rs: print(row) Once you import the extension, you can work with all of your enterprise data using the python modules and toolkits that you already know and love, quickly building apps that help you drive business. Visualize Microsoft Teams Data with pandas The data-centric interfaces of the Microsoft Teams Python Connector make it easy to integrate with popular tools like pandas and SQLAlchemy to visualize data in real-time. engine = create_engine("microsoftteams///Password=password&User=user") df = pandas.read_sql("SELECT * FROM Groups", engine) df.plot() plt.show() More Than Read-Only: Full Update/CRUD Support Microsoft Teams Connector goes beyond read-only functionality to deliver full support for Create, Read Update, and Delete operations (CRUD). Your end-users can interact with the data presented by the Microsoft Teams Connector as easily as interacting with a database table.
https://www.cdata.com/drivers/msteams/python/
CC-MAIN-2021-43
en
refinedweb
Cloud computing might be one of the best ways to host the applications. However, it takes dedicated effort to manage cloud applications. Also, it is now common for companies to rely on multiple cloud computing solution. This brings us to the concept of the hybrid cloud where it is common to use private, public and other third-party services. Hybrid clouds also force companies to manage different monitoring services, making it hard for them to have all the information under one roof. This gives rise to the need for a tool that can help manage multiple solutions at one single place. Grafana is a tool that lets you do just that. Yup, you can manage multiple cloud services through one simple dashboard. In this article, we will go through a simple tutorial on how to monitor AWS CloudWatch With Grafana. Before we dive deep into the actual tutorial, let’s learn more about the platforms that we are using. AWS CloudWatch: AWS CloudWatch is the Amazon’s monitoring services for cloud resources. It is used to collect and display metrics and other useful information. Administrators can use AWS CloudWatch to know what needs to be improved or get warned when something breaks. It works with all the AWS resources Grafana: Grafana is a popular open source dashboard tool. It works out of the box with different data sources including Graphite OpenTSDB, and InfluxDB. The user can easily edit and modify the dashboard according to their requirement. It also enables them to monitor multiple solutions from a single place. It uses different types of graphs, charts and another form of tools to help users in giving them an bird eye view. Monitor AWS CloudWatch With Grafana Grafana comes with an easy-to-use pluggable architecture. This means that you can create a dashboard with the widgets of your liking. It also comes with plugins and widgets. Monitoring AWS CloudWatch from Grafana is easy. All you need to do is follow the tutorial. So, without any delay, let’s get started. IAM Role Set up As we already know, AWS can be accessed using IAM roles. IAM roles can be an entity or a third party application. { "Version": "2012-10-17", "Statement": [ { "Sid": "1", "Effect": "Allow", "Action": [ "cloudwatch:PutMetricData", "cloudwatch:GetMetricStatistics", "cloudwatch:GetMetricData", "cloudwatch:ListMetrics" ], "Resource": "*" } ] } Now, that the IAM Role is created, it is now time to start an EC2 instance. You can start an EC2 instance from your AWS dashboard as shown in the image below. You need to use the script below to launch the EC2 instance. The user-data script has all the allowances that are required to run a successful Grafana server. You need to make sure that the role that you created earlier is associated with the instance. #!/bin/sh yum install -y service grafana-server start /sbin/chkconfig --add grafana-server Creating a Grafana Account Now that you have created an IAM Role and an EC2 instance, the next step is to create your Grafana account. You can go to their official website and register a free account. As it is open source, you can either download the client or create a free instance on their website as shown in the image below. To make your instance works as intended, you need to open port 3000 for inbound traffic. It can easily be done through Grafana Dashboard. Connecting to your EC2 Instance using Grafana With everything ready, it is now time to open your browser and open up the instance. To do so, type in the following in your browser. If you did everything correctly until now, you would be redirected to the Grafana login page as shown below. To log in, you need the username as “admin” and password as “admin.” Once you log in, you will be greeted by the beautiful interface of Grafana. You can access the different options from the side menu as shown in the image below. As Gafana comes with in-built support for CloudWatch, you don’t have to install any additional plugin. Click on the gear icon on the left menu, and then go to Data Sources. Once you are there, click on “Add Data Source.” Now, you need to select the type to CloudWatch from the drop-down menu. It will bring you to new form that needs to be filled. Type in the name and also select the default region. Once done, click on save and test. If it is working, it will show a message, “data source is working.” Congratulations, you have successfully connected your EC2 instance with CloudWatch with Grafana. Alternative approach: As we have used IAM role, we don’t have to fill up the credentials profile name. However, you can also go forward and connect without IAM role by creating a simple credentials file that will contain the AWS Secret and Access Key. The file needs to be created under ~ /.aws/credentials. Creating a new Dashboard With the data source connected, it is now time to create a dashboard. To do so, just click on the “create dashboard” option from the side menu. Add a new graph from the options available and now select AWS/EC2 as the required namespace. The other two things that you need to set is CPU Utilization as metric and also the instance id. That’s it! You have successfully created a graph and started monitoring your EC2 instance variables. You can create more graphs and widgets on dashboard according to your own requirements. Conclusion: – Working with Grafana will surely give you an edge as you can monitor multiple instances, both private and public. The CloudWatch API’s are used to communicate between AWS and Grafana. The good news is that Grafana comes with in-built support for AWS CloudWatch and it won’t take much of your time to set it up. So, what do you think about the tutorial? Are you all ready to build dynamic and interactive dashboards? Comment below and let us know.
https://blog.eduonix.com/software-development/learn-monitor-aws-cloudwatch-grafana/
CC-MAIN-2021-43
en
refinedweb
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards Phoenix now has a lazy list implementation which is very similar but not identical to the implementation provided by FC++. This provides a set of objects defined by list<type>, for example this which defines an empty list of type int. list<int> example; A list can contain zero or more elements of the same type. It can also be declared using a function returning values of the correct type. Such lists are only evaluated on demand. A set of functions are defined which enable many ways of manipulating and using lists. Examples are provided for the features available. Exceptions are provided to deal with certain cases and these can be turned off if desired. There is a check on the maximum list length which has a default of 1000 which can be changed by the user. This is an extension to Boost Phoenix which does not change the public interface except to define new features in the namespace boost::phoenix It has to be explicitly included using the header boost/phoenix/function/lazy_prelude.hpp Boost Phoenix provides many features of functional_programming. One of the things which has been missing until now is a lazy list implementation. One is available in the library FC++ which although not part of Boost has many similarities. It has been possible to reimplement the strategy of the FC++ List Implementation using the facilties in Phoenix. This provides something which has up until now not been available anywhere in Phoenix and probably not anywhere else in Boost. This new implementation is very well integrated with other features in Phoenix as it uses the same mechanism. In turn that is well integrated with Boost Function. There is a great deal of material in FC++ and it is not proposed to replicate all of it. A great deal has changed since FC++ was written and many things are already available in Phoenix or elsewhere. The emphasis here is to add to Phoenix in a way which will make it easier to implement functional_programming. Progress is being made in implementing both the basic list<T> and the functions needed to manipulate lists.
https://www.boost.org/doc/libs/1_68_0/libs/phoenix/doc/html/phoenix/lazy_list.html
CC-MAIN-2021-43
en
refinedweb
This section discusses the behavioral changes between Ansible 1.x and Ansible 2.0. It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible. We suggest you read this page along with Ansible Changelog for 2.0 to understand what updates you may need to make. This document is part of a collection on porting. The complete list of porting guides can be found at porting guides. Topics This section discusses any changes you may need to make to your playbooks. # Syntax in 1.9.x - debug: msg: "{{ 'test1_junk 1\\\\3' | regex_replace('(.*)_junk (.*)', '\\\\1 \\\\2') }}" # Syntax in 2.0.x - debug: msg: "{{ 'test1_junk 1\\3' | regex_replace('(.*)_junk (.*)', '\\1 \\2') }}" # Output: "msg": "test1 1\\3" To make an escaped string that will work on all versions you have two options: - debug: msg="{{ 'test1_junk 1\\3' | regex_replace('(.*)_junk (.*)', '\\1 \\2') }}" uses key=value escaping which has not changed. The other option is to check for the ansible version: "{{ (ansible_version|version_compare('2.0', 'ge'))|ternary( 'test1_junk 1\\3' | regex_replace('(.*)_junk (.*)', '\\1 \\2') , 'test1_junk 1\\\\3' | regex_replace('(.*)_junk (.*)', '\\\\1 \\\\2') ) }}" trailing newline When a string with a trailing newline was specified in the playbook via yaml dict format, the trailing newline was stripped. When specified in key=value format, the trailing newlines were kept. In v2, both methods of specifying the string will keep the trailing newlines. If you relied on the trailing newline being stripped, you can change your playbook using the following as an example: # Syntax in 1.9.x vars: message: > Testing some things tasks: - debug: msg: "{{ message }}" # Syntax in 2.0.x vars: old_message: > Testing some things message: "{{ old_messsage[:-1] }}" - debug: msg: "{{ message }}" # Output "msg": "Testing some things" Behavior of templating DOS-type text files changes with Ansible v2. A bug in Ansible v1 causes DOS-type text files (using a carriage return and newline) to be templated to Unix-type text files (using only a newline). In Ansible v2 this long-standing bug was finally fixed and DOS-type text files are preserved correctly. This may be confusing when you expect your playbook to not show any differences when migrating to Ansible v2, while in fact you will see every DOS-type file being completely replaced (with what appears to be the exact same content). When specifying complex args as a variable, the variable must use the full jinja2 variable syntax ( `{{var_name}}`) - bare variable names there are no longer accepted. In fact, even specifying args with variables has been deprecated, and will not be allowed in future versions: --- - hosts: localhost connection: local gather_facts: false vars: my_dirs: - { path: /tmp/3a, state: directory, mode: 0755 } - { path: /tmp/3b, state: directory, mode: 0700 } tasks: - file: args: "{{item}}" # <- args here uses the full variable syntax with_items: "{{my_dirs}}" porting task includes More dynamic. Corner-case formats that were not supposed to work now do not, as expected. variables defined in the yaml dict format templating (variables in playbooks and template lookups) has improved with regard to keeping the original instead of turning everything into a string. If you need the old behavior, quote the value to pass it around as a string. Empty variables and variables set to null in yaml are no longer converted to empty strings. They will retain the value of None. You can override the null_representation setting to an empty string in your config file by setting the ANSIBLE_NULL_REPRESENTATION environment variable. Extras callbacks must be whitelisted in ansible.cfg. Copying is no longer necessary but whitelisting in ansible.cfg must be completed. dnf module has been rewritten. Some minor changes in behavior may be observed. win_updates has been rewritten and works as expected now. from 2.0.1 onwards, the implicit setup task from gather_facts now correctly inherits everything from play, but this might cause issues for those setting environment at the play level and depending on ansible_env existing. Previously this was ignored but now might issue an ‘Undefined’ error. While all items listed here will show a deprecation warning message, they still work as they did in 1.9.x. Please note that they will be removed in 2.2 (Ansible always waits two major releases to remove a deprecated feature). Bare variables in with_ loops should instead use the "{ {var }}" syntax, which helps eliminate ambiguity. The ansible-galaxy text format requirements file. Users should use the YAML format for requirements instead. Undefined variables within a with_ loop’s list currently do not interrupt the loop, but they do issue a warning; in the future, they will issue an error. Using dictionary variables to set all task parameters is unsafe and will be removed in a future version. For example: - hosts: localhost gather_facts: no vars: debug_params: msg: "hello there" tasks: # These are both deprecated: - debug: "{{debug_params}}" - debug: args: "{{debug_params}}" # Use this instead: - debug: msg: "{{debug_params['msg']}}" Host patterns should use a comma (,) or colon (:) instead of a semicolon (;) to separate hosts/groups in the pattern. Ranges specified in host patterns should use the [x:y] syntax, instead of [x-y]. Playbooks using privilege escalation should always use “become*” options rather than the old su*/sudo* options. The “short form” for vars_prompt is no longer supported. For example: vars_prompt: variable_name: "Prompt string" Specifying variables at the top level of a task include statement is no longer supported. For example: - include_tasks: foo.yml a: 1 Should now be: - include_tasks: foo.yml vars: a: 1 Setting any_errors_fatal on a task is no longer supported. This should be set at the play level only. Bare variables in the environment dictionary (for plays/tasks/etc.) are no longer supported. Variables specified there should use the full variable syntax: ‘{{foo}}’. Tags (or any directive) should no longer be specified with other parameters in a task include. Instead, they should be specified as an option on the task. For example: - include_tasks: foo.yml tags=a,b,c Should be: - include_tasks: foo.yml tags: [a, b, c] The first_available_file option on tasks has been deprecated. Users should use the with_first_found option or lookup (‘first_found’, …) plugin. Here are some corner cases encountered when updating. These are mostly caused by the more stringent parser validation and the capture of errors that were previously ignored. Bad variable composition: with_items: myvar_{{rest_of_name}} This worked ‘by accident’ as the errors were retemplated and ended up resolving the variable, it was never intended as valid syntax and now properly returns an error, use the following instead.: hostvars[inventory_hostname]['myvar_' + rest_of_name] Misspelled directives: - task: dostuf becom: yes The task always ran without using privilege escalation (for that you need become) but was also silently ignored so the play ‘ran’ even though it should not, now this is a parsing error. Duplicate directives: - task: dostuf when: True when: False The first when was ignored and only the 2nd one was used as the play ran w/o warning it was ignoring one of the directives, now this produces a parsing error. Conflating variables and directives: - role: {name=rosy, port=435 } # in tasks/main.yml - wait_for: port={{port}} The port variable is reserved as a play/task directive for overriding the connection port, in previous versions this got conflated with a variable named port and was usable later in the play, this created issues if a host tried to reconnect or was using a non caching connection. Now it will be correctly identified as a directive and the port variable will appear as undefined, this now forces the use of non conflicting names and removes ambiguity when adding settings and variables to a role invocation. Bare operations on with_: with_items: var1 + var2 An issue with the ‘bare variable’ features, which was supposed only template a single variable without the need of braces ({{ )}}, would in some versions of Ansible template full expressions. Now you need to use proper templating and braces for all expressions everywhere except conditionals (when): with_items: "{{var1 + var2}}" The bare feature itself is deprecated as an undefined variable is indistinguishable from a string which makes it difficult to display a proper error. In ansible-1.9.x, you would generally copy an existing plugin to create a new one. Simply implementing the methods and attributes that the caller of the plugin expected made it a plugin of that type. In ansible-2.0, most plugins are implemented by subclassing a base class for each plugin type. This way the custom plugin does not need to contain methods which are not customized. Although Ansible 2.0 provides a new callback API the old one continues to work for most callback plugins. However, if your callback plugin makes use of self.playbook, self.play, or self.task then you will have to store the values for these yourself as ansible no longer automatically populates the callback with them. Here’s a short snippet that shows you how: import os from ansible.plugins.callback import CallbackBase class CallbackModule(CallbackBase): def __init__(self): self.playbook = None self.playbook_name = None self.play = None self.task = None def v2_playbook_on_start(self, playbook): self.playbook = playbook self.playbook_name = os.path.basename(self.playbook._file_name) def v2_playbook_on_play_start(self, play): self.play = play def v2_playbook_on_task_start(self, task, is_conditional): self.task = task def v2_on_any(self, *args, **kwargs): self._display.display('%s: %s: %s' % (self.playbook_name, self.play.name, self.task)) In specific cases you may want a plugin that supports both ansible-1.9.x and ansible-2.0. Much like porting plugins from v1 to v2, you need to understand how plugins work in each version and support both requirements. Since the ansible-2.0 plugin system is more advanced, it is easier to adapt your plugin to provide similar pieces (subclasses, methods) for ansible-1.9.x as ansible-2.0 expects. This way your code will look a lot cleaner. You may find the following tips useful: __init__) ImportErrorexception and perform the equivalent imports for ansible-1.9.x. With possible translations (e.g. importing specific methods). warning()method during development, but it is also important to emit warnings for deadends (cases that you expect should never be triggered) or corner cases (e.g. cases where you expect misconfigurations). As a simple example we are going to make a hybrid fileglob lookup plugin. from __future__ import (absolute_import, division, print_function) __metaclass__ = type import os import glob try: # ansible-2.0 from ansible.plugins.lookup import LookupBase except ImportError: # ansible-1.9.x class LookupBase(object): def __init__(self, basedir=None, runner=None, **kwargs): self.runner = runner self.basedir = self.runner.basedir def get_basedir(self, variables): return self.basedir try: # ansible-1.9.x from ansible.utils import (listify_lookup_plugin_terms, path_dwim, warning) except ImportError: # ansible-2.0 from __main__ import display warning = display.warning class LookupModule(LookupBase): # For ansible-1.9.x, we added inject=None as valid argument def run(self, terms, inject=None, variables=None, **kwargs): # ansible-2.0, but we made this work for ansible-1.9.x too ! basedir = self.get_basedir(variables) # ansible-1.9.x if 'listify_lookup_plugin_terms' in globals(): terms = listify_lookup_plugin_terms(terms, basedir, inject) ret = [] for term in terms: term_file = os.path.basename(term) # For ansible-1.9.x, we imported path_dwim() from ansible.utils if 'path_dwim' in globals(): # ansible-1.9.x dwimmed_path = path_dwim(basedir, os.path.dirname(term)) else: # ansible-2.0 dwimmed_path = self._loader.path_dwim_relative(basedir, 'files', os.path.dirname(term)) globbed = glob.glob(os.path.join(dwimmed_path, term_file)) ret.extend(g for g in globbed if os.path.isfile(g)) return ret Note In the above example we did not use the warning() method as we had no direct use for it in the final version. However we left this code in so people can use this part during development/porting/use. Custom scripts that used the ansible.runner.Runner API in 1.x have to be ported in 2.x. Please refer to: Python API
https://docs.ansible.com/ansible/2.6/porting_guides/porting_guide_2.0.html
CC-MAIN-2018-51
en
refinedweb
pivot(), pivot_table(), and melt()(40 min) matplotlib(35 min) import pandas as pd #if you don't have the dataset surveys = pd.read_csv('') #if you have already downloaded the dataset #surveys = pd.read_csv('./surveys.csv') surveys.head() Data is often presented in a so-called wide format, e.g. with one column per measurement: This can be a great way to display data so that it is easily interpretable by humans and is often used for summary statistics (commonly referred to as pivot tables). However, many data analysis functions in pandas, seaborn and other packages are optimized to work with the tidy data format. Tidy data is a long format where each row is a single observation and each column contains a single variable: pandas enables a wide range of manipulations of the structure of data, including alternating between the long and wide format. The survey data presented here is in a tidy format. To facilitate visual comparisons of the relationships between measurements across variables, it would be beneficial to display this data in the wide format. For example, what is the relationship between mean weights of different species caught at the same plot type? To make facilitate the visualization of the the transformations between wide and tidy data, it is beneficial to create a subset of the data. species_sub = ['albigula', 'flavus', 'merriami'] plot_id_sub = list(range(5, 11)) cols = ['record_id', 'species', 'weight', 'plot_type'] surveys_sub = surveys.loc[(surveys['plot_id'].isin(plot_id_sub)) & (surveys['species'].isin(species_sub)), cols] surveys_sub.head(10) surveys_sub.info() <class 'pandas.core.frame.DataFrame'> Int64Index: 2568 entries, 17415 to 34083 Data columns (total 4 columns): record_id 2568 non-null int64 species 2568 non-null object weight 2497 non-null float64 plot_type 2568 non-null object dtypes: float64(1), int64(1), object(2) memory usage: 100.3+ KB pivot()and pivot_table()¶ A long to wide transformation would be suitable to effectively visualize the relationship between the mean body weights of each species within the different plot types used to trap the animals. The first step in creating this table is to compute the mean weight for each species in each plot type. surveys_sub_gsp = ( surveys_sub .groupby(['species', 'plot_type'])['weight'] .mean() .reset_index() ) surveys_sub_gsp To remove the repeating information for 'species' and 'plot_type', this table can be pivoted into a wide formatted using the pivot() method. The arguments passed to pivot() includes the rows (the index), the columns, and which values should populate the table. surveys_sub_gsp.pivot(index='plot_type', columns='species', values='weight') Compare how this table is displayed with the table in the previous cell. It is certainly easier to spot differences between the species and plot types in this wide format. Since presenting summary statistics in a wide format is such a common operation, pandas has a dedicated method, pivot_table(), that performs both the data aggregation pivoting. surveys_sub.pivot_table(index='plot_type', columns='species', values='weight') Although pivot_table() is the most convenient way to aggregate and pivot data, pivot() is still useful to reshape a data frame from wide to long without performing aggregation. With the data in a wide format, the pairwise correlations between the columns can be computed using corr(). surveys_sub_pvt = surveys_sub.pivot_table(index='plot_type', columns='species', values='weight') surveys_sub_pvt.corr() The columns and rows can be swapped in the call to pivot_table(). This is useful both to present the table differently and to perform computations on a different axis (dimension) of the data frame (this result can also be obtained by calling the transpose() method of subveys_sub). surveys_sub.pivot_table(index='species', columns='plot_type', values='weight') With pivot_table() it is also possible to add the values for all rows and columns, and to change the aggregation function. surveys_sub.pivot_table(index='plot_type', columns='species', values='weight', margins=True, aggfunc='median') melt()¶ It is also a common operation to reshape data from the wide to the long format, e.g. when getting the data into the most suitable format for analysis. For this transformation, the melt() method can be used to sweep up a set of columns into one key-value pair. To prepare the data frame, the plot_type index name can be moved to a column name with the reset_index() method. surveys_sub_pvt surveys_sub_pvt = surveys_sub_pvt.reset_index() surveys_sub_pvt At a minimum, melt() only requires the name of the column that should be kept intact. All remaining columns will have their values in the value column and their name in the variable column (here, our columns already has a name "species", so this will be used automatically instead of "variable"). surveys_sub_pvt.melt(id_vars='plot_type') To be more explicit, all the arguments to melt() can be specified. This way it is also possible to exclude some columns, e.g. the species 'merriami'. surveys_sub_pvt.melt(id_vars='plot_type', value_vars=['albigula', 'flavus'], var_name='species', value_name='weight') Challenge¶ - Make a wide data frame with yearas columns, plot_idas rows, where the values are the number of genera per plot. Hint Remember how nunique()from last lecture. You will also need to reset the index before pivoting. - Now take that data frame, and make it long again, so each row is a unique plot_id- yearcombination. When visualizing data it is important to explore different plotting options and reflect on which one best conveys the information within the data. In the following code cells, as sample data set is used to illustrate some advantages and disadvantages between categorical plot types. This sample data contains three different species of iris flowers and measurements of their sepals and petals. import seaborn as sns iris = sns.load_dataset('iris') iris.groupby('species').mean() %matplotlib inline sns.set_context('notebook', font_scale=1.3) sns.barplot(x='species', y='sepal_length', data=iris) <matplotlib.axes._subplots.AxesSubplot at 0x1dc33b4bb00> ax = sns.boxplot(x='species', y='sepal_length', data=iris, width=0.3) ax.set_ylabel('Sepal Length', fontsize=14) ax.set_xlabel('') Text(0.5,0,'') ax = sns.violinplot(x='species', y='sepal_length', data=iris) ax.set_ylabel('Sepal Length', fontsize=14) ax.set_xlabel('') # ax.set_xticklabels(fontsize=20) Text(0.5,0,'') ax = sns.swarmplot(x='species', y='sepal_length', data=iris) ax.set_ylabel('Sepal Length', fontsize=14) ax.set_xlabel('') Text(0.5,0,'') Challenge¶ Out of the these plots, which one do you think is the most informative and why? Which is the most true to the underlying data and how would you know this? We will deepen the discussion around some of these ideas, in the context of the following plot: Reproduced with permission from Dr. Koyama's poster It is generally advisable to avoid "decorative" plot elements that do not convey extra information about the data, especially when such elements hide the real data. An early champion of this idea was Edward Tufte, who details how to reduce so called non-data ink and many other things in his book The visual display of quantitative information. In the bar chart above, the only relevant information is given by the where the rectangles of the bars ends on the y-axis, the rest of it is unnecessary. Instead of using the rectangle's height, a simpler marker (circle, square, etc) could have been used to indicate the height on the y-axis. Note that the body of the rectangle is not representative for where the data lies, there are probably no data points close to 0, and several above the rectangle. Barplots are especially misleading when used as data summaries, as in the example above. In a summary plot, only two distribution parameters (a measure of central tendency, e.g. the mean, and error, e.g. the standard deviation or a confidence interval) are displayed, instead of showing all the individual data points. This can be highly misleading, since different underlying distributions can give rise to the same summary plot. We also have no idea of how many observations there are in each group. These shortcomings become evident when comparing the barplot to the underlying distributions that were used to create them: Reproduced with permission from Dr. Koyama's poster Immediately, you can see that conclusions drawn from the barplot, such that A and B have the same outcome, are factually incorrect. The distribution in D is bimodal, so representing that with a mean would be like observing black and white birds and conclude that the average bird color is grey, it's nonsensical. If we would have planned our follow up experiments based on the barplot alone, we would have been setting ourselves up for failure! Always be sceptical when you see a barplot in a published paper, and think of how the underlying distribution might look (note that barplots are more acceptable when used to represents counts, proportion or percentages, where there is only one data point per group in the data set). Boxplots and violin plots are more meaningful data summaries as they represent more than just two distribution parameters (such as mean +/- sd). However, these can still be misleading and it is often the most appropriate to show each individual observation with a dot/hive/swarm plot, possibly combined with a superimposed summary plot or a marker for the mean or median if this additional information is useful. One exception, when it is not advisable to show all data points, is when the data set is gigantic and plotting each individual observation would oversaturate the chart. In that case, plot summary statistics or a 2D histogram (more on this later). Here is an example of how a violinplot can be combined together with the individual observations in seaborn. ax = sns.violinplot(x='species', y='sepal_length', data=iris, color='lightgrey', inner=None) ax = sns.swarmplot(x='species', y='sepal_length', data=iris) ax.set_ylabel('Sepal Length', fontsize=14) ax.set_xlabel('') sns.despine() Challenge¶ - So far, we've looked at the distribution of sepal length within species. Try making a new plot to explore the distribution of another variable within each species. - Combine a stripplot()with a boxplot(). Set the jitterparameter to distribute the dots so that they are not all on one line. matplotlib¶ The knowledge of how to make an appealing and informative visualization can be put into practice by working directly with matplotlib, and styling different elements of the plot. The high-level figures created by seaborn can also be configured via the matplotlib parameters, so learning more about them will be very useful. As demonstrated previously with seaborn.FacetGrid, one way of creating a line plot in matplotlib is by using plt.plot(). To facilitate the understanding of these concepts, the initial examples will not include data frames, but instead have simple lists holding just a few data points. This is how a line plot can be created with matplotlib. import matplotlib.pyplot as plt x = [1, 2, 3, 4] y = [1, 2, 4, 3] plt.plot(x ,y) [<matplotlib.lines.Line2D at 0x1dc341bfef0>] However, this way of plotting is not very explicit, since some things happen under the hood that we can't control, e.g. a figure is automatically created and it is assumed that the plot should go into the currently active region of this figure. This gives little control over exactly where to place the plots within a figure and how to make modifications the plot after creating it, e.g. adding a title or label the axis. For these operations, it is easier to use the object oriented plotting interface, where an empty figure and is created initially. This figure and its axes are assigned to variable names which are then explicitly used for plotting. In m matplotlib, an axes refers to what you would often call a subplot colloquially and it is named "axes" because it consists of an x-axis and a y-axis by default. By default an empty figure is created. fig, ax = plt.subplots() Plots can then be added to the axes of the figure. fig, ax = plt.subplots() ax.plot(x, y) [<matplotlib.lines.Line2D at 0x7f8927457a20>] To create a scatter plot, use ax.scatter() instead of ax.plot(). fig, ax = plt.subplots() ax.scatter(x, y) <matplotlib.collections.PathCollection at 0x7f8927162668> Plots can also be combined together in the same axes. The line style and marker color can be changed to facilitate viewing the elements in th combined plot. fig, ax = plt.subplots() ax.scatter(x, y, color='red') ax.plot(x, y, linestyle='dashed') [<matplotlib.lines.Line2D at 0x7f89275764a8>] And plot elements can be resized. fig, ax = plt.subplots() ax.scatter(x, y, color='red', s=100) ax.plot(x, y, linestyle='dashed', linewidth=3) [<matplotlib.lines.Line2D at 0x7f89273ea630>] It is common to modify the plot after creating it, e.g. adding a title or label the axis. fig, ax = plt.subplots() ax.scatter(x, y, color='red') ax.plot(x, y, linestyle='dashed') ax.set_title('Line and scatter plot') ax.set_xlabel('Measurement X') Text(0.5,0,'Measurement X') The scatter and line plot can easily be separated into two subplots within the same figure. Instead of assigning a single returned axes to ax, the two returned axes objects are assigned to ax1 and ax2 respectively. fig, (ax1, ax2) = plt.subplots(1, 2) # The default is (1, 1), that's why it does not need to be specified with only one subplot To prevent plot elements, such as the axis ticklabels from overlapping, tight_layout() method can be used. fig, (ax1, ax2) = plt.subplots(1, 2) fig.tight_layout() The figure size can easily be controlled when it is created. fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4)) # This refers to the size of the figure in inches when printed or in a PDF fig.tight_layout() Putting it all together to separate the line and scatter plot. fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4)) ax1.scatter(x, y, color='red') ax2.plot(x, y, linestyle='dashed') ax1.set_title('Scatter plot') ax2.set_title('Line plot') fig.tight_layout() Challenge¶ There are a plethora of colors available to use in matplotlib. Change the color of the line and the dots in the figure using your favorite color from this list. Use the documentation to also change the styling of the line in the line plot and the type of marker used in the scatter plot (you might need an online search also). fig.savefig('scatter-and-line.png', dpi=300) A PDF-file can be saved by changing the extension in the specified file name. Since PDF is a vector file format, there is not need to specify the resolution. fig.savefig('scatter-and-line.pdf') This concludes the customization section. The concepts taught here will be applied in the next section on how to choose a suitable plot type for data sets with many observations. Summary plots (especially bar plots) were previously mentioned to potentially be misleading, and it is often most appropriate to show every individual observation with a dot plot or the like, perhaps combined with summary markers where appropriate. But, what if the data set is too big to visualize every single observations? In large data sets, it is often the case that plotting each individual observation would oversaturate the chart. To illustrate saturation and how it can be avoided, load the datasets diamonds from the R sample data set repository: diamonds = pd.read_csv('', index_col=0) diamonds.tail() diamonds.info() <class 'pandas.core.frame.DataFrame'> Int64Index: 53940 entries, 1 to 53940 Data columns (total 10 columns): carat 53940 non-null float64 cut 53940 non-null object color 53940 non-null object clarity 53940 non-null object depth 53940 non-null float64 table 53940 non-null float64 price 53940 non-null int64 x 53940 non-null float64 y 53940 non-null float64 z 53940 non-null float64 dtypes: float64(6), int64(1), object(3) memory usage: 4.5+ MB When plotting a data frame, matplotlib is aware of the structure of the data by specifying the data parameter and the x and y parameters can then be specified just by passing the name of a column in the data frame as a string. fig, ax = plt.subplots() ax.scatter('carat', 'price', data=diamonds) <matplotlib.collections.PathCollection at 0x7f0ca4fb4be0> Because this is a dataset with 53940 observations, visualizing it in two dimensions creates a graph that is incredibly oversaturated. Oversaturated graphs make it far more difficult to glean information from the visualization. Maybe adjusting the size of each observation could help? fig, ax = plt.subplots() ax.scatter('carat', 'price', data=diamonds, s=1) <matplotlib.collections.PathCollection at 0x7f0ca41beb70> That's a bit better. Reducing the transparency might help further. fig, ax = plt.subplots() ax.scatter('carat', 'price', data=diamonds, s=1, alpha=0.1) <matplotlib.collections.PathCollection at 0x7f0ca4257128> This is much clearer than initially, but does still not reveal the full structure of the underlying data. Before proceeding, add axis labels and remove the axis lines (spines) on the top and the right. fig, ax = plt.subplots() ax.scatter('carat', 'price', data=diamonds, s=1, alpha=0.1) sns.set_context('notebook', font_scale=1.2) # Increase all font sizes ax.set_title('Diamond prices') ax.set_xlabel('Carat') ax.set_ylabel('Price') sns.despine()# sns.despine() essentially does the following: # ax.spines['top'].set_visible(False) # ax.spines['right'].set_visible(False) The axis can be adjusted to zoom in on the denser areas in the plot. fig, ax = plt.subplots() ax.scatter('carat', 'price', data=diamonds, s=1, alpha=0.1) fig.suptitle('Diamond prices') ax.set_xlabel('Carat') ax.set_ylabel('Price') ax.set_xlim(0.2, 1.2) ax.set_ylim(200, 4000) sns.despine() The result is still not satisfactory, which illustrates that the scatter plot is simply not a good choice with huge data sets. A more suitable plot type for this data, is a so called hexbin plot, which essentially is a two dimensional histogram, where the color of each hexagonal bin represents the amount of observations in that bin (analogous to the height in a one dimnesional histogram). fig, ax = plt.subplots() ax.hexbin('carat', 'price', data=diamonds) sns.despine() This looks ugly because the bins with zero observations are still colored. This can be avoided by setting the minimum count of observations to color a bin. fig, ax = plt.subplots() ax.hexbin('carat', 'price', data=diamonds, mincnt=1) sns.despine() The distribution of the data is not more akin to that of the scatter plot. To know what the different colors represent, a colorbar needs to be added to this plot. The space for the colorbar will be taken from a plot in the current figure. fig, ax = plt.subplots() # Assign to a variable to reuse with the colorbar hex_plot = ax.hexbin('carat', 'price', data=diamonds, mincnt=1) sns.despine() # Create the colorbar from the hexbin plot axis cax = fig.colorbar(hex_plot) Notice that the overall figure is the same size, and the axes that contains the hexbin plot shrank to make room for the colorbar. To remind ourselves what is plotted, axis labels can be added like previously. fig, ax = plt.subplots() hex_plot = ax.hexbin('carat', 'price', data=diamonds, mincnt=1, gridsize=50) sns.despine() cax = fig.colorbar(hex_plot) fig.suptitle('Diamond prices') ax.set_xlabel('Carat') ax.set_ylabel('Price') cax.set_label('Number of observations') It is now clear that the yellow area represents over 2000 observations! fig, ax = plt.subplots() fig.suptitle('Diamond prices') diamonds_subset = diamonds.loc[(diamonds['carat'] < 1.3) & (diamonds['price'] < 2500)] hexbin = ax.hexbin('carat', 'price', data=diamonds_subset, mincnt=1) sns.despine() cax = fig.colorbar(hexbin) cax.set_label('Observation density') ax.set_xlabel('Carat') ax.set_ylabel('Price') Text(0,0.5,'Price') Although this hext plot is a great way of visualizing the distributions, it can also be good to look at the histograms. fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4)) fig.suptitle('Distribution plots', y=1.05) ax1.hist('carat', bins=30, data=diamonds) ax1.set_title('Diamond weight') ax1.set_xlabel('Carat') ax2.hist('price', bins=30, data=diamonds) ax2.set_title('Diamond price') ax2.set_xlabel('USD') sns.despine() fig.tight_layout() Since this is a common operation, seaborn has a built-in function to create a hexbin with histograms on the marginal axes. sns.jointplot(x='carat', y='price', data=diamonds, kind='hex') <seaborn.axisgrid.JointGrid at 0x7f0ca5e7ee10> This can be customized to appear like the previous hexbin plots. Since joinplot() deals with both the hexbin and the histogram, the parameter names must be separated so that it is clear which plot they are referring to. This is done by passing them as dictionaries to the joint_kws and marginal_kws parameters ("kws" stands for "keywords"). sns.jointplot('carat', 'price', diamonds, kind='hex', joint_kws={'cmap':'viridis', 'mincnt':1}, marginal_kws={'color': 'indigo'}) <seaborn.axisgrid.JointGrid at 0x7f0ca402ed30> Colour blindness is common in the population, and red-green colour blindness in particular affects 8% of men and 0.5% of women. Guidelines for making your visualizations more accessible to people affected by colour blindness, will in many cases also improve the interpretability of your graphs for people who have standard color vision. Here are a couple of examples: Don't use jet rainbow-coloured heatmaps. Jet colourmaps are often the default heatmap used in many visualization packages (you've probably seen them before). Colour blind viewers are going to have a difficult time distinguishing the meaning of this heat map if some of the colours blend together. The jet colormap should be avoided for other reasons, including that the sharp transitions between colors introduces visual threshold levels that do not represent the underlying continuous data. Another issue is luminance, or brightness. For example, your eye is drawn to the yellow and cyan regions, because the luminance is higher. This can have the unfortunate effect of highlighting features in your data that don't actually exist, misleading your viewers! It also means that your graph is not going to translate well to greyscale in publication format. More details about jet can be found in this blog post and this series of posts. In general, when presenting continuous data, a perceptually uniform colormap is often the most suitable choice. This type of colormap ensures that equal steps in data are perceived as equal steps in color space. The human brain perceives changes in lightness as changes in the data much better than, for example, changes in hue. Therefore, colormaps which have monotonically increasing lightness through the colormap will be better interpreted by the viewer. More details and examples of such colormaps are available in the matplotlib documentation, and many of the core design principles are outlined in this entertaining talk. Another approach is to use both colours and symbols. sns.lmplot(x='sepal_width', y='sepal_length', hue='species', data=iris, fit_reg=False, markers=['o', 's', 'd']) <seaborn.axisgrid.FacetGrid at 0x7f0ca151f320> Or to change the color palette # To see all available palettes, set it to an empty string and see the error message sns.lmplot(x='sepal_width', y='sepal_length', hue='species', data=iris, fit_reg=False, markers=['o', 's', 'd'], palette='colorblind') <seaborn.axisgrid.FacetGrid at 0x7f8915b25e48>
https://nbviewer.jupyter.org/github/UofTCoders/2018-06-18-utoronto/blob/gh-pages/code/4-advanced-viz.ipynb
CC-MAIN-2018-51
en
refinedweb
Changes for version 1.01 - 2005-12-22 - Place some CODE: chunks in "ZOOM.xs" inside curly brackets so that the declarations they begin with are at the start of the block. This avoid mixed code/declarations. (The "correct" solution is to use INIT: clauses in the XS file, but they don't seem to work: the code in them is slapped down right next to the CODE:, so declarations are not acceptable there either.) - Add new function Net::Z3950::ZOOM::connection_scan1(), which uses a query object to indicate the start-term. This opens the way for using CQL queries for scanning once the underlying ZOOM-C code supports this. - NOTE BACKWARDS-INCOMPATIBLE CHANGE: The ZOOM::Connection method scan() is renamed scan_pqf(), and a new scan() method is introduced which calls the underlying scan1() function. Thus the scan()/scan_pqf() dichotomy is consistent with that between search()/search_pqf(). - The tests t/15-scan.t and t/25-scan.t now also test for scanning by CQL query. To support these tests, a new files is added to the distribution, "samples/cql/pqf.properties" - Remove nonsensical clause about CQL sort-specifications from the documentation. - Add new function Net::Z3950::ZOOM::query_cql2rpn(), for client-side CQL compilation. - Add new ZOOM::Query::CQL2RPN class, encapsulating CQL compiler functionality as a Query subclass. - Add two new error-codes, CQL_PARSE and CQL_TRANSFORM, returned by the client-side CQL facilities. - The test-scripts t/12-query.t and t/22-query.t are extended to also test client-side CQL compilation. - Add all the yaz_log*() functions within the Net::Z3950::ZOOM namespace. - Add new ZOOM::Log class for logging, providing aliases for the functions in the Net::Z3950::ZOOM layer. - Add diagnostic set to rendering of Exception objects. - Documentation added for CQL compilation and logging. Modules - Net::Z3950::ZOOM - Perl extension for invoking the ZOOM-C API. - ZOOM - Perl extension implementing the ZOOM API for Information Retrieval Provides - Net::Z3950 in lib/Net/Z3950.pm - Net::Z3950::Connection in lib/Net/Z3950.pm - Net::Z3950::Manager in lib/Net/Z3950.pm - Net::Z3950::Op in lib/Net/Z3950.pm - ZOOM::Connection in lib/ZOOM.pm - ZOOM::Error in lib/ZOOM.pm - ZOOM::Event in lib/ZOOM.pm - ZOOM::Exception in lib/ZOOM.pm - ZOOM::Log in lib/ZOOM.pm - ZOOM::Options in lib/ZOOM.pm - ZOOM::Package in lib/ZOOM.pm - ZOOM::Query in lib/ZOOM.pm - ZOOM::Query::CQL in lib/ZOOM.pm - ZOOM::Query::CQL2RPN in lib/ZOOM.pm - ZOOM::Query::PQF in lib/ZOOM.pm - ZOOM::Record in lib/ZOOM.pm - ZOOM::ResultSet in lib/ZOOM.pm - ZOOM::ScanSet in lib/ZOOM.pm
https://metacpan.org/release/MIRK/Net-Z3950-ZOOM-1.01
CC-MAIN-2018-51
en
refinedweb
About The Author Hello, I’m Daniel and I make things for the web. I’m the CTO at Kinsta and I write for a number of amazing publications like Smashing Magazine and … More about Daniel Utilizing User Roles In WordPress. Terminology The main terms to know are “role” and “capability”. Capabilities are the core of the system, they represent the things you can do. An example of a capability is switch_themes. A user who has this capability is able to change the look of the website using the Appearances section in the admin. Those who do not have this capability will not be able to do so. Further Reading on SmashingMag: - Manage Events Like A Pro With WordPress - Writing Effective Documentation For WordPress End Users - Random Redirection In WordPress - Do-It-Yourself Caching Methods With WordPress Roles are simply a named set of capabilities. Imagine you have capabilities like edit_post, add_user and so on. Instead of listing all 50 of them for each and every user who you’d like to be an admin, simply assign each user the “admin” role and then assign all the capabilities to that role. The Default Setup By default, WordPress has six roles and about 57 capabilities. Each role has a different combination of capabilities which endow the user with the appropriate rights and privileges. For example, a contributor can edit and delete his own posts, but cannot control when these posts are published (and cannot delete those posts once they are published). This is because they have the edit_posts and delete_posts capabilities, but they do not have the publish_post and delete_published_post capabilities. The default setup offered by WordPress is very well thought out and should not be modified, only added to. If it is changed, you could face difficult problems later on — like a contributor being able to remove posts from your website, or an admin not being able to change the theme. When New Roles And Capabilities Are Needed New roles usually come hand-in-hand with new capabilities. Usually, we first create a set of new capabilities, which are held by the admin (and a new role, as well). Let’s look at an example. If you have a large website, chances are you have a marketing team. This team doesn’t need to be able to edit and publish posts, but they do need access to advertising stats, trending search topics, etc. Perhaps it would also be beneficial to allow them to manage categories and comments for SEO purposes and customer satisfaction, respectively. Our first order of business is to look at the advertising stats and trending searches. This is not something WordPress offers by default, so let’s presume it has been built as a plugin. Let’s create a capability called view_stats. This will be given to the marketing people and of course to the admin. Other users — like editors and authors — do not need to see the stats (which, in many case, is sensitive data) so they will not receive this capability. Our second task is to decide what other capabilities our marketing people need. Based on our assumptions above, they will need the read, manage_links, manage_categories and moderate_comments capabilities. Lastly, we would use some coding chops to create a new role named “Marketing”, and assign the above-mentioned capabilities to it. Once that has been done, we are able to appoint the “Marketing” role to any user within the “Add User” and “Edit Profile” page. Basic WordPress Functions To manage roles and capabilities effectively, all you need associated capabilities If you would like to view the documentation for these functions, you can take a look at the Roles and Capabilities section in the Codex. Implementing New Roles And Capabilities The full implementation of roles and capabilities requires more than just the five functions I mentioned. We’ll also need to use functions for checking capabilities, we’ll need features which are only available to certain roles, and so on. Adding the Roles and Capabilities $marketing = add_role( 'marketing', 'Marketing', array( 'read' => true, 'manage_links', 'manage_categories', 'moderate_comments', 'view-stats' )); $role = get_role( 'administrator' ); $role->add_cap( 'view-stats' ); The function above will add our new “Marketing” role. It adds four capabilities which are already included in WordPress and a fifth one which is a custom capability. As you can see, no special coding is required to add custom caps — just include them in the array when adding the role. Once we add the new role, we make sure to add our new capability to the admin role as well. Note that the add_role() fuction returns a WP_Role object on success or NULL if the role already exists. Since the above code will be used in a plugin, the best way to use it is to utilize hooks, and to add the capabilities to the administrator only when we first create the role. add_action( ‘admin_init’, ‘add_custom_role’ ); function add_custom_role() { $marketing = add_role( ‘marketing’, ‘Marketing’, array( ‘read’ => true, ‘manage_links’, ‘manage_categories’, ‘moderate_comments’, ‘view-stats’ )); if( null !== $marketing ) { $role = get_role( ‘administrator’ ); $role->add_cap( ‘view-stats’ ); } } Checking for Capabilities and Roles Once we have our system in place we can check for capabilities and roles. This allows us to make sure that only users with the right capabilities can do certain things. add_action('admin_menu', 'register_stats_Page'); function register_stats_Page() { add_menu_page( 'Marketing Info', 'Marketing Info', 'view_stats', 'marketing-info', 'show_marketing_info' ); } function show_marketing_info(){ if( ! current_user_can( 'view-stats' ) ) { die( 'You are not allowed to view this page' ); } echo " This is the marketing stats page "; } The code above will add a top level menu to the admin area for our marketing stats page. The add_menu_page() function allows you to specify a capability (the third argument) which determines whether or not the menu is shown. Note that even though the add_menu_page() function itself takes the capability as an argument, we still need to check for it in the function which displays the content of the page. This leads us into our next two functions nicely: current_user_can() and user_can(). If you want to check whether or not the currently logged-in user has a specific capability, simply use the following format: if( current_user_can( 'any_capability' ) ) { // The user has the capability } else { // The user does not have the capability } If you would like to check whether or not an arbitrary user has a specific capability, use the user_can() function: if( user_can( 45, 'any_capability' ) ) { // The user with the ID of 45 has the capability } else { // The user with the ID of 45 does not have the capability } In addition to capabilities, both functions can check for roles as well: if( current_user_can( 'marketing' ) ) { // The current user is a marketing person } else { // The current user is not a marketing person } Creating User Types While not an official term, I call different roles “user types” when they require many different front-end changes. Say you’re building a theme marketplace, for example. Apart from the administrator who oversees all (and the authors who would write in the company blog) you also have buyers and sellers. These are both registered users of your website but would require somewhat differing front-ends. The profile page of a buyer would most likely show the number of things he’s bought, how frequently he buys stuff and when logged in, he would see a list of downloads. A seller’s profile would be a portfolio of his work, his most popular items, and so on. When logged in he should probably see sales statistics. Figuring Out Roles and Permissions To make things simple, users either buy or sell, and cannot do both. Let’s create the two roles with a set of capabilities (off the top of my head…): add_action( 'admin_init', 'add_custom_roles' ); function add_custom_roles() { $seller = add_role( 'seller', 'Seller', array( 'read' => true, 'delete_posts' => true, 'edit_posts' => true, 'delete_published_posts' => true, 'publish_posts' => true, 'upload_files' => true, 'edit_published_posts' => true, 'view_sales' => true, 'moderate_comments' => true )); $role = get_role( 'administrator' ); if( null !== $seller ) { $role->add_cap( 'view_sales' ); $role->add_cap( 'view_others_sales' ); } $buyer = add_role( 'buyer', 'Buyer', array( 'read' => true, 'add_payment' => true, 'download_resources' => true )); if( null !== $buyer ) { $role->add_cap( 'add_payment' ); $role->add_cap( 'add_others_payment' ); $role->add_cap( 'download_resources' ); $role->add_cap( 'download_others_resources' ); } } The idea here is that sellers are like authors, who will store sellable items as posts. Therefore, they receive the capabilities of an author. In addition, they can also view sales stats for their own items. In addition to viewing items, buyers can also add payments and download files related to items that they buy. Admins receive all these permissions, but in addition, they may view the statistics of any other user, add payments to any account, and may download resources for any item. Frontend Implementation From here on out it’s a matter of using our role and capability checking functions to determine who sees what. Let’s take a look at a couple of examples: Listing Users If we list all the users on the website, we could show the items bought for the buyers and the items sold for the sellers. Here’s how: <!-- User Data comes from a WordPress user object stored in the $user variable --> <div class='user'> <div class='row'> <div class='threecol'> <?php echo get_avatar( $user->ID, 120 ) ?> </div> <div class='sixcol'> <h1><?php echo $user->display_name ?></h1> <?php echo $user->description ?> </div> <div class='threecol last'> <?php if( user_can( $user->ID, 'buyer' ) ) : ?> <div class='counter'> <span class='number'><?php get_purchases_count( $user->ID ) ?></span> <span class='text'>purchases</span> </div> <?php else ?> <div class='counter'> <span class='number'><?php get_sales_count( $user->ID ) ?></span> <span class='text'>sales</span> </div> <?php endif ?> </div> </div> </div> Most of this is plain old HTML, but the important thing to take away is how the counter is created. Note that the get_purchases_count() and get_sales_count() functions are fictitious (they are not part of WordPress, of course). Of course, in a real world example we would need to make sure that only buyers and sellers are listed, and we might need to create a separate template for them completely (if the differences are numerous enough). foreach( $users as $user_id ) { $user = get_userdata( $user_id ); if( user_can( $user->ID, 'buyer' ) ) { get_template_part( 'userlist', 'buyer' ); } else { get_template_part( 'userlist', 'seller' ); } } When looping through our users above, either userlist-buyer.php or userlist-seller.php will be used, depending on the user’s role. Conclusion User roles and capabilities are extremely powerful tools in WordPress. Whenever you build a plugin which requires the implementation of special user-based functions, you should consider using them. Apart from making your life easier (and more manageable) it will make the processes on your websites easier to follow and easier to backtrack any problems. (jvb) (il) (jc) (js)
https://www.smashingmagazine.com/2012/10/utilizing-user-roles-wordpress/
CC-MAIN-2018-51
en
refinedweb
Commands that don’t explicitly define a return value, but do have output arguments, have an implicit return value: an ISIVTCollection. The ISIVTCollection is a “special type of collection”, but it may help to think of it as [something like] a dictionary. An ISVTCollection holds a collection of key, value pairs (like a Python dictionary). You use the key names, like “Value”, to extract the associated value. For example, the PickElement command has three “output arguments”; that is, parameters that return some value. These output arguments are PickedElement, ButtonPressed, and ModifierPressed. So PickElement returns an ISIVTCollection collection with three key-value pairs. The keys are “PickedElement”, “ButtonPressed”, and “ModifierPressed”. You can use the ISIVTCollection.Value method to get the value of a key: from sipyutils import si # win32com.client.Dispatch('XSI.Application') from sipyutils import log # LogMessage from sipyutils import C # win32com.client.constants si = si() vtcol = si.PickElement(C.siObjectFilter, "Pick deformer") button = vtcol.Value( "ButtonPressed" ) modkey = vtcol.Value( "ModifierPressed" ) o = vtcol.Value( "PickedElement" ) There’s also an Item property, but like most Item properties it fails in Python: vtcol = si.PickElement(C.siObjectFilter, "Pick deformer") o = vtcol.Item( "PickedElement" ) # ERROR : Traceback (most recent call last): # File "<Script Block >", line 7, in <module> # o = vtcol.Item( "PickedElement" ) # TypeError: 'NoneType' object is not callable Fortunately, Item is the default property, so you can omit it: vtcol = si.PickElement(C.siObjectFilter, "Pick deformer") o = vtcol( "PickedElement" ) You can also use an integer index instead of the key string. It just so happens that ISIVTCollections are sorted by key name (case insensitive), so you can work out which index to use. # ISIVTCollection is sorted by name (case insensitive) # 0 = ButtonPressed # 1 = ModifierPressed # 2 = PickedElement o = si.PickElement(C.siObjectFilter, "Pick deformer")(2) print ( si.ClassName(o) ) # X3DObject Sorry but i don’t understand the last part. It doesn’t work for me. I need to write this to make it work : o = si.PickElement(C.siObjectFilter, “Pick last”) print ( si.ClassName(o(2)) ) Did I forget something ? Hi There was a typo in my post, now fixed. I had ClassName(o1) instead of ClassName(o). But that updated code does work, I just checked it. Thanks Ok, It works for me too. Thanks.
https://xsisupport.com/2013/04/25/isivtcollections-output-arguments-and-implicit-return-values/
CC-MAIN-2018-51
en
refinedweb
Last week we looked at creating our first Ruby on Rails Model. Rails Models are backed by a database table. So for example, if you had an Article Model you would also need an articles database table. In order to control our database schema and to make deployment and changes easier we can use migration files. A migration file is a set of instructions that will be run against your database. This means if you need to make a change to your database, you can capture that change within a migration file so it can be repeated by other installations of your project. This will be extremely useful when it comes to deploying your application or working with other developers. In today’s tutorial we will be taking a look at Rails migrations. The purpose of Migrations In order to fully understand migrations and why you need them, first it’s important to understand their purpose. The majority of all web applications will need a database. A database is used for storing the data of the web application. So for example, that might be blog posts or your user’s details. The database is made up of tables that store your data. Typically you would run SQL statements to create or modify the tables and columns of a database. Rails introduces a specific Domain-specific language for writing instructions for how a database should be created. This saves you from writing SQL statements. A migration is a file that contains a specific set of instructions for the database. For example, last week we created a migration file to create the articles table with columns for title and body. When this migration file is run, Rails will be able to make the changes to the database and automatically create the table for us. Over time as the database evolves, the migration files will act as a versioned history of how the database has changed. This means you will be able to recreate the database from the set of instruction files. What are the benefits of using Migrations? There are a number of benefits to using Migrations as part of your Rails project. Firstly, your application is going to be pretty useless without a database. When someone grabs a copy of the code they need to set up a local version of the database. Instead of passing around a file of SQL statements, the Rails project will automatically have everything it needs to recreate the database. Secondly, when working with other developers, you will likely face a situation where one developer needs to modify the database in order to implement the feature she is working on. When that developer pushes her code and you pull down the changes, your application will be broke without also modifying the database. The migration file will be able to automatically make the change so you don’t have to try and reverse engineer what has changed. And thirdly, when you deploy your application to a production server you need a way for the production database to be created or updated. Updating anything in production manually can be a bit hairy as any human error can potentially be disastrous. By using database migrations we can completely remove the human element of updating a production database. What do Migration files look like? Last week we ran the Rails generator to create a new Article Model. bin/rails g model Article title:string body:text [/bash] As part of this generation process, Rails also created a migration file: ```ruby class CreateArticles < ActiveRecord::Migration def change create_table :articles do |t| t.string :title t.text :body t.timestamps null: false end end end This migration will create a new database table called articles. We’ve also specified that the table should have a title column of type string and a body column of type text. Rails will also add a primary key column of id as well as created_at and updated_at timestamps by default. Understanding the Migration file The migration file is essentially just a regular Ruby class that is run during the migration process: class CreateArticles < ActiveRecord::Migration end The CreateArticles class inherits from the ActiveRecord::Migration class. In the change method we have the instructions of what the migration should do: def change create_table :articles do |t| t.string :title t.text :body t.timestamps null: false end end In this example we are calling the create_table method with an argument of :articles for the table name. We also pass a block to the method that allows us to specify the names and types of the columns. Creating Migration files In the previous example we saw how we can automatically generate a migration file using the model generator command. However, you will need to generate a migration file whenever you want to alter an existing table or create a join table between two tables. So how do you generate a standalone migration file? Rails provides a migration generator command that looks something like this: bin/rails g migration AddSlugToArticles [/bash] This should create the following migration file ```ruby class AddSlugToArticles < ActiveRecord::Migration def change end end You can also add the columns you want to add to the table to the generator command: bin/rails g migration AddSlugToArticles slug:string [/bash] This will create the following migration file: ```ruby class AddSlugToArticles < ActiveRecord::Migration def change add_column :articles, :slug, :string end end As you can see in this example, we are calling the add_column method and passing the table name :articles, the column name :slug and the type :string. The migration will automatically created with the add_column method if your migration begins with Add. If you wanted to remove a column from a table, you would generate a migration that begins with Remove: bin/rails g migration RemoveSlugFromArticles slug:string [/bash] This would generate the following migration: ```ruby class RemoveSlugFromArticles < ActiveRecord::Migration def change remove_column :articles, :slug, :string end end If you would like to create a new table you should run a generator command where the migration begins with Create. For example: bin/rails g migration CreateComments name:string comment:text [/bash] Running the command above would generate the following migration: ```ruby class CreateComments < ActiveRecord::Migration def change create_table :comments do |t| t.string :name t.text :comment end end end Running Migrations Once you have generated your migration files and you are happy with the structure of the tables that you have defined, it’s time to run the migrations to update the database. To run the migration files you can use the following command: bin/rake db:migrate [/bash] This will work through each of your migration files in the order of which they were generated and perform that action on the database. If everything goes smoothly you should see the output from the command console telling you which migrations were run. If you try to run the `db:migrate` command again you will notice that nothing will happen. Rails keeps track of which migrations have already been run. When you run the `db:migrate` command, Rails will only run the migrations that have yet to be run. ## Rolling back migrations If you have made a mistake and you want to roll back the previous migration you can do so using the following command: ```bash bin/rake db:rollback [/bash] For simple migrations Rails can automatically revert the changes from the `change` method. If you need to rollback through multiple migrations you can add the `STEP` parameter to the command: ```bash bin/rake db:rollback STEP=2 [/bash] This will rollback through the previous 2 migrations. ## Conclusion Migrations are an important part of a modern web application framework. Nearly all web applications have a database, and so it’s pretty important that you have the right tooling. Migrations make it easy to create database tables. You will no longer have to remember the syntax for creating tables or defining column. If you were to switch databases, you will also find that the migrations are agnostic to the type of database you are using. However, really the biggest benefit of migrations is how easy it is to create a new database, or modify an existing one. When working on a team of developers this is going to be essential!
https://www.culttt.com/2015/10/07/understanding-ruby-on-rails-migrations/
CC-MAIN-2018-51
en
refinedweb
------------ () def as_SmoothNearest(): SmoothNearest.as_SmoothNearest(5) # number 5 limits no of influnces per vertex.
https://www.highend3d.com/maya/script/free-as_smoothnearest-a-magic-feature-from-hyper-skinning-system-for-maya
CC-MAIN-2018-51
en
refinedweb
An Odoo launcher that discovers addons automatically Project description Odoo server startup scripts that discover Odoo addons automatically without the need of the --addons-path option. They work by looking at addons in the odoo_addons namespace package. This is the basic building block to package and distribute Odoo addons using standard python infrastructure (ie setuptools, pip, wheel, and pypi). The following thin wrappers around official Odoo startup scripts are provided: - odoo-autodiscover.py is the equivalent of odoo.py - openerp-server-autodiscover is the equivalent of openerp-server - odoo-server-autodiscover is an alias for openerp-server-autodiscover These scripts have exactly the same behaviour and options as their official Odoo counterparts, except they look for additional addons by examining all distributions providing the odoo_addons namespace package. How to install - create a virtualenv and make sure you have a recent version of pip (by running pip install -U pip or using get-pip.py) - install Odoo with the standard Odoo installation procedure - make sure Odoo is installed (the following commands must work: python -c "import openerp", odoo.py and openerp-server, and pip list must show the odoo package) - install this package (pip install odoo-autodiscover) How to use - create or install odoo addons in the odoo_addons namespace package possibly with the help of the setuptools-odoo package. - run odoo with openerp-server-autodiscover or odoo-autodiscover.py and notice the addons path is constructued automatically Complete example The following commands install Odoo 8.0 nightly, then install base_import_async pulling all required dependencies automatically (ie connector). It uses pre-built wheel packages at, so the procedure is extremely fast and you don’t need any compiler nor build dependencies. # create and activate a virtualenv virtualenv venv . ./venv/bin/activate # install Odoo 8.0 nightly pip install -r pip install # install odoo-autodiscover pip install odoo-autodiscover # install base_import_async from wheelhouse.acsone.eu pip install odoo-addon-base_import_async \ --find-links= --no-index # start odoo openerp-server-autodiscover Should you like to have an Odoo shell, simply pip install the module: pip install odoo-addon-shell \ --find-links= --no-index odoo-autodiscover.py shell To view addon packages that are installed in your virtualenv, simply use pip list | grep odoo-addon- (note official addons are part of the odoo package). Technical note Since it’s not possible to make openerp.addons a namespace package (because openerp/__init__.py contains code), we use a pseudo-package named odoo_addons for the sole purpose of discovering addons installed with setuptools in that namespace. odoo_addons is not intended to be imported as the Odoo import hook will make sure all addons can be imported from openerp.addons as usual. See for more information about namespace packages. See to follow progress with making openerp.addons a namespace package, which will hopefully make this package obsolete in the future. Useful links - pypi page: - code repository: - report issues at: - see also setuptools-odoo: Credits Author: Many thanks to Daniel Reis who cleared the path, and Laurent Mignon who convinced me it was possible to do it using standard Python setup tools and had the idea of the odoo_addons namespace package. Changes 1.0.1 (2015-12-30) - [FIX] odoo-autodiscover.py: more reliable way to discover and import the official odoo.py script, so it will now work when Odoo is installed from the deb package 1.0.0 (2015-12-28) - initial stable release Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/odoo-autodiscover/1.0.1.post1/
CC-MAIN-2018-51
en
refinedweb
Building A Slack Bot With Elixir Part 1 Having started out as a Rails shop, Bendyworks is always on the lookout for new tools that give use the same combination of power and developer friendliness. We've been watching Elixir grow in popularity in the last few years, and are attracted to its marriage of functional programming and concurrency capabilities with a friendly Ruby-ish syntax. We've been experimenting with it on personal projects for a while now, and have enjoyed the experience enough that we've begun to choose it over Ruby/Rails for developing some of our internal tools. In this series of posts, I'll cover one of these internal applications; a basic Slack bot. In this case it'll only be responsible for fetching weather forecasts, but as many developers are doubtless familiar, Slack's API is becoming something of a de facto tool for building automated services that a team can access. This first post will cover the basics of setting up and using Cowboy and Plug, two important tools in the Elixir/Erlang web services arsenal. The second part will cover implementing a Slack interface to handle the common task of scraping a URL, parsing the result, and sending a message back to Slack. Setup To follow along with this tutorial, you'll first need to install Elixir. Once that's finished, you can use mix, Elixir's integrated build tool, to create a new Elixir project called "weatherbot" by running mix new weatherbot Dependencies Elixir uses hex as its package manager, which is tightly integrated into the boilerplate project generated by mix new. Adding new dependencies is as easy as modifying the mix.exs config file and running mix deps.get. To begin with, we'll just be using some simple web server libraries. Cowboy Cowboy describes itself as a "Small, fast, modern HTTP server for Erlang/OTP". It aims to be fully compliant with the modern web, and supports standard RESTful endpoints as well as Websockets, HTTP/2, streaming, and a laundry list of other features. It is also used by the Phoenix Elixir framework as its base HTTP server. Plug Plug is a middleware library similar in some ways to Rack, only even more modular and easy to use. While Cowboy was originally developed for Erlang, Plug is pure Elixir. Unsurprisingly, it is also a core part of the Phoenix framework. However, it also provides its own router implementation that will suffice for a simple server of the sort we're building. Installing Plug and Cowboy Plug and Cowboy can be installed be modifying the deps function of the mix.exs configuration to be: defp deps do [{:cowboy, "~> 1.0.0"}, {:plug, "~> 1.0"}] end And running mix deps.get Hello HTTP Server To get started, lets just build a simple Plug routes file and an accompanying test. First create a new folder lib/weatherbot, and create a file in it called router.ex with the following code: defmodule Weatherbot.Router do use Plug.Router use Plug.Debugger, otp_app: :weatherbot plug Plug.Logger plug Plug.Parsers, parsers: [:json, :urlencoded] plug :match plug :dispatch post "/webhook" do send_resp(conn, 200, "ohai") end match _ do send_resp(conn, 404, "not_found") end end Note that Plug.Logger and Plug.Debugger were not strictly necessary to include, but they'll help us see what's going on in our application and diagnose anything that goes wrong. You'll also have to modify the main project file ( lib/weatherbot.ex) to run Cowboy using the Plug router we've defined. defmodule Weatherbot do use Application def start(_type, _args) do children = [ Plug.Adapters.Cowboy.child_spec(:http, Weatherbot.Router, [], port: 4000) ] Supervisor.start_link(children, strategy: :one_for_one) end end The use Application call and use of a Supervisor are all part of OTP, which, if you're not familiar, is somewhere between a standard library and a general framework for Erlang and Elixir. It's far too large a topic to cover in this post, but as a short gloss for what's going on in weatherbot.ex; we're defining our project to be an OTP "Application", and using a concurrency management abstraction called a "Supervisor" to start and watch over the Cowboy server process. Finally, we'll need to tell Elixir to use the Weatherbot module as the basis of our application. Change the application function in mix.exs to the following: def application do [ extra_applications: [:logger], mod: {Weatherbot, []} ] end To try it out, run the app with mix run --no-halt, and use your HTTP client of choice to make a POST request to localhost:4000/webhook . You should get "ohai" as a response. Testing Mix comes bundled with ExUnit, a unit testing framework for Elixir. It's syntax, structure, and output should all feel pretty familiar if you're used to Rspec, unittest, jasmine, and other popular unit testing frameworks. We can quickly write an ExUnit test for the code we have by leveraging the Plug.Test module. Create the file test/weatherbot_test.exs with the following content: defmodule WeatherbotTest do use ExUnit.Case use Plug.Test doctest Weatherbot alias Weatherbot.Router @opts Router.init([]) test "responds to greeting" do conn = conn(:post, "/webhook", "") |> Router.call(@opts) assert conn.state == :sent assert conn.status == 200 assert conn.resp_body == "ohai" end end Which can be run with mix test. Next Steps This part of the tutorial covered the basics of setting up and defining routes for an Elixir web server. We now have a good base configuration, which we'll use in Part 2 to build a more complex service that can receive, process, and respond to input from Slack. Thanks to @geeksam for noticing and pointing out a few mistakes in an earlier version of this post.
https://bendyworks.com/blog/building-a-slackbot-in-elixir-part-1/index
CC-MAIN-2018-51
en
refinedweb
This action might not be possible to undo. Are you sure you want to continue? by Wikibooks contributors the open-content textbooks collection Developed on Wikibooks, © Copyright 2004–2007,". Principal authors: Rod A. Smith (C) · Jonas Nordlund (C) · Jlenthe (C) · Nercury (C) · Ripper234 (C) Cover: C♯ musical note, by Mothmolevna (See naming) (GFDL) The current version of this Wikibook may be found at: Contents INTRODUCTION..........................................................................................04 Foreword............................................................................................................04 Getting Started..................................................................................................06 LANGUAGE BASICS.....................................................................................08 Syntax................................................................................................................08 Variables............................................................................................................11 Operators...........................................................................................................17 Data Structures.................................................................................................23 Control...............................................................................................................25 Exceptions.........................................................................................................31 CLASSES........................................................................................... ......33 . Namespaces.......................................................................................................33 Classes...............................................................................................................35 Encapsulation....................................................................................................40 THE .NET FRAMEWORK.............................................................................42 .NET Framework Overview...............................................................................42 Console Programming.......................................................................................44 Windows Forms.................................................................................................46 ADVANCED OBJECT-ORIENATION CONCEPTS......................................................47 Inheritance........................................................................................................47 Interfaces...........................................................................................................49 Delegates and Events........................................................................................51 Abstract Classes................................................................................................54 Partial Classes...................................................................................................55 Generics.............................................................................................................56 Object Lifetime..................................................................................................59 ABOUT THE BOOK.............................................................................. .........61 History & Document Notes...............................................................................61 Authors..............................................................................................................62 GNU Free Documentation License....................................................................63 and projects with strict reliability requirements. Anders Hejlsberg as Chief Engineer. library. Similar to Java. Developers can thus write part of an application in C# and another part in another . the language is open to implementation by other parties. Because of the similarities between C# and the C family of languages.NET languages rely on an implementation of the virtual machine specified in the Common Language Infrastructure. multiple types of polymorphism. Its strong typing helps to prevent many programming errors that are common in weakly typed languages. a developer with a background in object-oriented languages like C++ may find C# structure and syntax intuitive. it has features such as garbage collection that allow beginners to become proficient in C# more quickly than in C or C++. handles object references.NET initiative and subsequently opened its specification via the ECMA.g. which provides a large set of classes. projects implemented by individuals or large or small teams.NET Framework API. including ones for encryption. and separation of interfaces from implementations.NET languages). and graphics. combined with its powerful development tools. VB .NET). as well as Java. keeping the tools. That virtual machine manages memory. comes with an extensive class library. multi-platform support. and performs Just-In-Time (JIT) compiling of Common Intermediate Language code. Internet applications. The virtual machine makes C# programs safer 4 | C# Programming . C# and other . like Microsoft's Common Language Runtime (CLR). created C# as part of their . Other implementations include Mono and DotGNU.NET language (e.Chapter 1 1 F OREWORD live version · discussion · edit chapter · comment · report an error C # (pronounced "See Sharp") is a multi-purpose computer programming language suitable for all development needs. and supports exception handling. A large part of the power of C# (as with other . Testing frameworks such as NUnit make C# amenable to test-driven development and thus a good language for use with Extreme Programming (XP). Standard Microsoft. comes with the common . it is object-oriented. make C# a good choice for many types of software development projects: rapid application development projects. Introduction Although C# is derived from the C programming language. Thus. and objectoriented development model while only having to learn the new language syntax. Those features. TCP/IP socket programming. and generics. 0 version of C# includes such new features as generics. live version · discussion · edit chapter · comment · report an error Wikibooks | 5 .Foreword than those that must manage their own memory and is one of the reasons . Microsoft submitted C# to the ECMA standards group mid-2000. codenamed "Cool". Se microsoft-watch and hitmil. More like Java than C and C++.0 was released in late-2005 as part of Microsoft's development suite. partial classes. which could otherwise allow software bugs to corrupt system memory and force the operating system to halt the program forcibly with nondescript error messages. History Microsoft's original plan was to create a rival to Java. The 2. named J++ but this was abandoned to create C#. C# discourages explicit use of pointers. and iterators. C# 2.NET language code is referred to as managed code. Visual Studio 2005. Net project page. or the C:\WINDOWS\Microsoft. For Linux.NET can be compiled in Visual Studio 2002 and 2003 (only supports the .Chapter 2 2 G ETTING S TARTED live version · discussion · edit chapter · comment · report an error T o compile your first C# application.Net Framework SDK installation places the Visual C# . a good compiler is cscc which can be downloaded for free from the DotGNU Portable. If the default Windows directory (the directory where Windows or WinNT is installed) is C:\WINDOWS. there are plenty of editors that are available.NET code.NET programs with a simple text editor.cs: using System. you will need a copy of a . Currently C#. Microsoft For Windows. The Visual Studio C# Express Edition can be downloaded and used for free from Microsoft's website.NET Framework SDK installed on your PC.0 and earlier versions with some tweaking). but it should be noted that this requires you to compile the code yourself.1. the .NET frameworks available: Microsoft's and Mono's.0. For writing C#.0.NET\Framework\v2. Microsoft offers five Visual Studio editions.NET Compiler (csc) in the C:\WINDOWS\Microsoft.NET\Framework\v1.NET Framework 2.exe or mcs. Linux. or other Operating Systems. an installer can be downloaded from the Mono website.Net Framework SDK can be downloaded from Microsoft's .1) and Visual Studio 2005 (supports the .0 and 1.NET Framework Developer Center.3705 directory for version 1. The code below will demonstrate a C# program written in a simple text editor. namespace MyConsoleApplication { class MyFirstClass 6 | C# Programming . four of which cost money. The compiled programs can then be run with ilrun.NET\Framework\v1.50727 directory for version 2. There are two .NET Framework version 1. the C:\WINDOWS\Microsoft. Mono For Windows. the . It's entirely possible to write C#.4322 directory for version 1. If you are working on Windows it is a good idea to add the path to the folders that contain cs. Start by saving the following code to a text file called hello.0.0.exe to the Path environment variable so that you do not need to type the full path each time you want to compile.1. Microsoft offers a wide range of code editing programs under the Visual Studio line that offer syntax highlighting as well as compiling and debugging capabilities. Console.exe <name>. Alternatively. run C:\WINDOWS\Microsoft. Note that the example above includes the System namespace via the using keyword. System.0.Net 2.exe". Note that much of the power of the language comes from the classes provided with the . On Linux.cs. even though that is for debugging. That inclusion allows direct references to any member of the System namespace without specifying its fully qualified name.exe. compile with "cscc -o <name>.ReadLine()."). The second call to that method shortens the reference to the Console class by taking advantage of the fact that the System namespace is included (with using System).Net framework.WriteLine("Hello. in Visual C# express.cs • For Mono run mcs hello.exe: • • On Windows. } } } To compile hello.cs.WriteLine("Hello.cs". Console. run the following from the command line: • For standard Microsoft installations of . • For users of cscc.exe.exe will produce the following output: Hello.50727\csc. World! The program will then wait for you to strike 'enter' before returning to the command prompt.0. which are not part of the C# language syntax Wikibooks | 7 . Console.exe or "ilrun <name>."). C# is a fully object-oriented language. you could just hit F5 or the green play button to run the code.Getting Started { static void Main(string[] args) { System.exe hello.Console. The following command will run hello.Console. use mono hello. The first call to the WriteLine method of the Console class uses a fully qualified reference. Running hello.WriteLine("World!"). Doing so will produce hello. use hello.WriteLine("World!").NET\Framework\v2. The following sections explain the syntax of the C# language as a beginner's course for programming in the language. live version · discussion · edit chapter · comment · report an error 8 | C# Programming .Chapter 2 per se. define an expression. and the code within a corresponding catch block. Wikibooks | 9 . control the flow of execution of other statements.SampleMethodReturningInteger(i). Statements can be grouped into comma-separated statement lists or braceenclosed statement blocks. Examples: int sampleVariable. whose detailed behaviors are defined by their statements. Statements The basic unit of execution in a C# program is the statement. SampleClass sampleObject = new SampleClass(). The object-oriented nature of C# requires the high-level structure of a C# program to be defined in terms of classes. Code blocks can be nested and often appear as the bodies of methods. perform a simple action by calling a method. // // // // // declaring a variable assigning a value constructing a new object calling an instance method calling a static method // executing a "for" loop with an embedded "if" statement for(int i = 0. i < upperLimit. code blocks serve to limit the scope of variables defined within them. i++) { if (SampleClass. Statements are usually terminated by a semicolon.SampleInstanceMethod(). } } Statement blocks A series of statements surrounded by curly braces form a block of code. or field. the protected statements of a try block. create an object. sampleVariable = 5.Syntax 3 S YNTAX live version · discussion · edit chapter · comment · report an error C # syntax looks quite similar to the syntax of Java because both inherit much of their syntax from C and C++. try { // Here is a code block protected by the "try" statement. private void MyMethod() { // This block of code is the body of "MyMethod()" CallSomeOtherMethod().SampleStaticMethod(). A statement can declare a variable. property. or assign a value to a variable. SampleClass. Among other purposes.SampleStaticMethodReturningBoolean(i)) { sum += sampleObject. sampleObject. } // "variableWithLimitedScope" is not accessible here. Multiple-line comments Comments can span multiple lines by using the multiple-line comment style. //************************** 10 | C# Programming .Chapter 3 int variableWithLimitedScope. Avoid using butterfly style comments. It allows multiple lines. either. Single-line comments. } catch(Exception) { // Here is yet another code block. end at the first endof-line following the "//" comment marker. Three styles of comments are allowed in C#: Single-line comments The "//" character sequence marks the following text as a single-line comment. */ XML Documentation-line comments This comment is used to generate XML documentation. The text between those multi-line comment markers is the comment. /// <summary> documentation here </summary> This is the most recommended type. //This style of a comment is restricted to one line. // "variableWithLimitedScope" is not accessible here. /* This is another style of a comment. CallYetAnotherMethod(). Comments Comments allow inline documentation of source code. // "variableWithLimitedScope" is accessible in this code block. Each line of the comment begins with "///". The C# compiler ignores comments. as one would expect. For example: //************************** // Butterfly style documentation comments like this are not recommended. } // Here ends the code block for the body of "MyMethod()". Such comments start with "/*" and end with "*/". writeline("Hello"). live version · discussion · edit chapter · comment · report an error Wikibooks | 11 . The variables myInteger and MyInteger below are distinct because C# is case-sensitive: int myInteger = 3. The following code will generate a compiler error (unless a custom class or variable named console has a method named writeline()): // Compiler error! console. int MyInteger = 5.WriteLine("Hello"). The following corrected code compiles as expected because it uses the correct case: Console. including its variable and method names.Syntax Case sensitivity C# is case-sensitive. for example. name ). parameters. Fields can also be associated with their class by making them constants (const).e. Fields. i. Parameter Parameters are variables associated with a method. a variable binds an object (in the general sense of the term. a specific value) to an identifier (the variable's name) so that the object can be accessed later. Fields Fields. which requires a declaration assignment of a constant value and prevents subsequent changes to the field. Local variables Like fields. Local Variables. They thus have both a scope and an extent of the method or statement block that declares them. 12 | C# Programming . More technically. or private (from most visible to least visible). Only values whose types are compatible with the variable's declared type can be bound to (stored in) the variable. and local variables.WriteLine ( "Good morning. while non-constant local variables are stored (or referenced from) the stack. Console. protected internal.Chapter 4 4 V ARIABLES live version · discussion · edit chapter · comment · report an error V ariables are used to store values. {0}" . protected. Each field has a visibility of public. while a static variable. sometimes called class-level variables. is a field associated with the type itself.ReadLine(). Variables can. store the value of user input: string name = Console. Each variable is declared with an explicit type. internal. An instance variable is a field associated with an instance of the class or structure. are variables associated with classes or structures. local variables can optionally be constant (const). and Parameters C# supports several program elements corresponding to the general programming concept of variable: fields. Constant local variables are stored in the assembly data region. declared with the static keyword. NET languages. string) are passed in "by value" while reference types (objects) are passed in "by reference.NET framework remain the same. Although the names of the aliases vary between .Int32 i = 42. except that it is bound before the method call and it need not be assigned by the method.NET framework. Such a variable is considered by the compiler to be unbound upon method entry. If a method signature includes one. A params parameter represents a variable number of parameters. A reference parameter is similar to an out parameter. } public void EquivalentCodeWithoutAlias() { System. Types Each type in C# is either a value type or a reference type. so that changes to the parameter by the method do not affect the value of the callee's variable. the params argument must be the last argument in the signature. objects created in assemblies written in other languages of the .NET Framework can be bound to C# variables of any type to which the value can be converted. The following illustrates the cross-language compatibility of types by comparing C# code with the equivalent Visual Basic . double. each integral C# type is actually an alias for a corresponding type in the . Integral types Because the type system in C# is unified with other languages that are CLIcompliant. thus it is illegal to reference an out parameter before assigning it a value. so that changes to the variables will affect the value of the callee's variable. or passed in by reference. per the conversion rules below. } Wikibooks | 13 .Variables An in parameter may either have its value passed in from the callee to the method's environment. thus changes to the variable's value within the method's environment directly affect the value from the callee's environment." An out parameter does not have its value copied. C# has several predefined ("built-in") types and allows for declaration of custom value types and reference types. Value types (int. the underlying types in the . It also must be assigned by the method in each valid (non-exceptional) code path through the method in order for the method to compile.NET code: // C# public void UsingCSharpTypeAlias() { int i = 42. Thus. and string. // The value of i is now the integer 97.NET Framework. a long is only guaranteed to be at least as large as an int. an alias for the System. from which all other reference types derive.Int.Int32 type implements a ToString() method to convert the value of an integer to its string representation.e. and is implemented with different sizes by different compilers.NET Framework type names. the System.NET Framework types follow: 14 | C# Programming . As reference types.Chapter 4 ' Visual Basic . int unboxedInteger = (int)boxedInteger.Object class. C#'s int type exposes that method: int i = 97.g. For example.Int32 = 42 End Sub Using the language-specific type aliases is often considered more readable than using the fully-qualified . e.NET Framework types. The predefined C# type aliases expose the methods of the underlying . any class) are exempt from the consistent size requirement. Likewise.ToString(). each an alias to a corresponding value type in the System namespace of the . may vary by platform. where. since the . an alias for the System. The fact that each C# type corresponds to a type in the unified type system gives each value type a consistent size across platforms and compilers. the size of reference types like System. The unified type system is enhanced by the ability to convert value types to reference types (boxing) and likewise to convert certain reference types to their corresponding value types (unboxing): object boxedInteger = 97. There are two predefined reference types: object. Fortunately.IntPtr. string s = i.Int32 type implements the Parse() method. That is. // The value of s is now the string "97".Parse(s). int i = int. The built-in C# type aliases and their equivalent . C# likewise has several integral value types. That consistency is an important distinction from other languages such as C. which can therefore be accessed via C#'s int type: string s = "97". as opposed to value types like System.NET Public Sub UsingVisualBasicTypeAlias() Dim i As Integer = 42 End Sub Public Sub EquivalentCodeWithoutAlias() Dim i As System.String class. there is rarely a need to know the actual size of a reference type. variables of types derived from object (i.NET Framework's System. Single System.String 32/64 16 * length Wikibooks | 15 .0 x 10-28 to 7.Variables Integers C# Alias sbyte byte short ushort char int uint long ulong .535 -2.294.NET Type System.615 Range System.036.808 to 9.775.446.036.Int32 8 8 16 16 32 Size (bits) -128 to 127 0 to 255 -32.372.7 x 10308 1.709.Char System.854.NET Type System.9 x 1028 decimal System.NET Type Size (bits) Range true or false.295 -9.5 x 10-45 to 3.4 x 1038 5.Int64 64 System.648 to 2.147.SByte System.775.Decimal 128 Other predefined types C# Alias bool object string .223.Boolean 32 System.223.551.Int16 System.647 0 to 4.Byte System.073.535 A unicode character of code 0 to 65.967. A unicode string with no special upper bound.UInt64 64 Floating-point C# Alias float double .UInt16 16 System.807 0 to 18.UInt32 32 System.Object System. System. which aren't related to any integer in C#.854.372.768 to 32. Platform dependant (a pointer to an object).147.483.0 x 10-324 to 1.767 0 to 65.744.483.Double Size (bits) 32 64 Precision 7 digits 15-16 digits 28-29 decimal places Range 1. arr[i] = arr[j]. int i.Chapter 4 Custom types The predefined types can be aggregated and extended into custom types. and explicit cast definitions. 16 | C# Programming . Custom value types are declared with the struct or enum keyword. If the type conversion is guaranteed not to lose information. Likewise. the following code snippet successfully swaps two elements in an integer array: static void swap (int[] arr. int j) { int temp = arr[i]. custom reference types are declared with the class keyword. As with other variable types. Arrays Although the number of dimensions is included in array declarations. the conversion can be implicit (i. however. Predefined conversions Many predefined value types have predefined conversions to other predefined value types. For example. specify the size of each dimension: s = new string[5] . arrays are passed by reference. initialization can be combined: string[] s = new string[5] . and not passed by value. the size of each dimension is not: string[] s . } Conversion Values of a given type may or may not be explicitly or implicitly convertible to other types depending on predefined conversion rules. the declaration and the It is also important to note that like in Java. inheritance structure.e. an explicit cast is not required). arr[j] = temp. Assignments to an array variable (prior to the variable's usage). Scope and extent The scope and extent of variables is based on their declaration. The extent of variables is determined by the runtime environment using implicit reference counting and a complex garbage collection algorithm. In either case. The scope of parameters and local variables corresponds to the declaring method or statement block. Similarly. the conversion must be explicit in order for the conversion statement to compile. the runtime environment throws a conversion exception if the value to convert is not an instance of the target type or any of its derived types. live version · discussion · edit chapter · comment · report an error Wikibooks | 17 . to convert an interface instance to a class that implements it. while the scope of fields is associated with the instance or class and is potentially further restricted by the field's access modifiers. To convert a base class to a class that inherits from it. the conversion must be explicit in order for the conversion statement to compile.Variables Inheritance polymorphism A value can be implicitly converted to any class from which it inherits or interface that it implements. Arithmetic The following arithmetic operators operate on numeric operands (arguments a and b in the "sample usage" below). The binary operator % operates only on integer arguments. Similar to C++. When placed before its argument.) The unary operator ++ operates only on arguments that have an l-value. The binary operator / returns the quotient of its arguments. When placed after its argument. It returns the remainder of integer division of those arguments. it increments that argument by 1 and returns the value of that argument before it was incremented.e.returns the difference between its arguments. it obtains that quotient using integer division (i. The unary operator -. defining or redefining the behavior of the operators in contexts where the first argument of that operator is an instance of that class. Following are the built-in behaviors of C# operators. it drops any resulting remainder). but doing so is often discouraged for clarity. it decrements that argument by 1 and returns the value of a / b a % b a++ a plus plus ++a a-- plus plus a a minus minus 18 | C# Programming . When placed after its argument.b a * b Read a plus b a minus b a times b a divided by b a mod b Explanation The binary operator + returns the sum of its arguments. it increments that argument by 1 and returns the resulting value.operates only on arguments that have an l-value. The binary operator * returns the multiplicative product of its arguments. Sample usage a + b a . (See modular arithmetic. The binary operator . If both of its operators are integers. The unary operator ++ operates only on arguments that have an l-value.Chapter 5 5 O PERATORS live version · discussion · edit chapter · comment · report an error C # operators and their precedence closely resemble the operators in other languages of the C family. classes can overload most operators. the logical disjunction is performed bitwise. When placed before its argument. The binary operator && operates on boolean operands only. it evaluates and returns the results of the second operand. Logical The following logical operators operate on boolean or integral operands. If the result is false. That is. the exclusive or is performed bitwise. The binary operator ^ returns the exclusive or ("XOR") of their results.operates only on arguments that have an l-value. Note that if evaluating the second operand would hypothetically have no side effects. the results are identical to the logical conjunction performed by the & operator. Sample usage a & b Read a bitwise and b Explanation The binary operator & evaluates both of its operands and returns the logical conjunction ("AND") of their results. it returns false. it returns true. it decrements that argument by 1 and returns the resulting value. it evaluates and returns the results of the second operand. Otherwise. it returns true if a evaluates to false and it returns false if a evaluates to true. The binary operator || operates on boolean operands only. If the operands are integral. It evaluates the first operand. If the result is true. The binary operator | evaluates both of its operands and returns the logical disjunction ("OR") of their results. It Wikibooks | 19 a && b a and b a | b a bitwise or b a || b a or b a ^ b a x-or b !a ~a not a bitwise . The unary operator ! operates on a boolean operand only. Note that if evaluating the second operand would hypothetically have no side effects. If the operands are integral. Otherwise. --a minus minus a The unary operator -. It evaluates its operand and returns the negation ("NOT") of the result. If the operands are integral. as noted. It evaluates its first operand.Operators that argument before it was decremented. the logical conjunction is performed bitwise. the results are identical to the logical disjunction performed by the | operator. The unary operator ~ operates on integral operands only. ~a returns a value where each bit is the negation of the corresponding bit in the result of evaluating a. Sample usage Read Explanation For arguments of value type. false otherwise. That is. >. false otherwise.Chapter 5 evaluates its operand and returns the bitwise negation of the result. The binary operator >> evaluates its operands and returns the resulting first argument right-shifted by the number of a bits specified by the second argument. It returns true if a is greater than b. not a Bitwise shifting Sample usage Read Explanation a << b The binary operator << evaluates its operands and returns the resulting first argument left-shifted by the number of bits a left specified by the second argument. It returns true if a is less than b. For other reference types (types derived from System. <=. it returns true if the strings' character sequences match. The operator <= operates on integral types. however. false otherwise. the operator == returns true if its operands have the same value. The operator > operates on integral types. it returns true if a is not equal to b. equal to b and false if they are equal. a is less than b a is greater than b a is less The operator < operates on integral types. a >> b Relational The binary relational operators ==. or to zero if the first argument is unsigned. a == b returns true only if a and b reference the same object. and >= are used for relational operations and for type comparisons. Thus. <. a == b a is equal to b a != b a < b a > b a <= b The operator != returns the logical negation of the a is not operator ==. For the string type. !=. It discards high-order bits shift b that shift beyond the size of its first argument and sets new low-order bits to zero.Object). It discards low-order right bits that are shifted beyond the size of its first argument and shift b sets new high-order bits to the sign bit of the first argument. It returns 20 | C# Programming . so the first argument typically just refers to a different object but the object that it originally referenced does not change (except that it may no longer be referenced and may thus be a candidate for garbage collection). Sample usage a += b Read a plus equals (or increment by) b Explanation Equivalent to a = a + b. false otherwise. and then a set to b Equivalent to a = (b = c). false otherwise. When there are consecutive assignments. b set to c.) The first argument of the assignment operator (=) is typically a variable.Operators than or true if a is less than or equal to b. equal to b a >= b a is greater The operator >= operates on integral types. the assignment operation changes the reference. the right-most assignment is evaluated first. it assigns the value of its second argument to its first argument. When the first argument is a reference type. proceeding from right to left. Sample usage a = b Read Explanation The operator = evaluates its second argument and then a equals assigns the results to (the l-value indicated by) its first (or set to) b argument. a = b = c Short-hand Assignment The short-hand assignment operators shortens the common assignment operation of a = a operator b into a operator= b. It returns than or true if a is greater than or equal to b. the assignment operation changes the argument's underlying value. equal to b Assignment The assignment operators are binary. Not surprisingly. (More technically. resulting in less typing and neater syntax. The most basic is the operator =. Wikibooks | 21 . When that argument has a value type. the operator = requires for its first (left) argument an expression to which a value can be assigned (an l-value) and for its second (right) argument an expression which can be evaluated (an r-value). both variables a and b have the value of c. That requirement of an assignable expression to its left and a bound expression to its right is the origin of the terms l-value and r-value. In this example. Equivalent to a = a >> b. []. or. Equivalent to a = a * b. Remarks: The sizeof operator can be applied only to value types.Chapter 5 a -= b a *= b a /= b a %= b a &= b a |= b a ^= b a <<= b a >>= b a minus equals (or decrement by) b a multiply equals (or multiplied by) b a divide equals (or divided by) b a mod equals b a and equals b a or equals b a xor equals b a left-shift equals b a right-shift equals b Equivalent to a = a . if x is of type T. not reference types. Equivalent to a = a << b. Else returns null. Equivalent to a = a ^ b. Type information Expression x is T x as T Explanation returns true if the variable x of base class type stores an object of derived class type T. if x is of type T.Type object describing the type.b. sizeof(x) typeof(T) Pointer manipulation Expression Explanation To be done *. or. Else returns false. Equivalent to a = a & b. ->. & Overflow exception control Expression checked(a) Explanation uses overflow checking on value a unchecked(a) avoids overflow checking on value a 22 | C# Programming . and not a variable. Equivalent to a = a % b. Use the GetType method to retrieve run-time type information of variables. The sizeof operator can only be used in the unsafe mode. T must be the name of the type. returns (T)x (x casted to T) if the variable x of base class type stores an object of derived class type T. Equivalent to a = a / b. Equivalent to x is T ? (T)x : null returns the size of the value type x. returns a System. Equivalent to a = a | b. returns the value of b. otherwise c if a is null. concatenates a and b if a is true.Operators Others Expression a. otherwise returns a live version · discussion · edit chapter · comment · report an error Wikibooks | 23 . returns b.b a[b] (a)b new a a+b a?b:c a ?? b Explanation accesses member b of type or namespace a the value of index b in a casts the value b to type a creates an object of type a if a and b are string types. Wednesday. The elements in the above enumeration are then available as constants: Weekday day = Weekday.". it is often more readable to use an enumeration. Friday. Structs Structures (keyword struct) are light-weight objects. or geographical territory in a more meaningful way than an integer could.WriteLine("You become a teenager at an age of {0}.Tuesday) { Console. Tuesday. They are mostly used when only a data container is required for a collection of value type variables. e. and the successive values are assigned to each subsequent element.g.WriteLine("Time sure flies by when you program in C#!"). if (day == Weekday. Enumerations An enumeration is a data type that enumerates a set of items by assigning to each of them an identifier (a name). specific values from the underlying integral type can be assigned to any of the enumerated elements: enum Age { Infant = 0. Diamonds. It may be desirable to create an enumeration with a base type other than int. Spades. The underlying values of enumerated elements may go unused when the purpose of an enumeration is simply to group a set of items together. Sunday }. Teenager = 13. The underlying type is int by default. Adult = 18 }. Saturday.Chapter 6 6 D ATA S TRUCTURES live version · discussion · edit chapter · comment · report an error There are various ways of grouping sets of data together in C#. Clubs }. } If no explicit values are assigned to the enumerated items as the example above. state. to represent a nation. Thursday.. However. specify any integral type besides char as with base class extension syntax after the name of the enumeration. 24 | C# Programming . To do so.Monday. (int)age).Teenager. Console. the first element has the value 0. Rather than define a group of logically related constants. Age age = Age. but can be any one of the integral types except for char. as follows: enum CardSuit : byte { Hearts. while exposing an underlying base type for ordering the elements of the enumeration. Enumerations are declared as follows: enum Weekday { Monday. } It is also possible to provide constructors to structs to make it easier to initialize them: using System. 7. Another very important difference is that structs cannot support inheritance. While structs may appear to be limited with their use.birthDate = new DateTime(1974. 18). int weightInKg. new DateTime(1974.name = name. dana. 50).Data Structures Structs are similar to classes in that they can have constructors.weightInKg = weightInKg. public int heightInCm. DateTime birthDate. } } live version · discussion · edit chapter · comment · report an error Wikibooks | 25 . } } public class StructWikiBookSample { public static void Main() { Person dana = new Person("Dana Developer". public Person(string name. dana. if (dana.DateTime birthDate. 7. be declared like this: struct Person { public string name. DateTime birthDate. int heightInCm. } The Person struct can then be used like this: Person dana = new Person().WriteLine("Thank goodness! Dana Developer isn't from the future!"). int heightInCm. methods. A struct can.weightInKg = 50.Now) { Console.name = "Dana Developer". this. but there are important differences.heightInCm = heightInCm. public System. and even implement interfaces. public int weightInKg. this. for example. dana. Structs are value types while classes are reference types. 178. they require less memory and can be less expensive if used in the proper way. 18). dana. int weightInKg) { this.birthDate < DateTime. this.birthDate = birthDate.heightInCm = 178. struct Person { string name. which means they behave differently when passed into methods as parameters. and Java. try-finally. foreach. An exception handling statement can be used to handle exceptions using keywords such as throw. continue. else if. else if( myNumber < 0 ) { Console. and in. if ( myNumber == 4 ) Console. it is written in the following form: if-statement ::= "if" "(" condition ")" if-body ["else" else-body] condition ::= boolean-expression if-body ::= statement-or-statement-block else-body ::= statement-or-statement-block The if statement evaluates its condition expression to determine whether to execute the if-body. A jump statement can be used to transfer program control using keywords such as break. return. and yield. Thus.Chapter 7 7 C ONTROL live version · discussion · edit chapter · comment · report an error C onditional. public class IfStatementSample { public void IfMyNumberIs() { int myNumber = 5. Conditional statements A conditional statement decides whether to execute code based on conditions. while. the if statement has the same syntax as in C. else if. else if.WriteLine("This will not be shown because myNumber is not negative. C++. jump."). providing code to execute when the condition is false. an else clause can immediately follow the if body. and exception handling statements control a program's flow of execution. else statements: using System. The if statement and the switch statement are the two types of conditional statements in C#. try-catch. Optionally. iteration.WriteLine("This will not be shown because myNumber is not 4. 26 | C# Programming . for."). Making the elsebody another if statement creates the common cascade of if. and try-catch-finally. An iteration statement can create a loop using keywords such as do. The if statement As with most of C#. C# does not support "fall through" from one case statement to the next (thereby eliminating a common source of unexpected behaviour in C programs). The default label is optional. default: Console.WriteLine("myNumber does not match the coded conditions. break. break. Wikibooks | 27 . case 2: Console.WriteLine("Rans S6S Coyote"). break. so this sentence will be shown!"). If no default case is defined. } } } The switch statement The switch statement is similar to the statement from C.WriteLine("Dual processor computer"). In other words. If goto is used.Control } else if( myNumber % 2 == 0 ) Console. However "stacking" of cases is allowed. each case statement must finish with a jump statement (which can be break or goto or return).WriteLine("You don't have a CPU! :-)")."). break. A simple example: switch (nCPU) { case 0: Console. case 1: Console.WriteLine("A multi processor computer"). goto case 0 or goto default). then the default behaviour is to do nothing. // Stacked cases case 3: case 4: case 5: case 6: case 7: case 8: Console. it may refer to a case label or the default case (e. Unlike C. case "C-GJIS": Console.g. as in the example below. else { Console. break.WriteLine("A seriously parallel computer"). break.WriteLine("This will not be shown because myNumber is not even. For example: switch (aircraft_ident) { case "C-FESO": Console. C++ and Java. } A nice improvement over the C switch statement is that the switch variable can be a string.WriteLine("Single processor computer").WriteLine("Rans S12XL Airaile"). The do.while loop always runs its body once. The for loop The for loop likewise has the same syntax as in other languages derived from C. the body executes again.while loop The do. If the condition is true.. do { Console. break.. It is written in the following form: for-loop ::= "for" "(" initialization ". If the condition evaluates to true again after the body has ran.. default: Console. and the foreach loop are the iteration statements in C#..while loop likewise has the same syntax as in other languages derived from C. } while(number <= 10)..Chapter 7 break. public class DoWhileLoopSample { public void PrintValuesFromZeroToTen() { int number = 0. the body executes. } } The above code writes the integers from 0 to 10 to the console. } Iteration statements An iteration statement creates a loop of code to execute a variable number of times. using System." iteration ")" body initialization ::= variable-declaration | list-of-statements condition ::= boolean-expression iteration ::= list-of-statements 28 | C# Programming . The for loop. After its first run.WriteLine(number++.while-loop ::= "do" body "while" "(" condition ")" condition ::= boolean-expression body ::= statement-or-statement-block The do. the do.... the do loop.while loop ends.WriteLine("Unknown aircraft"). it evaluates its condition to determine whether to run its body again. It is written in the following form: do...ToString()). the while loop. When the condition evaluates to false." condition ". } } In the above code. The foreach loop exits when there are no more elements of the enumerable-expression to assign to the variable of the variable-declaration. It is often used to test an index variable against some limit.Control body ::= statement-or-statement-block The initialization variable declaration or statements are executed the first time through the for loop. and "Charlie" to the console.WriteLine(item). "Bravo".Console. "Charlie"}. the foreach statement iterates over the elements of the string array to write "Alpha". public class ForLoopSample { public void ForFirst100NaturalNumbers() { for(int i=0. typically to declare and initialize an index variable. public class ForEachSample { public void DoSomethingForEachItem() { string[] itemsToWrite = {"Alpha". It is written in the following form: foreach-loop ::= "foreach" "(" variable-declaration "in" enumerableexpression ")" body body ::= statement-or-statement-block The enumerable-expression is an expression of a type that implements IEnumerable. the body is executed. so it works even with collections that lack indices altogether. If the condition evaluates to true. typically to increment or decrement an index variable. The condition expression is evaluated before each pass through the body to determine whether to execute the body.WriteLine(i.Console. "Bravo".ToString()). The iteration statements are executed after each pass through the body. The variable-declaration declares a variable that will be set to the successive elements of the enumerableexpression for each pass through the body. i<100. but the foreach statement lacks an iteration index. } } } The above code writes the integers from 0 to 99 to the console. The foreach loop The foreach statement is similar to the for statement in that both allow code to iterate over the items of collections. foreach (string item in itemsToWrite) System. Wikibooks | 29 . i++) { System. so it can be an array or a collection. } } Jump statements A jump statement can be used to transfer program control using keywords such as break. DateTime start = DateTime. you use yield return to return individual values and yield break to end the sequence.WriteLine(d). using System. Instead of using return to return the sequence. } Console.Console.Now . i.Now < limit) yield return DateTime. If the condition then evaluates to true again.Chapter 7 The while loop The while loop has the same syntax as in other languages derived from C.Collections. the body executes again. while (DateTime.Generic. 0. public class WhileLoopSample { public void RunForAwhile() { TimeSpan durationToRun = new TimeSpan(0.e.start < durationToRun) { Console. continue. When the condition evaluates to false.Now. a function that returns a sequence of values from an object implementing IEnumerable. the body executes. using System.WriteLine("not finished yet").WriteLine("finished"). 30 | C# Programming .Now + new TimeSpan(0. It is written in the following form: while-loop ::= "while" "(" condition ")" body condition ::= boolean-expression body ::= statement-or-statement-block The while loop evaluates its condition to determine whether to run its body. public class YieldSample { static IEnumerable<DateTime> GenerateTimes() { DateTime limit = DateTime. the while loop ends. and yield. yield break. Using yield A yield statement is used to create an iterator. } static void Main() { foreach (DateTime d in GenerateTimes()) { System.30).Now. 30).0. return. while (DateTime. using System. If the condition is true. Control } System. Exception handling statements An exception handling statement can be used to handle exceptions using keywords such as throw. } } Note that you define the function as returning a System.Generic. note that the body of the calling foreach loop is executed in between the yield return statements. try-catch.Collections. then yield return individual values of the parameter type.Console. Also.IEnumerable parameterized to some type. try-finally. and try-catch-finally.Read(). See the Exceptions page for more information on Exception handling live version · discussion · edit chapter · comment · report an error Wikibooks | 31 . WriteLine("press enter to continue. All exception objects are instantiations of the System.Exception or a child class of it. the use of a null object reference detected by the runtime system or an invalid input string entered by a user and detected by application code. such as the stack trace at the point of the exception and a descriptive error message. An exception can represent a variety of abnormal conditions. } catch (ApplicationException e) { Console. There are many exception classes defined in the .ReadLine(). topping).. topping)). The following example demonstrates the basics of exception throwing and handling exceptions: class ExceptionTest { public static void Main(string[] args) { try { OrderPizza("pepperoni"). OrderPizza("anchovies"). for example. this example produces the following output: one pepperoni pizza ordered Unsupported pizza topping: anchovies press enter to continue.. } } private static void OrderPizza(string topping) { if (topping != "pepperoni" && topping != "sausage") { throw new ApplicationException( String. Console. 32 | C# Programming . Code that detects an error condition is said to throw an exception and code that handles the error is said to catch the exception.WriteLine("one {0} pizza ordered". such as ApplicationException. } finally { Console. including. Programmers may also define their own class inheriting from System. } Console.Message). } } When run..Exception or some other appropriate exception class from the .NET Framework used for various purposes. An exception in C# is an object that encapsulates various information about the error that occurred.").NET Framework.Chapter 8 8 E XCEPTIONS live version · discussion · edit chapter · comment · report an error T he exception handling system in the C# allows the programmer to handle errors or anomalous situations in a structured manner that allows the programmer to separate the normal flow of the code from error handling logic.WriteLine(e.Format("Unsupported pizza topping: {0}".. The throw is followed by the object reference representing the exception object to throw. it is one method in the call stack higher.Exceptions The Main() method begins by opening a try block. These blocks contain the exception handling logic. The try block calls the OrderPizza() method. but normally it is used to release acquired resources or perform other cleanup activities. the exception object is constructed on the spot. In this case. similar to the way a method argument is declared. an exception is thrown using the throw keyword. When an exception matching the type of the catch block is thrown. in this case. The method checks the input string and. A try block is a block of code that may throw an exception that is to be caught and handled. When the exception is thrown. which may throw an ApplicationException. In this case. if it has an invalid value. live version · discussion · edit chapter · comment · report an error Wikibooks | 33 . Each catch block contains an exception object declaration. The finally block is optional and contains code that is to be executed regardless of whether an exception is thrown in the associated try block. the finally just prompts the user to press enter. Following the try block are one or more catch blocks. the Main() method contains a finally block after the catch block. In this case. Lastly. control is transferred to the inner most catch block matching the exception type thrown. that exception object is passed in to the catch and available for it to use and even possibly re-throw. an ApplicationException named e. world!"). you explicitly tell the compiler that you'll be using a certain namespace in your program. with the System namespace usually being by far the most commonly seen one. a class named Console. as you told it which namespaces it should look in if it couldn't find the data in your application.WriteLine("Hello. So one can then type like this: using System.NET Framework. such as variable names. along with making it so your application doesn't occupy names for other applications. such as: System.g.NET already use one in its System namespace. Since the compiler would then know that.WriteLine("Hello. will cause the compiler to treat the 34 | C# Programming . and another with the same name in another source file. They're used especially to provide the C# compiler a context for all the named information in your program.Chapter 9 9 N AMESPACES live version · discussion · edit chapter · comment · report an error N amespaces are used to provide a "named space" in which your application resides. The purpose of namespaces is to solve this problem. if your application is intended to be used in conjunction with another. } There is an entire hierarchy of namespaces provided to you by the . world!").NET Framework for your applications to use. namespace MyApplication { class MyClass { void ShowGreeting() { Console. operator. // note how System is now not required } } } Namespaces are global. This will call the WriteLine method that is a member of the Console class within the System namespace.Console. So namespaces exist to resolve ambiguities a compiler wouldn't otherwise be able to do. so a namespace in one C# source file. Without namespaces. Namespaces are easily defined in this way: namespace MyApplication { // The content to reside in the MyApplication namespace is placed here. Data in a namespace is referred to by using the . and release thousands of names defined in the . as . it no longer requires you to type the namespace name(s) for such declared namespaces. By using the using keyword. you wouldn't be able to make e. that one would then not have to be explicitly declared with the using keyword. often named after your application or project name. Sometimes.Namespaces different named information in these two source files as residing in the same namespace. and still not have to be completely typed out. Nested namespaces Normally. Either like this: namespace CodeWorks { namespace MyApplication { // Do stuff } } .. and the nested namespaces the respective project names. If both the library and your program shared a parent namespace. and are identical in what they do.MyApplication { // Do stuff } Both methods are accepted. companies with an entire product series decide to use nested namespaces though. The developer of the library and program would finally also separate all the named information in their product source codes. you can show this in two ways. third party developers that may use your code would additionally then see that the same company had developed the library and the program. This can be especially convenient if you're a developer who has made a library with some usual functionality that can be shared across programs. for fewer headaches especially if common names are used. If your code was open for others to use.. your entire application resides under its own special namespace. live version · discussion · edit chapter · comment · report an error Wikibooks | 35 . To make your application reside in a nested namespace. where the "root" namespace can share the name of the company. or like this: namespace CodeWorks. } get { return _name. and structures. public string Name { set { _name = value. } } public void GetPayCheck() { } public void Work() { } } public class Sample { public static void Main() { Employee Marissa = new Employee(). The methods and properties of a class contain the code that defines how the class behaves. including subtyping polymorphism via inheritance and parametric polymorphism via generics. } get { return _age.Work(). It also defines a Sample class that instantiates and uses the Employee class: public class Employee { private string _name. } } public int Age { set { _age = value. } } 36 | C# Programming . static classes. Classes are defined using the keyword "class" followed by an identifier to name the class. Instances of the class can then be created with the "new" keyword followed by the name of the class.GetPayCheck(). Marissa.Chapter 10 10 C LASSES live version · discussion · edit chapter · comment · report an error A s in other object-oriented programming languages. C# classes support information hiding by encapsulating functionality in properties and methods and by enabling several types of polymorphism. the functionality of a C# program is implemented in one or more classes. including instance classes (standard classes that can be instantiated). The code below defines a class called Employee with properties Name and Age and with empty methods GetPayCheck() and Work(). Several types of C# classes can be defined. Marissa. private int _age. WriteLine("End").Classes Methods C# methods are class members containing code.Console. a constructor can have parameters. the new command accepts parameters.Console. The below code defines and then instantiates multiple objects of the Employee class. as well as a generic type declaration. } } Output: Start Constructed without parameters Parameter for construction End Finalizers The opposite of constructors. Employee Alfred = new Employee(). Employee Billy = new Employee("Parameter for construction").Console.WriteLine("Constructed without parameters"). To create an object using a constructor with parameters. but they are not restricted to doing so.Console. once using the constructor without parameters and once using the version with a parameter: public class Employee { public Employee() { System. Like fields.WriteLine(text). finalizers define final the behavior of an object Wikibooks | 37 . } } public class Sample { public static void Main() { System. They may have a return value and a list of parameters. methods can be static (associated with and accessed through the class) or instance (associated with and accessed through an object instance of the class).WriteLine("Start"). Constructors often set properties of their classes. A constructor's code executes to initialize an instance of the class when a program requests a new object of the class's type. } public Employee(string text) { System. Constructors A class's constructors control its initialization. System. Like other methods. a.k. is called sometime after an object is no longer referenced. public class Employee { public Employee(string text) { System. they are less frequently used in C# due to the .Console. } ~Employee() { System. They simplify the syntax of calling traditional get and set methods (a. Like methods. } } Output: Constructed! Finalized! Properties C# properties are class members that expose functionality of methods using the syntax of fields.Chapter 10 and execute when the object is no longer in use. which takes no parameters.Console. but the complexities of garbage collection make the specific timing of finalizers uncertain. After a property is defined it can be used like a variable.NET Framework Garbage Collector. If you were to write some 38 | C# Programming . // Sets integerField with a default value of 3 public int IntegerField { get { return integerField. // set assigns the value assigned to the property of the field you specify } } } The C# keyword value contains the value assigned to the property. An object's finalizer. Marissa = null.WriteLine(text). // get returns the field you specify when this property is assigned } set { integerField = value. they can be static or instance. accessor methods).WriteLine("Finalized!"). Properties are defined in the following way: public class MyClass { private int integerField = 3. Although they are often used in C++ to free memory reserved by an object. } public static void Main() { Employee Marissa = new Employee("Constructed!"). Classes additional code in the get and set portions of the property it would work like a method and allow you to manipulate the data before it is read or written to the variable. // Indirectly assigns 7 to the field myClass. . Events C# events are class members that expose notifications to clients of the class.IntegerField = 7. myClass. if the class was EmployeeCollection. use the this keyword as in the following example: public string this[string key] { get {return coll[_key]. you could write code similar to the following: EmployeeCollection e = new EmployeeCollection().} set {coll[_key] = value. e["Smith"] = "xxx". // Writes 3 to the command line. public class MyProgram { MyClass myClass = new MyClass.integerField } Using properties in this way provides a clean. . list[0] to access the first element of list even when list is not an array).IntegerField). using System.g. string s = e["Jones"]. Wikibooks | 39 . Console. . easy to use mechanism for protecting data. For example. To create an indexer. Operator C# operator definitions are class members that define or redefine the behavior of basic C# operators (called implicitly or explicitly) on instances of the class.WriteLine(myClass.} } This code will create a string indexer that returns a string value. Indexers C# indexers are class members that define the behavior of the array access operation (e. but have subtle differences. } } public class Sample { public static void Main() { Writer. using a standard class is a better choice. Structs are used as lightweight versions of classes that can help reduce memory management efforts in when working with small data structures. } } } Static classes Static classes are commonly used to implement a Singleton Pattern.Console class) and can thus be used without instantiating the static class: public static class Writer { public static void Write() { System.Chapter 10 Structures Structures. Access to the private field is granted through the public property "Name": struct Employee { private string name. and fields of a static class are also static (like the WriteLine() method of the System. } get { return name. } } live version · discussion · edit chapter · comment · report an error 40 | C# Programming .WriteLine("Text"). Thus when you pass a struct to a function by value you get a copy of the object so changes to it are not reflected in the original because there are now two distinct objects but if you pass an instance of a class by value then there is only one instance. public int age. All of the methods.Write(). public string Name { set { name = value. The Employee structure below declares a public and a private field. The principal difference between structs and classes is that instances of structs are values whereas instances of classes are references.Console. however. properties. In most situations. They are similar to classes. are defined with the struct keyword followed by an identifier to name the structure. or structs. and operations which are not allowed to outside users. Protected Protected members can be accessed by the class itself and by any class Wikibooks | 41 . and preventing him from manipulating objects in ways not intended by the designer. In this example. These methods and properties represent the operations allowed on the class to outside users. For example: public class Frog { public void JumpLow() { Jump(1). The Jump private method is implemented by changing the value of a private data member _height. data members (and other elements) with private protection level represent the internal state of the class (for variables). even a class derived from the class with private members cannot access the members. A method in another class. only 10 or 1. A class element having public protection level is accessible to all code anywhere in the program. which is also not visible to an outside user. } public void JumpHigh() { Jump(10). Methods. so he cannot make the frog jump 100 meters. they are implemented using the private Jump function that can jump to any height. the public method the Frog class exposes are JumpLow and JumpHigh. Some private data members are made visible by properties. } private void Jump(int height) { _height += height. Protection Levels Private Private members are only accessible within the class itself.} } private int _height = 0. This operation is not visible to an outside user.Encapsulation 11 E NCAPSULATION live version · discussion · edit chapter · comment · report an error E ncapsulation is depriving of the user of a class information he does not need. Internally. Internal Internal members are accessible only in the same assembly and invisible outside it. Public Public members can be accessed by any method in any class.Chapter 11 derived from that class. live version · discussion · edit chapter · comment · report an error 42 | C# Programming . NET Framework contains common class libraries . but has nothing to do with .Process. and running Web Services and Web Applications.Start("notepad.NET and Windows Forms .NET F RAMEWORK O VERVIEW live version · discussion · edit chapter · comment · report an error Introduction NET was originally called NGWS(Next Generation Windows Services).Net does not run IN any browser.wikibooks. In the System namespace.NET Framework is a common environment for building.. deploying.Version.NET Framework Overview 12 .The .MachineName). Let's look at a couple.NET is built on the following Internet standards: HTTP.NET delivers software as Web Services .OSVersion.exe"). or a webpage.Net . Console. the format for exchanging data between Internet Applications SOAP.ToString() + " + System. You can also get information about your system in the Environment namespace: Console.Process.OSVersion. It is a RUNTIME language (Common Language Runtime) like the Java runtime.Environment.NET will run in any browser on any platform . the standard format for requesting Web Services UDDI.Diagnostics.like ADO.Platform.Write: Wikibooks | 43 . .NET.Start(". you can write: System. there are a lot of useful libraries.org").ToString()). System.NET is a server centric computing model .WriteLine(System.Environment.NET is a new Internet and Web based infrastructure .ReadLine() This can be directly passed as a parameter to Console. ASP. the communication protocol between Internet Applications XML.. Silverlight does run in a browser.WriteLine("Machine name: " + System. • • • • .to provide advanced standard services that can be integrated into a variety of computer systems.Diagnostics. " User input You can read a line from the user with Console.NET is based on the newest Web standards . If you want to start up a external program. the standard to search and discover Web Services The .Environment. Write ( "I'm afraid I can't do that.ReadLine() ) . "{1}" would be the next parameter etc.Write(). which will be most effective for the input "Dave" :-) In this case. {0}" .ReadLine(). which is Console. live version · discussion · edit chapter · comment · report an error 44 | C# Programming . Console. "{0}" gets replaced with the first parameter passed to Console.Chapter 12 Console. my first program! Goodbye World! That text is output using the System. The middle lines use the Write() method.Console Programming 13 C ONSOLE P ROGRAMMING live version · discussion · edit chapter · comment · report an error Output The example program below shows a couple ways to output text: using System. but allows us to encode certain special characters (like a new line character).WriteLine("Hello World!"). public class HelloWorld { public static void Main() { Console.WriteLine("Goodbye World!"). which does not automatically create a new line.WriteLine("Nice to meet you.. we can use the sequence backslash-n ( \n ).. If for whatever reason we wanted to really show the \n character instead. To specify a new line. Console.Read()..Write("My name is: "). Input Input can be gathered in a similar method to outputing data using the Read() and ReadLine methods of that same System. Console.Console class. The using statement at the top allows the compiler to find the Console class without specifying the System namespace each time it is used. " + name).Console class: using System.Write("This is"). string name = Console. Console. The backslash is known as the escape character in C# because it is not treated as a normal character.ReadLine(). System.WriteLine("Greetings! What is your name?"). } } Wikibooks | 45 .. Console. public class ExampleClass { public static void Main() { Console.Console. my first program!\n"). Console. } } // relies on "using System." // no "using" statement required The above code displays the following text: Hello World! This is. we add a second backslash ( \\n ).Write(". } } Try running the program with only entering your first name or no name at all. The final Console. That assumption makes the program unsafe. using System. it can be executed from the command line using two arguments. } } If the program above code is compiled to a program called username.Length property returns the total number of arguments.Length >= 2) Console.WriteLine("Last Name: " + args[1]).Length >= 1) Console. it will return zero.Read(). it will crash when it attempts to access the missing argument. Command line arguments Command line arguments are values that are passed to a console program before execution. If it is run without the expected number of command line arguments.Read() waits for the user to enter a key before exiting the program. If no arguments are given. "Bill" and "Gates": C:\>username. Console.exe Bill Gates Notice how the Main() method above has a string array parameter. public class Test { public static void Main(string[] args) { if(args.g. live version · discussion · edit chapter · comment · report an error 46 | C# Programming . using System. we make we can check to see if the user entered all the required arguments.WriteLine("First Name: " + args[0]). The program assumes that there will be two arguments. public class ExampleClass { public static void Main(string[] args) { Console. To make the program more robust. The first argument is the original file and the second is the location or name for the new copy. Custom console applications can have arguments as well. e. Console.Chapter 13 The above program requests the user's name and displays it back.exe. For example. The string. if(args.WriteLine(args[0]).WriteLine(args[1]). the Windows command prompt includes a copy command that takes two command line arguments. Form { public static void Main() { ExampleForm wikibooksForm = new ExampleForm().Forms namespace allows us to create Windows applications easily.Run(wikibooksForm).Forms.Windows Forms 14 W INDOWS F ORMS live version · discussion · edit chapter · comment · report an error T he System. Your program will compile and run successfully if you comment these lines out but they allow us to add extra control to our form. Setting any of the properties Text. public class ExampleForm : Form // inherits from System. and Height is optional.Windows. icons and title bars together. wikibooksForm. // display the form } } The example above creates a simple Window with the text "I Love Wikibooks" in the title bar.Forms. menus. but it is important to know how to do so manually: using System.Form class.Forms. Width.Text = "I Love Wikibooks".Width = 400.Windows. // width of the window in pixels wikibooksForm. The Form class is a particularly important part of that namespace because the form is the key graphical building block Windows applications. It provides the visual frame that holds buttons. Custom form classes like the example above inherit from the System.Windows. // height in pixels Application.Windows. Integrated development environments (IDEs) like Visual C# and SharpDevelop can help create graphical applications. live version · discussion · edit chapter · comment · report an error Wikibooks | 47 .Height = 300.// specify title of the form wikibooksForm. we create an Executive class that will override the GetPayCheck method. public class Employee { // we declare one method virtual so that the Executive class can // override it. Inheritance(by Mine) Subtyping Inheritance The code sample below shows two classes. and extending the functionality and state of the derived class. } // the extra method is implemented. public override void GetPayCheck() { //new getpaycheck logic here. public virtual void GetPayCheck() { //get paycheck logic here. AdministerEmployee.Chapter 15 15 I NHERITANCE live version · discussion · edit chapter · comment · report an error I nheritance is the ability to create a class from another class. We want the Executive class to have the same methods. } //Employee's and Executives both work. Employee has the following methods. so no virtual here needed. public void AdministerEmployee() { // manage employee logic here } } You'll notice that there is no Work method in the Executive class. Employee. } } Now. public void Work() { //do work logic here. it is not 48 | C# Programming . public class Executive : Employee { // the override keyword indicates we want new logic behind the GetPayCheck method. GetPayCheck and Work. Below is the creation of the first class to be derived from. Employee and Executive. Inheritance in C# also allows derived classes to overload methods from their parent class. but differently implemented and one extra method. Inheritance(); } Inheritance keywords How C# inherits from another class syntacticaly is using the ":" character.); } live version · discussion · edit chapter · comment · report an error Wikibooks | 49 Chapter 16 16 I NTERFACES live version · discussion · edit chapter · comment · report an error A n interface in C# is type definition similar to a class except that it purely represents a contract between an object and a user of the object. An interface cannot be directly instantiated as an object. No data members can be defined in an interface. Methods and properties can only be declared, not defined. For example, the following defines a simple interface: interface IShape { void Draw(); double X { get; set; } double Y { get; set; } } A convention used in the .NET Framework (and likewise by many C# programmers) is to place an "I" at the beginning of an interface name to distinguish it from a class name. Another common interface naming convention is used when an interface declares only one key method, such Draw() in the above example. The interface name is then formed by adding the suffix "able" to the method name. So, in the above example, the interface name would be IDrawable. This convention is also used throughout the .NET Framework. Implementing an interface is simply done by inheriting off the interface and then defining all the methods and properties declared by the interface. For example: class Square : IShape { private double mX, mY; public void Draw() { ... } public double X { set { mX = value; } get { return mX; } } public double Y { set { mY = value; } get { return mY; } } } Although a class can only inherit from one other class, it can inherit from any number of interfaces. This is simplified form of multiple inheritance supported by C#. When inheriting from a class and one or more interfaces, the base class should be provided first in the inheritance list followed by any interfaces to be implemented. For example: class MyClass : Class1, Interface1, Interface2 { ... } 50 | C# Programming Interfaces Object references can be declared using an interface type. For example, using the previous examples: class MyClass { static void Main() { IShape shape = new Square(); shape.Draw(); } } Intefaces can inherit off of any number of other interfaces but cannot inherit from classes. For example: interface IRotateable { void Rotate(double theta); } interface IDrawable : IRotateable { void Draw(); } Some Important Points for Interface Access specifiers (i.e. private, internal, etc) cannot be provided for interface members. In Interface, all members are public by default. A class implementing an interface must define all the members declared by the interface as public. The implementing class has the option of making an implemented method virtual if it is expected to be overridden in a a child class. In addition to methods and properties, interfaces can declare events and indexers as well. live version · discussion · edit chapter · comment · report an error Wikibooks | 51 the delegate is declared by the line delegate void Procedure(). in the Main() method. Next. Something concrete has now been created. DelegateDemo demo = new DelegateDemo().Method1).Method1 and DelegateDemo.Method2).WriteLine("Method 3"). (Note: the class name could have been left off of DelegateDemo. A simple example: delegate void Procedure().WriteLine("Method 1"). 52 | C# Programming . It does not result in executable code that does any work. A delegate declaration specifies a particular method signature. The assignment of someProcs to null means that it is not initially referencing any methods. It merely declares a delegate type called Procedure which takes no arguments and returns nothing. adds a non-static method to the delegate instance. someProcs += new Procedure(demo. References to one or more methods can be added to a delegate instance. the statement Procedure someProcs = null. Delegates form the basis of event handling in C#. and someProcs += new Procedure(DelegateDemo. } } In this example. The delegate instance can then be "called" which effectively calls all the methods that have been added to the delegate instance. } static void Main() { Procedure someProcs = null.Method3). } static void Method2() { Console. add two static methods to the delegate instance. This statement is a complete abstraction. someProcs(). class DelegateDemo { static void Method1() { Console.Method2 because the statement is occurring in the DelegateDemo class. instantiates a delegate. someProcs += new Procedure(DelegateDemo.Method2).Chapter 17 17 D ELEGATES Delegates AND E VENTS live version · discussion · edit chapter · comment · report an error D elegates are a construct for abstracting and creating objects that reference methods and can be used to call those methods.) The statement someProcs += new Procedure(demo.Method3). } void Method3() { Console.Method1). someProcs += new Procedure(DelegateDemo. The statements someProcs += new Procedure(DelegateDemo.WriteLine("Method 2"). Invoking a delegate instance that presently contains no method references results in a NullReferenceException. Finally.e. In C# 2. Method3() is called on the object that was supplied when the method was added to the delegate instance.Method1.. for example.Delegates and Events For a non-static method. public void SimulateClick() { if (ButtonClicked != null) { ButtonClicked(). All the methods that were added to the delegate instance are now called in the order that they were added.Method1).0. } Wikibooks | 53 .. an event declared to be public would allow other classes the use of += and -= on the event. the method name is preceded by an object reference. Events An event is a special kind of delegate that facilitates event-driven programming. calls the delegate instance. but firing the event (i. the statement someProcs(). invoking the delegate) is only allowed in the class containing the event. Note that if a delegate declaration specifies a return type and multiple methods are added to a delegate instance. The return values of the other methods cannot be retrieved (unless explicitly stored somewhere in addition to being returned). So. Methods that have been added to a delegate instance can be removed with the -= operator: someProcess -= new Procedure(DelegateDemo. A simple example: delegate void ButtonClickedHandler().Method1. someProcess -= DelegateDemo. class Button { public event ButtonClickedHandler ButtonClicked. When the delegate instance is called. Events are class members which cannot be called outside of the class regardless of its access specifier. adding or removing a method reference to a delegate instance can be shortened as follows: someProcess += DelegateDemo. then an invocation of the delegate instance returns the return value of the last method referenced. } } . ButtonClicked += MyHandler. Even though the event is declared public.Forms namespace. Events are used extensively System. it cannot be directly fired anywhere except in the class containing the event. b.Chapter 17 A method in another class can then subscribe to the event by adding one of its methods to the event delegate: Button b = new Button().Windows. in GUI programming and in the live version · discussion · edit chapter · comment · report an error 54 | C# Programming . but only to serve as a base to other classes. A class should be made abstract when there are some aspects of the implementation which must be deferred to a more specific subclass.). For example. Although it is not possible to instantiate the Employee class directly. live version · discussion · edit chapter · comment · report an error Wikibooks | 55 .g. TemporaryEmployee. both of which inherit the behavior defined in the Employee class. an Employee can be an abstract class if there are concrete classes that represent more specific types of Employee (e. An abstract class should contain at least one abstract method which derived concrete classes will implement. etc. SalariedEmployee.Abstract Classes 18 A BSTRACT C LASSES live version · discussion · edit chapter · comment · report an error A n abstract class is a class that is never intended to be instantiated directly. a program may create instances of SalariedEmployee and TemporaryEmployee. Below is the example of a partial class. this does not make a difference as all the fragments of the partial class are grouped and the compiler treats it as a single class.cs) public class Node { public bool Delete() { } public bool Create() { } } Listing 2: Class split across multiple files (file1. To the compiler.Chapter 19 19 P ARTIAL C LASSES live version · discussion · edit chapter · comment · report an error A s the name indicates. Listing 1: Entire class definition in one file (file1. One common usage of partial classes is the separation of automatically generated code from programmer written code.cs) public partial class Node { public bool Create() { } } live version · discussion · edit chapter · comment · report an error 56 | C# Programming .cs) public partial class Node { public bool Delete() { } } (file2. partial class definitions can be split up across multiple physical files. you can index into it. But in more complicated case with more containers in different parts of program. SomeObjectContainer container2 = new SomeObjectContainer(5). everything is clear. A List is a convenient growable array.int) every time we want to get object from such container. } Console.ReadKey(). and so on.getObject() + (int)container2. without modyfing them. } } And the usage of it would be: class Program { static void Main(string[] args) { SomeObjectContainer container = new SomeObjectContainer(25). } public object getObject() { return this. would not have a string or any other Wikibooks | 57 . we would have to take care that container.getObject()). when you need to create some class to manage objects of some type. // wait for user to press any key. Console. In such small program like this.WriteLine((int)container. supposed to have int type in it. that we have to cast back to original data type we have chosen (in this case . so we could see results } Notice. Generic Interfaces MSDN2 Entry for Generic Interfaces Generic Classes There are cases.obj. usual approach (highly simplified) to make such class would be like this: public class SomeObjectContainer { private object obj. public SomeObjectContainer(object obj) { this.obj = obj. Without Generics. The classic example is a List collection class. It has a sort method.Generics 20 G ENERICS live version · discussion · edit chapter · comment · report an error G enerics is essentially the ability to have type parameters on your type. They are also called parameterized types or parametric polymorphism. GenericObjectContainer<int> container2 = new GenericObjectContainer<int>(5). and add <T> mark near class name to indicate that this "T" type is Generic / any type. To make our "container" class to support any object and avoid casting. } public T getObject() { return this.ReadKey(). and avoid previously mentioned problems. we could surround every unsafe area with try . // wait for user to press any key. However.WriteLine(container. we replace every previous object type with some new name. i. public GenericObjectContainer(T obj) { this. because Generics offers much more elegant solution. or we could create separate "container" for every data type we need. we will incur a performance penalty every time we access the elements of the collection.obj = obj. } } Not a big difference.Chapter 20 data type. due to the Autoboxing feature of C#. this avoids the Autoboxing for struct types. Console.getObject() + container2.e <genKey. If that happens.getObject()). While this example is far from practical. InvalidCastException is thrown. While both ways could work (and worked for many years). Note: You can choose any name and use more than one generic type for class.T. genVal> public class GenericObjectContainer<T> { private T obj. that you specify type for "container" only when creating it. and after that you will be able to use only the type you specified. so we could see results } Generics ensures. it does illustrate some situations where generics are useful: • You need to keep objects of single type in some class 58 | C# Programming . such as int. which results in simple and safe usage: class Program { static void Main(string[] args) { GenericObjectContainer<int> container = new GenericObjectContainer<int>(25). just to avoid casting.catch block. In addition. Additionally.obj. if the original data type we have chosen is a struct type. it is unnecessary now. } Console. But now you can create containers for different object types. in this case . short. or any custom struct) in a collection class without incurring the performance penalty of Autoboxing every time you manipulate the stored elements. string.Generics You don't need to modify objects You need to manipulate objects in some way • You wish to store a "value type" (such as int. • • live version · discussion · edit chapter · comment · report an error Wikibooks | 59 . . In many languages the solution is to explicitly close the file and dispose of the objects and many C# programmers do just that. This means that the reference count goes to zero as soon as the function ends which results in calls to the Terminate event handlers of both objects. In Visual Basic Classic this is not necessary because both objects are declared locally. If you are coming to C# from Visual Basic Classic you will have seen code like this: Public Function Read(ByRef FileName) As String Dim oFSO As FileSystemObject Set oFSO = New FileSystemObject Dim oFile As TextStream Set oFile = oFSO. In C# this doesn't happen because the objects are not reference counted. However. The finalizers will not be called until the garbage collector decides to dispose of the objects. This causes a problem because the file is held open which might prevent other processes from accessing it.finally 60 | C# Programming . Those event handlers close the file and release the associated resources. } } Behind the scenes the compiler turns the using statement into try.ReadLine End Function Note that neither oFSO nor oFile are explicitly disposed of.Chapter 21 21 O BJECT L IFETIME live version · discussion · edit chapter · comment · report an error Introduction D ifferent programming languages deal with issues of object lifetime in different ways and this must be accounted for when writing programs because some objects must be disposed of at specific moments while others can be allowed to exist until the program terminates or a garbage collector disposes of it. ForReading. there is a better way: use the using statement: public read(string fileName) { using (TextReader textReader = new StreamReader(filename)) { return textReader.ReadLine(). If the program uses very little memory this could be a long time. False) Read = oFile.OpenTextFile(FileName. IDisposable::Dispose() IL_0024: endfinally } // end handler IL_0025: ldloc. try.maxstack 5 . RAII is a natural technique in languages like Visual Basic Classic and C++ that have deterministic finalization but usually requires extra work to include in programs written in garbage collected languages like C# and VB.IO.Object Lifetime and produces this intermediate language (IL) code: .try finally { IL_0018: ldloc.0 IL_0019: brfalse IL_0024 IL_001e: ldloc.NET.TextReader::ReadLine() IL_000d: stloc.try { IL_0007: ldloc. Add notes on RAII. Resource Acquisition Is Initialisation The application of the using statement in the introduction is an example of an idiom called Resource Acquisition Is Initialisation (RAII). live version · discussion · edit chapter · comment · report an error Wikibooks | 61 . memoization and cacheing (see OOP wikibook).0 IL_0008: callvirt instance string [mscorlib]System.TextReader V_0.ctor(string) IL_0006: stloc.0 IL_0001: newobj instance void [mscorlib]System. namely a call to the destructor of the Streamreader instance. and finally. string V_1) IL_0000: ldarg.locals init (class [mscorlib]System.method public hidebysig static string Read(string FileName) cil managed { // Code size 39 (0x27) .0 . Of course you could write the try. Work in progress: add C# versions showing incorrect and correct methods with and without using. The using statement makes it just as easy. For a thorough discussion of the RAII technique see HackCraft: The RAII Programming Idiom. See Understanding the 'using' statement in C# By TiNgZ aBrAhAm.0 IL_001f: callvirt instance void [mscorlib]System.1 IL_000e: leave IL_0025 IL_0013: leave IL_0025 } // end . The finally block includes code that was never explicitly specified in the original C# source code. Wikipedia has a brief note on the subject as well: Resource Acquisition Is Initialization.1 IL_0026: ret } // end of method Using::Read Notice that the body of the Read function has been split into three parts: initialisation.StreamReader::..finally code out explicitly and in some cases that will still be necessary.IO.IO. A transparent copy of this document is available at Wikibooks:C Sharp Programming. The template from which the document was created is available at Wikibooks:Image:PDF template.org/wiki/C_Sharp_Programming.sxw.sxw. The SXW source of this PDF document is available at Wikibooks:Image:C Sharp Programming.wikibooks.Chapter 22 22 H ISTORY & D OCUMENT N OTES Wikibook History This book was created on 2004-06-15 and was developed on the Wikibooks project by the contributors listed in the next section. 62 | C# Programming . The latest version may be found at. PDF Information & History This PDF was created on 2007-07-15 based on the 2007-07-14 version of the C# Programming Wikibook. Boly38. Ohms law.Authors 23 A UTHORS Principal Authors • • • • • • Rodasmith (Contributions) Jonas Nordlund (Contributions) Eray (Contributions) Jlenthe (Contributions) Nercury (Contributions) Ripper234 (Contributions) All Authors Andyrewww. Hyad. Fly4fun. Pcu123456789. Whiteknight. Zr40 Wikibooks | 63 . David C Walls. Dm7475. Orion Blastar. Mkn. Szelee. Northgrove. Darklama. Magnus Manske. Kernigh. Cpons. Jokes Free4Me. Eray. Charles Iliya Krempeaux. Dethomas. Chmohan. Scorchsaber. Arbitrary. Jonas Nordlund. Minun. Huan086. Jesperordrup. Frank. Peachpuff. Panic2k4. Bacon. Jlenthe. Yurik. Ripper234. Devourer09. Thambiduraip. Rodasmith. Herbythyme. Mshonle. Derbeth. Shall mn. Chowmeined. Nanodeath. Lux-fiat. Sytone. Plee. Nym. Bijee. Withinfocus. Nercury. Kwhitefoot. Hagindaz. Feraudyh. Luke101. Krischik. HGatta. Fatcat1111. Jguk. 51 Franklin St. because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does.2001. PREAMBLE The purpose of this License is to make a manual. and is addressed as "you".2. MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document. modify or distribute the work in a way requiring permission under copyright law. with or without modifying it. Inc. Any member of the public is a licensee. it can be used for any textual work. or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it. either commercially or noncommercially. Fifth Floor.Chapter 24 24 GNU F REE D OCUMENTATION L ICENSE Version 1. while not being considered responsible for modifications made by others. APPLICABILITY AND DEFINITIONS This License applies to any manual or other work. 0. 64 | C# Programming . that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. refers to any such manual or work. this License preserves for the author and publisher a way to get credit for their work. which is a copyleft license designed for free software. which means that derivative works of the document must themselves be free in the same sense. You accept the license if you copy. Such a notice grants a world-wide. textbook. Boston. It complements the GNU General Public License. November 2002 Copyright (C) 2000. regardless of subject matter or whether it is published as a printed book. 1. below. unlimited in duration. but changing it is not allowed. either copied verbatim. We recommend this License principally for works whose purpose is instruction or reference. in any medium. A "Modified Version" of the Document means any work containing the Document or a portion of it. We have designed this License in order to use it for manuals for free software.2002 Free Software Foundation. The "Document". But this License is not limited to software manuals. or with modifications and/or translated into another language. to use that work under the conditions stated herein. Secondarily. royalty-free license. This License is a kind of "copyleft". A copy made in an otherwise Transparent file format whose markup. as FrontCover Texts or Back-Cover Texts.) The relationship could be a matter of historical connection with the subject or with related matters. Wikibooks | 65 . preceding the beginning of the body of the text. and the machinegenerated HTML. and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy that is not "Transparent" is called "Opaque". in the notice that says that the Document is released under this License. Texinfo input format. Examples of suitable formats for Transparent copies include plain ASCII without markup. "Title Page" means the text near the most prominent appearance of the work's title. The Document may contain zero Invariant Sections. as being those of Invariant Sections. ethical or political position regarding them. PostScript or PDF designed for human modification. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. if the Document is in part a textbook of mathematics. The "Title Page" means. a Secondary Section may not explain any mathematics. the title page itself. (Thus. An image format is not Transparent if used for any substantial amount of text. that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor. For works in formats which do not have any title page as such. the material this License requires to appear in the title page. A "Transparent" copy of the Document means a machine-readable copy. The "Cover Texts" are certain short passages of text that are listed. SGML or XML for which the DTD and/or processing tools are not generally available. commercial. plus such following pages as are needed to hold. The "Invariant Sections" are certain Secondary Sections whose titles are designated. If the Document does not identify any Invariant Sections then there are none. for a printed book. in the notice that says that the Document is released under this License. legibly. PostScript or PDF produced by some word processors for output purposes only. or of legal. has been arranged to thwart or discourage subsequent modification by readers is not Transparent. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors. A Front-Cover Text may be at most 5 words.GNU Free Documentation License. XCF and JPG. represented in a format whose specification is available to the general public. Examples of transparent image formats include PNG. SGML or XML using a publicly available DTD. and a Back-Cover Text may be at most 25 words. and standard-conforming simple HTML. philosophical. or absence of markup. LaTeX input format. COPYING IN QUANTITY If you publish printed copies (or copies in media that commonly have printed covers) of the Document. 2. The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. you may accept compensation in exchange for copies. and continue the rest onto adjacent pages. Copying with changes limited to the covers. "Endorsements". "Dedications". under the same conditions stated above. either commercially or noncommercially. and Back-Cover Texts on the back cover. but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License. and that you add no other conditions whatsoever to those of this License. numbering more than 100. However. 3.Chapter 24 A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. all these Cover Texts: Front-Cover Texts on the front cover. Both covers must also clearly and legibly identify you as the publisher of these copies. as long as they preserve the title of the Document and satisfy these conditions. clearly and legibly. such as "Acknowledgements". (Here XYZ stands for a specific section name mentioned below. and the Document's license notice requires Cover Texts. If you publish or distribute Opaque copies of the Document numbering more 66 | C# Programming . You may add other material on the covers in addition. can be treated as verbatim copying in other respects. and the license notice saying this License applies to the Document are reproduced in all copies. and you may publicly display copies. the copyright notices. provided that this License. The front cover must present the full title with all words of the title equally prominent and visible.) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition. These Warranty Disclaimers are considered to be included by reference in this License. If the required texts for either cover are too voluminous to fit legibly. VERBATIM COPYING You may copy and distribute the Document in any medium. or "History". you should put the first ones listed (as many as fit reasonably) on the actual cover. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. You may also lend copies. If you distribute a large enough number of copies you must also follow the conditions in section 3. you must enclose the copies in covers that carry. to give them a chance to provide you with an updated version of the Document. you must either include a machine-readable Transparent copy along with each Opaque copy. in the form shown in the Addendum below. Preserve its Title. G. thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. MODIFICATIONS You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above. or state in or with each Opaque copy a computernetwork location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document. that you contact the authors of the Document well before redistributing any large number of copies. free of added material. Include an unaltered copy of this License. I. F. as the publisher. but not required. as authors. immediately after the copyright notices. 4.GNU Free Documentation License than 100. with the Modified Version filling the role of the Document. if there were any. C. you must take reasonably prudent steps. You may use the same title as a previous version if the original publisher of that version gives permission. Include. Preserve the section Entitled "History". and publisher of the Wikibooks | 67 . H. If you use the latter option. if it has fewer than five). Add an appropriate copyright notice for your modifications adjacent to the other copyright notices. new authors. E. unless they release you from this requirement. Use in the Title Page (and on the covers. year. together with at least five of the principal authors of the Document (all of its principal authors. List on the Title Page. to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public. be listed in the History section of the Document). provided that you release the Modified Version under precisely this License. one or more persons or entities responsible for authorship of the modifications in the Modified Version. B. a license notice giving the public permission to use the Modified Version under the terms of this License. Preserve all the copyright notices of the Document. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice. if any) a title distinct from that of the Document. It is requested. and from those of previous versions (which should. when you begin distribution of Opaque copies in quantity. D. you must do these things in the Modified Version: A. and add to it an item stating at least the title. In addition. State on the Title page the name of the publisher of the Modified Version. and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein. For any section Entitled "Acknowledgements" or "Dedications". The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version. previously added by you or by arrangement made by the same entity you are acting on behalf of. year. These titles must be distinct from any other section titles. unaltered in their text and in their titles. then add an item describing the Modified Version as stated in the previous sentence. L. You may omit a network location for a work that was published at least four years before the Document itself. To do this. Preserve any Warranty Disclaimers. but you may replace the old one. K. If the Document already includes a cover text for the same cover. create one stating the title. Preserve all the Invariant Sections of the Document. M. O. add their titles to the list of Invariant Sections in the Modified Version's license notice. If there is no section Entitled "History" in the Document. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section. statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard. authors. Delete any section Entitled "Endorsements". Preserve the network location. if any. J. you may not add another.Chapter 24 Modified Version as given on the Title Page. given in the Document for public access to a Transparent copy of the Document. and a passage of up to 25 words as a Back-Cover Text. You may add a passage of up to five words as a Front-Cover Text. Section numbers or the equivalent are not considered part of the section titles. These may be placed in the "History" section. Preserve the Title of the section. or if the original publisher of the version it refers to gives permission. N. provided it contains nothing but endorsements of your Modified Version by various parties--for example. you may at your option designate some or all of these sections as invariant. on explicit permission from the previous publisher that added the old one. Such a section may not be included in the Modified Version. If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document. and publisher of the Document as given on its Title Page. to the end of the list of Cover Texts in the Modified Version. You may add a section Entitled "Endorsements". 68 | C# Programming . Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. and likewise the network locations given in the Document for previous versions it was based on. under the terms defined in section 4 above for modified versions. Wikibooks | 69 . and any sections Entitled "Dedications". provided you insert a copy of this License into the extracted document. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work. AGGREGATION WITH INDEPENDENT WORKS A compilation of the Document or its derivatives with other separate and independent documents or works. or else a unique number. and that you preserve all their Warranty Disclaimers. provided that you include in the combination all of the Invariant Sections of all of the original documents. make the title of each such section unique by adding at the end of it. You must delete all sections Entitled "Endorsements. The combined work need only contain one copy of this License. and multiple identical Invariant Sections may be replaced with a single copy. and follow this License in all other respects regarding verbatim copying of that document. You may extract a single document from such a collection. forming one section Entitled "History". When the Document is included in an aggregate. in parentheses. 7. and replace the individual copies of this License in the various documents with a single copy that is included in the collection. you must combine any sections Entitled "History" in the various original documents. COLLECTIONS OF DOCUMENTS You may make a collection consisting of the Document and other documents released under this License. COMBINING DOCUMENTS You may combine the Document with other documents released under this License. and list them all as Invariant Sections of your combined work in its license notice. unmodified. and distribute it individually under this License. is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects. in or on a volume of a storage or distribution medium. If there are multiple Invariant Sections with the same name but different contents.GNU Free Documentation License 5. likewise combine any sections Entitled "Acknowledgements". this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document. the name of the original author or publisher of that section if known." 6. In the combination. If the Document does 70 | C# Programming . then if the Document is less than one half of the entire aggregate. revised versions of the GNU Free Documentation License from time to time.org/copyleft/. However. the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title. or rights. and will automatically terminate your rights under this License. or "History". or distribute the Document except as expressly provided for under this License. Each version of the License is given a distinguishing version number. See. modify. 9. Any other attempt to copy. 8. 10. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer. TRANSLATION Translation is considered a kind of modification. you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. Such new versions will be similar in spirit to the present version. sublicense. If a section in the Document is Entitled "Acknowledgements". or the electronic equivalent of covers if the Document is in electronic form.gnu. and all the license notices in the Document. Otherwise they must appear on printed covers that bracket the whole aggregate. parties who have received copies. the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate. but may differ in detail to address new problems or concerns. You may include a translation of this License. FUTURE REVISIONS OF THIS LICENSE The Free Software Foundation may publish new. modify. and any Warranty Disclaimers. sublicense or distribute the Document is void. "Dedications". Replacing Invariant Sections with translations requires special permission from their copyright holders. but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. from you under this License will not have their licenses terminated so long as such parties remain in full compliance. so you may distribute translations of the Document under the terms of section 4.Chapter 24 If the Cover Text requirement of section 3 is applicable to these copies of the Document. If the Document specifies that a particular numbered version of this License "or any later version" applies to it. provided that you also include the original English version of this License and the original versions of those notices and disclaimers. TERMINATION You may not copy. the original version will prevail. GNU Free Documentation License not specify a version number of this License. you may choose any version ever published (not as a draft) by the Free Software Foundation. External links • • GNU Free Documentation License (Wikipedia article on the license) Official GNU FDL webpage Wikibooks | 71 .
https://www.scribd.com/document/36484847/C
CC-MAIN-2017-09
en
refinedweb
Using SOAP with J2EE - The Basic Structure of SOAP - SOAP Namespaces - SOAP Headers - The SOAP Body - SOAP Messaging Modes - SOAP Faults - SOAP over HTTP - Wrapping Up SOAP was originally an acronym for Simple Object Access Protocol. (Now it's just a name.) SOAP 1.1 is the standard messaging protocol used by J2EE Web Services, and is the de facto standard for Web services in general. SOAP's primary application is Application-to-Application (A2A) communication. Specifically, it's used in Business-to-Business (B2B) and Enterprise Application Integration (EAI), which are two sides of the same coin: Both focus on integrating software applications and sharing data. To be truly effective in B2B and EAI, a protocol must be platform-independent, flexible, and based on standard, ubiquitous technologies. Unlike earlier B2B and EAI technologies, such as CORBA and EDI, SOAP meets these requirements, enjoys widespread use, and has been endorsed by most enterprise software vendors and major standards organizations (W3C, WS-I, OASIS, etc.). Despite all the hoopla, however,,1 is usually carried as the payload of some other network protocol. For example, the most common way to exchange SOAP messages is via HTTP (HyperText Transfer Protocol), used by Web browsers to access HTML Web pages. The big difference is that you don't view SOAP messages with a browser as you do HTML. SOAP messages are exchanged between applications on a network and are not meant for human consumption. HTTP is just a convenient way of sending and receiving SOAP messages.. Figure 4-1 illustrates how SOAP can be carried by various protocols between software applications on a network. Figure 4-1. SOAP over HTTP, SMTP, and Raw TCP/IP Web services can use One-Way messaging or Request/Response messaging. In the former, SOAP messages travel in only one direction, from a sender to a receiver. In the latter, a SOAP message travels from the sender to the receiver, which is expected to send a reply back to the sender. Figure 4-2 illustrates these two forms of messaging. Figure 4-2. One-Way versus Request/Response Messaging SOAP defines how messages can be structured and processed by software in a way that is independent of any programming language or platform, and thus facilitates interoperability between applications written in different programming languages and running on different operating systems. Of course, this is nothing new: CORBA IIOP and DCE RPC also focused on cross-platform interoperability. These legacy protocols were never embraced by the software industry as a whole, however, so they never became pervasive technologies. SOAP, on the other hand, has enjoyed unprecedented acceptance, and adoption by virtually all the players in distributed computing, including Microsoft, IBM, Sun Microsystems, BEA, HP, Oracle, and SAP, to name a few. The tidal wave of support behind SOAP is interesting. One of the main reasons is probably its grounding in XML. The SOAP message format is defined by an XML schema, which exploits XML namespaces to make SOAP very extensible.. A port is a communication address on a computer that complements the Internet address. Each network application on a computer uses a different port to communicate. By convention, Web servers use port 80 for HTTP requests, but application servers can use any one of thousands of other ports. The power that comes from XML's extensibility and the convenience of using the ubiquitous, firewall-immune HTTP protocol partly explain SOAP's success. It's difficult to justify SOAP's success purely on its technical merits, which are good but less than perfect. Another factor in SOAP's success is the stature of its patrons. SOAP is the brainchild of Dave Winner, Don Box, and Bob Atkinson. Microsoft and IBM supported it early, which sent a strong signal to everyone else in the industry: “If you want to compete in this arena, you better jump aboard SOAP.” The event that secured industry-wide support for SOAP was its publication by the World Wide Web Consortium as a Note2 in May of 2000, making it the de facto standard protocol for A2A messaging. Overnight, SOAP became the darling of distributed computing and started the biggest technology shift since the introduction of Java in 1995 and XML in 1998. SOAP is the cornerstone of what most people think of as Web services today, and will be for a long time to come. Recently, the W3C has defined a successor to SOAP 1.1. SOAP 1.2 does a decent job of tightening up the SOAP processing rules and makes a number of changes that will improve interoperability. SOAP 1.2 is very new and has not yet been widely adopted, however, so it's not included in the WS-I Basic Profile 1.0. This exclusion is bound to end when the BP is updated, but for now J2EE 1.4 Web Services, which adheres to the WS-I Basic Profile 1.0, does not support the use of SOAP 1.2. 4.1 The Basic Structure of SOAP As you now know, a SOAP message is a kind of XML document. SOAP has its own XML schema, namespaces, and processing rules. This section focuses on the structure of SOAP messages and the rules for creating and processing them. A SOAP message is analogous to an envelope used in traditional postal service. Just as a paper envelope contains a letter, a SOAP message contains XML data. For example, a SOAP message could enclose a purchaseOrder element, as in Listing 4-1. Notice that XML namespaces are used to keep SOAP-specific elements separate from purchaseOrder elements—the SOAP elements are shown in bold. Listing 4-1 A SOAP Message That Contains an Instance of Purchase Order Markup <?xml version="1.0" encoding="UTF-8"?> <soap:Envelope xmlns: <soap:Body> <po:purchaseOrder <po:accountName>Amazon.com</po:accountName> <po:accountNumber>923</po:accountNumber> <po:address> <po:name>AMAZON.COM</po:name> <po:street>1850 Mercer Drive</po:street> <po:city>Lexington</po:city> <po:state>KY</po:state> <po:zip>40511</po:zip> </po:address> <po:book> <po:title>J2EE Web Services</po:title> <po:quantity>300</po:quantity> <po:wholesale-price>24.99</po:wholesale-price> </po:book> </po:purchaseOrder> </soap:Body> </soap:Envelope> This message is an example of a SOAP message that contains an arbitrary XML element, the purchaseOrder element. In this case, the SOAP message will be One-Way; it will be sent from the initial sender to the ultimate receiver with no expectation of a reply. Monson-Haefel Books' retail customers will use this SOAP message to submit a purchase order, a request for a shipment of books. In this example, Amazon.com is ordering 300 copies of this book for sale on its Web site. A SOAP message may have an XML declaration, which states the version of XML used and the encoding format, as shown in this snippet from Listing 4-1. <?xml version="1.0" encoding="UTF-8"?> If an xml declaration is used, the version of XML must be 1.0 and the encoding must be either UTF-8 or UTF-16. If encoding is absent, the assumption is that the SOAP message is based on XML 1.0 and UTF-8. An XML declaration isn't mandatory. Web services are required to accept messages with or without them.BP (Remember that I said I'd use a superscript BP to signal a BP-conformance rule.) Every XML document must have a root element, and in SOAP it's the Envelope element. Envelope may contain an optional Header element, and must contain a Body element. If you use a Header element, it must be the immediate child of the Envelope element, and precede the Body element. The Body element contains, in XML format, the actual application data being exchanged between applications. The Body element delimits the application-specific data. Listing 4-2 shows the structure of a SOAP message. Listing 4-2 The Structure of a SOAP Message <?xml version="1.0" encoding="UTF-8"?> <soap:Envelope xmlns: <soap:Header> <!-- Header blocks go here --> </soap:Header> <soap:Body> <!-- Application data goes here --> </soap:Body> </soap:Envelope> A SOAP message adheres to the SOAP 1.1 XML schema, which requires that elements and attributes be fully qualified (use prefixes or default namespaces). A SOAP message may have a single Body element preceded, optionally, by one Header element. The Envelope element cannot contain any other children. Because SOAP doesn't limit the type of XML data carried in the SOAP Body, SOAP messages are extremely flexible; they can exchange a wide spectrum of data. For example, the application data could be an arbitrary XML element like a purchaseOrder, or an element that maps to the arguments of a procedure call. The Header element contains information about the message, in the form of one or more distinct XML elements, each of which describes some aspect or quality of service associated with the message. Figure 4-3 illustrates the structure of a basic SOAP message. Figure 4-3. The Structure of a Basic SOAP Message The Header element can contain XML elements that describe security credentials, transaction IDs, routing instructions, debugging information, payment tokens, or any other information about the message that is important in processing the data in the Body element. For example, we may want to attach a unique identifier to every SOAP message, to be used for debugging and logging. Although unique identifiers are not an integral part of the SOAP protocol itself, we can easily add an identifier to the Header element as in Listing 4-3. Listing 4-3 A SOAP Message with a Unique Identifier <?xml version="1.0" encoding="UTF-8"?> <soap:Envelope xmlns: <soap:Header> <mi:message-id>11d1def534ea:b1c5fa:f3bfb4dcd7:-8000</mi:message-id> </soap:Header> <soap:Body> <!-- Application-specific data goes here --> </soap:Body> </soap:Envelope> The message-id element is called a header block, and is an arbitrary XML element identified by its own namespace. A header block can be of any size and can be very extensive. For example, the header for an XML digital signature, shown in bold in Listing 4-4, is relatively complicated. Listing 4-4 A SOAP Message with an XML Digital-Signature Header Block <?xml version="1.0" encoding="UTF-8"?> <soap:Envelope xmlns: <soap:Header> <mi:message-id>11d1def534ea:b1c5fa:f3bfb4dcd7:-8000</mi:message-id> <sec:Signature > <ds:Signature> <ds:SignedInfo> <ds:CanonicalizationMethod <ds:SignatureMethod <ds:Reference <ds:Transforms> <ds:Transform </ds:Transforms> <ds:DigestMethod <ds:DigestValue>u29dj93nnfksu937w93u8sjd9= </ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue>CFFOMFCtVLrklR…</ds:SignatureValue> </ds:Signature> </sec:Signature> </soap:Header> <soap:Body sec: <!-- Application-specific data goes here --> </soap:Body> </soap:Envelope> You can place any number of header blocks in the Header element. The example above contains both the message-id and XML digital signature header blocks, each of which would be processed by appropriate functions. Header blocks are discussed in more detail in Section 4.3.
http://www.informit.com/articles/article.aspx?p=169106&amp;seqNum=7
CC-MAIN-2017-09
en
refinedweb
- Total open tickets: 2202 ( - #149 - missed CSE opportunity - #314 - #line pragmas not respected inside nested comments - #344 - arrow notation: incorrect scope of existential dictionaries - #345 - GADT - fundep interaction - #367 - Infinite loops can hang Concurrent Haskell - #418 - throwTo to a thread inside 'block' - #552 - GHCi :m doesn't restore default decl - #589 - Various poor type error messages - #781 - GHCi on x86_64, cannot link to static data in shared libs - #806 - hGetBufNonBlocking doesn't work on Windows - #816 - Weird fundep behavior (with -fallow-undecidable-instances) - #917 - -O introduces space leak - #926 - infinite loop in ShutdownIOManager() - #947 - ghc -O space leak: CSE between different CAFs - #1012 - ghc panic with mutually recursive modules and template haskell - #1057 - Implicit parameters on breakpoints - #1087 - bang patterns with infix ops - #1147 - Quadratic behaviour in the compacting GC - #1158 - Problem with GADTs and explicit type signatures - #1168 - Optimisation sometimes decreases sharing in IO code - #1171 - GHC doesn't respect the imprecise exceptions semantics - #1216 - Missed opportunity for let-no-esape - #1290 - ghc runs preprocessor too much - #1308 - Type signature in warning is wrong - #1330 - Impredicativity bug: Church2 test gives a rather confusing error with the HEAD - #1400 - :set +r doesn't work for interpreted modules - #1466 - Stack check for AP_STACK - #1487 - unix package: test needed for getLoginName - #1498 - Optimisation: eliminate unnecessary heap check in recursive function - #1526 - -fobject-code doesn't apply to expressions typed at the prompt - #1530 - debugging :steps inside TH spliced code need to be bypassed - #1544 - Derived Read instances for recursive datatypes with infix constructors are too inefficient - #1612 - GHC_PACKAGE_PATH and $topdir bug - #1614 - Type checker does not use functional dependency to avoid ambiguity - #1620 - ModBreaks.modBreaks_array not initialised - #1687 - A faster (^)-function. - #1693 - Make distclean (still) doesn't - #1727 - Precedence and associativity rules ignored when mixing infix type and data constructors in a single expression - #1831 - reify never provides the declaration of variables - #1851 - "make install-strip" should work - #1853 - hpc mix files for Main modules overwrite each other - #1928 - Confusing type error message - #2028 - STM slightly conservative on write-only transactions - #2031 - relocation overflow - #2057 - inconsistent .hi file error gets ignored - #2132 - Optimise nested comparisons - #2140 - cpuTimePrecision is wrong - #2147 - unhelpful error message for a misplaced DEPRECATED pragma - #2189 - hSetBuffering stdin NoBuffering doesn't work on Windows - #2224 - -fhpc inteferes/prevents rewrite rules from firing - #2255 - Improve SpecConstr for free variables - #2256 - Incompleteness of type inference: must quantify over implication constraints - #2273 - inlining defeats seq - #2289 - Needless reboxing of values when returning from a tight loop - #2346 - Compilation of large source files requires a lot of RAM - #2356 - GHC accepts multiple instances for the same type in different modules - #2370 - num009 fails on OS X 10.5? - #2374 - MutableByteArray# is slower than Addr# - #2387 - Optimizer misses unboxing opportunity - #2401 - aborting an STM transaction should throw an exception - #2408 - threadWaitRead on mingw32 threaded causes internal error - #2439 - Missed optimisation with dictionaries and loops - #2496 - Invalid Eq/Ord instances in Data.Version - #2607 - Inlining defeats selector thunk optimisation - #2625 - Unexpected -ddump-simpl output for derived Ord instance and UNPACKed fields - #2642 - Improve SpecConstr for join points - #2710 - -main-is flag in {-# OPTIONS #-} pragma not fully supported - #2731 - Avoid unnecessary evaluation when unpacking constructors - #2776 - Document -pgmL (Use cmd as the literate pre-processor) - #2786 - Blackhole loops are not detected and reported in GHCi - #2805 - Test ffi009(ghci) fails on PPC Mac OS X - #2926 - Foreign exported function returns wrong type - #2933 - LDFLAGS ignored by build system - #2940 - Do CSE after CorePrep - #3034 - divInt# floated into a position which leads to low arity - #3048 - Heap size suggestion gets ignored when -G1 flag is passed - #3055 - Int / Word / IntN / WordN are unequally optimized - #3061 - GHC's GC default heap growth strategy is not as good as other runtimes - #3070 - floor(0/0) should not be defined - #3073 - Avoid reconstructing dictionaries in recursive instance methods - #3081 - Double output after Ctrl+C on Windows - #3094 - Some GHC.* module should export word size and heap object header size - #3107 - Over-eager GC when blocked on a signal in the non-threaded runtime - #3134 - encodeFloat . decodeFloat - #3138 - Returning a known constructor: GHC generates terrible code for cmonad - #3184 - package.conf.d should be under /var, not /usr - #3231 - Permission denied error with runProcess/openFile - #3238 - CInt FFI exports do not use C int in _stub.h header file - #3353 - Add CLDouble support - #3373 - GHC API is not thread safe - #3397 - :step hangs when -fbreak-on-exception is set - #3458 - Allocation where none should happen - #3549 - unlit does not follow H98 spec - #3571 - Bizzarely bloated binaries - #3588 - ghc -M should emit dependencies on CPP headers - #3606 - The Ord instance for unboxed arrays is very inefficient - #3628 - exceptions reported to stderr when they propagate past forkIO - #3676 - realToFrac doesn't sanely convert between floating types - #3711 - Bad error reporting when calling a function in a module which depends on a DLL on Windows - #3765 - Rules should "look through" case binders too - #3766 - Parsing of lambdas is not consistent with Haskell'98 report. - #3767 - SpecConstr for join points - #3781 - Improve inlining for local functions - #3782 - Data Parallel "Impossible happened" compiler error - #3831 - SpecConstr should exploit cases where there is exactly one call pattern - #3872 - New way to make the simplifier diverge - #3881 - section parse errors, e.g. ( let x=1 in x + ) - #3903 - DPH bad sliceP causes RTS panic "allocGroup: requested zero blocks" - #3937 - Cannot killThread in listen/accept on Windows threaded runtime - #3960 - ghc panic when attempting to compile DPH code - #3995 - Comment delimiters ignored inside compiler pragma - #3998 - strace breaks System.Process(?) - #4005 - Bad behaviour in the generational GC with paraffins -O2 - #4012 - Compilation results are not deterministic - #4017 - Unhelpful error message in GHCi - #4022 - GHC Bindist is Broken on FreeBSD/amd64 - #4043 - Parsing of guards, and type signatures - #4048 - ghc-pkg should check for existence of extra-libraries - #4049 - Support for ABI versioning of C libraries - #4081 - Strict constructor fields inspected in loop - #4101 - Primitive constant unfolding - #4105 - ffi005 fails on OS X - #4140 - dynHelloWorld(dyn) fails in an unreg build - #4144 - Exception: ToDo: hGetBuf - when using custom handle infrastructure - #4150 - CPP+QuasiQuotes confuses compilation errors' line numbers - #4162 - GHC API messes up signal handlers - #4176 - reject unary minus in infix left hand side function bindings that resolve differently as expressions - #4245 - ghci panic: thread blocked indefinitely in an MVar operation - #4288 - Poor -fspec-constr-count=n warning messages - #4296 - The dreaded SkolemOccurs problem - #4301 - Optimisations give bad core for foldl' (flip seq) () - #4308 - LLVM compiles Updates.cmm badly - #4372 - Accept expressions in left-hand side of quasiquotations - #4413 - (^^) is not correct for Double and Float - #4451 - Re-linking avoidance is too aggressive - #4471 - Incorrect Unicode output on Windows Console - #4505 - Segmentation fault on long input (list of pairs) - #4820 - "Invalid object in isRetainer" when doing retainer profiling in GHC 7 branch - #4824 - Windows: Dynamic linking doesn't work out-of-the-box - #4831 - Too many specialisations in SpecConstr - #4833 - Finding the right loop breaker - #4836 - literate markdown not handled correctly by unlit - #4861 - Documentation for base does not include special items - #4899 - Non-standard compile plus Template Haskell produces spurious "unknown symbol" linker error - #4942 - GHC.ConsoleHandler does not call back application when Close button is pressed - #4945 - Another SpecConstr infelicity - #5041 - Incorrect Read deriving for MagicHash constructors - #5051 - Typechecker behaviour change - #5071 - GHCi crashes on large alloca/allocaBytes requests - #5142 - stub header files don't work with the MS C compiler - #5188 - Runtime error when allocating lots of memory - #5224 - Improve consistency checking for family instances - #5262 - Compiling with -O makes some expressions too lazy and causes space leaks - #5267 - Missing type checks for arrow command combinators - #5291 - GhcDynamic build fails on Windows: can't find DLLs - #5298 - Inlined functions aren't fully specialised - #5302 - Unused arguments in join points - #5305 - crash after writing around 40 gigabytes to stdout - #5320 - check_overlap panic (7.1 regression) - #5326 - Polymorphic instances aren't automatically specialised - #5340 - wrong warning on incomplete case analysis in conjunction with empty data declarations - #5355 - Link plugins against existing libHSghc - #5369 - Reinstate VECTORISE pragmas with expressions as right-hand sides - #5378 - unreg compiler: warning: conflicting types for built-in function ‘memcpy’ - #5400 - GHC loops on compiling with optimizations - #5444 - Slow 64-bit primops on 32 bit system - #5448 - GHC stuck in infinite loop compiling with optimizations - #5463 - SPECIALISE pragmas generated from Template Haskell are ignored - #5466 - Documentation for Chan could be better - #5470 - The DPH library needs to support PData and PRepr instances for more than 15-tuples - #5495 - simple program fails with -shared on mac - #5539 - GHC panic - Simplifier ticks exhausted - #5553 - sendWakeup error in simple test program with MVars and killThread - #5611 - Asynchronous exception discarded after safe FFI call - #5620 - Dynamic linking and threading does not work on Windows - #5641 - The -L flag should not exist - #5642 - Deriving Generic of a big type takes a long time and lots of space - #5645 - Sharing across functions causing space leak - #5646 - Initialise tuples using pragmas - #5692 - Source code with large floating constants in exponential notation cannot be compiled - #5702 - Can't vectorise pattern matching on numeric literals - #5722 - GHC inlines class method forever - #5761 - Getting stdout and stderr as a single handle from createProcess does not work on Windows - #5775 - Inconsistency in demand analysis - #5777 - core lint error with arrow notation and GADTs - #5780 - -faggressive-primops change caused a failure in perf/compiler/parsing001 - #5797 - readRawBufferPtr cannot be interrupted by exception on Windows with -threaded - #5807 - DPH library functions don't work without -fvectorise. - #5840 - Extend the supported environment sizes of vectorised closures - #5902 - Cannot tell from an exception handler whether the exception was asynchronous - #5928 - INLINABLE fails to specialize in presence of simple wrapper - #5954 - Performance regression 7.0 -> 7.2 (still in 7.4) - #5957 - signatures are too permissive - #5959 - Top level splice in Template Haskell has over-ambitious lexical scope? - #5974 - Casts, rules, and parametricity - #5987 - Too many symbols in ghc package DLL - #6004 - dph-lifted-vseg package doesn't provide Data.Array.Parallel.Prelude.Float module - #6034 - Parse error when using ' with promoted kinds - #6040 - Adding a type signature changes heap allocation into stack allocation without changing the actual type - #6047 - GHC retains unnecessary binding - #6065 - Suggested type signature causes a type error (even though it appears correct) - #6070 - Fun with the demand analyser - #6087 - Join points need strictness analysis - #6092 - Liberate case not happening - #6101 - Show instance for integer-simple is not lazy enough - #6107 - GHCi runtime linker cannot link with duplicate common symbols - #6113 - Profiling with -p not written if killed with SIGTERM - #6132 - Can't use both shebang line and #ifdef declarations in the same file. - #7026 - Impredicative implicit parameters - #7044 - reject reading rationals with exponent notation - #7045 - The `Read` instance of `Rational` does not support decimal notation - #7057 - Simplifier infinite loop regression - #7063 - Register allocators can't handle non-uniform register sets - #7066 - isInstance does not work for compound types - #7078 - Panic using mixing list with parallel arrays incorrectly - #7080 - Make RULES and SPECIALISE more consistent - #7098 - GHC 7.4.1 reports an internal error and core dumps while using DPH - #7102 - Type family instance overlap accepted in ghci - #7109 - Inlining depends on datatype size, even with INLINE pragmas - #7114 - Cannot recover (good) inlining behaviour from 7.0.2 in 7.4.1 - #7141 - Inlining the single method of a class can shadow rules - #7161 - hSetNewlineMode and hSetEncoding can be performed on closed and semi-closed handles - #7190 - GHC's -fprof-auto does not work with LINE pragmas - #7198 - New codegen more than doubles compile time of T3294 - #7206 - Implement cheap build - #7240 - Stack trace truncated too much with indirect recursion - #7245 - INLINEing top-level patterns causes ghc to emit 'arity missing' traces - #7246 - Callstack depends on way (prof, profasm, profthreaded - #7258 - Compiling DynFlags is jolly slow - #7259 - Eta expansion of products in System FC - #7273 - Binary size increase in nofib/grep between 7.6.1 and HEAD - #7277 - Recompilation check fails for TH unless functions are inlined - #7287 - Primops in RULES generate warnings - #7296 - ghc-7 assumes incoherent instances without requiring language `IncoherentInstances` - #7297 - LLVM incorrectly hoisting loads - #7298 - GHCi is setting stdin/stdout to NoBuffering in runghc when DYNAMIC_GHC_PROGRAMS=YES - #7307 - #7309 - The Ix instance for (,) leaks space in range - #7320 - GHC crashes when building on 32-bit Linux in a Linode - #7336 - Defined but not used is not detected for data types with instances - #7353 - Make system IO interruptible on Windows - #7367 - float-out causes extra allocation - #7373 - When building GHC: Failed to load interface for `GHC.Fingerprint' - #7374 - rule not firing - #7378 - Identical alts/bad divInt# code - #7380 - Panic: mkNoTick: Breakpoint loading modules with -O2 via API - #7388 - CAPI doesn't work with ghci - #7398 - RULES don't apply to a newtype constructor - #7411 - Exceptions are optimized away in certain situations - #7428 - GHC compile times are seriously non-linear in program size - #7430 - GHC API reports CPP errors in confusing ways - #7443 - Generated C code under -prof -fprof-auto -fprof-cafs very slow to compile - #7450 - Regression in optimisation time of functions with many patterns (6.12 to 7.4)? - #7461 - Error messages about "do" statements contain false information - #7503 - Bug with PolyKinds, type synonyms & GADTs - #7511 - Room for GHC runtime improvement >~5%, inlining related - #7535 - Using -with-rtsopts=-N should fail unless -threaded is also specified - #7539 - Hard ghc api crash when calling runStmt on code which has not been compiled - #7542 - GHC doesn't optimize (strict) composition with id - #7543 - Constraint synonym instances - #7593 - Unable to print exceptions of unicode identifiers - #7596 - Opportunity to improve CSE - #7602 - Threaded RTS performing badly on recent OS X (10.8?) - #7610 - Cross compilation support for LLVM backend - #7621 - Cross-build for QNX ARM smashes stack when using FunPtr wrappers - #7624 - Handling ImplicitParams in Instance Declaration - #7634 - MD5 collision could lead to SafeHaskell violation - #7644 - Hackage docs for base library contain broken links - #7665 - dynamicToo001 fails on Windows - #7668 - Location in -fdefer-type-errors - #7670 - StablePtrs should be organized by generation for efficient minor collections - #7679 - Regression in -fregs-graph performance - #7723 - iOS patch no 12: Itimer.c doesn't work on iOS - #7779 - building GHC overwrites the installed package database if GHC_PACKAGE_PATH is set - #7789 - GHCI core dumps when used with VTY - #7803 - Superclass methods are left unspecialized - #7828 - RebindableSyntax and Arrow - #7831 - Bad fragmentation when allocating many large objects - #7836 - "Invalid object in processHeapClosureForDead" when profiling with -hb - #7842 - Incorrect checking of let-bindings in recursive do - #7849 - Error on pattern matching of an existential whose context includes a type function - #7897 - MakeTypeRep fingerprints be proper, robust fingerprints - #7930 - Nested STM Invariants are lost - #7944 - GHC goes into an apparently infinite loop at -O2 - #7960 - Compiling profiling CCS registration .c file takes far too long - #7966 - 'make distclean' does not work in nofib - #7983 - Bug in hsc2hs --cross-safe - #7985 - Allow openFile on unknown file type - #7988 - Big integers crashing integer-simple on qnxnto-arm with unregisterised build - #7997 - waitForProcess and getProcessExitCode are unsafe against asynchronous exceptions - #8014 - Assertion failure when using multithreading in debug mode. - #8023 - dph-examples binaries don't use all CPUs - #8025 - "During interactive linking, GHCi couldn't find the following symbol" typechecker error with TemplateHaskell involved - #8032 - Worker-wrapper transform and NOINLINE trigger bad reboxing behavior - #8036 - Demand analyser is unpacking too deeply - #8042 - `:load *` and `:add *` misbehave in presence of `-fobject-code` - #8048 - Register spilling produces ineffecient/highly contending code - #8082 - Ordering of assembly blocks affects performance - #8095 - TypeFamilies painfully slow - #8118 - <stdout>: commitAndReleaseBuffer: invalid argument (invalid character) - #8123 - GHCi warns about -eventlog even though it's sometimes necessary - #8127 - iOS patch no 19: Linking - #8128 - Standalone deriving fails for GADTs due to inaccessible code - #8144 - Interface hashes include time stamp of dependent files (UsageFile mtime) - #8147 - Exponential behavior in instance resolution on fixpoint-of-sum - #8151 - ghc-7.4.2 on OpenIndiana (Solaris) createSubprocess fails - #8159 - Uses of Binary decode should have a proper error message - #8173 - GHC uses nub - #8177 - Roles for type families - #8195 - Different floating point results with -msse2 on 32bit Linux - #8198 - One-shot mode is buggy w.r.t. hs-boot files - #8211 - ghc -c recompiles TH every time while --make doesn't - #8228 - GHC built under Windows does not generate dyn_hi files - #8258 - GHC accepts `data Foo where` in H2010 mode - #8265 - getTokenStream fails for source using cpp - #8279 - bad alignment in code gen yields substantial perf issue - #8281 - The impossible happened: primRepToFFIType - #8285 - unexpected behavior with encodeFloat on large inputs - #8293 - user001 spuriously fails if getGroupEntryForID correctly fails - #8303 - defer StackOverflow exceptions (rather than dropping them) when exceptions are masked - #8316 - GHCi debugger segfaults when trying force a certain variable - #8318 - GHC does not infer type of `tagToEnum#` expression - #8319 - Simplifier ticks exhausted (need -fsimpl-tick-factor=955) - #8327 - Cmm sinking does not eliminate dead code in loops - #8332 - hp2ps does not escape parentheses - #8336 - Sinking pass could optimize some assignments better - #8338 - Incoherent instances without -XIncoherentInstances - #8346 - Rank 1 type signature still requires RankNTypes - #8362 - Filesystem related tests failed on solaris (SmartOS) - #8363 - Order matters for unused import warnings when reexporting identifiers - #8378 - Cross-compiling from x86_64-unknown-linux-gnu to x86_64-sun-solaris2 with mkGmpConstants workaround fails to build objects for integer-gmp - #8388 - forall on non-* types - #8399 - Of Bird tacks and non-blank blank lines - #8406 - Invalid object in isRetainer() or Segfault - #8420 - Spurious dynamic library dependencies - #8422 - type nats solver is too weak! - #8426 - one-shot compilation + TH doesn't see instances that is seen in batch mode - #8447 - A combination of type-level comparison and subtraction does not work for 0 - #8457 - -ffull-laziness does more harm than good - #8484 - Compile-time panic - #8487 - Debugger confuses variables - #8510 - Clear up what extensions are needed at a Template Haskell splice site - #8520 - ghc.exe: internal error: loadArchive: error whilst reading `C' - #8523 - blowup in space/time for type checking and object size for high arity tuples - #8524 - GHC is inconsistent with the Haskell Report on which Unicode characters are allowed in string and character literals - #8527 - The ordering of -I directives should be consistent with the ordering of -package directives - #8532 - Hyperbolic arc cosine fails on (-1) :: Complex r. - #8556 - Invalid constructor names are accepted in data declarations - #8572 - Building an empty module with profiling requires profiling libraries for integer-gmp - #8573 - "evacuate: strange closure type 0" when creating large array - #8589 - Bad choice of loop breaker with INLINABLE/INLINE - #8591 - Concurrent executions of ghc-pkg can cause inconstant package.cache files - #8604 - Some stack/vmem limits (ulimit) combinations causing GHC to fail - #8611 - nofib’s cacheprof’s allocations nondeterminisitic - #8623 - Strange slowness when using async library with FFI callbacks - #8627 - mallocForeignPtrBytes documentation unobvious regarding memory alignment - #8629 - Option 'split-objs' being ignored when trying to reduce object code size in iOS cross-compilation - #8635 - GHC optimisation flag ignored when importing a local module with derived type classes - #8648 - Initialization of C statics broken in threaded runtime - #8657 - -fregs-graph still has a limit on spill slots - #8662 - GHC does not inline cheap inner loop when used in two places - #8666 - integer-gmp fails to compile on Debian Squeeze - #8668 - SPECIALIZE silently fails to apply - #8671 - Rebindable syntax creates bogus warning - #8684 - hWaitForInput cannot be interrupted by async exceptions on unix - #8694 - ghc -M doesn't handle addDependentFile or #included files - #8708 - Kind annotation in tuple not parsed - #8713 - Avoid libraries if unneeded (librt, libdl, libpthread) - #8721 - Testsuite not reporting errors for DYN way on OS X - #8730 - Invalid Unicode Codepoints in Char - #8731 - long compilation time for module with large data type and partial record selectors - #8732 - Global big object heap allocator lock causes contention - #8733 - I/O manager causes unnecessary syscalls in send/recv loops - #8740 - Deriving instance conditionally compiles - #8763 - forM_ [1..N] does not get fused (10 times slower than go function) - #8774 - Transitivity of Auto-Specialization - #8784 - Makefile missing a dependency - #8808 - ImpredicativeTypes type checking fails depending on syntax of arguments - #8814 - 7.8 optimizes attoparsec improperly - #8827 - Inferring Safe mode with GeneralizedNewtypeDeriving is wrong - #8847 - Int64 ^ Int64 broken by optimization on SPARC - #8862 - forkProcess does not play well with heap or time profiling - #8871 - No-op assignment I64[BaseReg + 784] = I64[BaseReg + 784]; is generated into optimized Cmm - #8872 - hsc cast size warnings on 32-bit Linux - #8887 - Double double assignment in optimized Cmm on SPARC - #8905 - Function arguments are always spilled/reloaded if scrutinee is already in WHNF - #8922 - GHC unnecessarily sign/zero-extends C call arguments - #8925 - :print and :sprint sometimes fully evaluates strings - #8948 - Profiling report resolution too low - #8949 - switch -msse2 to be on by default - #8956 - Parser error shadowed by "module ‘main:Main’ is defined in multiple files" error - #8964 - split_marker_entry assert breaks -split-objs and -ddump-opt-cmm - #8971 - Native Code Generator for 8.0.1 is not as optimized as 7.6.3... - #8981 - ghc-pkg complains about missing haddock interface files - #8982 - Cost center heap profile restricted by biography of GHC segfaults - #8995 - When generalising, use levels rather than global tyvars - #9020 - Massive blowup of code size on trivial program - #9041 - NCG generates slow loop code - #9046 - Panic in GHCi when using :print - #9059 - Excessive space usage while generating code for fractional literals with big exponents - #9074 - GHC 7.8.2's ghci does not track missing symbols when loading non-Haskell object files - #9076 - GHC.Exts docs don't contain all primops - #9079 - Foreign.C.Types in haskell2010 - #9088 - Per-thread Haskell thread list/numbering (remove global lock from thread allocation) - #9121 - Presence of dyn_o files not checked upon recompilation - #9123 - Need for higher kinded roles - #9135 - readProcessWithExitCode leaks when the program does not exist - #9136 - Constant folding in Core could be better - #9157 - cmm common block not eliminated - #9173 - Better type error messages - #9176 - GHC not generating dyn_hi files - #9198 - large performance regression in type checker speed in 7.8 - #9210 - "overlapping instances" through FunctionalDependencies - #9219 - Parallel build proceeds despite errors - #9221 - (super!) linear slowdown of parallel builds on 40 core machine - #9223 - Type equality makes type variable untouchable - #9232 - Stats file has wrong numbers - #9235 - Simplifier ticks exhausted on recursive class method - #9237 - GHC not recognizing INPUT(-llibrary) in linker scripts - #9246 - GHC generates poor code for repeated uses of min/max - #9247 - Document -XDatatypeContexts in flag reference - #9248 - Document -XHaskell98 and -XHaskell2010 in flag reference - #9249 - Link to "latest" user's guide - #9267 - Lack of type information in GHC error messages when the liberage coverage condition is unsatisfied - #9274 - GHC panic with UnliftedFFITypes+CApiFFI - #9277 - GHCi panic: Loading temp shared object failed: Symbol not found - #9278 - GHCi crash: selector _ for message _ does not match selector known to Objective C runtime - #9279 - Local wrapper function remains in final program; result = extra closure allocation - #9280 - GHCi crash: illegal text-relocation to _ in _ from _ in _ for architecture x86_64; relocation R_X86_64_PC32 against undefined symbol _ can not be used when making a shared object - #9283 - bad autoconf variable names - #9287 - changed dependency generation - #9292 - Race condition when multiple threads wait for a process - #9307 - LLVM vs NCG: floating point numbers close to zero have different sign - #9315 - Weird change in allocation numbers of T9203 - #9320 - Inlining regression/strangeness in 7.8 - #9347 - forkProcess does not acquire global handle locks - #9349 - excessive inlining due to state hack - #9353 - prefetch primops are not currently useful - #9358 - Improve flag description in the user guide - #9364 - GHCi (or haskeline?) confused by non-single-width characters - #9370 - unfolding info as seen when building a module depends on flags in a previously-compiled module - #9386 - GHCi cannot load .so in ./ - #9388 - Narrow the scope of the notorious "state hack" - #9406 - unexpected failure for T7837(profasm) - #9418 - Warnings about "INLINE binder is (non-rule) loop breaker" - #9420 - Impredicative type instantiation without -XImpredicativeTypes - #9421 - Problems and workarounds when installing and using a 32bit GHC on 64bit Linux machine - #9432 - IncoherentInstances are too restricted - #9434 - GHC.List.reverse does not fuse - #9438 - panic: loading archives not supported - #9445 - GHC Panic: Tick Exhausted with high factor - #9450 - GHC instantiates Data instances before checking hs-boot files - #9453 - The example for GHC Generics is kinda broken - #9456 - Weird behavior with polymorphic function involving existential quantification and GADTs - #9457 - hsc2hs breaks with `--cflag=-Werror` in cross-compilation mode - #9468 - Internal error: resurrectThreads: thread blocked in a strange way: 10 - #9470 - forkProcess breaks in multiple ways with GHC 7.6 - #9481 - Linker does not correctly resolve symbols in previously loaded objects - #9503 - Cross compiling with mingw uses wrong gcc - #9512 - T9329 fails test on unregisterised i386, amd64 - #9533 - Signed/unsigned integer difference between compiled and interpreted code - #9539 - TQueue can lead to thread starvation - #9547 - Empty constraint tuples are mis-kinded - #9557 - Deriving instances is slow - #9562 - Type families + hs-boot files = unsafeCoerce - #9570 - cryptarithm1 (normal) has bimodal runtime - #9573 - Add warning for invalid digits in integer literals - #9587 - Type checking with type functions introduces many type variables, which remain ambiguous. The code no longer type checks. - #9599 - app runs 10 times faster when compiled with profilling information than without it - #9607 - Programs that require AllowAmbiguousTypes in 7.8 - #9614 - ghc --print-(gcc|ld)-linker-flags broken - #9627 - Type error with functional dependencies - #9630 - compile-time performance regression (probably due to Generics) - #9631 - Comment in GHC.Base about GHC.Prim does not appear to be correct - #9636 - Function with type error accepted - #9643 - ghci must be restarted to use break point more than once? - #9646 - Simplifer non-determinism leading to 8 fold difference in run time performance - #9655 - Do not UNPACK strict fields that are very wide - #9660 - unnecessary indirect jump when returning a case scrutinee - #9666 - runtime crashing with +RTS -w -h - #9669 - Long compile time/high memory usage for modules with many deriving clauses - #9672 - Error message too long (full case statement printed) - #9675 - Unreasonable memory usage on large data structures - #9686 - Link option detection does not work for incremental builds on Windows - #9693 - Reloading GHCi with Template Haskell names can panic GHC - #9701 - GADTs not specialized properly - #9704 - GHC fails with "Loading temp shared object failed" - #9708 - Type inference non-determinism due to improvement - #9709 - Make restarts itself sometimes, and that's OK - #9717 - More lazy orphan module loading - #9725 - Constraint deduction failure - #9729 - GHCi accepts invalid programs when recompiling - #9730 - Polymorphism and type classes - #9737 - List ANN in pragmas chapter of user manual - #9755 - Unhelpful error message when -XScopedTypeVariables is omitted - #9765 - Strange behavior of GC under ghci - #9775 - "Failed to remove" errors during Windows build from hsc2hs - #9780 - dep_orphs in Dependencies redundantly records type family orphans - #9792 - map/coerce rule does not fire until the coercion is known - #9798 - Frustrating behaviour of the INLINE pragma - #9806 - malloc and mallocArray ignore Storable alignment requirements - #9809 - Overwhelming the TimerManager - #9811 - constant folding with infinities is wrong - #9825 - ghc "panic! (the 'impossible' happened)" building vimus on NixOS - #9841 - Touching a file that uses TH triggers TH recompilation flood - #9854 - Literal overflow check is too aggressive - #9868 - ghc: panic! Dynamic linker not initialised - #9885 - ghc-pkg parser eats too much memory - #9916 - ghc -e ":foo bar" exit status is inconsistent - #9918 - GHC chooses an instance between two overlapping, but cannot resolve a clause within the similar closed type family - #9932 - GHC fails to build when cross compiling for mingw with the message "Threads not supported" - #9936 - Data.Fixed truncates 5.17 to 5.16 - #9940 - GHCi crashes when I hold down control-c for a short time, and then hold down any key that would normally result in typing. - #9944 - Performance issue re: simple loop - #9979 - Performance regression GHC 7.8.4 to GHC HEAD - #9980 - TcS monad is too heavy - #9981 - Potential typechecker regression in GHC 7.10.1RC - #9982 - cross building integer-gmp is running target program on build host - #9983 - configure script invokes ghc with LDFLAGS during cross-builds - #9985 - GHC panic with ViewPatterns and GADTs in a proc pattern - #9989 - GHCI is slow for precompiled code - #9992 - Constructor specialization requires eta expansion - #10005 - Operations on string literals won't be inlined - #10010 - LLVM/optimized code for sqrt incorrect for negative values - #10012 - Cheap-to-compute values aren't pushed into case branches inducing unnecessary register pressure - #10022 - Clean up GHC.RTS.Flags - #10027 - Importing constructor of associated data type fails - #10032 - binary distributions not working --with-system-libffi - #10037 - Several profiling tests give different results optimised vs. unoptimised - #10044 - Wrong line number reported with CPP and line beginning with # - #10046 - Linker script patch in rts/Linker.c doesn't work for (non-C or non-en..) locales - #10053 - Regression on MacOS platform, error in ghci calling main after loading compiled code: "Too late for parseStaticFlags..." - #10056 - Inconsistent precedence of ~ - #10059 - :i doesn't work for ~ - #10062 - Codegen on sequential FFI calls is not very good - #10065 - Definition of fix lacks commentary - #10069 - CPR related performance issue - #10077 - Providing type checker plugin on command line results in false cyclic import error - #10097 - GHC 7.11 errors on dictionary casting tricks - #10106 - GHC doesn't warn on typos in language pragmas - #10111 - hp2ps silently discards samples - #10114 - Kind mismatches with AnyK in rank-2 types - #10117 - Change the scheme for reporting redundant imports - #10120 - Unnecessary code duplication from case analysis - #10124 - Simple case analyses generate too many branches - #10141 - CUSK mysteries - #10144 - Variant of runGhc which accepts prelude for setting up DynFlags - #10160 - GHCi :sprint has odd/unhelpful behavior for values defined within the REPL - #10161 - GHC does not relink if we link against a new library with old timestamp - #10169 - bracket not running the final action on termination through SIGTERM - #10171 - runghc has problem when the argument list is too big - #10173 - Panic: Irrefutable pattern failed for pattern Data.Maybe.Just - #10174 - AArch64 : ghc-stage2 segfaults compiling libraries/parallel - #10178 - hs-boot/hsig ambiguity empty data declaration or type with no constructors - #10179 - Kinds missing from types in ghci - #10183 - Warning for redundant constraints: interaction with pattern matching - #10184 - Coercible solver incomplete with recursive newtypes - #10185 - Coercible solver incomplete with non-variable transitivity - #10187 - Panic: RegAlloc.Liveness.computeLivenss; SCCs aren't in reverse dependent order; bad blockId c8xHd - #10189 - explicit promotions of prefix data constructors can't be parsed naturally - #10193 - TypeRep Show instance doesn't add parens around type operators - #10199 - Sending SIGINT to a program that uses forkOS may crash with various errors - #10227 - Type checker cannot deduce type - #10228 - Increased memory usage with GHC 7.10.1 - #10229 - setThreadAffinity assumes a certain CPU virtual core layout - #10241 - BlockedIndefinitelyOnMVar thrown to the thread which is not blocked indefinitely - #10245 - panic in new integer switch logic with "out-of-range" literals - #10246 - Literal Pattern match loses order - #10270 - inconsistent semantics of type class instance visibility outside recursive modules - #10271 - Typed Template Haskell splice difficulty when resolving overloading - #10295 - Putting SCC in heavily inlined code results in "error: redefinition of global" - #10311 - package name returned from tyConPackage is garbled - #10328 - Control.Monad exports lead to weird Haddocks - #10330 - Better Template Haskell error message locations - #10332 - AArch64 : divbyzero test fails - #10333 - hs-boot modification doesn't induce recompilation - #10334 - __ctzdi2/si2/__clzdi2/si2 unknown symbols in ghc-prim on non-shared libs platform - #10337 - One-shot module loops have hard to understand messages - #10338 - GHC Forgets Constraints - #10341 - hs-boot files can have bogus declarations if they're not exported - #10346 - Cross-module SpecConstr - #10347 - Spurious "unused constructor" warning with Coercible - #10353 - Haddock for Data.List should list instances - #10355 - Dynamic linker not initialised - #10367 - "ghc: panic! (the 'impossible' happened)" - #10371 - GHC fails to inline and specialize a function - #10373 - Haiku: ghc-stage1 compiler crashes at exit - #10374 - Can't build GHC with a dynamic only GHC installation - #10378 - min/max for Double/Float instances are incorrect - #10381 - Type-checking failure with RankNTypes and RebindableSyntax - #10385 - Annotation restriction is not respected while generating Annotation via TH - #10387 - toRational should error out on NaN and Infinity values for Double/Floats - #10401 - state hack-related regression - #10411 - Neighbour let-bindings are not reported as relevant - #10412 - isAlphaNum includes mark characters, but neither isAlpha nor isNumber do - #10417 - Rule matching not "seeing through" floating and type lambda (and maybe cast) - #10418 - Incorrect RULE warning on constructor, and inablilty to {-# INLINE [0] #-} constrcutor - #10421 - exponential blowup in inlining (without INLINE pragmas) - #10424 - Build path leaks into ABI hashes - #10434 - SPECIALISE instance does not specialize as far as SPECIALISE for type signatures - #10436 - Excessive numbers of packages loaded for TH - #10437 - RunHaskell error in HSExtra on Win64 - #10440 - Float out just gets floated in - #10445 - Wrong stack space size when using -Ksize - #10456 - Wrong CPP during cross-compilation - #10469 - ghc crash on arm with -j2: internal error: scavenge: unimplemented/strange closure type - #10470 - Allocating StablePtrs leads to GC slowdown even after they're freed - #10477 - Tab-completing in a directory with Unicode heiroglyph crashes ghci - #10482 - Not enough unboxing happens on data-family function argument - #10484 - hPutBuf crashes when trying to write a large string to stdout (resource exhausted) - #10490 - Missing binder type check in coercion equality test? - #10504 - GHC panics with dsImpSpecs on SPECIALISE pragma with -fhpc enabled - #10506 - SourceNotes are not applied to all identifiers - #10509 - UnicodeSyntax documentation lists wrong symbols - #10526 - Overlapping instances, incoherence, and optimisation - #10531 - modules that can be linked successfully when compiled with optimizations, fail to link with: multiple definition of `__stginit_ZCMain' - #10542 - Incorrect Unicode input on Windows Console - #10554 - Replacing existing attachment with the same name doesn't work - #10555 - RULE left-hand side too complicated to desugar - #10560 - -f and -O options interact in non-obvious, order dependent ways - #10572 - Type signatures are not implicitly quantified over TH type variables - #10576 - REPL returns list of all imported names when operator completion requested - #10577 - Use empty cases where appropriate when deriving instances for empty types - #10582 - Tiny bug in lexer around lexing banana brackets - #10583 - Chaos in Lexeme.hs - #10584 - Installation of SFML failed - #10587 - Suspending and unsuspending ghci kills and spawns threads - #10595 - BuiltinRules override other rules in some cases. - #10599 - Template Haskell doesn't allow `newName "type"` - #10600 - -fwarn-incomplete-patterns doesn't work with -fno-code - #10616 - Panic in ghci debugger with PolyKinds and PhantomTypes - #10617 - Panic in GHCi debugger with GADTs, PolyKinds and Phantom types - #10626 - Missed opportunity for SpecConstr - #10631 - Report of GHC Panic - #10643 - GHC cannot import submodules when run from subfolder - #10648 - Some 64-vector SIMD primitives are absolutely useless - #10651 - Type checking issue with existential quantification, rank-n types and constraint kinds - #10671 - inplace/bin/ghc-stage1 doesn't respect --with-ld override - #10675 - GHC does not check the functional dependency consistency condition correctly - #10684 - Error cascade when unrelated class derivation fails - #10686 - Process stops responding to sigINT - #10695 - Trac errors when creating a ticket with a Blocking: field - #10698 - Forall'd variable ‘$rcobox’ is not bound in RULE lhs - #10701 - -fth-dec-file uses qualified names from hidden modules - #10702 - -fth-dec-file uses qualified names in binding positions - #10707 - -fth-dec-file outputs invalid case clauses - #10709 - Using ($) allows sneaky impredicativity on its left - #10730 - Spectral norm allocations increased 17% between 7.6 and 7.8 - #10731 - System.IO.openTempFile is not thread safe on Windows - #10732 - Legal Bang Patterns cannot parse - #10749 - Boot file instances should imply superclasses - #10761 - GHC panic when compiling vimus: failed to map segment from shared object - #10768 - Location information of LHsModule is incorrect - #10770 - Typeable solver has strange effects - #10774 - Use `Natural` rather than `Integer` in `GHC.TypeLits` - #10778 - GHC doesn't infer all constrains - #10779 - .so files in 64-bit Debian builds are 4% larger than they have to be - #10783 - Partial type signatures should work in pattern synonym signatures - #10792 - Bounded Enums [minBound..maxBound] produces runtime error - #10793 - Incorrect blocked on MVar detection - #10799 - "I found a duplicate definition for symbol: __x86.get_pc_thunk.bx" in package network - #10808 - Odd interaction between record update and type families - #10816 - Fixity declaration for associated type rejected - #10818 - GHC 7.10.2 takes much longer to compile some packages - #10822 - GHC inconsistently handles \\?\ for long paths on Windows - #10830 - maximumBy has a space leak - #10853 - Refine addTopDecls - #10856 - Record update doesn't emit new constraints - #10857 - "ghci -XMonomorphismRestriction" doesn't turn on the monomorphism restriction - #10859 - Generated Eq instance associates && wrongly - #10861 - `configure -C` yields different results on second run - #10875 - Unexpected defaulting of partial type signatures and inconsistent behaviour when -fdefer-typed-holes is set. - #10878 - Near doubling of generated code size for compiler/cmm/PprC.hs with commit 5d57087e31 - #10905 - Incorrect number of parameters in "role" errors - #10920 - ghci can't load local Prelude module - #10922 - String inlining is inconsistent - #10923 - GHC should recompile if flags change - #10930 - missing empty-Enumeration and out-of-range warning for `Natural` - #10937 - "ghc -no-link --make A.hs -o foo" does something silly - #10940 - Random number chosen by openTempFile is always 1804289383846930886 - #10944 - powModInteger slower than computing pow and mod separately - #10946 - Typed hole inside typed Template Haskell bracket causes panic - #10951 - HPC program has poor error reporting / strange CLI in general - #10952 - Use IPids instead of package keys in HPC tix files - #10957 - getExecutablePath adds " (deleted)" suffix if executable was deleted under linux - #10958 - "Annotating pure code for parallelism" docs based on old par/pseq primitives - #10965 - GHC Panic on import with 'OPTIONS_GHC -fobject-code -O' - #10966 - dirtiness checking isn't keeping track of which source file contained Main - #10975 - At program exit, finalizer runs while foreign function is running - #10980 - Deriving Read instance from datatype with N fields leads to N^2 code size growth - #10987 - -i option requires named module - #10990 - Checking whether a default declaration is an instance of a defaultable typeclass is broken - #10992 - Performance regression due to lack of inlining of `foldl` and `foldl'`. - #10993 - Bad error message reported when -XBinaryLiterals is not enabled - #10995 - Existentials in newtypes - #10996 - family is treated as keyword in types even without TypeFamilies enabled - #10998 - Parser should suggest -XMagicHash - #11004 - hsc2hs does not handle single quotes properly - #11005 - GHC's build system can't deal with ghc install path with multiple spaces in it - #11006 - Warning: Glomming in Main - #11008 - Difficulties around inferring exotic contexts - #11009 - Errors reading stdin on Windows - #11013 - GHC sometimes forgets to test for hs-boot consistency - #11023 - ghci and ghc-pkg disagree about what's exposed - #11029 - Performance loss due to eta expansion - #11032 - Missing result type handling for State# s in foreign import prim. - #11042 - Template Haskell / GHCi does not respect extra-lib-dirs - #11045 - #11050 - [bug] ModOrigin: hidden module redefined - #11058 - selected processor does not support ARM mode - #11066 - Inacessible branch should be warning - otherwise breaks type soundness? - #11068 - Make Generic/Generic1 methods inlinable - #11069 - :cd in GHCi unloads modules - #11070 - Type-level arithmetic of sized-types has weaker inference power than in 7.8 - #11075 - Confusing parallel spark behaviour with safe FFI calls - #11084 - Some type families don't reduce with :kind! - #11092 - ApiAnnotations : make annotation for shebang - #11099 - Incorrect warning about redundant constraints - #11101 - Expand Template Haskell type splices before quantification - #11106 - Turning on optimizer changes behavior in 7.10.3 - #11107 - Can't use type wildcard infix - #11110 - GHCi documentation says ":show packages" gives a list of packages currently loaded - #11113 - Type family If is too strict - #11117 - mdo blocks in error messages are shown modified - #11124 - GHC does not shadow -package-name/-this-package-key - #11126 - Entered absent arg in a Repa program - #11131 - Eta reduction/expansion loop - #11141 - Better error message when instance signature is incorrect - #11146 - Manual eta expansion leads to orders of magnitude less allocations - #11151 - T3064 regresses with wildcard refactor - #11180 - A program writing to a read-only stdout should not succeed - #11181 - Program hangs forever in sched_yield() / yield() unless -N is limited - #11188 - Confusing "parse error in pattern" for spurious indentation. - #11195 - New pattern-match check can be non-performant - #11196 - TypeInType performance regressions - #11197 - Overeager deferred type errors - #11198 - TypeInType error message regressions - #11201 - ghc --make on Haskell and non-Haskell inputs can silently clobber input - #11203 - Kind inference with SigTvs is wrong - #11204 - Recompilation checking stochastically broken on Darwin - #11207 - GHC cannot infer injectivity on type family operating on GHC.TypeLits' Nat, but can for equivalent type family operating on user-defined Nat kind - #11212 - Should be more liberal parsing pattern synonyms with view patterns - #11214 - Remove JavaScriptFFI from --supported-extensions list - #11215 - Line endings in quasiquotations are not normalised - #11226 - Performance regression (involving sum, map, enumFromThenTo) - #11228 - Interaction between ORF and record pattern synonyms needs to be resolved. - #11247 - Weird error from running runghc on an invalid input filename - #11251 - isInstance does not work on Typeable with base-4.8 anymore - #11253 - Duplicate warnings for pattern guards and relevant features (e.g. View Patterns) - #11259 - Use system runtime linker in GHCi on PowerPC 64 bit - #11260 - Re-compilation driver/recomp11 test fails - #11261 - Implement DWARF debugging on powerpc64 - #11262 - Test print022: wrong stdout on powerpc64 - #11263 - "Simplifier ticks exhausted" that resolves with fsimpl-tick-factor=200 - #11267 - Can't parse type with kind ascription with GHCi's :kind command - #11271 - Costly let binding gets duplicated in IO action value - #11272 - Overloaded state-monadic function is not specialised - #11282 - Error warns about non-injectivity of injective type family - #11288 - Thread blocked indefinitely in a Mvar operation - #11293 - Compiler plugins don't work with profiling - #11301 - Using GHC's parser and rendering the results is unreasonably difficult - #11306 - Do not generate warning in `do` when result is of type `Void`. - #11307 - Regresssion: parsing type operators - #11310 - Surprising accepted constructor for GADT instance of data family - #11312 - GHC inlining primitive string literals can affect program output - #11315 - GHC doesn't restore split objects - #11317 - Test prog003 fails with segfault on Windows (GHCi) - #11319 - ImpredicativeTypes even more broken than usual - #11323 - powerpc64: recomp015 fails with redundant linking - #11325 - Type of hole does not get refined after pattern matching on [GADT] constructors - #11327 - GHC doesn't create dyn_o-boot files - #11333 - GHCi does not discharge satisfied constraints - #11346 - GHCi can crash when tabbing a filename - #11360 - Test "termination" doesn't pass with reversed uniques - #11368 - Pattern synonym name is mangled when patterns are non-exhaustive - #11369 - Suppress redundant-constraint warnings in case of empty classes - #11371 - Bogus in-scope set in substitutions - #11375 - Type aliases twice as slow to compile as closed type families. - #11380 - Compiling a 10.000 line file exhausts memory - #11382 - Optimize Data.Char - #11384 - Error says to fix incorrect return type - #11400 - * is not an indexed type family - #11406 - RTS gets stuck in scheduleDetectDeadlock() - #11421 - adding a PartialTypeSignatures to a binding affected by MonoLocalBinds leads compile-time failure - #11424 - "Occurs check" not considered when reducing closed type families - #11427 - superclasses aren't considered because context is no smaller than the instance head - #11431 - GHC instantiates levity-polymorphic type variables with foralls - #11435 - Evidence from TC Plugin triggers core-lint warning - #11436 - ValueError: invalid literal for int() with base 10: '#11238' - #11437 - Don't put static (version-based) feature gates in compilerInfo - #11449 - Treat '_' consistently in type declarations - #11474 - incorrect redundant-constraints warning - #11475 - Lint should check for inexhaustive alternatives - #11476 - gcc leaves undeleted temporary files when invoked with a response file - #11490 - check architecture before attempting to load object files - #11495 - TH_spliceE5_prof is failing with release candidate 8.0.1 - #11498 - GHC requires kind-polymorphic signatures on class head - #11499 - #11501 - Building nofib/fibon returns permission denied - #11503 - TypeError woes (incl. pattern match checker) - #11505 - Boot file problem - #11506 - uType_defer can defer too long - #11511 - Type family producing infinite type accepted as injective - #11513 - Work out when GADT parameters should be specified - #11514 - Impredicativity is still sneaking in - #11515 - PartialTypeSignatures suggests a redundant constraint with constraint families - #11517 - ghc -haddock fails to parse doc comments in closed type families - #11522 - maxStkSize can overflow - #11523 - Infinite Loop when mixing UndecidableSuperClasses and the class/instance constraint synonym trick. - #11525 - Using a dummy typechecker plugin causes an ambiguity check error - #11526 - unsafeLookupStaticPtr should not live in IO - #11527 - Pattern match translation suboptimal - #11529 - Show instance of Char should print literals for non-ascii printable charcters - #11536 - Multitude of different error messages when installed package is missing a module - #11538 - Wrong constants in LL code for big endian targets - #11540 - ghc accepts non-standard type without language extension - #11542 - Profiling call count frequently 0 when it shouldn't be - #11545 - Strictness signature blowup - #11546 - Compiler warning: cast from pointer to integer of different size - #11549 - Add -fshow-runtime-rep - #11553 - `round (± ∞ :: Double)` not infinite - #11556 - GHC recompiles unchanged hs-boot files - #11559 - Building a cross-compiler for MIPS target on Mac OS X host fails - #11560 - panic: isInjectiveTyCon sees a TcTyCon - #11566 - I don't need madvise MADV_DONTNEED - #11571 - Need more intelligent conditionalization of libgcc rts symbols for x32 - #11577 - GHCi accepts invalid programs when recompiling - #11578 - Fix GHC on AArch64/Arm64 - #11584 - [Template Haskell] Language.Haskell.TH.Syntax.hs contains misleading comment - #11587 - Place shared objects in LIBDIR - #11596 - ghci gets confused if a file is deleted - #11599 - Why is UndecidableInstances required for an obviously terminating type family? - #11602 - Exponential behaviour in typeKind, unifyTys etc - #11604 - Build system fails after submodule update - #11605 - GHC accepts overlapping instances without pragma - #11606 - name shadowing warnings don't trigger on standalone declarations in ghci - #11621 - GHC doesn't see () as a Constraint in type family - #11622 - Annotating types in type familiy equations without parentheses - #11628 - Unexpected results with Read/Show - #11630 - More precise LANGUAGE pragma when forall is used - #11634 - Bang patterns bind... unexpectedly. Deserves loud warning - #11645 - Heap profiling - hp2ps: samples out of sequence - #11647 - GHCi does not honour implicit `module Main (main) where` for re-exported `main`s - #11650 - Documentation does not mention that default definitions for Alternative(some, many) can easily blow up - #11655 - Ambiguous types in pattern synonym not determined by functional dependencies - #11668 - SPEC has a runtime cost if constructor specialization isn't performed - #11669 - Incorrectly suggests RankNTypes for ill-formed type "forall a. Eq a. Int" - #11672 - Poor error message - #11677 - Dramatic de-optimization with "-O", "-O1", "-O2" options - #11695 - On GHCi prompt the arrow (movement) keys create strange character sequences - #11704 - Word and Int literals not correctly truncated when cross compiling 64 -> 32 bit - #11715 - Constraint vs * - #11719 - Cannot use higher-rank kinds with type families - #11721 - GADT-syntax data constructors don't work well with TypeApplications - #11722 - No TypeRep for unboxed tuples - #11731 - Simplifier: Inlining trivial let can lose sharing - #11764 - ghc internal error building llvm-general-3.5.1.2 - #11766 - Lazy application gives "No instance" error while strict application works - #11771 - ghc.exe: `panic'! (the 'impossible' happened); thread blocked indefinitely in an MVar operation - #11773 - linux/powepc : ghc-stage1 segfaults when building unregisterised - #11774 - Regression on GHC 8 branch (vs 7.10.3) when using the GHC API to parse code that uses TH - #11777 - RTS source code issues - #11780 - GHC stage-2 build fails with "relocation R_X86_64_PC32 against `exitStaticPtrTable' can not be used when making a shared object" - #11798 - Recompiling with -fhpc flag added does nothing - #11812 - Template Haskell can induce non-unique Uniques - #11819 - Full validation issues for 8.0.1 - #11822 - Pattern match checker exceeded (2000000) iterations - #11826 - unsafe causes bug, news @ 11 - #11827 - InteractiveEval error handling gets a boot ModSummary instead of normal ModSummary - #11829 - C++ does not catch exceptions when used with Haskell-main and linked by ghc - #11831 - Illegal Instruction when running byte operations in ghci - #11834 - GHC master, not compiling on Archlinux - #11836 - Hello World Bug - silent stdout errors - #11944 - Simplifier ticks exhausted When trying UnfoldingDone ip_X7RI - #11954 - Associated pattern synonyms not included in haddock - #11955 - Haddock documentation for pattern synonyms printed with explicit forall quantifiers - #11957 - DataKinds: lifting constructors whose identifier is a single character - #11959 - Importing doubly exported pattern synonym and associated pattern synonym panics - #11963 - GHC introduces kind equality without TypeInType - #11964 - Without TypeInType, inconsistently accepts Data.Kind.Type but not type synonym - #11966 - Surprising behavior with higher-rank quantification of kind variables - #11968 - Document AMP in the deviations from the standard - #11982 - Typechecking fails for parallel monad comprehensions with polymorphic let - #11984 - Pattern match incompleteness / inaccessibility discrepancy - #11994 - ghci not applying defaulting when showing type - #11995 - Can't infer type - #11998 - Symbol not found: __hpc_tickboxes_DataziHeterogeneousEnvironment_hpc - #11999 - expressing injectivity on functional dependencies gives orphan instances warnings - #12002 - Pragmas after a module declaration are ignored without warning. - #12004 - Windows unexpected failures - #12005 - Constraint instances not shown in `:info` - #12006 - Can't infer constraint of pattern synonyms - #12012 - Socket operations on Windows check errno instead of calling WSAGetLastError() - #12018 - Equality constraint not available in pattern type signature (GADTs/ScopedTypeVariables) - #12019 - Profiling option -hb is not thread safe - #12021 - Type variable escapes its scope - #12023 - Problems getting information and kind from GHC.Prim ~#, ~R#, ... - #12028 - Large let bindings are 6x slower (since 6.12.x to 7.10.x) - #12032 - Performance regression with large numbers of equation-style decls - #12034 - Template Haskell + hs-boot = Not in scope during type checking, but it passed the renamer - #12038 - Shutdown interacts badly with requestSync() - #12043 - internal error: evacuate: strange closure type - #12046 - AllowAmbiguousTypes doesn't work with UndecidableSuperClasses - #12055 - Typechecker panic instead of proper error - #12056 - Too aggressive `-w` option - #12060 - GHC panic depending on what a Haskell module is named - #12063 - Knot-tying failure when type-synonym refers to non-existent data - #12065 - there is a way to override the .tix path with HPCTIXFILE - #12074 - RULE too complicated to desugar - #12075 - Fails to build on powerpcspe because of inline assembly - #12078 - ghc-boot-th package reveals issue with build system's treatment of transitive dependencies - #12079 - segmentation fault in both ghci and compiled program involves gtk library - #12083 - ghc-8.0.1-rc4: tyConRoles sees a TcTyCon - #12088 - Type/data family instances in kind checking - #12089 - :kind command allows unsaturated type family, - #12091 - 'Variable not in scope" when using GHCi with `-fobject-code` - #12092 - Out-of-scope variable leads to type error, not scope error - #12093 - Wrong argument count in error message with TypeApplications - #12100 - GHC 8.0.1 build segmentation fault in haddock - #12102 - “Constraints in kinds” illegal family application in instance (+ documentation issues?) - #12104 - Type families, `TypeError`, and `-fdefer-type-errors` cause "opt_univ fell into a hole" - #12110 - Windows exception handler change causes segfault with API Monitor - #12113 - ghc-8.0.1-rc4: unification false positive? - #12117 - Thread by forkIO freezes (while read :: Int if error appears) when compiled with -threaded option - #12120 - GHC accepts invalid Haskell: `class Eq (a Int) => C a where` - #12121 - FlexibleContexts is under specified - #12122 - User's guide (master): all links to libraries are broken - #12126 - Bad error messages for SPECIALIZE pragmas - #12131 - Can't solve constraints with UndecidableSuperClasses but can infer kind (+ undesired order of kinds) - #12135 - Failure to recompile when #include file is created earlier on include path - #12142 - -Wredundant-constraints warns about constraints introduced via type synonyms. - #12143 - ApplicativeDo Fails to Desugar 'return True' - #12149 - Support bit-fields - #12150 - Compile time performance degradation on code that uses undefined/error with CallStacks - #12152 - panic: Loading temp shared object failed - #12158 - ghc: panic! (the 'impossible' happened) translateConPatVec: lookup - #12161 - Panic when literal is coerced into function - #12168 - panic! (the 'impossible' happened) with gi-gtk 3.0.4 - #12169 - libraries/base/dist-install/build/HSbase-4.9.0.0.o: unknown symbol `stat' - #12173 - foldl' semantics changed from 4.7 to 4.8 - #12176 - Failure of bidirectional type inference at the kind level - #12179 - Incorrect parsing of a pattern synonym type - #12180 - Ctrl-c during build produces *some* outputs for a file, and GHC --make fails handling this - #12181 - Multi-threaded code on ARM64 GHC runtime doesn't use all available cores - #12184 - unsafeCoerce# causing invalid assembly generation - #12187 - Clarify the scoping of existentials for pattern synonym signatures - #12193 - Include target versions of unlit and hsc2hs when cross-compiling - #12199 - GHC is oblivious to injectivity when solving an equality constraint - #12200 - ghc-pkg check complains about missing libCffi on dynamic-only install - #12205 - Program develops space leak with -fprof-auto - #12210 - allocateExec: can't handle large objects - #12214 - In event of framework failure, test suite still deletes temporary directory - #12221 - GHC's signal handlers break C-c C-c force terminate - #12225 - Warn if test setting has no effect (e.g. compile_timeout_multiplier on run_command) - #12226 - C-c test suite does not force kill hung GHC processes - #12231 - Eliminate redundant heap allocations/deallocations - #12232 - Opportunity to do better in register allocations - #12234 - 'deriving Eq' on recursive datatype makes ghc eat a lot of CPU and RAM - #12236 - Windows profiling: T11627b segfaults for WAY=prof_hc_hb - #12249 - Template Haskell top level scoping error - #12262 - Binary output is not deterministic - #12274 - GHC panic: simplifier ticks exhausted - #12347 - Parallel make should eagerly report when compilation of a module starts - #12372 - bug: documentation for Control.Monad.guard not useful after AMP - #12373 - Type error but types match - #12377 - getExecutablePath doesn't return absolute path on OpenBSD (and maybe other OS also) - #12379 - WARN pragma gives warning `warning: [-Wdeprecations]' - #12383 - ghc: internal error: TSO object entered - #12387 - Template Haskell ignores class instance definitions with methods that don't belong to the class - #12388 - Don't barf on failures in the RTS linker - #12390 - List rules for `Coercible` instances - #12391 - LANGUAGE CPP messes up parsing when backslash like \\ is at end of line (eol) - #12394 - broken (obsolete?) links to user guide - #12395 - Misleading GHCi errors when package is installed - #12396 - Panic when specializing in another module - #12410 - Somehow detect splicing in ghci - #12412 - SIMD things introduce a metric ton of known key things - #12416 - Some GCC versions warn about failed inlines - #12421 - TestEquality and TestCoercion documentation is confusing - #12425 - With -O1 and above causes ghc to use all available memory before being killed by OOM killer - #12429 - Pattern synonym parse error should recommend enabling extension - #12430 - TypeFamilyDependencies accepts invalid injectivity annotation - #12434 - Test suite should not copy in un-versioned files - #12436 - Too many nested forkProcess's eventually cause SIGSEGV in the child - #12437 - 20% regression in max_bytes_used for T1969 - #12440 - Strictness of span and break does not match documentation - #12441 - Conflicting definitions error does not print explicit quantifiers when necessary - #12446 - Doesn't suggest TypeApplications when `~` used prefix - #12447 - Pretty-printing of equality `~` without parentheses - #12449 - Broken types in identifiers bound by :print - #12451 - TemplateHaskell and Data.Typeable - tcIfaceGlobal (local): not found - #12452 - TemplateHaskell - variables in top level splices and loading modules. - #12454 - Cross-module specialisation of recursive functions - #12459 - UnboxedTuple makes overloaded labels fail to parse - #12462 - Cannot add directories with colon to include path - #12467 - distclean does not clean 'compact' library - #12471 - Weirdness when using fromIntegral in quosiquoter - #12475 - GHCi no longer handles stdin being closed gracefully - #12482 - Infinite compilation time when using wrongly ordered constraints - #12488 - Explicit namespaces doesn't enforce namespaces - #12494 - Implementation of setenv in base incorrectly claims empty environment variable not supported on Windows - #12502 - Reject wrong find utility - #12506 - Compile time regression in GHC 8. - #12509 - ghci -XSafe fails in an inscrutable way - #12514 - Can't write unboxed sum type constructors in prefix form - #12516 - Preprocessing: no way to portably use stringize and string concatenation - #12517 - Simplify runghc command line options - #12525 - Internal identifiers creeping into :show bindings - #12527 - GHC segfault while linking llvm-general while compiling a file using Template Haskell - #12535 - Bad error message when unidirectional pattern synonym used in bidirectional pattern synonym - #12537 - Parallel cabal builds Segmentation Fault on PowerPC 64-bit - #12540 - RFC: Allow not quantifying every top-level quantifiee - #12542 - Unexpected failure.. (bug?) - #12545 - Compilation time/space regression in GHC 8.0/8.1 (search in type-level lists and -O) - #12553 - Reference kind in a type instance declaration defined in another instance declaration - #12560 - ‘:info TYPE’ mentions any instance that includes ‘Type’ - #12561 - Scope extrusion in Template Haskell - #12563 - Bad error message around lack of impredicativity - #12564 - Type family in type pattern kind - #12565 - unhelpful error message about enabling TypeApplications - #12566 - Memory leak - #12567 - `ghc --make` recompiles unchanged files when using `-fplugin` OPTIONS - #12569 - TypeApplications allows instantiation of implicitly-quantified kind variables - #12570 - Different behaviour in Linux and Mac OS when using some locale environments - #12576 - Large Address space is not supported on Windows - #12577 - The flag -xb has no effect on Windows - #12581 - Testsuite segfaults on OS X - #12592 - Explicit type signature for type classes fails - #12596 - can't find interface-file declaration - #12598 - configure script: --enable-unregisterised default printed incorrectly - #12600 - Overloaded method causes insufficient specialization - #12601 - explicit foralls do not distinguish applicable types - #12607 - Memory effects in doomed STM transactions - #12610 - Emit tab warning promptly - #12612 - Allow kinds of associated types to depend on earlier associated types - #12625 - Bad error message for flags with required but missing arguments - #12629 - Worse performance with -O1 or -O2 due to GC cost - #12631 - `hpc report` silently ignore non-existent modules - #12632 - Inline and Noinline pragmas ignored for instance functions - #12636 - ProfHeap's printf modifiers are incorrect - #12640 - Class member functions not substituted for MultiParamTypeClasses - #12642 - reports incorrect target arch on mips64el - #12643 - class declaration works in ghci, but not in a file - #12645 - 7.10.3 porting feedback - #12648 - Stack overflow when using monad-unlift - #12652 - Type checker no longer accepting code using function composition and rank-n types - #12655 - Bizarre parser problem: "Illegal bang-pattern" (something to do with CPP?) - #12656 - ghc floats out constant despite -fno-cse - #12657 - GHC and GHCi: RWX mmap denied by GrSec, results in a segfault - #12658 - GHC 7.10.3 sometimes reports unimplemented/strange closure type 63270 - #12659 - Unactionable core lint warning due to floating out - #12671 - enumFrom error thwarts checkOldIface's exception handling - #12674 - GHC doesn't handle ./ prefixed paths correctly - #12675 - Simplifier ticks exhausted - #12678 - -threaded is listed as a dynamic flag but is silently ignored in OPTIONS_GHC - #12684 - GHC panic due to bindnow linker flag - #12685 - getNumProcessors semantics dont match documentation - #12689 - DataCon wrappers get in the way of rules - #12690 - Segmentation fault in GHC runtime system under low memory with USE_LARGE_ADDRESS_SPACE - #12694 - GHC HEAD no longer reports inaccessible code - #12695 - Build failure due to MAP_NORESERVE being removed in FreeBSD 11.x and later - #12696 - Exception gives not enough information to be useful - #12699 - Suspicious treatment of renaming of field labels - #12700 - Don't warn about redundant constraints for type equalities - #12704 - Check if constraint synonym satisfies functional dependencies - #12705 - Renamer should reject signatures that reexport only part of a declaration - #12706 - Collecting type info is slow - #12709 - GHC panic - #12712 - break011 is broken on Windows - #12714 - T9405 fails on Windows - #12715 - T3994 is intermittently broken on Windows - #12718 - Segmentation fault, runtime representation polymorphism - #12723 - Family instance modules are not fingerprinted in ABI - #12724 - Be lazier about reducing type-function applications - #12731 - Generic type class has type family; leads to big dep_finsts - #12734 - Missed use of solved dictionaries leads to context stack overflow - #12736 - Calling a complex Haskell function (obtained via FFI wrapper function) from MSVC 64-bit C code (passed in as FunPtr) can leave SSE2 registers in the XMM6-XMM15 range modified - #12737 - T12227 is failing on ghc-8.0 - #12739 - Failure installing elm-init-1.0.5 (ExitFailure (-6)) - #12740 - generic Linux installer contains dynamically linked helpers failing to run on non glibc systems - #12742 - Instantiation of invisible type family arguments is too eager - #12743 - Profiling wrongly attributes allocations to a function with Int# result - #12750 - hGetContents leads to late/silent failures - #12751 - T5611 fails non-deterministically on OSX - #12753 - GHCi/Template Haskell: Don't panic on linker errors. - #12756 - ghci gives stg_ap_v_ret error - #12760 - Assertion failed with BuildFlavour = devel2 (yet another) - #12761 - Type aliases for TypeError constrains fire during compilation time - #12762 - ASSERT failures on HEAD following typechecker refactoring - #12769 - ghc: internal error: stg_ap_pp_ret - #12770 - Shrink list of RUNPATH entries for GHC libraries - #12776 - Panic Simplifier ticks exhausted since ghc 8 - #12778 - Expose variables bound in quotations to reify - #12779 - isTrue# doesn't work in ghci anymore - #12780 - Calling "do nothing" type checker plugin affects type checking when it shouldn't - #12781 - Significantly higher allocation with INLINE vs NOINLINE - #12790 - GHC 8.0.1 uses copious amounts of RAM and time when trying to compile lambdabot-haskell-plugins - #12792 - Wrong error message when using a data type as a class instance head - #12793 - Performs unaligned access on SPARC64 - #12794 - Out of scope error not reported - #12798 - LLVM seeming to over optimize, producing inefficient assembly code... - #12808 - For closures, Loop Invariant Code Flow related to captured free values not lifted outside the loop... - #12811 - GHC tells me to use RankNTypes when it's already enabled - #12813 - GHC panic when installing haskell-opencv with nix - #12817 - Degraded performance with constraint synonyms - #12818 - Allow reify to find top-level bindings in later declaration groups - #12820 - Regression around pattern synonyms and higher-rank types - #12825 - ghc panic on ppc64le, ghc 8.0.1, agda 2.5.1.1 patched for newer EdisonAPI - #12829 - Multiline input (‘:set +m’) terminated by trailing whitespace - #12831 - Fulltext search SQL error in Trac - #12832 - GHC infers too simplified contexts - #12841 - Remove or explain target triple normalization - #12846 - On Windows, runtime linker can't find function defined in GHC's RTS - #12847 - ghci -fobject-code -O2 doesn't do the same optimisations as ghc --make -O2 - #12849 - hsc2hs trouble with floating-point constants in cross-compilation mode - #12850 - Panic with unboxed integers and ghci - #12852 - threadWaitReadSTM does not provide a way to unregister action. - #12854 - ghc-8 prints mangled names in error message: ‘GHC.Base.$dm<$’ - #12859 - ghc/packages/Cabal is not a valid repository name - #12860 - GeneralizedNewtypeDeriving + MultiParamTypeClasses sends typechecker into an infinite loop - #12861 - "ghc-pkg-6.9" - #12862 - Operator (!) causes weird pretty printing and parsing - #12863 - Associated data families don't use superclasses when deriving instances - #12869 - getChar doesn't work on a new Windows build - #12873 - hWaitForInput with socket as handle excepts on windows - #12874 - Read/Show Incompatibility in base - #12875 - GHC fails to link all StaticPointers-defining modules of a library in an executable - #12884 - Parsing of literate files fails because of special character (#) - #12885 - "too many iterations" causes constraint solving issue. - #12890 - Stdcall - treating as CCall (bogus warning on win 64 bit) - #12893 - Profiling defeats stream fusion when using vector library - #12897 - yesod-auth-1.4.13.2 failed during the building phase. - #12906 - GHC fails to typecheck Main module without main - #12908 - Tuple constraints refactoring seems to regress performance considerably - #12915 - cmmImplementSwitchPlans creates duplicate blocks - #12917 - Location info for error message with multiple source locations - #12919 - Equality not used for substitution - #12920 - Overzealous unused-top-binds - #12921 - initTc: unsolved constraints - #12922 - Kind classes compile with PolyKinds - #12926 - ghc master (8.1.20161202) panics with assertion failure with devel2 flavor and -O2 - #12929 - GHC 8.0.1 Armv7 Missing -N option - #12931 - tc_infer_args does not set in-scope set correctly - #12932 - -fexternal-interpreter` fails to find symbols - #12934 - Testsuite driver buffering behavior has changed with Python 3 - #12935 - Object code produced by GHC is non-deterministic - #12937 - String merging broken on Darwin - #12938 - Polykinded associated type family rejected on false pretenses - #12939 - ghc-8.0.1.20161117 did not install ghc.1 manpage - #12940 - ghc-8.0.2 RC1 libs installed under package dirs vs Cabal installing packages under abi dir - #12944 - ghc master (8.1.20161206) panics with assertion failure with devel2 flavor and -O - #12947 - Core lint error - #12949 - Pattern coverage mistake - #12952 - Broken tests: ghci014 maessen_hashtab qq006 - #12962 - No automatic SCC annotations for functions marked INLINABLE - #12964 - Runtime regression to RTS change - #12965 - String merging broken on Windows - #12967 - GHC 8.0.1 Panic for current Aeson 1.0.2.1 - #12971 - Paths are encoded incorrectly when invoking GCC - #12972 - Missed specialisation opportunity with phantom type class parameter? - #12974 - Solution to regular expression is no longer valid - #12975 - Suggested type signature for a pattern synonym causes program to fail to type check - #12978 - User guide section on AllowAmbiguousTypes should mention TypeApplications - #12980 - Unlifted class method rejected - #12983 - #12985 - GHCi incorrectly reports the kind of unboxed tuples spliced in from Template Haskell - #12989 - ($) can have a more general type - #12991 - panic when using IrredPreds in a type checker plugin. - #13000 - minor doc bug about list fusion in GHC user's guide - #13001 - EnumFromThenTo is is not a good producer - #13002 - :set -O does not work in .ghci file - #13003 - improve documentation for typed TH and make it more discoverable - #13010 - module imports form a cycle instead of failing to parse - #13011 - Simplifier ticks exhausted: a 10-line case - #13012 - ApiAnnotations comments are not machine checked - #13014 - Seemingly unnecessary marking of a SpecConstr specialization as a loopbreaker - #13016 - SPECIALIZE INLINE doesn't necessarily inline specializations of a recursive function - #13017 - GHC panics during build of etkmett/free - #13021 - Inaccessible RHS warning is confusing for users - #13024 - T12877 intermittently fails - #13025 - Type family reduction irregularity (change from 7.10.3 to 8.0.1) - #13027 - The let/app invariant, evaluated-ness, and reallyUnsafePtrEquality# - #13030 - Build error from following default nixos instructions and -Werror - #13031 - Bogus calculation of bottoming arity - #13032 - Redundant forcing of Given dictionaries - #13035 - GHC enters a loop when partial type signatures and advanced type level code mix - #13043 - GHC 7.10->8.0 regression: GHC code-generates duplicate _closures - #13045 - LLVM code generation causes segfaults on FreeBSD - #13046 - TH Names can't be used in patterns - #13047 - Can create bindings of kind Constraint without ConstraintKind, only TypeFamilies - #13048 - Splitter is O(n^2) - #13052 - unsafePerformIO duped on multithread if within the same IO thunk - #13053 - Inferred type for hole is not general enough - #13054 - Generating unique names with template haskell - #13056 - Deriving Foldable causes GHC to take a long time (GHC 8.0 ONLY) - #13061 - Incorrect constraints given single flexible undecidable instance. - #13062 - `opt' failed in phase `LLVM Optimiser'. (Exit code: -11) - #13063 - Program uses 8GB of memory - #13064 - Incorrect redudant imports warning - #13068 - GHC should not allow modules to define instances of abstract type classes - #13069 - hs-boot files permit default methods in type class (but don't typecheck them) - #13078 - Panic from ghc-stage1 when building HEAD with profiling - #13080 - Memory leak caused by nested monadic loops - #13083 - Constraint solving failure with Coercible + type family + newtype - #13084 - 'foreign import prim' are marked PlaySafe by the parser - #13085 - -fregs-graph is ignored but this is undocumented - #13086 - (\\) in Data.List uses foldl instead of foldl' - #13087 - AlternativeLayoutRule breaks LambdaCase - #13088 - Type operator holes don't work infix - #13090 - Expose all unfoldings of overloaded functions by default - #13091 - Build broken on amd64 solaris 11 - #13092 - family instance consistency checks are too pessimistic - #13093 - Runtime linker chokes on object files created by MSVC++ - #13094 - Poor register allocation and redundant moves when using `foreign import prim` - #13099 - recompilation can fail to recheck type family instance consistency - #13102 - orphan family instances can leak through the EPS in --make mode - #13104 - runRW# ruins join points - #13105 - Allow type families in RuntimeReps - #13109 - CUSK improvements - #13110 - GHC API allocates memory which is never GC'd - #13112 - Windows 64-bit GHC HEAD segfaults on the code with a lot of TH stuff. - #13113 - Runtime linker errors with CSFML on Windows - #13115 - missing data instances for IntPtr and WordPtr in base - #13118 - let binding tuple of lenses error not an expression - #13119 - yesod-auth-1.4.15: ghc: panic! (the 'impossible' happened) Linker error - #13130 - atomicModifyMutVar# has incorrect type - #13135 - Typechecker "panic! the 'impossible' happened" - #13137 - Dynamic linker not initialised. - #13139 - Documents not updating correctly? - #13141 - Don't check for Perl in ./configure when not splitting objects - #13142 - Substitution invariant failure arising from OptCoercion - #13145 - Documentation shouldn't call things functions that aren't functions - #13146 - Doc for RealWorld refers to non-existent "ptrArg" - #13147 - Formatting is broken in Exceptions section of GHC.Prim haddock - #13148 - Adding weak pointers to non-mutable unboxed values segfaults - #13150 - unknown warning is not reported by GHC - #13153 - Several Traversable instances have an extra fmap - #13154 - Standalone-derived anyclass instances aren't as permissive as empty instances - #13157 - Opportunity to improve case-of-case - #13163 - Make type import/export API Annotation friendly - #13165 - Speed up the RTS hash table - #13167 - GC and weak reference finalizers and exceptions - #13168 - Ambiguous interface errors in GHCi, even with -XPackageImports - #13169 - Documentation for CoreMonad.getAnnotations - #13171 - panic on negative literal in case on Word - #13174 - Fix mismatch between unsafeDupablePerformIO and note - #13180 - Confusing error when hs-boot abstract data implemented using synonym - #13184 - :show bindings broken under -fexternal-interpreter - #13185 - haskell-relational-query: ghc: panic! (the 'impossible' happened) - #13192 - Ambiguity Caused By PolyKind and Not Helpful Error Messages - #13193 - Integer (gmp) performance regression? - #13194 - Concurrent modifications of package.cache are not safe - #13195 - Ticks panic - #13196 - Document AMP as a deviation from Haskell 2010 - #13200 - Old links to snapshot releases in GHC user's guide - #13201 - Type-level naturals aren't instantiated with GHCi debugger - #13202 - Levity polymorphism panic in GHCi - #13203 - Implement Bits Natural clearBit - #13205 - Run `validate --slow` during CI at least sometimes. - #13207 - Performance regressions from removing LNE analysis - #13208 - Do two-phase inlining in simpleOptPgm - #13209 - ghc panic with optimization. - #13210 - Can't run terminfo code in GHCi on Windows - #13211 - NegativeLiterals -0.0 :: Double - #13216 - internal error: stg_ap_pppppp_ret - #13219 - CSE for join points - #13220 - Performance regressions in testsuite from join points - #13221 - OccurAnal fails to rediscover join points - #13223 - Core Prep sometimes generates case of type lambda - #13224 - Rules and join points - #13225 - Fannkuch-redux time regression from join point patch - #13226 - Compiler allocation regressions from top-level string literal patch - #13233 - typePrimRep panic while compiling GHC with profiling - #13235 - (makeVersion [4, 9, 0, 0] <= makeVersion [4, 9]) == False - #13236 - Improve floating for join points - #13239 - Phabricator upgrade broke the ticket custom field - #13242 - Panic "StgCmmEnv: variable not found" with ApplicativeDo and ExistentialQuantification - #13243 - make test in non-validate configuration fails with a variety of ghci errors - #13244 - Error Dealing with Unboxed Types and Type Families - #13245 - Default FD buffer size is not a power of 2 - #13246 - hPutBuf issues unnecessary empty write syscalls for large writes - #13247 - hPutBufNonBlocking can block - #13250 - Backpack: matching newtype selectors doesn't work - #13251 - Must perform family consistency check on non-imported identifiers - #13253 - Exponential compilation time with RWST & ReaderT stack with `-02` - #13254 - Confusing error message from GHCI - "defined in multiple files" shows the same file - #13255 - aws package fails to build with master - #13260 - panic on unboxed string literal in pattern - #13264 - GHC panic with (->) generalization branch while compiling lens - #13265 - perf-llvm build fails with "Too many sections: 123418 (>= 65280)" - #13266 - Source locations from signature merging/matching are bad - #13267 - Constraint synonym instances - #13268 - Backpack doesn't work with Template Haskell, even when it should - #13269 - Changes in foreign code used in TH do not trigger recompilation - #13271 - GHC Panic With Injective Type Families - #13272 - DeriveAnyClass regression involving a rigid type variable - #13273 - AttributeError: 'Environment' object has no attribute 'get_db_cnx' - #13275 - ghci ignores -fprint-explicit-runtime-reps - #13279 - Check known-key lists - #13281 - Linting join points - #13283 - T5435_dyn_asm fails with gold linker - #13284 - Incoherent instance solving is over-eager - #13285 - Bug in GHC.Stack.callStack when used with sections - #13286 - Late floating of join points - #13287 - The runtime parses arguments past -- under windows but passes them on as arguments on linux - #13288 - Resident set size exceeds +RTS -M limit with large nurseries - #13289 - Terrible loss of optimisation in presence of ticks - #13290 - Data constructors should not have RULES - #13291 - bpk15 and bkp47 fail with -dunique-increment=-1 - #13292 - Type checker rejects ambiguous top level declaration. - #13294 - compactResize is a terrible primop and a terrible name - #13295 - Failure to resolve type parameter determined by type family - #13296 - stat() calls can block Haskell runtime - #13297 - Panic when deriving Applicative instance through transformer - #13300 - panic! isInjectiveTyCon sees a TcTyCon W - #13301 - GHC base directory assumptions bugs, - #307 - Implicit Parameters and monomorphism - #393 - functions without implementations - #436 - Declare large amounts of constant data - #472 - Supertyping of classes - #728 - switch to compacting collection when swapping occurs - #750 - Set -M, -H, -c and other memory-related values based on available virtual/physical memory - #849 - Offer control over branch prediction - #860 - CPP fails when a macro is used on a line containing a single quote character - #881 - Improve space profiling for references - #989 - Build GHC on Windows using Microsoft toolchain - #1192 - GHC-only IOErrorType constructors, and is*Error(Type) functions - #1231 - deprecation warnings are reported too often - #1262 - RecursiveDo in Template Haskell - #1273 - would like to print partial application values when at a breakpoint - #1311 - newtypes of unboxed types disallowed - documentation bug and/or feature request - #1318 - add warning for prefix negate operator and flag to replace it with negative numeric literals - #1365 - -fbyte-code is ignored in a OPTIONS_GHC pragma - #1379 - Allow breakpoints and single-stepping for functions defined interactively - #1399 - better support for developing threaded applications in ghci - #1409 - Allow recursively dependent modules transparently (without .hs-boot or anything) - #1420 - Automatic heap profile intervals - #1444 - Template Haskell: add proper support for qualified names in non-splicing applications - #1451 - Provide way to show the origin of a constraint - #1475 - Adding imports and exports with Template Haskell - #1532 - Implicit parameters are not available in breakpoints - #1534 - [Debugger] Watch on accesses of "variables" - #1628 - warning(s) for using stolen syntax that's not currently enabled - #1768 - More flexible type signatures for data constructors - #1800 - Template Haskell support for running functions defined in the same module - #1826 - unable to list source for <exception thrown> should never occur - #1885 - Improve CPR analysis - #1894 - Add a total order on type constructors - #1921 - change default to support extensions that involve a change of syntax - #1965 - Allow unconstrained existential contexts in newtypes - #1974 - length "foo" doesn't work with -XOverloadedStrings - #2041 - Allow splicing in concrete syntax - #2075 - hpc should render information about the run in its html markup - #2101 - Allow some form of type-level lemma - #2119 - explicitly importing deprecated symbols should elicit the deprecation warning - #2135 - Warn if functions are exported whose types cannot be written - #2168 - ghci should show haddock comments for identifier - #2180 - Any installed signal handler stops deadlock detection, but XCPU never happens in a deadlock - #2200 - big static random access arrays - #2207 - Load the interface details for GHC.* even without -O - #2215 - :disable command to disable breakpoints - #2258 - ghc --cleanup - #2269 - Word type to Double or Float conversions are slower than Int conversions - #2340 - Improve Template Haskell error recovery - #2344 - oddity with package prefixes for data constructors - #2345 - :browse limitations (browsing virtual namespaces, listing namespaces) - #2365 - Warn about usage of `OPTIONS_GHC -XLanguageExtension` - #2403 - caching for runghc (runhaskell) - #2427 - Allow compilation of source from stdin - #2460 - provide -mwindows option like gcc - #2465 - Fusion of recursive functions - #2522 - Warning for missing export lists - #2550 - Add an option to read file names from a file instead of the command line - #2551 - Allow multiple modules per source file - #2595 - Implement record update for existential and GADT data types - #2598 - Avoid excessive specialisation in SpecConstr - #2600 - Bind type variables in RULES - #2614 - Enumeration of values for `Sys.Info.os`, `Sys.Info.arch` - #2630 - installed packages should have a src-dirs field, for access to optionally installed sources - #2640 - Treat -X flags consistently in GHCi - #2641 - Revise the rules for -XExtendedDefaultRules - #2648 - Report out of date interface files robustly - #2708 - Error message should suggest UnboxedTuples language extension - #2737 - add :tracelocal to ghci debugger to trace only the expressions in a given function - #2742 - The -> in ViewPatterns binds more weakly than infix data constructors. - #2803 - bring full top level of a module in scope when a breakpoint is hit in the module - #2867 - Make a way to tell GHC that a pragma name should be "recognised" - #2893 - Implement "Quantified contexts" proposal - #2895 - Implement the "Class System Extension" proposal - #2896 - Warning suggestion: argument not necessarily a binary operator - #2945 - add command :mergetrace - #2946 - tracing should be controled by a global flag - #2950 - show breakpoint numbers of breakpoints which were ignored during :force - #2986 - :info printing instances often isn't wanted - #3000 - :break command should recognize also nonexported top level symbols in qualified IDs - #3021 - A way to programmatically insert marks into heap profiling output - #3052 - ghc FFI doesn't support thiscall - #3085 - warn about language extensions that are not used - #3122 - Enhance --info - #3192 - Add dynCompileCoreExpr :: GhcMonad m => Bool -> Expr CoreBind -> m Dynamic to ghc-api - #3205 - Generalized isomorphic deriving - #3215 - Valgrind support - #3282 - How to start an emacs editor within ghci asynchronously with :edit filename.hs :set editor emacs & don't go - #3283 - Selective disabling of unused bind warnings - #3372 - Allow for multiple linker instances - #3427 - control what sort of entity a deprecated pragma applies to - #3452 - Show type of most recent expression in GHCi - #3464 - Find import declaration importing a certain function - #3483 - Some mechanism for eliminating "absurd" patterns - #3490 - Relax superclass restrictions - #3517 - GHC has lots of extra hidden IOErrorType values - #3541 - Allow local foreign imports - #3545 - As-patterns for type signatures - #3547 - Improve granularity of UndecidableInstances - #3557 - CPU Vector instructions in GHC.Prim - #3577 - Complete support for Data Parallel Haskell - #3583 - Default view patterns - #3601 - When running two or more instances of GHCi, persistent history is only kept for the first one - #3615 - GHCi doesn't allow the use of imported data contructors - #3619 - allow to set ghc search path globally (a'la CPATH) - #3632 - lift restrictions on records with existential fields, especially in the presence of class constraints - #3645 - Layout and pragmas - #3701 - allow existential wrapper newtypes - #3744 - Comparisons against minBound/maxBound not optimised for (Int|Word)(8|16|32) - #3753 - Make ghci's -l option consistent with GNU ld's -l option - #3769 - Add manpages for programs installed alongside ghc - #3786 - showing function arguments when stopped at its definition - #3869 - RTS GC Statistics from -S should be logged via the eventlog system - #3870 - Avoid Haddock-links to the Prelude - #3895 - "Fix" pervasive-but-unnecessary signedness in GHC.Prim - #3919 - Or-patterns as GHC extension - #3980 - System.Posix.Signals should provide a way to set the SA_NOCLDWAIT flag - #3984 - Handle multiline input in GHCi history - #4016 - Strange display behaviour in GHCi - #4020 - Please consider adding support for local type synonyms - #4052 - Two sided sections - #4091 - Parse error on curried context of instance declaration - #4092 - Floating point manipulation : ulp and coerce IEEE-754 Double# into Word64# - #4096 - New primops for indexing: index*OffAddrUsing# etc - #4102 - Bit manipulation built-ins - #4180 - do not consider associativity for unary minus for fixity resolution - #4222 - Template Haskell lets you reify supposedly-abstract data types - #4259 - Relax restrictions on type family instance overlap - #4316 - Interactive "do" notation in GHCi - #4329 - GHC.Conc modifyTVar primitive - #4385 - Type-level natural numbers - #4442 - Add unaligned version of indexWordArray# - #4453 - Allow specifying .hi files of imports on command line in batch mode - #4459 - Polymorphic Data.Dynamic - #4470 - Loop optimization: identical counters - #4479 - Implement TDNR - #4806 - Make error message more user friendly when module is not found because package is unusable - #4815 - Instance constraints should be used when deriving on associated data types - #4823 - Loop strength reduction for array indexing - #4879 - Deprecate exports - #4894 - Missing improvement for fun. deps. - #4900 - Consider usage files in the GHCi recompilation check - #4913 - Make event tracing conditional on an RTS flag only - #4937 - Remove indirections caused by sum types, such as Maybe - #4955 - increase error message detail for module lookups failure due to hi references - #4959 - Warning about variables with leading underscore that are used anyway - #4980 - Warning about module abbreviation clashes - #5016 - Make Template Haskell: -ddump-splices generate executable code - #5059 - Pragma to SPECIALISE on value arguments - #5073 - Add blockST for nested ST scopes - #5075 - CPR optimisation for sum types if only one constructor is used - #5108 - Allow unicode sub/superscript symbols in operators - #5171 - Misfeature of Cmm optimiser: no way to extract a branch of expression into a separate statement - #5197 - Support static linker semantics for archives and weak symbols - #5218 - Add unpackCStringLen# to create Strings from string literals - #5219 - need a version of hs_init that returns an error code for command-line errors - #5266 - Licensing requirements and copyright notices - #5288 - Less noisy version of -fwarn-name-shadowing - #5324 - Locally-scoped RULES - #5344 - CSE should look through coercions - #5392 - Warnings about impossible MPTCs would be nice - #5416 - Local modules and Template Haskell declaration splices - #5467 - Template Haskell: support for Haddock comments - #5556 - Support pin-changing on ByteArray#s - #5590 - "guarded instances": instance selection can add extra parameters to the class - #5619 - Shorter qualified import statements - #5672 - parBufferWHNF could be less subtle - #5813 - Offer a compiler warning for failable pattern matches - #5823 - FFI and CAPI needs {-# INCLUDE #-} back? - #5834 - Allow both INLINE and INLINABLE for the same function - #5910 - Holes with other constraints - #5918 - hsc2hs forces wordsize (i.e. -m32 or -m64) to be the choice of GHC instead of allowing a different (or no/default choice) - #5927 - A type-level "implies" constraint on Constraints - #5941 - Add compilation stage plugins - #5972 - option to suppress (Monomorphic) record selector functions - #5985 - Type operators are not accepted as variables in contexts - #6024 - Allow defining kinds alone, without a datatype - #6030 - Typeclass constraint should pick the OverloadedString type. - #6077 - Respect XDG_CONFIG_HOME - #6089 - Allow declaration splices inside declaration brackets - #7048 - Add the ability to statically define a `FunPtr` to a haskell function - #7081 - arrow analogs of lambda case and multi-way if - #7104 - Add tryWriteTBQueue to Control.Concurrent.STM.TBQueue - #7140 - Allow type signature in export list - #7152 - Add flag to configure that skips overwriting of symlinks on install - #7158 - GHCi commands case insensitive - #7169 - Warning for incomplete record field label used as function - #7204 - Use a class to control FFI marshalling - #7261 - ghci's :info and :browse break encapsulation - #7275 - Give more detailed information about PINNED data in a heap profile - #7283 - Specialise INLINE functions - #7285 - mkWeakMVar is non-compositional - #7291 - hp2ps should cope with incomplete data - #7300 - Allow CAFs kept reachable by FFI to be forcibly made unreachable for GC - #7330 - Data Parallel Haskell (DPH) isn't usable yet. - #7331 - Allow the evaluation of declaration splices in GHCi - #7335 - Need for extra warning pragma for accidental pattern matching in do blocks - #7337 - GHC does not generate great code for bit-level rotation - #7395 - DefaultSignatures conflict with default implementations - #7401 - Can't derive instance for Eq when datatype has no constructor, while it is trivial do do so. - #7413 - runghc (runhaskell) should be able to reload code on editing - #7414 - plugins always trigger recompilation - #7492 - Generic1 deriving: Can we replace Rec1 f with f :.: Par1? - #7494 - Allow compatible type synonyms to be the return type of a GADT data constructor. - #7495 - generalizing overloaded list syntax to Sized Lists, HLists, HRecords, etc - #7606 - Stride scheduling for Haskell threads with priorities - #7635 - SafeHaskell implying other options - #7647 - UNPACK polymorphic fields - #7662 - Improve GC of mutable objects - #7741 - Add SIMD support to x86/x86_64 NCG - #7763 - Resource limits for Haskell - #7808 - data families and TH names do not mix well (e.g. cannot use TH deriving) - #7845 - RebindableSyntax should allow rebinding tuples and lists - #7860 - Add more bit fiddling functions to 'integer-gmp' - #7870 - Compilation errors break the complexity encapsulation on DSLs, impairs success in industry - #7952 - Can cost-centre annotations be included in -ddump-simpl? - #7977 - Optimization: Shift dropped list heads by coeffecient to prevent thunk generation - #8012 - Warn when using Enum instance for Float or Double - #8015 - Show warning when file-header pragma is used after module keyword - #8043 - Feature Request : Qualified module exports - #8045 - Move I/O manager benchmarks into the GHC tree - #8046 - Make the timer management scale better across multicore - #8061 - Support for Complex Double in Foreign Function Interface - #8086 - Minimal install for GHC - #8099 - Alternate syntax for indicating when a function is "fully applied" for purposes of inlining - #8107 - need types to express constant argument for primop correctness - #8109 - Type family patterns should support as-patterns. - #8150 - Foreign export requires redundant type signature - #8161 - Associated type parameters that are more specific than the instance header - #8171 - Extending ExtendedDefaultRules - #8199 - Get rid of HEAP_ALLOCED on Windows (and other non-Linux platforms) - #8206 - Add support for Portable Native Client - #8220 - Macros / functions for source location - #8252 - prefetch# isn't as general as it should be (currently the general version isn't type safe) - #8263 - allow duplicate deriving / standalone deriving - #8288 - add idris style EDSL support for deep embedding lambdas - #8299 - Add richer data model address arithmetic: AddrDiff and AddrInt (ie d Int_ptr_diff and Int_ptr_size) - #8304 - more lenient operator sections (RelaxedSections extension) - #8311 - suboptimal code generated for even :: Int -> Bool by NCG (x86, x86_64) - #8325 - Pattern guards in anonymous functions - #8354 - Add INLINE (or at least INLINABLE) pragmas for methods of Ord in ghc-prim - #8364 - equip GHC with an accurate internal model of floating point - #8372 - enable -dcmm-lint by default for .cmm input files - #8398 - reify module list in TH - #8404 - Default to turning on architecture specific optimizations in the codegen - #8423 - Less conservative compatibility check for closed type families - #8429 - GHC.Base.{breakpoint, breakpointCond} do nothing - #8441 - Allow family instances in an hs-boot file - #8460 - Annotation reification with types in TH - #8477 - Allow inferring ambiguous types - #8501 - Improve error message when using rec/mdo keyword without RecursiveDo extention. - #8504 - Provide minor GC residency estimates - #8516 - Add (->) representation and the Invariant class to GHC.Generics - #8560 - Derive some generic representation for GADTs - #8581 - Pattern synonym used in an expression context could have different constraints to pattern used in a pattern context - #8583 - Associated pattern synonyms - #8605 - Alternative, guard-like syntax for view patterns - #8610 - Rebuild on a definition-based granularity - #8619 - Support anonymous string literals in C-- (OR) give better ASSERT failure messages in C-- - #8634 - Relax functional dependency coherence check ("liberal coverage condition") - #8642 - Allow GHCi to print non-pretty forms of things. - #8643 - Silent name shadowing - #8673 - GHC could generate GADT record selectors in more cases - #8679 - Extend FunD data constructor with information about type signature - #8685 - -split-objs doesn't work for executables - #8695 - Arithmetic overflow from (minBound :: Int) `quot` (-1) - #8697 - Type rationals - #8707 - Kind inference fails in data instance definition - #8751 - Show parenthesised output of expressions in ghci - #8772 - ghci should save history more often - #8809 - Prettier error messages? - #8812 - Make *_stub.c files available again - #8816 - Make SPARC registerised again. - #8822 - Allow -- ^ Haddock syntax on record constructors - #8828 - Type pattern synonyms - #8844 - Pseudo terminal and process-1.2.0.0 - #8875 - Track Exceptions - #8881 - No way to unsubscribe a bug - #8903 - Add dead store elimination - #8914 - Remove unnecessary constraints from MonadComprehensions and ParallelListComp - #8924 - Text.ParserCombinators.ReadP needs some kind of "commit" or "cut" to force a single parse.. - #8927 - Multiple case match at once - #8930 - GHC API: List of available Pragmas - #8944 - Warn instead of stopping on misplaced Haddock comments - #8955 - Syscall intrinsic - #8967 - Add syntax for creating finite maps and sets - #8989 - nofib should record and report more fine-grained binary size information - #8996 - mark more things as const in C codegen - #8997 - Warn about unused parameters in recursive definitions - #9030 - An async exception handler that blocks throwTo until handler finishes running - #9052 - Support a "stable heap" which doesn't get garbage collected - #9091 - print and/or apply constraints when showing info for typed holes - #9112 - support for deriving Vector/MVector instances - #9118 - Can't eta-reduce representational coercions - #9120 - Cache intermediate powers - #9137 - A way to match RULES only for literals - #9143 - feature request: way to set actual program argv - #9180 - New magic function `staticError` - #9182 - Empty case analysis for function clauses - #9183 - GHC shouldn't expand constraint synonyms - #9192 - Add sameByteArray# - #9197 - FFI types should be usable in foreign import decls without revealing representations - #9214 - UNPACK support for sum types - #9244 - Compiler could warn about type variable shadowing, and hint about ScopedTypeVariables - #9269 - Type families returning quantified types - #9289 - add anyToAddr# :: (#a#)-> Addr# primop (inverse of addrToAny#) - #9319 - nofib-analyze doesn’t provide per-benchmark compile time/alloc numbers - #9321 - Support for waiting on multiple MVars - #9328 - MINIMAL pragma should supprt negation - #9334 - Implement "instance chains" - #9342 - Branchless arithmetic operations - #9350 - Consider using xchg instead of mfence for CS stores - #9351 - add ability to version symbols .c for packages with C code - #9352 - Allow `State# s` argument/result types in `ccall` FFI imports - #9365 - Make command key in GHCi configurable - #9376 - More informative error messages when closed type families fail to simplify - #9392 - "\n" is displayed weirdly in error messages - #9394 - Show data/type family instances with ghci's :info command - #9419 - Machine-readable output for profiling - #9427 - Do finer-grained dependency analysis to infer more general kinds on type/class declarations - #9429 - Alternative to type family Any - #9431 - integer-gmp small Integer multiplication does two multiplications on x86 - #9441 - CSE should deal with letrec - #9447 - Add support for resizing `MutableByteArray#`s - #9476 - Implement late lambda-lifting - #9498 - GHC links against unversioned .so files - #9518 - Improve error message for unacceptable role annotations - #9522 - SPECIALISE pragmas for derived instances - #9571 - nofib should use criterion-style bootstrapping/sampling - #9601 - Make the rewrite rule system more powerful - #9617 - Implement `quot` and `rem` using `quotRem`; implement `div` and `mod` using `divMod` - #9622 - GHCi command to solve a constraint - #9624 - "Unlikely constraint" recognition - #9642 - LANGUAGE pragma synonyms - #9645 - Optimize range checks for primitive types - #9649 - symbols should/might be type level lists of chars - #9659 - Offer branchless conditional (CMOV) primop - #9661 - Branchless ==# is compiled to branchy code - #9667 - Type inference is weaker for GADT than analogous Data Family - #9671 - Allow expressions in patterns - #9685 - GHC fails to build with mingw32-make on Windows - #9688 - Improve the interaction between CSE and the join point transformation - #9690 - in GHCi map `:editNNN` to $EDITOR +NNN - #9699 - TH function to list names in scope - #9700 - Support C structures in Haskell FFI - #9702 - Offer a weaker name shadowing warning - #9724 - reexport IsList class from a trustworthy module - #9731 - Inductive type definitions on Nat - #9743 - Expose ghc-bin code as a library - #9748 - Disambiguate IO actions in GHCi with :set +t - #9756 - Warn about unnecessary unsafeCoerce - #9784 - Improve error message for misplaced quote inside promoted qualified type - #9789 - Make GHC accept .format+lhs as extension for literate haskell files - #9790 - Produce coercion rules for derived Functor instances - #9793 - Some as-patterns could be accepted in pattern synonyms - #9795 - Debug.Trace.trace is too strict - #9819 - Create typesafe method of obtaining dictionary types from class definitions, and constraint objects from dictionary types - #9835 - Add bindings for marshaling to/from mpz_t - #9846 - GHC --make should also look for .hi, not only for .hs - #9864 - Need realloc/resize feature for mallocForeignPtrBytes allocated memory - #9866 - ssh pubkey self-mgmt - #9883 - Make OverloadedLists more usable by splitting the class interface - #9898 - Wanted: higher-order type-level programming - #9908 - Improve enumFromX support for OverloadedLists - #9923 - Offer copy-on-GC sliced arrays - #9931 - Option to truncate Show output in ghci REPL - #9938 - GHC's link step needs to be told which packages to link - #9946 - Expose the source location of template-haskell Names - #9948 - Recommend class constraint instead of instance constraint - #9974 - Allow more general structural recursion without UndecidableInstances - #9990 - Top level module identifiers shadow imported identifiers - #9993 - PostfixOperators doesn't work for types - #9995 - :info enhancements - #10016 - UNPACK support for existentials - #10049 - Lower level memcpy primop - #10055 - Offer PolyKinded instances for Data.Fixed.HasResolution - #10063 - State a law for foldMap - #10064 - Add support for "foo"## literals to MagicHash - #10071 - Implement deprecation-warnings for class-methods to non-method transitions - #10076 - Don't suppress warnings in the presence of errors - #10084 - Data.List should have a takeLastN function - #10087 - DefaultSignatures: error message mentions internal name - #10089 - feature: warn about unused data definitions (with typeclass instances) - #10116 - Closed type families: Warn if it doesn't handle all cases - #10150 - Suppress orphan instance warning per instance - #10204 - Odd interaction between rank-2 types and type families - #10225 - GHC does not specialize based on type equality - #10235 - Get profiling info without stopping program - #10323 - Add Android and iOS Operating Systems in Trac - #10324 - our rts/ghc-prim/base shared library tricks don't work on Android - #10327 - Devise workaround for how infinite types prevent closed type family reduction - #10331 - Accept HsSyn in splices and generate it in quotes (ghc-api) - #10336 - Support qualified self {-# SOURCE #-} import - #10343 - Make Typeable track kind information better - #10350 - Should be able to specify path for eventlog output. - #10383 - AArch64: get GHC Calling convention working - #10391 - Ability to get export list of TH reified module - #10431 - EqualityConstraints extension? - #10453 - \case should trigger auto-multiline mode in ghci - #10465 - Make listArray non-strict in structure of argument list - #10478 - Shorter import syntax - #10505 - more specific types in the generated *_stub.h files - #10514 - Generic for existential types - #10606 - avoid redundant stores to the stack when examining already-tagged data - #10607 - Auto derive from top to bottom - #10613 - Mechanism for checking that we only enter single-entry thunks once - #10621 - Handle annotations in hsig/boot files - #10652 - Better cache performance in Array# - #10674 - Expose OSThreadID and assorted functions from Haskell - #10681 - Teach GHC to interpret all hs files as two levels of hs-boot files (abstract types only/full types + values) - #10708 - Rejection of constant functions defined using conditional pattern matching - #10741 - add flag to dump module and package dependencies - #10756 - Allow users to indicate inaccessible patterns - #10776 - DataKinds promotion of String -> Symbol and Natural -> Nat - #10789 - Notify user when a kind mismatch holds up a type family reduction - #10794 - Extension request: "where" clauses in type signatures - #10803 - New SignatureSections extension - #10804 - Rules conditional on strictess properties - #10809 - Add prefetch{Small}{Mutable}Array[0..3]# - #10827 - GHCi should support interpeting multiple packages/units with separate DynFlags - #10832 - Generalize injective type families - #10833 - Use injective type families when dealing with givens - #10841 - Run handler on STM retry - #10842 - "Reactive" Template Haskell - #10843 - Allow do blocks without dollar signs as arguments - #10869 - Option to dump preprocessed source - #10871 - Implement "fat" interface files which can be directly compiled without source - #10887 - Please export GhcMake.downsweep and make it return a partial module graph in case of errors - #10893 - Consistent error message suggestions for using language extensions - #10903 - Add an option to infer CallStack implicit parameters - #10906 - `SPECIALIZE instance` could be better - #10912 - Support for out of the box static linking - #10915 - Statistical profiling support in the RTS - #10925 - GHC should inform where unknown identifiers originate whenever possible - #10933 - REMOVED pragma - #10956 - Allow default keyboard behavior to be easily overriden - #10972 - Add a :binfo (beginner info) GHCi command - #10976 - Applicative Comprehensions - #10978 - Anonymous type instances - #10985 - When a "non-exhaustive pattern"-error occurs, output the arguments (if possible) - #10986 - GHC should delete all temporary files it creates in /tmp - #11011 - Add type-indexed type representations (`TypeRep a`) - #11012 - Support for unicode primes on identifiers. - #11014 - re-order GHC type errors for clarity - #11035 - Add implicit call-stacks to partial functions in base - #11078 - Access to module renaming with reifyModule, in TemplateHaskell - #11080 - Open data kinds - #11081 - Implement Introspective Template Haskell - #11115 - Indicate missing associated type instances - #11134 - Limit frequency of idle GCs - #11143 - Feature request: Add index/read/write primops with byte offset for ByteArray# - #11169 - Remove the word "skolem" from user error messages - #11179 - Allow plugins to access "dead code" - #11186 - Give strong preference to type variable names in scope when reporting hole contexts - #11191 - provide `make uninstall` - #11243 - Flag to not expand type families - #11270 - "Unusable UNPACK pragma" warnings should be printed even without -O - #11281 - Way to run --make and -M simultaneously - #11285 - Split objects makes static linking really slow - #11286 - ghc-pkg library - #11309 - Warn on shady data constructor export - #11335 - Add instance (Ix a, Read a, Read b) => Read (UArray a b) - #11342 - Character kind - #11343 - Unable to infer type when using DuplicateRecordFields - #11349 - [TypeApplications] Create Proxy-free alternatives of functions in base - #11350 - Allow visible type application in patterns - #11352 - Allow applying type to label - #11373 - GHC should support static archive creation on all systems - #11377 - Template Haskell only imports - #11378 - Use the compiler that built ghc for dynamic code loading, for cross-compiling - #11387 - Typecasting using type application syntax - #11393 - Ability to define INLINE pragma for all instances of a given typeclass - #11398 - Type application for operator sections - #11409 - Cannot instantiate literals using TypeApplications - #11418 - Suggest correct spelling when module is not found because of typo - #11425 - The GHC API doesn't provide a good hscTarget option for tooling - #11439 - Request for comments: Allow duplicate type signatures - #11441 - RFC: Inline intermediate languages (Core, STG, Cmm, even StrictCore) - #11457 - Run type-checker plugins before GHC's solver - #11461 - Allow pattern synonyms to be bundled with type classes? - #11469 - GHCi should get LANGUAGE extensions/defaulting from the module whose full top-level scope is visible - #11470 - Support changing cross compiler target at runtime - #11482 - Turn -fdefer-typed-holes on by default - #11483 - Ghci should TAB-complete keywords, not only identifiers - #11534 - Allow class associated types to reference functional dependencies - #11561 - Have static ghci link against its own copy of its libraries - #11581 - TypeError requires UndecidableInstances unnecessarily - #11593 - Template Haskell: Add a way to get names that are neither capturable nor capturing. - #11594 - closed empty type families fully applied get reduced lazily when in a constraint tuple and fully applied - #11620 - RFC: Type-class type signatures (:: Constraint) - #11636 - Promoting newtype destructor - #11641 - Allow wildcards for parameters functionally determined (also type synonyms) - #11646 - Make pattern synonym export type mismatch a warning - #11652 - Cyclical dependencies aren't reported in pure functions - #11658 - Type synonym with context in pattern synonym - #11671 - Allow labels starting with uppercase with OverloadedLabels - #11682 - :main doesn't use a bound thread - #11686 - implicit call stacks should provide a way to get the calling function's name - #11693 - Add `When` type family - #11706 - Increase precedence of lexps (if-then-else, case-of, do, lambda and let-in) - #11713 - STM Finalizers - #11714 - Kind of (->) type constructor is overly constrained - #11718 - Disable the Preview button on trac - #11738 - A command to remove modules from the target list - #11765 - Allow documentary type signatures - #11769 - Support and redistributables for ARM64 (64bit) - #11782 - Teach ghc-pkg to read multiple registrations from command line - #11796 - Warn about unwanted instances in a modular way - #11799 - Legend for hpc markup colors - #11801 - RFC: Make browse command display everything unqualified - #11807 - -fforce-relink / -fforce-link option - #11815 - Data.List: Add a function to get consecutive elements (mapConsecutives) - #11817 - Add proper support for weak symbols to the runtime linker - #11950 - Eventlog should include delimiters showing when the process writes to the .eventlog file - #11953 - Export Word32#, Word64# on all architectures - #11962 - Support induction recursion - #11967 - Custom message when showing functions, comparing functions, ... - #11971 - Unify error messages that suggest enabling extensions - #11993 - RFC, allow local bindings in pattern synonyms - #12001 - RFC: Add pattern synonyms to base - #12008 - GHCi autocomplete text following cursor/insertion point - #12014 - Make it possible to deprecate a method instantiation of a typeclass instance - #12016 - Allow wildcards in type synonyms and data declarations - #12020 - Error message on use of != should suggest use of /= - #12022 - unsafeShiftL and unsafeShiftR are not marked as INLINE - #12025 - Order of constraints forced (in pattern synonyms, type classes in comments) - #12030 - GHCi Proposal: Display (Data.Kind.)Type instead of * - #12044 - Remove sortWith in favor of sortOn - #12045 - Visible kind application - #12048 - Allow CustomTypeErrors in type synonyms (+ evaluate nested type family?) - #12049 - `OverloadedStrings` for types - #12050 - Allow haddock comments on non-record types - #12053 - Mode for ghc --make which only compiles the files I pass on command line - #12073 - Missing instance of MonadFix for Q - #12086 - Allow omitting type family signature - #12096 - Attach stacktrace information to SomeException - #12114 - Make injectivity check less conservative - #12119 - Can't create injective type family equation with TypeError as the RHS - #12139 - Add TUI (text-based user interface) for GHCi - #12146 - syntax repair suggestion is too eager to suggest TemplateHaskell - #12157 - Warning priorities (or: report hole warnings first) - #12159 - Record-like GADTs with repeated fields (of same type) rejected - #12160 - MonadFail instance for (Either String)? - #12178 - Allow inline pragmas on pattern synonyms - #12183 - Do not display global bindings with -fno-max-relevant-binds - #12190 - Generalize irrefutable patterns (static semantics like let-bindings) - #12203 - Allow constructors on LHS of (implicit) bidirectional pattern synonym - #12237 - Constraint resolution vs. type family resolution vs. TypeErrors - #12240 - Common Sense for Type Classes - #12244 - Idea: Remove unused symbols in link-time for smaller binaries - #12349 - Parallel make should interleave output if it means we can report an error earlier - #12360 - Extend support for binding implicit parameters - #12361 - Add -dppr-ribbon-cols - #12362 - don't complain about type variable ambiguity when the expression is parametrically polymorphic - #12363 - Type application for infix - #12369 - data families shouldn't be required to have return kind *, data instances should - #12376 - Allow function definitions in record syntax - #12389 - Limit duplicate export warnings for datatypes - #12397 - Support for PDB debug information generation - #12400 - Suggest misspelling if a type signature has similarly named binding - #12422 - Add decidable equality class - #12428 - Allow pattern synonyms to optionally carry coerceability - #12448 - Allow partial application of bidirectional pattern synonyms - #12450 - Option to suppress GHCi output "Failed, modules loaded" - #12457 - Deriving should be (more closely) integrated with other metaprogramming methods - #12463 - SPECIALIZABLE pragma? - #12465 - Evil idea: Allow empty record field update syntax for types. - #12468 - GADTs don't refine hole types - #12470 - Move LLVM code generator to LLVM bitcode format - #12477 - Allow left sectioning and tuple sectioning of types - #12483 - Improve parse error message on indentation mistake - #12498 - Support unconventionally named import libraries - #12499 - Support multiple library import libs - #12505 - Add foldl1' to Data.Foldable - #12508 - Show evaluation step-by-step in GHCi - #12515 - Pattern synonyms with non-conid/consym names give poor error messages - #12518 - Allow customizing immutable package dbs by stacking - #12524 - RFC: Allow prefixing result evaluated at GHCi prompt - #12541 - RFC: Implicit parentheses in GHCi - #12543 - Warning for duplicate language extensions - #12547 - Concurrent.ForeignPtr needs to access a C-ForeignPtr, but this is already gone - #12551 - Make type indices take local constraints into account in type instance declaration - #12580 - Eagerly simplify inherently-coherent instances - #12582 - HSOC Eventlog live profiling - #12606 - Linux (ELF) Support for "ghc -static -shared" - #12618 - Add saturated constructor applications to Core - #12620 - Allow the user to prevent floating and CSE - #12626 - Remove redundant type applications in Core - #12627 - build sytem feature request: persist warnings - #12633 - Support standard syntax for language pragmas in GHCi - #12639 - Inconsistent treatment of FlexibleInstances and MPTCs with standard vs. flexible deriving - #12649 - Allow TypeApplications syntax to be used to instantiate type variables in SPECIALISE pragmas - #12651 - Test suite should handle stage1 compiler - #12665 - Make Read instances for Integral types faster, and make them fail fast - #12669 - Add some weird Kmettian tests to the test suite - #12677 - Type equality in constraint not used? - #12680 - Permit type equality instances in signatures - #12683 - Monad laws in terms of fishes (>=>) - #12693 - Relax qualified import syntax - #12703 - Expand Backpack's signature matching relation beyond definitional equality - #12708 - RFC: Representation polymorphic Num - #12710 - Make some reserved Unicode symbols "specials" - #12717 - Permit data types in signatures to be implemented with equivalent pattern synonyms (and vice versa) - #12747 - INLINE vs NOINLINE vs <nothing> give three different results; two would be better - #12766 - Allow runtime-representation polymorphic data families - #12773 - Data.Functor.Classes instances for ZipList - #12786 - genericReplicateM and genericReplicateM_ - #12823 - Inconsistency in acceptance of equality constraints in different forms - #12833 - GHCi - #12848 - Reduce long-term memory usage of GHCi - #12857 - associate pattern synonyms with a type synonym - #12864 - Produce type errors after looking at whole applications - #12868 - Add groupOn function to Data.List - #12870 - Allow completely disabling +RTS options parsing - #12878 - Use gold linker by default if available on ELF systems - #12886 - Proposal for throwLeft and throwLeftIO in Control.Exception - #12888 - ‘Identity instance’: Outputable SDoc - #12895 - Lookup rules associated with functions/values in GHCI - #12896 - Consider using compact regions in GHC itself to reduce GC overhead - #12900 - Common up identical info tables - #12902 - Improve handle decoding error messages - #12928 - Too easy to trigger CUSK condition using TH - #12953 - Use computed gotos in the interpreter when the compiler supports it - #12970 - Add default implementation for Bits.bitSize - #12982 - Missed constant folding oportunities - #12986 - Ignore case when parsing language pragmas - #13013 - Add readMaybe to Prelude (possibly readEither too), make Haddocks for read more cautionary - #13026 - RFC functions for sums and products - #13028 - Compile-time validation of literals in IsString instances - #13034 - clean memory in GHCi - #13038 - implementation of Modus ponens and Modus tollens - #13039 - Add options to select GHCi prompt type errors - #13042 - Allow type annotations / visible type application in pattern synonyms - #13050 - Holes don't work infix - #13051 - Make the Register Allocator Loop-Aware - #13065 - Prohibit user-defined Generic and Generic1 instances - #13070 - time after evaluation - #13097 - Num a => Num (Down a) - #13103 - The RTS loader/linker relies too heavily on file extensions. - #13116 - Allow Overloaded things in patterns - #13117 - Derived functor instance for void types handles errors badly - #13126 - Support for hexadecimal floats - #13131 - add effectful version of atomicModifyMutVar# - #13140 - Handle subtyping relation for roles in Backpack - #13152 - Provide a mechanism to notify build system when .hi file is ready - #13164 - idle time full GCs (idle cpu usage) - #13176 - Deprecate the realWorld# - #13177 - Give Data.Functor.* its lifted void and unit - #13186 - Change EvNum to EvNum :: Natural -> EvLit - #13189 - Implement same specification as GHC spec file for mingw32 - #13212 - Support abs as a primitive operation on floating point numbers. - #13229 - Add -ddump-inlinings-reasoning - #13232 - Undeflow/overflow warnings for floating-point values - #13240 - Make it easier to find builds we may want to cancel - #13241 - Compile-time flag causing GC to zero evacuated memory - #13248 - Allow an injective type family RHS to be another injective type family - #13256 - Warn on out-of-range literals in pattern matches too - #13257 - out-of-range warnings for negative literals, without -XNegativeLiterals - #13262 - Allow type synonym family application in instance head if it reduces away - #13276 - Unboxed sums are not Typeable - #13282 - Introduce fast path through simplifier for static bindings - #13298 - Compact API design improvements - #13299 - Typecheck multiple modules at the same time feature requests, - #602 - Warning Suppression - #605 - Optimisation: strict enumerations - #609 - Useful optimisation for set-cost-centre - #618 - Dependency caching in ghc --make - #624 - Program location for thread error messages - #634 - Implement a more efficient TArray - #701 - Better CSE optimisation - #855 - Improvements to SpecConstr - #888 - Implement the static argument transformation - #932 - Improve inlining - #1016 - Avoidance of unaligned loads is overly conservative - #1349 - Generalise the ! and UNPACK mechanism for data types, to unpack function arguments - #1371 - Add -O3 - #1377 - GHCi debugger tasks - #1572 - Make it easy to find documentation for GHC and installed packages - #1574 - Broken link testing - #1600 - Optimisation: CPR the results of IO - #1631 - Make the External Package Table contain ModDetails not ModIface - #2123 - implement waitForProcess using signals - #2725 - Remove Hack in compiler/nativeGen/X86/CodeGen.hs - #2968 - Avoid generating C trigraphs - #2988 - Improve float-in - #3024 - Rewrite hp2ps in Haskell - #3251 - split rts headers into public and private - #3355 - Refactor Template Haskell syntax conversions - #3379 - GHC should use the standard binary package - #3462 - New codegen: allocate large objects using allocateLocal() - #3511 - port GHC to OpenBSD/sparc64 (unregisterised is fine) - #3559 - split ghci modules off into their own package - #3713 - Track -dynamic/-fPIC to avoid obscure linker errors - #3755 - Improve join point inlining - #3946 - Better diagnostic when entering a GC'd CAF - #4121 - Refactor the plumbing of CafInfo to make it more robust - #4211 - LLVM: Stack alignment on OSX - #4243 - Make a proper options parser for the RTS - #4281 - Make impredicativity work properly - #4295 - Review higher-rank and impredicative types - #4374 - Remove in-tree gmp - #4941 - SpecConstr generates functions that do not use their arguments - #4960 - Better inlining test in CoreUnfold - #5140 - Fix LLVM backend for PowerPC - #5143 - Soft heap limit flag - #5567 - LLVM: Improve alias analysis / performance - #5791 - Defer other kinds of errors until runtime, not just type errors - #5793 - make nofib awesome - #6017 - Reading ./.ghci files raises security issues - #7790 - Add dummy undefined symbols to indicate ways - #7829 - make better/more robust loopbreaker choices - #7917 - update documentation of InstalledPackageInfo - #8050 - add a required wrapper around plugin installers - #8096 - Add fudge-factor for performance tests run on non-validate builds - #8226 - Remove the old style -- # Haddock comments. - #8238 - Implement unloading of shared libraries - #8272 - testing if SpLim=$rbp and Sp=$rsp changed performance at all - #8287 - exploring calling convention changes and related engineering for 7.10 - #8290 - lookupSymbol API is unsafe - #8313 - Poor performance of higher-order functions with unboxing - #8315 - Improve specialized Hoopl module - #8317 - Optimize tagToEnum# at Core level - #8323 - explore ways to possibly use more tag bits in x86_64 pointers - #8326 - Place heap checks common in case alternatives before the case - #8396 - cleanup / refactor native code gens - #8488 - Annotations should not distinguish type and value - #8489 - clean up dependency and usages handling in interface files - #8578 - Improvements to SpinLock implementation - #8597 - Git Hook script to prevent large binary blobs being checked in - #8598 - IO hack in demand analyzer gets in the way of CPR - #8655 - Evaluate know-to-terminate-soon thunks - #8767 - Add rules involving `coerce` to the libraries - #8782 - Using GADT's to maintain invariant in GHC libraries - #8910 - cross compiling for x86_64 solaris2 - #9133 - Improve parser error reporting in `ghc-pkg` - #9251 - ghc does not expose branchless max/min operations as primops - #9276 - audit ghc floating point support for IEEE (non)compliance - #9285 - IO manager startup procedure somewhat odd - #9374 - Investigate Static Argument Transformation - #9403 - Make --show-iface more human readable - #9496 - Simplify primitives for short cut fusion - #9505 - Bounded instance for Word (and possibly others) uses explicitly unboxed literals - #9511 - Remove deprecated -fglasgow-exts from NoFib suite - #9534 - IEEE Standard 754 for Binary Floating-Point Arithmetic by Prof. W. Kahan, UCB - #9542 - GHC-IO-Handle-Text.hPutStr' and writeBlocks look like they need refactoring - #9572 - nofib target for just building should be part of validate - #9588 - Add `MonadPlus (Either e)` and `Alternative (Either e)` instances - #9596 - Create monoidal category framework for arrow desugarer - #9674 - Foldable doesn't have any laws - #9716 - The list modules need a bit of post-BBP shaking - #9718 - Avoid TidyPgm predicting what CorePrep will do - #9719 - Improve `mkInteger` interface - #9735 - Template Haskell for cross compilers (port from GHCJS) - #9786 - Make quot/rem/div/mod with known divisors fast - #9797 - Investigate rewriting `>>=` to `*>` or `>>` for appropriate types - #9805 - Use TrieMaps to speed up type class instance lookup - #9832 - Get rid of PERL dependency of `ghc-split` - #9837 - Introduce a logging API to GHC - #10068 - Make the runtime reflection API for names, modules, locations more systematic - #10074 - Implement the 'Improved LLVM Backend' proposal - #10143 - Separate PprFlags (used by Outputable) from DynFlags - #10172 - Cross-platform sed - #10181 - Lint check: arity invariant - #10266 - Split base for Backpack - #10303 - Make it easier to print stack traces when debugging GHC itself - #10319 - Eta expand PAPs - #10344 - Make BranchList simpler - #10352 - Properly link Haskell shared libs on all systems - #10450 - Poor type error message when an argument is insufficently polymorphic - #10536 - Clear up how to turn off dynamic linking in build.mk - #10601 - GHC should be distributed with debug symbols - #10640 - Document prim-ops - #10710 - More self-explanatory pragmas for inlining phase control - #10735 - Smooth out the differences between `compiler/utils/Pretty.hs` and `libraries/pretty` - #10739 - Resuscitate the humble ticky-ticky profiler - #10740 - Ensure that ticky, profiler, and GC allocation numbers agree - #10766 - user manual: INLINE's interaction with optimization levels is not clear - #10844 - CallStack should not be inlined - #10854 - Remove recursive uses of `pprParendHsExpr` from `HsExpr.ppr_expr` - #10892 - ApplicativeDo should use *> and <* - #10909 - Test suite: Support non-utf8 locale - #10913 - deprecate and then remove -fwarn-hi-shadowing - #10918 - Float once-used let binding into a recursive function - #10941 - Large heap address space breaks valgrind - #10962 - Improved arithmetic primops - #11024 - Fix strange parsing of BooleanFormula - #11031 - Record Pattern Synonym Cleanup - #11091 - Fix MonadFail warnings - #11138 - Kill the terrible LLVM Mangler - #11149 - Unify fixity/associativity of <>-ish pretty-printing operators - #11238 - Redesign the dynamic linking on ELF systems - #11295 - Figure out what LLVM passes are fruitful - #11359 - Consider removing RelaxedLayout and friends - #11366 - Control.Exception.assert should perhaps take an implicit call stack - #11392 - Decide and document how semicolons are supposed to work in GHCi - #11394 - Base should use native Win32 IO on Windows - #11445 - Turn on SplitSections by default - #11472 - Remove CallStack CPP guards in GHC 8.4 - #11477 - Remove -Wamp - #11502 - Scrutinize use of 'long' in rts/ - #11528 - Representation of value set abstractions as trees causes performance issues - #11551 - Get doctests into testsuite - #11557 - Unbundle Cabal from GHC - #11609 - Document unicode report deviations - #11610 - Remove IEThingAll constructor from IE datatype - #11613 - Add microlens to testsuite - #11616 - Kinds aren't instantiated - #11654 - User Guide: Generate command line options table from ghc-flags directives - #11735 - Optimize coercionKind - #11739 - Simplify axioms; should be applied to types - #11749 - Add long forms for multi-character short-form flags and possibly deprecate short forms - #11767 - Add @since annotations for base instances - #11778 - Preserve demandInfo on lambda binders in the simpifier - #11785 - Kinds should be treated like types in TcSplice - #11791 - Remove the `isInlinePragma prag` test - #11958 - Improved testing of cross-compiler - #12085 - Premature defaulting and variable not in scope - #12090 - Document Weverything/Wall/Wextra/Wdefault in user's guide - #12155 - Description of flags cut off - #12215 - Debugging herald should be printed before forcing SDoc - #12218 - Implement -fexternal-interpreter via static linking - #12223 - Get rid of extra_files.py - #12364 - Demand analysis for sum types - #12413 - Compact regions support needs some discussion in the release notes - #12420 - Users guide link for hsc2hs has bitrotten - #12461 - Document -O3 - #12486 - Investigate removing libGCC symbols from RtsSymbols.c - #12556 - `stimes` adds extra power to Semigroup - #12599 - Add Hadrian build artifacts to gitignore - #12619 - Allow users guide to be built independently from GHC - #12623 - Make Harbormaster less terrible in the face of submodule updates - #12635 - Compress core in interface files - #12647 - Can't capture improvement of functional dependencies - #12650 - Too many warnigns about consistentCafInfo - #12662 - Investigate ListSetOps module - #12687 - Add a flag to control constraint solving trace - #12720 - Remove ghcii.sh - #12721 - Implement sigINT handler for Window's timeout.exe - #12735 - Evaluate the feasibility of using lld for linking - #12744 - Register Allocator Unit Tests - #12758 - Bring sanity to our performance testsuite - #12765 - Don't optimize coercions with -O0 - #12774 - bkpcabal02 test fails on OS X - #12812 - Debugging functions should throw warnings - #12822 - Cleanup GHC verbosity flags - #12887 - Make each RTS header self-contained - #12891 - Automate symbols inclusion in RtsSymbols.c from Rts.h - #12892 - Improve type safety in the RTS - #12910 - Mirror mingw signature files - #12913 - Port SplitSections to Windows - #12941 - Extend nofib with benchmarks focused on compiler performance - #12943 - Adjust AST to capture additional parens around a context - #12948 - Implement unwind rules for non-Intel architectures - #12951 - Remove __USE_MINGW_ANSI_STDIO define from PosixSource.h - #12961 - Duplicate exported functions? - #12995 - interpetBCO doesn't get stripped from executables - #12999 - Investigate use of floating-point arithmetic in I/O Event-handler - #13008 - Cleanup backwards compatibility ifdefs due to stage1 external interpreter work - #13009 - Hierarchical Module Structure for GHC - #13015 - Remove as much Haskell code as we can from Lexer.x - #13020 - Use a fixed SDK for Mac OS X builds - #13044 - make it possible to apply GHC rewrite rules to class methods - #13072 - Move large tuples to a separate module in base - #13101 - Enable GHC to be loaded into GHCi - #13114 - UniqSet definition seems shady - #13121 - Investigate whether uploading build artifacts from harbormaster would be usful - #13122 - Investigate reporting build errors with harbormaster.sendmessage - #13127 - Refactor AvailInfo to be more sensible - #13128 - Refactor AvailInfo to be more sensible - #13134 - Simplifier ticks exhausted - #13138 - Clean up ghc-pkg validation warnings in build log - #13149 - Giving Backpack a Promotion - #13151 - Make all never-exported IfaceDecls implicit - #13160 - Simplify CoreFV.FVAnn - #13173 - Investigate old comment at the top of SrcLoc - #13175 - Documenting what can be derived 'out of the box' by GHC's "deriving" - #13179 - Levity-polymorphic data types - #13182 - Rethinking dataToTag# - #13213 - Lifting thunks out of thunks to reduce their size. - #13217 - configure script uses different CFLAGS from the actual build - #13222 - Update formalism for join points - #13238 - Harmonise pretty printing of parens in hsSyn - #13252 - Perf: Make dep_finsts a map from type family Name to Module - #13261 - Consider moving Typeable evidence generation wholly back to solver - #13270 - Make Core Lint faster - #13280 - Consider deriving more Foldable methods.
https://ghc.haskell.org/trac/ghc/wiki/WikiStart?version=6
CC-MAIN-2017-09
en
refinedweb
In the solution I most working on we will have services for every part. Letter service to send sms and mail messages; Print service to print reports; Application service to most common utilities; Synchronization service to send update to other systems and so... All this services will be implemented as WCF Service Library hosted in Windows Service. Depending on the size of the customer it is possible that all these services installed on the same physical machine. In such case we may running up to twelve Windows services and eat resources (like memory, ports, etc.). I decided to put together a solution to host any of these services under a single windows service. I also decided to make it flexible enough to add and remove services as easy as possible. Finally I came up with these features: I will not explain in details all the steps involved - it came to a surprisingly simple code at the end... ...and the code attached at the bottom, but will see into some of the 'gems'. In the process of the making found an article by this title, and added it to my solution at once. Read it here and pay tribute to W. Kevin Hazzard from Code Project. In all (most?) articles when talking about loading assemblies into different application domains you can find the fact that your class must inherit from MarshalByRefObject. It's not true. All you need is make your class Serializable (MarshalByRefObject is also make it Serializable but also ComVisible!). MarshalByRefObject ComVisible It can be the feeling after reading about MEF and checking some examples. The truth is that all you need is to implement an interface that can't be simpler (empty!). using System.ComponentModel.Composition; namespace WCFServiceInterface { [InheritedExport] public interface IWCFService { } } (It's true that the code to load all the matching assemblies is almost 15 lines but...) That's all folks! You will found that it's really simple - so jump in and enjoy... This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
https://www.codeproject.com/tips/622400/windows-service-to-host-multiple-wcf-services
CC-MAIN-2017-09
en
refinedweb
This action might not be possible to undo. Are you sure you want to continue? Jingqiang Li High Voltage Direct Current Transmission Master thesis submitted for approval for the degree of Master of Science, Espoo, January, 2009 Supervisor Professor Matti Lehtonen, D.Sc (Tech.) Instructor Professor Matti Lehtonen HELSINKI UNIVERSITY OF TECHNOLOGY ABSTRACT OF THE MASTER THESIS Author: Jingqiang Li Name of thesis: High Voltage Direct Current Transmission Date: 14.07.2008 Number of pages: 102 Faculty: Department of Electrical and Communications Engineering Chair: Electrical Engineering (Power Systems) Code: S-18 Supervisor: Prof. Matti Lehtonen Instructor: Prof. Matti Lehtonen This thesis is focused on the application and development of HVDC transmission technology based on thyristor without turn-off capability. Compared with other macroelectronics in the power field, thyristor without turn-off capability has successful operation experience to ensure reliability and high power ratings to transfer bulk energy. This thesis covers converter station design and equipments, reactive power compensation and voltage stability, AC/DC filters design, control strategy and function, fault analysis, overvoltage and insulation co-ordination, overhead line and cable transmission, transmission line environmental effects, earth electrode design and development. With the development of new concepts and techniques, the cost of HVDC transmission will be reduced substantially, thereby extending the area of application. Acknowledge The work for this thesis has carried out in the Power System Laboratory, Helsinki University of Technology. First, I would like to thank my supervisor, Prof. Matti Lehtonen, for the opportunity to study this subject and for his inspiring guidance. Again, I would also like to thank Prof. Jorma Kyyrä for his support and continued encouragement. I give best thanks to my family. Their love and care were supporting me in a foreign country. Helsinki 14. 7. 2008 Jingqiang Li jli2@cc.hut.fi 3 Contents Chapter 1 Introduction ................................................................................................... 6 1.1 HVDC Transmission Configurations.................................................................. 6 1.1.1 Two-Terminal HVDC Transmission........................................................... 6 1.1.2 Multiterminal HVDC Transmission............................................................ 8 1.2 HVDC Transmission Characteristics................................................................ 10 1.2.1 HVDC Transmission Advantages ............................................................. 10 1.2.2 HVDC Transmission Disadvantages......................................................... 11 1.3 HVDC Transmission Applications ................................................................... 11 Chapter 2 Converter Station ......................................................................................... 13 2.1 Station Design.................................................................................................. 13 2.2 Converter Valve............................................................................................... 15 2.3 Converter Transformer..................................................................................... 18 2.4 Smoothing Reactor........................................................................................... 19 Chapter 3 Reactive Power Management ....................................................................... 22 3.1 Reactive Power Balance................................................................................... 22 3.2 Voltage Stability .............................................................................................. 23 3.3 Reactive Power Compensators ......................................................................... 24 Chapter 4 AC Filter Design .......................................................................................... 26 4.1 AC Harmonics ................................................................................................. 26 4.2 Design Criteria................................................................................................. 26 4.3 Passive AC Filters............................................................................................ 27 4.3.1 Tuned Filters ............................................................................................ 28 4.3.2 Damped Filters......................................................................................... 30 Chapter 5 DC Filter Design .......................................................................................... 33 5.1 DC Harmonics ................................................................................................. 33 5.2 Design Criteria................................................................................................. 35 5.3 Active DC Filter .............................................................................................. 37 Chapter 6 Control System ............................................................................................ 39 6.1 Multiple Configurations ................................................................................... 39 6.2 Control System Levels ..................................................................................... 40 6.3 Converter Firing Phase Control........................................................................ 43 6.3.1 Individual Phase Control .......................................................................... 44 6.3.2 Equidistant Pulse Control ......................................................................... 44 6.4 Converter Control Mode .................................................................................. 45 6.5 Control System Functions ................................................................................ 46 Chapter 7 Fault Analysis .............................................................................................. 50 7.1 Converter Faults............................................................................................... 50 7.2 AC-side Faults ................................................................................................. 54 7.3 DC-Line Fault .................................................................................................. 58 Chapter 8 Overvoltages and Insulation Co-ordination................................................... 61 8.1 Overvoltage Protection Devices ....................................................................... 61 8.2 Overvoltages Studies........................................................................................ 62 8.2.1 AC-side Overvoltages .............................................................................. 62 8.2.2 DC-side Overvoltages .............................................................................. 63 8.2.3 DC-Line Overvoltages ............................................................................. 65 8.3 Insulation Co-ordination .................................................................................. 66 Chapter 9 Transmission Lines ...................................................................................... 69 9.1 Overhead Line ................................................................................................. 69 4 9.1.1 Conductor Cross-section .......................................................................... 69 9.1.2 Insulation Level........................................................................................ 69 9.1.3 Insulator Types......................................................................................... 71 9.1.4 Insulator Numbers .................................................................................... 71 9.1.5 Steel Tower.............................................................................................. 72 9.1.6 Ground Wire ............................................................................................ 72 9.2 Cable Line ....................................................................................................... 73 9.2.1 Application and Development .................................................................. 73 9.2.2 Cable Insulation ....................................................................................... 73 9.2.3 Cable Types ............................................................................................. 74 9.2.4 Cable Structures ....................................................................................... 76 9.3 Earth Electrode Line ........................................................................................ 78 9.3.1 Insulation Level........................................................................................ 78 9.3.2 Conductor Cross-section .......................................................................... 79 Chapter 10 Transmission Line Environmental Effects ................................................ 80 10.1 Corona ............................................................................................................. 80 10.2 Electric-Field Effect......................................................................................... 81 10.3 Radio Interference............................................................................................ 82 10.4 Audible Noise .................................................................................................. 84 Chapter 11 Earth Electrode......................................................................................... 85 11.1 Earth Electrode Effects .................................................................................... 85 11.2 Earth Electrode Operational Features ............................................................... 85 11.3 Electrode Site Selection ................................................................................... 86 11.4 Earth Electrode Design .................................................................................... 88 11.5 Earth Electrode Development........................................................................... 90 11.6 Influence of Earth Electrode Current ................................................................ 91 Chapter 12 Conclusion ............................................................................................... 93 References....................................................................................................................... 95 5 Chapter 1 Introduction 1.1 HVDC Transmission Configurations In accordance with operational requirements, flexibility and investment, HVDC transmission systems can be classified into two-terminal and multiterminal HVDC transmission systems. 1.1.1 Two-Terminal HVDC Transmission There are only two converter stations in the point-to-point HVDC transmission system, one rectifier station and the other inverter station. The main circuit and primary equipments of the rectifier station are almost the same as those of the inverter station (sometimes AC-side filter configuration and reactive-power compensation may be different), but the functions of control and protection systems must be configured respectively. There are three different configurations, i.e. monopolar link (positive or negative polarity), bipolar link (positive and negative polarity) and back-to-back interconnection (no transmission line), illustrated in Figure 1.1. [1] Figure 1.1 Back-to-back interconnection (a), monopolar link (b) and bipolar link (c) [1] 6 If faults occur on one pole. Chandrapur – Padghe and Gezhouba – Shanghai. Although the DC-line investment and operational cost of monopolar link with metallic return are higher than those of monopolar link with ground return. Fenno-Skan. A monopolar link with ground return is usually employed in the HVDC submarine cable scheme. Since considerable direct current flows through earth or sea continuously. Sweden-Poland Link became the first monopolar link with metallic return. bipolar links can be classified into bipolar link with twoterminal neutral ground.g. transformer magnetism saturation and electrochemistry corrosion can be avoided. A major advantage is to ensure no earth current during operations. For a monopolar link with ground return. Three Gorges – Changzhou. If one pole is out of service due to a fault. monopolar links can be classified into monopolar link with ground (or sea) return and monopolar link with metallic return. North Sea cannot be used as a return path due to the 7 . one positive and the other negative. [6] According to circuit modes. Konti-Skan. A bipolar link with two-terminal neutral ground was employed in most HVDC transmission schemes. the other pole can operate with earth by using the overload capability. thereby avoiding some consequences. Finally owing to the environmental impact. only in English Channel HVDC submarine scheme interconnecting England and France. it will give rise to transformer magnetism saturation and underground metal-objects electrochemistry corrosion. A bipolar link with one-terminal neutral ground is rarely used. bipolar link with one-terminal neutral ground and bipolar link with metallic neutral line. thereby easily increasing the cost of earth electrode. a monopolar link with metallic return (low insulation) may be used. due to no direct current flowing through earth during operations. Three Gorges – Guangdong.In accordance with circuit modes. [2] [3] [4] [5] Instead of earth or sea return. due to only one-terminal neutral grounded. Baltic cable and Kontek HVDC links. the reliability and flexibility are relatively less during operations and earth electrodes must be designed with quite high requirements. Sweden-Poland Link was planned as a monopolar link with ground return. earth or sea is used as one conductor line and thereby two converter stations must be grounded necessarily.g. Although a monopolar link with ground return can reduce DC-line cost. For a bipolar link with one-terminal neutral ground. [7] [8] [9] [10] It has two conductor lines. the entire bipolar link must be shut down without the possibility of monopolar operation. Initially. and earth return can be used as a backup conductor. e. e. earth or sea cannot be used as a backup conductor. Kingsnorth underground cable HVDC scheme was built to reduce the short-circuit level in areas of high load density. Usually if direct current is not allowed to flow through earth or the site of earth electrode is quite difficult to select. In London. illustrated in Figure 1. [11] [12] A bipolar link with metallic neutral s line uses three conductor lines. Although the line structure is relatively complex and the line cost is considerably high.2 Multiterminal HVDC Transmission A multiterminal HVDC transmission system is used to connect multiple AC systems or separate an entire AC system into multiple isolated subsystems. both rectifier and inverter are placed on the same site. linking via smooth reactor. due to no direct current flowing through earth. the equipments on the DC side can be designed with relatively low voltage and high current rating. [15] [16] 1. smooth reactor and converter valve. Due to no transmission line and low loss.2. 8 . the bipolar link with metallic neutral line can be employed. UK.interference with ship’ magnetic compasses. In a multiterminal HVDC transmission system. The earth electrode for the Sandy Pond converter station is located in Sherbooke. [14] In a back-to-back interconnection. thereby reducing the price of converter transformer. a bipolar link with metallic neutral line can prevent some problems caused by earth current and provide relatively reliable and flexible operating modes. DC filters are not required.1. Because the rectifier and inverter are installed in the same valve hall and thus DC harmonics do not interfere with communication. converter stations can be connected in series or in parallel. Quebec and is connected to the converter station by a metallic return. one low-insulation neutral line and two DC-lines. [13] Part of Hydro-Quebec (Canada)-New England (USA) HVDC scheme employed the bipolar link with metallic neutral line. excellent reliability and fast fault recovery. thereby providing lower loss and excellent economic operation. e. quick power-flow reversal. the entire multiterminal HVDC system must shut down. the firing angle must be maintained in the small variation range during operations and due to constant direct voltage. Although the seriesconnected HVDC scheme can provide advantages. thereby necessarily using double circuits and obviously increasing the line price.2 Parallel-connected (up) and series-connected (down) configurations [1] In the series-connected HVDC scheme. [17] 9 . in order to ensure high converter power factor and less reactive power consumption. the regulation and distribution of active power among converter stations mainly depend on the direct-current variation that is achieved by regulating the converter firingangle or transformer tap-changer.Figure 1. due to permanent faults on one portion of DC line. For the parallel-connected HVDC scheme. the load reduction is achieved by lowering direct current. the regulation and distribution of active power among converter stations mainly depend on the direct-voltage variation that is achieved by regulating the converter firing-angle or transformer tap-changer.g. In the parallel-connected HVDC scheme. one bipolar HVDC line with two conductor bundles takes much less the width of transmission routine. the price for DC cable lines is substantially lower than the prices for AC cable lines. [18] Under the effect of direct voltage. the transmission distance for DC cable is unlimited theoretically.g. (4) Due to the rapid and controllable features. (2) For the AC and DC cables with the same insulation thickness and cross section. 1. The interconnected AC systems can be operated with different nominal frequencies (50 and 60 Hz) respectively and the exchange power between interconnected AC systems can be controlled rapidly and accurately. DC cable lines only require one cable for monopolar link or two cables for bipolar link and AC cable lines need three cables. HVDC systems can be used to improve the performance of AC system.1.1 HVDC Transmission Advantages (1) A bipolar HVDC overhead line only requires two conductors with positive and negative polarities. the stability of frequency and voltage. Compared to a double circuit HVAC line with six conductor bundles. due to three-phase AC transmission. thereby providing simple tower structure. low DC-line investment and less power loss. Therefore. the power quality and reliability of interconnected AC systems.2. Since capacitive current does not exist. direct voltage maintains the same along the transmission line. for the same transmission capacity. For the DC/AC hybrid 10 . Since there is no the cable capacitance in a DC cable transmission. In this book. (3) HVDC links can be used to interconnect asynchronous AC systems and the shortcircuit current level for each AC system interconnected will not increase. the transmission capability for DC cable is considerably higher than that for AC cable. In comparison with one circuit HVAC overhead line.2 HVDC Transmission Characteristics The development of high rating power electronics strongly influences the development of HVDC technology. the capacitance of transmission line is never taken into account. e. thyristor valve (without turn off capability and with low frequency) is mainly discussed. HVDC transmission can save approximately 1/3 steel-core aluminium line and 1/3 –1/2 steel. (4) Without current zero-crossing point. the reactive power demand is approximately 60% of the power transmitted at full load. thereby improving the reliability of HVDC system. As the transmission distance increases. (3) In a conventional converter station. [19] Since reactive power must balance instantaneously. smoothing reactors.2 HVDC Transmission Disadvantages (1) In a converter station. AC filters. For the same rating. (5) For an HVDC system. there are converter valves. except for converter transformers and circuit breakers. thereby developing multiterminal HVDC systems very slowly. in order to improve the stability of commutation and dynamic voltage. above a certain distance. the bipolar link can be changed into the monopolar link automatically. With developing power semiconductors with high switching frequency. loss and operational cost. • Long Distance and Bulk Capacity Transmission For the same transmission capacity. an HVDC transmission offers more economic benefits than HVAC transmission.3 HVDC Transmission Applications HVDC schemes mainly serve the following purposes. For a bipolar link. 1. reactive power compensators must be installed in the converter station. thereby distorting current and voltage waveforms. earth can be used as the return path with lower resistance.transmission system. 11 . but also a source of harmonic currents and voltages. DC filters and reactive power compensators. If faults occur on one pole. DC circuit breakers are difficult to manufacture.2. DC circuit breaker can be innovated. the rapid and controllable features of HVDC system can also be used to dampen the power oscillations in AC systems. earth is normally used as a backup conductor. the investment for a converter station is several times higher than the investment for an AC substation. 1. so as to increase AC lines’ transmission capacity. (2) A converter acts as not only a load or a source. etc. 12 . it is very difficult to select appropriate the overhead-line routine. using HVDC underground/submarine cables is an attractive solution to deliver power from remote power station to urban load center. DC cables are also widely used across strait in the world. largecapacity power stations are not allowed to build in the vicinity of city.the transmission capacity for HVAC line is restricted by stability limitation. owing to high population and loaddensity. thereby necessarily increasing additional investment for short-circuit limitation. but it will give rise to the problems in the super system. • DC Cable Transmission For DC cable. • Power System Interconnection In order to optimize the resource utilization. the transmission capacity is not restricted by transmission distance. voltage support. Due to environmental issue. Moreover. For example. several AC systems intend to be interconnected with the development of power industry. Except for the purpose of long-distance and bulk-capacity. thereby exceeding the capacity of the existing circuit breakers. without capacitance current. AC systems can also be interconnected by HVDC transmission and thereby it not only obtains the interconnection benefits but also avoids the serious consequences. the interconnection for AC systems always increases the short-circuit levels. Therefore. A 12-pulse converter unit can employ the converter transformer of either two-winding or three-winding.1 The main circuit diagram for one pole of a converter station [21] 1 surge arrester 2 converter transformer 3 air-core reactor 4 thyristor valve 5 smoothing reactor 6 voltage measuring divider 7 DC filter 8 current measuring transducer 9 DC line 10 electrode line For a 12-pulse converter. Figure 2. which primarily contains converter valve. the transformer valve-side windings must be 13 . thereby greatly simplifying the number of filters. the components are shown in Figure 2. [21] In order to provide the 30º phase-shift for 12-pulse operation. Usually most HVDC schemes employ the 12-pulse converter as the basic converter unit.1. Basic converter units can be classified into 6-pulse converter unit and 12-pulse converter unit. AC/DC filters can be configured in accordance with the requirements of 12-pulse converter. DC filter and so on.Chapter 2 Converter Station 2. AC filter. [20] In order to form a 12-pulse converter unit.1 Station Design A converter station consists of basic converter unit. converter transformer. two 6-pulse converter units are connected in series on the DC side and in parallel on the AC side. smoothing reactor. reducing land use and lowering the cost. Figure 2. pole I 5-service building with control room 6-valve hall.2 indicates the relative space of the various components for a bipolar converter station. such as isolators and circuit breakers. Figure 2.2 The station layout for a bipolar HVDC station [22] 1-DC and electrode lines 2-DC switchyard 3-DC smoothing reactors 4-valve hall. a smoothing reactor is located on the DC side. are used for the changeover from monopole metallic return to bipolar operation. such as voltage divider and current transducer. In order to limit any steep-front surges entering the station. pole II 7-converter transformers 8-AC harmonic filters 9-high-pass filter 10-eleventh harmonic filter 11-thirteenth harmonic filter 12-shunt capacitors 13-AC switchyard 14 .connected in star-star and star-delta respectively. The switching components. [22] The areas of shunt capacitor banks and AC filter banks are the major proportion of the entire area and the valve hall and control room only take a small fraction of the total station area. The measuring equipments. can provide the accuracy input signals for the control and protection systems. and the container-type control and auxiliary integration systems.Figure 2. shown in Figure 2. voltage dividers. the auxiliary components. Furthermore.3 shows a modern compact converter station. such as outdoor valves. which are air insulated. excessive rate-of-rise of voltage and rate-ofrise of inrush current. gasinsulated bus systems.2 Converter Valve Until today. most HVDC schemes have applied thyristor valves. [27] Owing to the limited voltage rating for each thyristor. using new equipments. the use of a gasinsulated bus can avoid pollution deposits on exposed portions of a converter station. [24] [25] Figure 2. damping circuits and valve firing electronics are necessarily installed together with local thyristor to constitute a valve module. [23] In order to reduce the size of a converter station significantly. play an important role. such as saturable reactor.4. many of them must be connected in series to constitute a 15 . active AC and DC filters. [26] In order to protect thyristor from overvoltage.3 The compact station layout for a bipolar HVDC station [23] ACF-AC filter DCF-DC filter VH-valve hall VY-valve yard SH-shunt capacitor SR-smoothing reactor CC-control and auxiliary modules T-transformer 2. water cooled and suspended indoors. the valve building cost can be reduced considerably and all control systems can be tested in the factory. so as to detect the faulty thyristor immediately and locate the position of the defective thyristor exactly. are transmitted by fibre optics. Therefore.4 The components of thyristor valve module [27] A valve using electrically-triggered thyristor (ETT) requires electronic thyristor control unit (TCU) to generate trigger pulses for protection and monitoring. Figure 7. All signals. 16 . the valve cooling system must have sufficient cooling capacity and relatively high reliability to prevent leakage and corrosion. Figure 2. [28] A valve using light-triggered thyristor (LTT) has been developed to eliminate the electronic circuits for converting the light signals into electrical pulses. The converter valves are normally installed inside a valve hall and arranged as three structures suspended from the ceiling of the valve hall. thereby eliminating the protecting circuit. Powerful light sources at ground level are installed to generate light signals via optical fibres and the light-trggered thyristor is self-protected against overvoltage. [29] A cooling system is very important to ensure the availability and reliability of a converter valve.9 show the location and basic functions of the Cross –Channel converter valve. Microcomputers are used in the control room to process the information from the valve and the feedback signals are used to monitor the state of each individual thyristor. such as the firing signals and the feedback signals.converter valve. thereby greatly reducing the delivery time and lowering the maintenance cost. a number of disadvantages are the large costly valve buildings. there is no need to build the valve hall and thus the civil content (cost and time) of valve hall is greatly reduced. the complex interface to the electrical equipment. 17 . for the outdoor valve. In order to overcome those disadvantages of indoor valve design.5 Location and basic functions of the Cross-Channel valve electronic systems [28] For indoor valves. the outdoor valve design can be an effective alternative. In addition.Figure 2. An outdoor valve is completely assembled in a modular container and fully tested in the factory. the risk of a valve-building fire and the risk of flashovers across large wall bushings. if the short-circuit occurs on the valve arm or DC busbar. the standard configurations of converter transformer banks can be: six single-phase two-winding transformers. Because the operation of converter transformer is closely related to the nonlinearity caused by converter commutation. the converter transformer is of different characteristics.6 The types of converter transformer [31] 18 . test. the impedance of converter transformer can restrict the fault current. compared with ordinary AC transformer. for a 12-pulse converter. Usually. the converter transformer is one of most important components. such as the short-circuit impedance. A converter transformer provides 30º phase shift between two 6-pulse converters to obtain the configuration of 12-pulse converter. three single-phase three-winding transformers. [30] A converter transformer employs single-phase arrangement or three-phase arrangement. in order to protect converter valve. modern HVDC systems employ the configuration of one 12-pulse converter for each pole. insulation and on-load tap changing. two three-phase two-winding transformers and one three-phase threewinding transformer. Figure 2.2. harmonics. DC-magnetisation.3 Converter Transformer A converter transformer is placed on the core location to link the AC network with the valve bridge. Therefore. Owing to expensive component cost and complicated manufacture technology. thereby avoiding the damage to the converter valve due to overvoltage stress. the three-phase transformer can be selected. the main considerations 19 . For the converter transformer with relatively large capacity and high voltage. in order to reduce material consumption. especially no-load loss. In order to select the suitable reactor inductance. bushing and on-load tap changer.4 Smoothing Reactor Smoothing reactor can prevent steep impulse waves caused by DC lines or DC switching yard entering the valve hall. oil tank. transport conditions and the layout of converter station. land use and loss. For the converter transformer with medium capacity and voltage. Figure 2. Excessive inductance likely results in overvoltages during operations. thereby lowering the response speed.7 One large single-phase three-winding converter transformer with its valve side bushings mounted for entering the valve hall [30] 2. especially without transport limitations. the singlephase three-winding transformer is of less core. the single-phase transformer groups can be selected. the configuration of converter transformer depends on transformer ratings. compared with single-phase two-winding transformer.In accordance with the voltage requirement. the air/dry-type smoothing reactor has the following advantages. Compared to the oiltype smoothing reactor. Without oil-insulated systems. the phenomena of magnetism saturation cannot occur under fault conditions. 100Hz. Figure 2. to prevent the intermittent current at low-load condition. Without limitations of critical electric-field strength. to arrange the parameters of DC filters with the reactor inductance. A dry/air-type smoothing reactor is installed on the high-voltage side. to smooth the ripples of direct current.8 Air-core smoothing reactor in the Kontek HVDC transmission [32] 20 . For air/dry-type smoothing reactors. Smoothing reactors can be classified into dir/air-type and oil-type. thereby improving the reliability of insulation. air/dry-type smoothing reactors require relatively lower impulse insulation level. thereby always maintaining the same inductance. Without iron-core constructions. the support insulator of air/dry-type smoothing reactor is very similar to that of other busbars. Since the capacitance between air/dry-type smoothing reactor and ground is much smaller than the capacitance between oil-type smoothing reactor and ground. the air/dry-type smoothing reactor cannot cause fire hazard and environmental effects. Only porcelain support insulators have to be taken into considerations. to prevent the low-frequency resonance at 50Hz.are: to limit the rate of rise of the fault current. reversal of voltage polarity only produces the stresses on the support insulators. In contrast to air/dry-type smoothing reactor. Figure 2. The oil-type smoothing reactor is installed on the ground. thus providing the excellent anti-seismic performance. The oil-paper insulation system is very feasible and reliable. the oil-type smoothing reactor has the following advantages. With iron-core constructions. the oil-type smoothing reactor likely increases the reactor inductance.9 Oil-insulated smoothing reactor in the Rihand –Dehli HVDC transmission [32] 21 . generators can provide part of reactive power. Besides the active power. save the investment of the reactive-power compensators (capacitor and reactor).Chapter 3 Reactive Power Management For line commutated converters. Therefore. i. an HVDC system need to absorb capacitive reactive power from AC systems. P is the active power on the DC side of converters. in order to reduce the number of equipments supplying inductive reactivepower. when an HVDC system operates at high load. generators can absorb part of overcompensation reactive power. (3 –1) Q is the reactive power consumed by converters. in order to reduce the number of equipments providing capacitive reactive-power. when the conversion power is much less than the rated power. The reactive power is expressed in terms of the active power. Q = P tan Where. when an HVDC system operates at low load. the converter is always the reactive-power load to AC system. no matter at the rectification or inversion state. when the conversion power is close to the rated power. 3. 22 . the reactive power consumed by converters is also related to some operating parameters very sensitively.e. is the phase difference between the fundamental-frequency voltage and current components. Fully utilizing the reactive-power capability of AC system to balance the reactive power can reduce the reactive-power compensation capacity provided by the converter station. all possible control modes are employed to minimize the reactive power consumed by converters. In the normal operation. AC filters must be added to eliminate harmonics and converters are used to absorb surplus reactive power.1 Reactive Power Balance For a converter station located close to a power station or power station group. such as firing angle and extinction angle. appropriately using generators is always more economical and effective to handle most reactive power demands and maintain AC voltage within an acceptable range. thereby resulting in the reactive-power impact on the AC system and causing AC-voltage fluctuation. a converter station must install reactive-power compensators. adding the minimum number of AC-filters will lead to surplus reactive power and thus it is very difficult to regulate AC voltage. 3. in order to avoid the reduction of ACbusbar voltage. If the HVDC system operates at high load. Furthermore. in order to provide reactive power and satisfy filtering requirements. the supplied reactive-power must match the reactive-power consumed by converters. If the HVDC system operates at low load. the local generators must be regulated immediately to supply reactive power. In order to minimize AC-voltage variations. the converter station is required to compensate part of reactive power. For weak AC systems. the converter station is required to absorb part of reactive power. due to surplus reactivepower and AC-voltage rise. it is necessary to install static var compensators or synchronous compensators. [35] When the HVDC system deblocks. Under the high-load mode.2 Voltage Stability AC voltage depends on the active-power and reactive-power characteristics of the converter. Therefore. Under the low-load mode. due to insufficient reactive power in the sending-end converter station. the AC-busbar voltage of converter station is required to maintain basically constant.reduce the load-rejection overvoltage level at the instant of HVDC system sudden interruption. [33] [34] For a converter station located in a load center. due to inadequate reactive-power compensation and AC-voltage drop. it is necessary to block the HVDC system. under the most severe condition. If generators are close to the sending-end of HVDC system. [36] 23 . if the minimum number of AC filters are suddenly added. the reactive power consumed by converters will be much less than the reactive power supplied by AC filters. and synchronous compensators are of the slow response characteristic. [37] 1. using synchronous compensators can increase the shortcircuit ratio. If the short-circuit ratio is greater than 3. can be employed. AC filters and capacitor banks are usually employed. synchronous compensators are required to install in the receiving-terminal converter station. In addition. especially from a remote power station to a high-density load centre. AC Filter and Capacitor Bank If the connected AC system is not very weak. AC filters can also provide fundamentalfrequency reactive power. the ratio of the ACsystem short-circuit capacity to DC-link power. Static Var Compensator In order to regulate reactive power smoothly and quickly. the reactive-power compensators can be primarily classified into the following categories. Synchronous Compensator For a very weak AC network relative to the capacity of HVDC system. i. thereby enhancing the control stability for HVDC system and increasing the speed of response. However.3. 3. such as AC self-saturated reactors. only using capacitor banks for reactive power compensation can provide much better economic solution rather than improving the capacity of AC filter. thyristor-controlled reactors and thyristor-switched capacitors. Besides harmonics elimination. which is generally expressed by the short-circuit ratio. 24 . Selecting suitable reactive-power compensatiors mainly depends on the AC-DC system strength. thereby causing a certain problem especially in the lack of local generation.e. In order to meet reactive power demands. static var compensators. 2. if the receiving-end AC system is weak.3 Reactive Power Compensators In the converter station. using static var compensators can also improve dynamic AC-voltage stability. thereby reducing the sensitivity to transients. capacitors and reactors are only considered. voltage stability must be calculated and the reactive-power compensatiors with voltage control capability can be considered. when using conventional conversion technology. installing synchronous compensators is the most effective method. if the short-circuit ratio is less than 2. [38] 25 . if the short-circuit ratio is between 2 and 3. phase-impedances are not identical. Voltage Distortion Because the system harmonic impedance is small. two converters are of different commutating voltages. The non-ideal factors are: the ripples exist in direct current. converter firing pulses are of equally-spaced. the operating circumstances are always not ideal. converter firing pulses are not of equally-spaced. [39] 2. due to different converter-transformer ratios. the current flowing through DC-circuit is ideal direct current.1 AC Harmonics Line commutated converters discussed in this book generate characteristic harmonics. two converter-transformers are of the same impedances or ratios.Chapter 4 AC Filter Design 4. 1. Therefore. [40] 4. for a converter transformer. two converters are of different firingangles. the flow of harmonic current cannot cause the serious problem.2 Design Criteria 1. noncharacteristic harmonics (including cross-modulation harmonics) on the AC side. AC fundamentalfrequency voltages are asymmetrical with negative-sequence voltage. the harmonics exist in AC voltage. Characteristic Harmonics Characteristic harmonics are based on the following ideal conversion circumstances: ACbusbar voltage is of the constant-frequency ideal sinusoidal waveform. two converter-transformers are of different impedances. the reduction of harmonic voltage to an acceptable 26 . for a converter transformer. phase-impedances or ratios are the same. Non-Characteristic Harmonics In practice. Therefore. concerning with filter design. Active AC filters and continuously tuned AC filters were rarely installed in HVDC schemes. Telephone Interference Factor (TIF) An early telephone system was based on open-wire communications disturbed by powerlines likely. the telephone interference factor must be taken into account to approximately assess the effect of the distorted voltage or current waveform of a power line on telephone noise.3 Passive AC Filters For instance. only passive AC filters are discussed in this book. A 27 . In general. Kf = 5000(f/1000) = 5f. the voltage distortion caused by individual harmonics (Vn) and the total harmonic voltage distortion are specified factors. The TIF is defined as 1∞ TIF = ∑ (K f Pf V f )2 V f =0 1/ 2 (4 –2) ∞ V = ∑V 2f f =0 1/ 2 (4 –3) Where.level at the converter station is a more effective criterion for filter design.m. Pf = C-message weighting. The total harmonic voltage distortion is defined as VTD = ∑V n =2 ∞ 2 n (4 –1) 2. voltage of frequency f on the power line. 4. [41] Therefore. Vf = r.s. most HVDC schemes use conventional passive AC filters with successful experience. thereby providing very low impedance under the harmonic frequency. i. but also to supply part of fundamental-frequency reactive-power. high-pass filters (relatively low impedance over a wide range of frequency) and multi-tuned high-pass filters (tuned filters combined with high-pass filters). the 5th. at most three frequencies). the single-tuned filters are not installed any longer in new HVDC schemes. Single-Tuned Filter The circuit and impedance-frequency characteristic of single-tuned filter are shown in Figure 4. In accordance with the frequency-impedance characteristics. 7th.3. and tuning frequency. AC filters are used not only to eliminate harmonic currents. 11th and 13th. Figure 4.passive filter is parallel with the connected AC system and also regarded as bypass path for harmonics. conventional passive filters are of tuned filters (normally tuned for one or two frequencies.1. 28 . Because 12-pulse converters have been used widely.1 The circuit and impedance-frequency characteristic of single-tuned filter [42] The main conditions to determine filter parameters are fundamental-frequency reactivepower capacity per single filter under rated voltage.e. A single-tuned filter is more sensitive to the frequency deviation and normally tuned for the characteristic harmonics.1 Tuned Filters 1. 4. The double-tuned filter is the most popular filter in modern HVDC transmission schemes.2. 29 . Figure 4. Triple-Tuned Filter The circuit and impedance-frequency characteristic of triple-tuned filter are shown in Figure 4.2. A double-tuned filter can cancel double characteristic harmonics and produce much lower loss than two single-tuned filters together.2 The circuit and impedance-frequency characteristic of double-tuned filter [42] The main conditions to determine filter parameters are fundamental-frequency reactivepower capacity per single filter under rated voltage. Double-Tuned Filter The circuit and impedance-frequency characteristic of double-tuned filter are shown in Figure 4. [43] 3. double tuning frequencies and parallelcircuit tuning frequency.3. 4. Figure 4. the number of high-voltage circuit breakers and capacitors are less than the double-tuned filter. For the triple-tuned filter.Figure 4. The most outstanding advantage of triple-tuned filter is the convenient reactive-power balance characteristic at low load. [44] 4.3 The circuit and impedance-frequency characteristic of triple-tuned filter [42] A triple-tuned filter can eliminate three harmonics.4 The circuit and impedance-frequency characteristic of second-order high-pass damped filter [42] 30 .3.2 Damped Filters 1. thereby substantially reduce the land use. Second-Order High-Pass Damped Filter The circuit and impedance-frequency characteristic of second-order high-pass damped filter are shown in Figure 4. 5 The circuit and impedance-frequency characteristic of third-order high-pass damped filter [42] Except for selecting suitable damped resistance. The fundamental-frequency loss of third-order high-pass damped filter is lower than that of second-order high-pass damped filter. Figure 4. Type-C Damped Filter The circuit and impedance-frequency characteristic of type-C damped filter are shown in Figure 4. the tuning frequency of parallel circuit must be also selected to determine the component parameters.5. the component parameters of second-order high-pass damped filter are similar to those of single-tuned filter.Except for selecting suitable damped resistance. 3. 2. The second-order highpass damped filter was used frequently in early HVDC schemes. 31 . Third-Order High-Pass Damped Filter The circuit and impedance-frequency characteristic of third-order high-pass damped filter are shown in Figure 4.6. 32 .6 The circuit and impedance-frequency characteristic of type-C damped filter [42] A type-C damped filter was developed from the third-order high-pass damped filter. [45] 4.Figure 4. the type-C damped filter is widely used for low-order harmonics. resonance condition. The main factors to determine component parameters are fundamental-frequency reactive power. the performance of double-tuned high-pass damped filter has no obvious merits. resonance frequency and damped requirement. For instance. In compared with the above damped filters. Double-Tuned High-Pass Damped Filter A double-tuned high-pass damped filter is used to eliminate harmonics over a wide range of frequency. According to Fourier analysis. the parameters of the converter itself are three-phase absolutely symmetry. 24th. when built U. direct voltage contains harmonics of order 12n (i. the amplitudes of harmonic voltages are the same and equal to 1/4 of the values of the 12-pulse model’ s harmonic voltages.). 36th.S. for various harmonic-voltage sources.e. Under above ideal conditions.1 was used to express the 3-pulse model of DC-side harmonics for a 12-pulse converter. 12th. 33 . the equivalent circuit shown in Figure 5. the current flowing through the converter is ripple-free direct current. the control system of the converter produces perfectly equally-spaced converter firing pulses. the DC-side harmonics exceeded the normal standards seriously. 1. Characteristic Harmonics Characteristic harmonics are based on the following ideal conversion circumstances: the AC-busbar voltages of the converter are purely three-phase symmetrical sinusoidal waves. direct voltage contains harmonics of order 6n (i. [47] In the 3pulse model of DC-side harmonics.1 DC Harmonics Line commutated converters generate characteristic and non-characteristic harmonics on the DC side. etc.e. Therefore.. etc. while the stray capacitance of the converter to ground has the important effect on the distribution of DCside harmonic currents. 12th. In the early 1990s. 6th.Chapter 5 DC Filter Design 5..) and for a 12-pulse bridge converter. 18th. [46] The capacitance of the DC neutral point to ground has the important effect on the 18th harmonic. converters generate direct voltage on the DC side. owing to DC earth electrode lines and DC lines erected on the same tower.A Intermountain Power Project HVDC transmission. for a 6-pulse bridge converter. (2). the transformer leakage reactances are unbalanced in the three phases. Z1 –Z4 –1/4 12-pulse converter internal impedance. which produce the DC-side non-characteristic harmonics. For a converter transformer. unequal commutation reactances also cause non-characteristic voltages on the DC side. the AC busbar voltages contain the harmonic voltages and generate the non-characteristic harmonic voltages on the DC side. C2 –the stray capacitance of the converter transformer to ground 2. AC system voltages are never perfectly balanced and undistorted. C1.1 The novel 3-pulse model of DC-side harmonics for a 12-pulse converter [47] U1 –U4 –3-pulse harmonic voltage source. can be classified into the following categories. the transformer turn ratios and transformer reactances are not identical for the star-star connected converter transformer and the star-delta connected converter transformer. 34 .Figure 5. Non-Characteristic Harmonics The factors. (1). and the system impedances are not exactly equal in the three phases. For a 12-pulse converter. Therefore. As a result. due to the disturbances at both ends of the link. Therefore the DC filters are mainly designed to overcome the interference on the open-wire communications. [50] Ieq(x) = [Ie(x)2S + Ie(x)2R]1/2 Where. In order to assess the interference level. In accordance with practical situations. the overhead-line HVDC systems usually install the DC filters. the induced noise voltage was no longer used. [48] There is no unified DC-side harmonic indexes defined by the international conferences. the unequal operating parameters of two-pole converters must be calculated. so-called the equivalent disturbing current.(3). the comprehensive interference effect produced by all the harmonic currents of the DC lines can be expressed by the singlefrequency (800 Hz) harmonic current. Moreover. and the DC-side harmonic standards must be evaluated in the HVDC system respectively. the harmonic voltage and current profiles along the HVDC line. and the equivalent disturbing current was widely employed to design the DC filters. Close to parallel or cross communication lines. in the planning stage of the HVDC transmission system. equivalent disturbing current (EDC) and DC-line harmonic current limit. The DC-side harmonic indexes (DC filter performance) contain induced noise voltage (INV). 5. [49] Until 1970s. especially electromagnetic induction from harmonic currents. The amplitudes and phasors of harmonics must be fully considered respectively. (5 –1) 35 . the profits from each end must add their effects necessarily. but the back-to-back and full-cable HVDC systems are not required to install the DC filters.2 Design Criteria In order to reduce the harmonic hazard. must be carried out comprehensively. Two harmonic weighting factors are in common use: The psophometric weighting by the CCITT [51]. extensively used in Europe.2 shows that the difference between these two harmonic weighting curves is very slight and that the human ear has a sensitivity to audio-frequencies that peaks at about 1 kHz. The standard harmonic weighting curves are used to take into account the sensitivity of the human ear to the harmonic frequencies. [53] Figure 5. Ie(x)S is the magnitude of the equivalent disturbing current component due to harmonic voltage sources at the sending end.Ieq(x) is the 800 Hz equivalent disturbing current at any point along the transmission corridor. which is caused by harmonic voltages. The C-message weighting by Bell Telephone Systems (BTS) and Edison Electric Institute (EEI). used in the USA and Canada.2 C-message (real-line) and psophometric weighting (dash-line) factors [53] 36 . I e(x)R is the magnitude of the equivalent disturbing current component due to harmonic voltage sources at the receiving end. highly depends on the harmonic weights. [52] Figure 5. x denotes the relative location along the transmission corridors. The equivalent disturbing current. such as single-tuned. 3rd. [55] It is feasible to install active filters on both AC and DC side. but the capacity of super-capacitor is limited owing to the price. Therefore. so as to avoid telephone interference. [54] Usually a DC filter is connected between the pole busbar and the neutral busbar. active DC filters are only used to reduce harmonic currents that enter into the DC line. On the DC side.).3 The cost of DC filter versus the equivalent disturbing current [56] 37 . In the Three Gorges-Changzhou HVDC project using 12-pulse converter. 9th. thereby providing low-impedance path for harmonic currents of order 3n (i. In 1991 the world’ first s active DC filter was commissioned in the Konti-Skan HVDC link. A capacitor is installed between the neutral busbar and ground.5.e. 6th.3 Active DC Filter Passive DC filters had been employed in most HVDC schemes. doubletuned and triple-tuned circuits with or without high-pass characteristic. active AC filters are mainly used for harmonic elimination and reactive power compensation can be solved by other alternatives. Figure 5. passive double-tuned (12/24 and 12/36) DC filters are finally installed. The structure of passive DC filter is similar to that of AC filter.. etc. without the reactive power demand. Active AC filters also provide reactive power. Thereby the DC-side harmonic current on the DC-line is cancelled.4 The topology of an active filter [54] 38 . [56] Based on present passive filter.6. since active filter can eliminate all harmonics within the whole range of frequency variation.5. the cost of active filter remains constant. and a control system injects the harmonic currents into the DC line with the same magnitude but opposite phase as the original harmonic currents in the DCline. and they are connected either in series or in parallel. A current transducer can measure the harmonic currents in the DC-line. an active filter is composed of passive part and active part. Figure 5. [57] [58] The main components of the active filter are shown in Figure 5. however.According to different equivalent disturbing currents. the cost of active DC filters compared to passive DC filters is shown in Figure 5. The cost of passive filter can increase dramatically when the equivalent disturbing current reduces. the hot standby channel is automatically switched to the active status and automatic switching actions should not cause the obvious disturbances to the transfer power. monitoring a variety of operating parameters for converter stations and DC lines. and supervising the information of the control system itself.Chapter 6 Control System In a two-terminal (point-to-point) HVDC transmission system. 3. 5. ThreeGorge –Guangdong HVDC systems and Russia 39 . 6.1 Multiple Configurations In order to meet the indexes of availability and reliability required by HVDC systems. controlling the capacity and direction of transfer power. in order to provide the following fundamental control functions. Gezhouba –Nanqiao. For example. 2. 6. so as to satisfy the operational demands for the entire AC/DC hybrid systems. protecting the equipments of the converter station. In some cases. the capacity and direction of power flow can be controlled rapidly. the control system is only designed for the two-terminal HVDC transmission system. In this book. the triple-channel design is employed. one channel is active and the other channel is on the hot standby status. When faults occur in the active channel. all the control systems employ the design of multiple configurations. 1. 4. controlling the abnormal operations of converters and the disturbances of AC systems interconnected. enhancing the interface to the equipments of the AC substation and improving the link with operators. controlling the starting and stopping sequences of an HVDC transmission system. when faults occur. usually using the doublechannel design. the equipments’ investment and components’ fault will increase correspondingly. In general. [59] Figure 6. all the control components are divided into bipole function (highest level).1 shows the control-level system of a bipolar HVDC system. 40 .–Finland back-to-back HVDC system all employ the triple-channel design. bipole (master) control hierarchy. the control system of a modern HVDC scheme is generally divided into four-level hierarchies from top to down. the double-channel design is a rather better selection.e. pole control hierarchy and converter (bridge) control hierarchy. all the control functions must be put into the utmost low level and especially the number of the control components concerning with bipole function must minimize.2 Control System Levels A complicated control system using different levels can improve the reliability and flexibility of system operation and maintenance. system (overall) control hierarchy. pole function and valve group function (lowest level) respectively. As the channel number increases. For reasons of reliability. i. in order to minimize the influence and hazard extent caused by control faults. According to the level concept. 6. In order to reduce the faults’influence scope. in order to simplify structures. the system control and bipole control are set in the master control station. constant current control. maximum and minimum direct-voltage limit control. the system control and bipole control usually group together as one hierarchy. The main s control functions of the converter control hierarchy are: converter firing-phase control. Converter Control Hierarchy A converter control hierarchy is used to control the converter’ firing phase. maximum and minimum firing-angle limit control. converter unit blocking and deblocking sequence control. for only one bipolar line. maximum and minimum direct-current limit control. constant extinction-angle control.1 The block diagram of level structure for the control system [59] For only one converter unit in each pole. 1. direct voltage control. 41 . Among all converter stations. in order to send out control commands via communication systems and coordinate the operation of the whole system. only one is regarded as the master control station and others as slave control stations. the pole control and converter control can group together as one control hierarchy.Figure 6. Pole Control Hierarchy A pole control hierarchy is used to control one pole. The main functions of bipole control hierarchy are: (1) bipolar power orders are determined by the power command. the other pole must operate independently and complete the main control functions. (4) fault process controls. (5) remote controls and communications between converter stations for the same pole. such as phase-shift stopping. The main functions of the pole control hierarchy are: (1) in order to control direct current. if one pole is isolated due to a fault. (3) the starting and stopping controls for one pole. which is ordered by the system control hierarchy. Bipole Control Hierarchy A bipole control hierarchy is used to control two poles simultaneously for a bipolar HVDC transmission system. 42 . (2) the direction control for power transfer. the pole control hierarchy provides the current orders to the converter control hierarchy and the master control station transfers the current orders to the slave control station through communication systems. automatic restarting and voltagedependent current limit. one pole control hierarchy is completely independent from the other and each pole control hierarchy must configure the utmost control functions. 3. (2) in order to control DC power. and power orders are defined by the bipole control hierarchy.2. direct current orders are determined by power orders and actual direct voltages. in order to coordinate and control the bipolar operations via the commands. Therefore. For a bipolar HVDC transmission system. (4) power flow reversal control. (5) current and power modulation control. The main functions of system control hierarchy are: (1) a system control hierarchy communicates with power system dispatch center. in order to accept the control commands from the dispatch center and to transfer the corresponding operating information to the communication center.(3) the current balance control for bipolar link. An ideal control system for a converter must meet perfectly symmetrical and sinusoidal waveforms with the firing angles occurring at exactly equal intervals and in the appropriate cyclic sequence. 6. System Control Level A system control hierarchy is the highest-hierarchy control level in the HVDC transmission system. (4) AC-busbar voltage control and reactive power control of the converter station. (2) in according with the transfer-power command from dispatch center. 4. AC system frequency or power/frequency control. Two basic types of control methods have been used for the generation of converter firing pulses: 43 . the system control hierarchy distributes the transfer power among all the DC lines.3 Converter Firing Phase Control Converter firing phase control is used to change the firing phase of the converter valve. [60] Deviations from such ideal conditions give rise to two basically different control methods. damp control for damping AC system oscillations. (3) emergence power support control. 2 Equidistant Pulse Control An equidistant pulse control method ensures the equal phase-intervals between the cyclic firing pulses as a target. these pulses are in turn transferred to the corresponding valve’ firing-pulse s generator to trigger the valve. In addition. Individual phase control (IPC) 2. The low-order non-characteristics harmonic currents flowing into the AC systems will further cause AC-voltage distortion and zero crossing spacing.1 Individual Phase Control An individual phase control method was widely used in the early HVDC converter. The characteristics of individual phase control are: the firing-phase control circuit is installed individually and respectively for each converter valve. [61] [62] 6.3. The unequal phase-intervals of the firing pulses will give rise to the noncharacteristics harmonic currents and voltages on the AC and DC sides respectively. Equidistant pulse control (EPC) 6. the firing pulses are generated individually for each converter valve and determined by the zero crossing of commutation voltage in order to determine the firing-instant phase and maintain the identical firing angle for each valve. Each converter solely installs one phase-control circuitry which generates a series of equal-interval firing-pulse signals. the phase intervals of the cyclic firing pulses are not equal. and although the firing angles for all the valves are equal.3. In accordance with a specific sequence. thereby causing more unequal firing-pulse intervals and producing even more considerable non-characteristic harmonics.1. 44 . In general the three-phase voltage-waveforms are more or less asymmetrical. the unequal firing-pulse intervals will produce DC bias magnetisation on the converter transformers and thereby increase the transformer’ losses and noise. The s harmonic instability is the main disadvantage for individual phase control. termed current margin. even with the unequal firing angles. In order to avoid regulation instability caused by two-terminal current regulators working simultaneously.2.2 Basic control characteristics schematic diagram [65] The basic control mode of two-terminal HVDC system is simply shown in Figure 6. [63] [64] 6. According to 45 . the equidistant pulse control method can effectively suppress the non-characteristic harmonics in order to avoid the harmonic instability. the setting of the inverter-side current regulator is lower than that of the rectifier-side current regulator.2). the fundamental control principle – current margin mode. the equidistant pulse control ensures the identical firing angles for all the valves. [65] Figure 6.If the three-phase voltage-waveforms are symmetrical. The rectifier-side characteristic consists of two segments: constant direct current and constant minimum firing angle. the fully microprocessor control has been employed widely in the world. The inverter-side characteristic consists of two segments: constant direct current and constant extinction angle or constant direct voltage (dash-line shown in Figure 6. However.4 Converter Control Mode As electronics technologies have developed rapidly in recent years. If the three-phase voltage-waveforms are asymmetrical. was used as an effective control method since Gotland HVDC scheme in 1954. the current margin must be maintained no matter under the steady-state operation or under the transient situation. 6. (2). so as to energize the converter transformers and converter valves. the current margin is set at 10% of the rated direct current.5 Control System Functions 1. Starting/Stopping Control In order to reduce overvoltage and overcurrent. and the normal operating condition is represented by the intersection point N as shown in Figure 6.2. [66] • Normal Starting Main Sequences (1). 46 . (3). adding the appropriate AC-filter branches respectively in the two-terminal converter stations. and the operating condition is represented by the intersection point M as shown in Figure 6. DC-side switches are operated respectively in the two-terminal converter stations.the current margin control principle. the rectifier automatically shifts to the constant minimum firing angle control mode and the inverter automatically shifts to the direct current control mode. the converter-transformer network-side’ circuit-breakers are closed respectively in the s two-terminal converter stations. when considerably reducing the rectifier-side AC-voltage or substantially increasing the inverter-side AC-voltage. and to decrease the impact on two-terminal AC systems. usually the rectifier is operated at the constant direct current and the inverter is operated at the constant extinction angle or the constant direct voltage. the normal starting and stopping procedures for an HVDC transmission system must be executed strictly by following a prescribed series of steps and sequences.2. Under normal operation. so as to connect DC circuits. In most HVDC systems. when increasing the direct voltage and direct current to the settings. and the remaining AC-filter banks are switched out on the inverter side. at the same time. As DC power decreases. when direct current reduces to zero. the starting procedure completes and the HVDC transmission system is on the normal operation. so as to satisfy the requirements of reactive-power balance. according to the variation of direct current.(4). The time of normal starting procedure generally depends on the capabilities of AC-systems at both ends. the rectifier firing pulses are blocked and the remaining AC-filter banks are switched out on the rectifier side. according to the direct-voltage variation. the inverter is deblocked initially. (3). when the firing angle is equal to 90º (or greater than 90º). • Normal Stopping Main Sequences (1). AC-filter banks are added group by group during the normal starting procedure. (7). (4). according to the direct-current variation. DC-side switches are operated respectively in the two-terminal converter stations. (5). (6). the rectifier-side current regulator gradually decreases direct current down to the allowable minimum value. and then the rectifier is deblocked. (2). the inverter-side direct-voltage regulator (or extinction-angle regulator) gradually increases direct voltage up to the operating setting (or extinction-angle setting). taking as short as several seconds or as long as several tends of minutes. the rectifier-side current regulator gradually increases direct current up to the operating setting. 47 . the inverter firing pulses are blocked. so as to disconnect between DC lines and converter stations. so as to satisfy the requirements of reactive-power compensation and harmonics elimination. AC-filter banks are switched out group by group. AC switches are operated respectively in the two-terminal converter stations. in order to enhance the AC-system dynamic performance. the current order may fluctuate considerably. Since 48 . Power-Flow Reversal Control Using the fast controllability of the HVDC transmission system can automatically reverse the direction of power flow during operations. due to the extreme variations of direct voltage. and thus the modulation functions are so called the supplementary controls of HVDC transmission system. so as to trip the circuit breakers on the converter transformer network sides. • Constant Current Control Usually the response of constant-current control-loop is faster than that of constant-power control-loop. 2. in order to enhance the system stability.(5). Modulation Functions The modulation functions are required to fully exploit the controllability of HVDC system. Constant-power control mode can fully exploit the fast response characteristics of direct current regulation loop. Power Control • Constant Power Control Constant power control is the primary control mode in HVDC schemes. and the time of power-flow reversal primarily depends on the requirements of two-terminal AC systems to DC-power change and the constraints of DC-system main circuits. constant current control is used during extreme disturbances. Usually. 3. Therefore. Moverover. [66] 4. the transfer power can be controlled by changing the current order of direct current regulator. The power-flow reversal only depends on the polarity change of direct voltage and is automatically executed by the prescribed sequences. under the transient state. frequency control. damping control. power modulation and so on. The modulation functions usually include power run-ups. the HVDC modulation has considerable advantages on grid s interconnection and power system stability. reactive power modulation. The modulation functions designed fully depend on the requirements of the connected AC systems. power run-backs.1976 the power modulator was installed in Pacific Intertie HVDC scheme to damp the parallel AC-line’ oscillations. [67] [68] [69] [70] 49 . Shortcircuit faults can occur at different locations of the converter station.1 Converter Faults Converter faults can be classified into main circuit fault and control system fault. Figure 7. and must be studied separately.1 The locations of short-circuit faults in one 12-pulse converter [71] 1. as shown in Figure 7. due to the valve internal or external insulation breakdown. and the fault location of valve short 50 .1. Converter Valve Short Circuit Fault • Rectifier Valve Short Circuit A valve short-circuit is the most serious fault.Chapter 7 Fault Analysis The characteristics of internal and external faults are rather different. 7. or the valve short-circuited. If the peak value of the reverse voltage leaps extremely or the water cooling system leaks considerably. the DC-busbar voltage and DC-side current of the converter bridge fall down. 51 .circuit is shown in Figure 7. the valve insulation may be damaged. the AC-side current increases significantly.1 (b). thereby causing the short circuit across valve.1 (a). The excessive high voltage or the rate of rise of voltage is likely to break the valve insulation. The characteristics of valve short circuit are: the two-phase short circuit and three-phase short circuit occur alternatively on the AC side. thereby causing the short circuit. direct current increases temporarily. Inverter Commutation Failure A commutation failure is the common fault of the inverter and is caused by the inverter valve short circuit. the fundamental frequency components penetrate into the DC system. the AC-system frequency spectrum. Converter DC-side Terminal Short Circuit A DC-side terminal short circuit is the short-circuit fault occurred between converter DCside terminals and the fault locations are shown in Figure 7. [72] [73] The characteristics of commutation failure are: the extinction angle is lower than the time of valve recovery block. A rectifier valve must withstand the reverse voltage during the non-conduction period. the direct voltage of 6-pulse inverter reduces to zero during a certain period. • Inverter Valve Short Circuit An inverter valve withstands the forward voltage during most of non-conduction period. [74] 3. and thereby the converter valve and converter transformer withstand considerable current more than the normal current. the open circuit occurs temporarily on the AC side and the current decreases. the current flows through the fault valve from the reverse direction and increases dramatically. 2. the inverter firing-pulse lost and the inverter-side AC-system faults. The characteristics of the rectifier DC-side terminal short circuit are: the two-phase short circuit and three-phase short circuit occur alternatively on the AC side. when each valve is fired.• Rectifier DC-side Terminal Short Circuit When the rectifier DC-side terminal short circuit occurs. • Inverter DC-side Terminal Short Circuit When the inverter DC-side terminal short circuit occurs. • Rectifier AC-side Phase-to-Phase Short Circuit A rectifier AC-side phase-to-phase short circuit can cause the two-phase short-circuit current on the AC side. the converter valve still maintains the unidirectional conduction characteristic. direct current and transmission power will reduce rapidly. the rectifier will loose the two-phase commutating voltage and direct voltage. Due to DC-side smoothing reactor. the DC-line current rises up. the short circuit causes the DC-line-side current to fall down. the momentary charge current still exists. but the short circuit cannot be cleared. the conduction-valve current and AC-side current increase dramatically. and the fault value is many times higher than the normal value.1 (c). the rate of rise of the fault current is quite slow and the short-circuit current is relatively small. In fact. Usually the fault current can be controlled via the current regulator. the current flowing through the inverter valve will reduce rapidly to zero and the short-circuit fault causes no harm to the inverter and converter transformer. the converter valve maintains the forward-direction conduction. Converter AC-side Phase-to-Phase Short Circuit A converter AC-side phase-to-phase short circuit directly leads to AC-system two-phase short circuit and the fault location is shown in Figure 7. Therefore. • Inverter AC-side Phase-to-Phase Short Circuit 52 . 4. When the inverter DC-side short circuit occurs. 7.An inverter AC-side phase-to-phase short circuit results in the two-phase commutating voltage lost and the abnormal phase on the inverter.1 (d and f). Control System Faults 53 . the DC neutral busbar is always one s part of the short-circuit loop. Therefore. the inverter commutation failure occurs and the DC-circuit current rises up and the AC-side current falls down. • Rectifier AC-side Phase-to-Ground Short Circuit For a 12-pulse rectifier. • Inverter AC-side Phase-to-Ground Short Circuit For a 12-pulse inverter. When the rectifier AC-side phase-to-ground short circuit occurs. If the inherent frequency of the DC circuit is close to the second-order harmonic frequency. it may lead to the DC-circuit resonance. 6. No matter single phase-to-ground short circuit occurs at the highvoltage or low-voltage terminal’ 6-pulse converter. the DC high-voltage terminal and the DC neutral terminal. The fault locations are shown in Figure 7. the fault 6-pulse inverter occurring commutation failure causes direct current to increase. 5. the second-order harmonic component will penetrate into the DC side. the fault of the converter AC-side phase-to-ground short circuit is similar to that of the valve short circuit. Converter AC-side Phase-to-Ground Short Circuit For a 6-pulse converter.1 (e). the two-phase short circuit causes the s AC-side current and DC-neutral-terminal current to increase. no matter single-phase-to-ground short circuit occurs at the highvoltage or low-voltage terminal’ 6-pulse converter. the fault location is shown in Figure 7. Converter DC-side to Ground Short Circuit Ground short-circuit faults at the DC side contain the ground short-circuit faults occurring at the middle point of 12-pulse converter. For a 12-pulse converter. If nonconduction faults occur on the rectifier side. AC-side Three-Phase Short-Circuit Faults When AC-system faults occur. if misconduction faults occur on the inverter side. direct voltage and direct current reduce. water-cooled or oil-cooled) must be installed. direct voltage reduces or commutation failure occurs. • Nonconduction Faults A valve nonconduction fault is caused by lost firing pulse or gate control-circuitry fault. Cooling system’ faults will cause the temperature of heat-exchange s agent to rise.A converter is controlled by the firing pulses to ensure the normal operation of HVDC system. direct voltage reduces and direct current rises. the amplitude and the phase shift of AC-system voltage. the slight increase of direct voltage causes direct current to increase slightly. if nonconduction faults occur on the inverter side. • Misconduction Faults If misconduction faults occur on the rectifier side. following the abnormal phenomena of flow and quality. cooling equipments (air-cooled. thereby increasing direct current. the operation of HVDC link is influenced by the speed of AC-voltage drop. Converter Auxiliary Components Faults In order to protect thyristors. Abnormal firing pulses can result in control system faults and thus lead to the malfunction of the converters. 8.2 AC-side Faults Due to AC-system faults. 7. 54 . the depressed voltage at the converter terminals will either reduce or eliminate the power transmitted. 1. • Rectifier-side AC Three-Phase Short-Circuit Faults For remote three-phase faults.2 Converter DC power following a three-phase fault at the inverter end [75] Pi = inverter power waveform Pr = rectifier power waveform 55 . DC-power recovers very quickly with the recovery of AC-system voltage.2 [75] Figure 7. the rectifier commutating voltage drops significantly. the rectifier commutating voltage drops slightly. Since there is no overvoltage and overcurrent generated on the DC components. thus producing large direct-current peaks. the DC-power transfer is illustrated in Figure 7. it is not necessary to stop the DC system. When a fault occurs sufficiently close to the inverter end or an AC system is relatively weak. After AC system faults are cleared. • Inverter-side AC Three-Phase Short-Circuit Faults A short circuit occurring at the inverter side causes the reduction of AC busbar voltage at the inverter station and commutation failures. Therefore. the amplitude of commutating voltage decreases dramatically and quickly. The rate and amplitude of depressed AC voltage is related to the weak or strong AC system and the remote or close-in three-phase fault at the inverter. For closein three-phase faults. thereby easily causing commutation failures. the converter is greatly influenced by close-in three-phase faults until the rectifier commutating voltage reduces to zero. For AC faults close to the inverter. AC-side Single-Phase Short-Circuit Faults Single-phase faults are AC-system common faults.3 The response to a staged AC fault [77] 56 . which contain components of positive sequence. double-frequency modulation will produce heavy oscillations. Because of the lack of symmetry. [77] Figure 7. Single-phase faults are unsymmetrical faults.2. [76] When the fault of single-phase to ground occurs near the Haywards converter station in the New Zealand system. negative sequence and zero sequence. double-frequency (second harmonic component) modulation is introduced on the DC side. usually ground flashover. Commutating voltage is highly influenced by the circuit mode of converter transformer. For a weak AC system. the response to a staged 60 ms AC fault is shown in Figure 7.3. inductor. but the reduction of power is smaller than that during three-phase faults. 5. Converter Transformer and Auxiliary Components Faults The internal faults of a converter transformer will cause the winding temperature. AC-Filter Faults Usually an AC-filter consists of capacitor. Different faults and switching operations will produce the inrush current and harmonic components during a certain time. 4. following normal sequence commutation. resistor and surge arrester. oil potential. the average value of the inverter direct voltage is lower than the normal value. oil flow. If ground faults occur on these components. fan and motor) cause the converter transformer to malfunction. Commutation failure cannot occur. direct current and voltage relatively decrease during faults period. oil temperature. the high-voltage terminal and ground terminal currents will appear the differential value and the current flowing through AC-filter will increase substantially. • Inverter-side AC Single-Phase Faults In order to ensure sufficient extinction angle. unbalanced commutating voltage can produce the second-order harmonic on the DC system. firing angle must be reduced immediately and commutation failure can recover normal commutation within several tends of millisecond. The reduced firing angle must be limited by the inverter minimum firing angle. The faults of auxiliary components (oil pump.• Rectifier-side AC Single-Phase Faults As single-phase faults occur on the rectifier-side AC system. 3. with considerable second harmonic component. Station Power System Faults 57 . AC-network switching operations may produce abnormal overvoltage on the converter and converter transformer. gas and pressure to change. Like three-phase faults. an adjacent AC system usually provides two or three power sources to a converter station. contamination or branch may reduce the DC-line insulation level. [79] Figure 7. switching station power systems may cause the operation of HVDC system to shut down. In accordance with the New Zealand link parameters.In order to prevent all converter stations from losing power sources simultaneously. When faults occur in the effective power source.3 DC-Line Fault Lightning strike. The faults of station power system lead to the corresponding voltage dip initially.4. [78] 7. and further produce the flashover of DC-line to ground. direct voltage falls and direct current rises on the rectifier side. the back-up power sources are automatically switched. and this characteristic can be utilized to design the fast switching control and protection. In order to avoid causing the circulating current. is illustrated in Figure 7. and direct voltage and direct current fall on the inverter side. using a digital model. a typical DCline fault.4a Direct current waveform during a DC-line fault [79] (i) Rectifier end (ii) Inverter end 58 . If the design of station power system is not suitable. only one power source works effectively and others are back-up power sources. As the short circuit of DC-line to ground occurs. usually caused by ground fault. may lead to DC-line sudden discharge phenomenon. thereby causing ground flashover.4b Direct voltage waveform during a DC-line fault [79] (iii) Rectifier-line end (iv) Inverter-line end 1. Lightning Strike For a bipolar HVDC link. trees. and discharge current causes direct current to rise momentarily.Figure 7. the changes of direct voltage and current will propagate from flashover points to the converter stations. Especially when ground flashover occurs at the DC lines. discharge phenomena will occur. [81] 3. snow). Ground Flashover Owing to environmental influences (contamination. High-Impedance Ground Fault 59 . These continued wave reflections produce high-frequency transient voltage and current on the lines. two poles cannot be stroke simultaneously by lightning at the same location. the insulation of the DC-line tower becomes badly. thereby producing inrush current. Lightning strike causes direct voltage to rise momentarily and then to fall. If DC-lines’insulations cannot withstand momentary increasing voltage. Sudden changing voltage. [80] 2. fog. Usually DC lines are stroke by lightning very shortly. [82] 4. 5. DC-Lines Broken When serious faults (DC-lines’ tower collapse) occur.When high-impedance ground short-circuit faults (trees touch DC lines) occur in the DC lines. thereby appearing fundamental frequency AC-components in DC-lines’ current. 60 . the DC lines may break simultaneously. and two direct-currents at both terminals will appear the differential value. AC/DC-lines touch faults may occur during long-term operations. DC-Line and AC-Line Touch Long-distance overhead DC-lines may cross over many AC-lines of different voltage levels. partial direct current is short-circuited. direct current falls down to zero and the rectifier voltage rises up to the maximum limit value. the variations of direct voltage and current cannot be detected by the traveling wave protection. The broken DC-lines cause the DC system to open-circuit. all capacitive components will discharge through this surge arrester. protective-gaps were used as primary overvoltage protection devices in most early HVDC schemes. 4. 3. thereby improving the system reliability and reducing the equipment costs. switching. after the protective-gap operates. low price. 2. AC surge arresters can cut off the current at natural zero-crossing instant. DC surge arresters are not of natural zero-crossing points. direct current can automatically drop to zero. In some DC surge arresters. sturdiness. Due to simple structure. but discharging-voltage instability and no automatic arc-suppression capability are main shortcomings. faults. and it is relatively difficult to suppress arc. in order to suppress arc. two terminals are not grounded. DC surge arresters can produce substantial heat very seriously under the normal operation. In order to protect equipment and to limit overvoltage level. HVDC systems can generate overvoltages with a variety of waveforms. the energy-absorption capacity of DC surge arrester is much higher than that of conventional AC surge arrester.Chapter 8 Overvoltages and Insulation Co-ordination 8. durability and high energy absorption capability. Therefore. Because an HVDC system provides the perfect control system. All capacitive components are on the full-charging state during the normal operation.1 Overvoltage Protection Devices Due to lightning strikes. there are great differences in the operating condition and working principle. overvoltage protection devices are required to install. 61 . The main differences are: 1. For AC and DC surge arresters. Once a certain surge arrester operates. In order to reduce the equipment insulation level. In 1970s metal-oxide surge arresters started to emerge with the development of technology. gapless zincoxide surge arresters are predominantly employed as overvoltage protection devices. Typically. silicon-carbide surge arresters with series gaps have been extensively used in HVDC schemes. and the residual voltage cannot be reduced effectively.5. 8. Metal-oxide surge arresters have now taken over the overvoltage protection devices in HVDC schemes. a variety of possible overvoltages need to be discussed separately. [83] The main advantages of metal-oxide surge arresters are less space. high energy absorption capability. a zincoxide disc can carry thousands of amps at twice the nominal voltage. simple structure.2 Overvoltages Studies In order to consider the insulation co-ordination of converter station. the requirement of external insulation is very high. For DC surge arresters. Under such situation. The volt-amp characteristic of metal-oxide surge arrester is much superior to that of silicon-carbide surge arrester.1 AC-side Overvoltages 1. They consist mainly of zinc-oxide but contain additives of other metal oxides. excellent nonlinearity. strong arc-suppression capability and the lack of gap spark-over transient. Compared to protective gap. excellent pollution-resistance performance. metal-oxide surge arresters are sometimes socalled gapless surge arresters. in order to maintain safety operations. But the protective characteristic of silicon-carbide surge arrester is still not prefect. and thus series-connected spark gaps are not required any more. silicon-carbide surge arresters had greatly improved protective characteristics. Therefore.2. 8. Transient Overvoltage 62 . In conventional HVDC transmission systems. the rated value of surge arrester must be reduced. Until 1960s silicon-carbide DC surge arresters had been put into operation. 8. Switching Overvoltage AC-busbar switching overvoltages are caused by the AC-side faults and switching. lightning overvoltages are usually not regarded as the key to AC overvoltage and insulation co-ordination in the converter station. 3. Transient Overvoltage 63 . thereby influencing the converter-valve surge arrester. thereby becoming the initial condition caused by the internal fault of the converter. in general. Lightning Overvoltage AC-line intrusion-waves and the direct-strike lightning of the converter station can generate lightning overvoltages on the AC-busbar of the converter station.2. Because there are incoming lines. Moreover.Transient overvoltage is the overvoltage lasting several cycles to hundreds of cycles. Therefore. This kind of transient overvoltage is transferred to the valve side via the converter transformer. lightning waves cannot intrude into the converter valve side. AC-busbar surge arrester. Normally switching overvoltage with relatively high amplitude maintains only half a cycle. Switching overvoltages influence the insulation level of AC-busbar equipments and the energy-absorption capacity of AC-side surge arresters. Transient overvoltage can develop directly on the equipment and will cause the switching overvoltage to rise. The most typical transient overvoltage occurs on the AC-busbar of converter station and influences the AC-busbar surge arrester directly. equipments (AC filters and capacitor banks) which can considerably damp the lightning wave. and directly considered in accordance with conventional AC substation rules. under the normal condition.2 DC-side Overvoltages 1. owing to an effective shield of converter transformer. Switching overvoltages can be also transferred to the converter valve side via the converter transformer. the lightning overvoltage of converter station is less severe than that of conventional substation. 2. 1 Line current and voltage recorded at the inverter during missing pulse condition in a rectifier bridge (1980 CIGRE) [84] 64 .Transient overvoltages generated on the DC side of the converter station mainly include two categories below. [84] Figure 8. owing to a variety of origins. Due to the enlargement effect caused by resonances. the resonance frequencies close to the fundamental frequency exist. • AC-side Transient Overvoltage When converters operate. • Converter Faults When the faults such as partially-missing pulses. and fully-missing pulses occur within the converter.1. An example of fundamental-frequency resonance which occurred during early operation of the Cahora-Bassa scheme is shown in Figure 8. commutation failures. thereby mainly causing considerable energy to pass through surge arresters. If the main parameters of the DC side are not configured correctly. internal converter disturbances give rise to ACfundamental voltages penetrating into the DC side. transient overvoltages generated on the AC busbar can propagate into the DC side of converter station. overvoltages can be generated during a relative long-term on the DC side. due to the inrush AC-current and discharges caused by DC filter capacitors. Switching Overvoltage Switching overvoltages generated on the DC lines mainly include two categories below.2. Directstrike lightning can generate lightning overvoltages on the DC switching yard as well. • Short Circuit Fault When short-circuit faults occur within converters. due to monopole-to-ground short circuit occurrence. • AC-side Switching Overvoltage AC-side switching overvoltages can be transferred to the converter via the converter transformer. and lightnings will propagate into the DC switching yard along the DC lines.2. 2. the overvoltages transferred to the DC side usually do not produce considerable stresses on the DC equipments.3 DC-Line Overvoltages 1. switching overvoltages are normally generated on the converter and DC neutral equipments. 65 . Due to the protection effect of AC-busbar surge arresters. Lightning Overvoltage Direct-strike and back-strike lightnings can generate lightning overvoltages on the DC lines. Except for the design of DC line tower head. The most typical short-circuit location is between the valve-side terminal of the converter transformer and the converter valve. switching overvoltages will induce on the healthy pole. Switching Overvoltage Switching overvoltages generated within the converter mainly include two categories below. When two poles operate. 8. the surge arresters of the converter zone are mainly used to protect the thyristor converter valves and the converter transformers.2 shows the overvoltage developed on the DC line when the rectifier is deblocked with full rectifier voltage against an open inverter end. the 66 . [86] This kind of overvoltage can develop not only on the DC lines.e. but also possibly on the opposite-terminal DC switching yard and non-conducting converters. [85] When the opposite terminal of the DC lines is open-circuit and the local terminal of the DC lines is deblocked with the minimum firing angle.2 Deblocking with full rectifier voltage against an open inverter end [86] 8. converter zone and DC-yard zone. a converter station can be divided into three zones. The overvoltage amplitude depends on the parameters of line and the two-terminal circuit impedances.3 Insulation Co-ordination In accordance with the arrangement of surge arresters. Figure 8. AC zone. The surge arresters of the AC zone are very similar to that of the conventional AC substation. the surge arresters of the DC-yard zone are mainly used to protect the DC-yard equipments. excessive high overvoltages are generated on the open-circuit terminal of the DC lines. The main principle of surge arrester arrangement is: the overvoltages generated on the AC side should be limited by AC-side surge arresters. i.this kind of switching overvoltage also influences the overvoltage protection and insulation co-ordination on the DC switching yards of converter stations at both terminals. Figure 8. The typical arrangement solution of surge arresters for a 12-pulse converter is shown in Figure 8. the reactor is protected by surge arresters installed on the two ends. The 6-pulse converter bridge arrester can be replaced by series-connected valve-arresters.4. for an oil-insulated smoothing reactor. [87] For the high-voltage capacitors. For the low-voltage reactors and resistors. Owing to different structures of AC and DC filters.overvoltages generated on the DC side should be limited by DC-side surge arresters. in order to reduce the vertical insulation level. surge arresters have to be arranged individually and respectively. the surge arrester is directly parallel with the reactor after the economic and technical comparison. the 6-pulse converter bridge busbar arrester can be used. parallelconnected surge arresters must be installed.3. Two typical arrangements of surge arresters for AC and DC filters are shown in Figure 8.3 The typical arrangement solution of surge arresters for a 12-pulse converter [87] The practical arrangement solution of surge arresters may be slightly different with the above solution. the dedicated surge arresters are not required to install. [87] Figure 8. In order to reduce the insulation level on the valve-side windings of star-star connected transformer. 67 . the important equipments are parallel with the surge arresters individually to be protected. For an air-insulated smoothing reactor. 4 Surge arrester arrangement for AC filter (left) and DC filter (right) [87] The rated voltage and energy absorption capability of surge arresters are referred to as the parameters of surge arresters. The energy absorption capability of the surge arrester determines whether the surge arrester. thereby mainly determining the insulation level and protection level. 68 . under overvoltages. The rated voltage of the surge arrester must be selected in accordance with the maximum continuous operating voltage and the transient overvoltage. thereby influencing the protection level. can consume the energy safety or not.Figure 8. the selection of conductor cross-section depends largely on corona discharges and electric-field effects. as the distance between two poles increase. 69 . selecting the appropriate insulation level provides considerable economic benefits for overhead-lines’construction. [88] In accordance with the transfer capacity of HVDC system. cable line and electrode line. 9. all possible flashover paths. ground wire. due to the expensive price of DC insulator. e.1. tower. the terrains of lines route and the crowded situations of land use. the potential gradient of conductor surface will fall down and the corona loss will reduce.1 Overhead Line 9. thermal limitations were mainly considered and electric-field effects were much less considered. As the voltage levels increase gradually. audible noise and radio interference must be taken into account. Under reliable operating conditions. During the design of transmission lines. It is very important to appropriately determine insulation levels. 9. the potential gradient of conductor surface. and positive-conductor to negative-conductor. HVDC transmission lines can be classified into overhead line. In order to restraint the influence on the environment. several models of conductor cross-section were initially selected and then analyzed from the result of economical comparison.1. for example.g. the appropriate selection depends on the locations of converter stations. thereby finally deciding the most appropriate cross-section.Chapter 9 Transmission Lines According to applications.1 Conductor Cross-section In the early design of HVDC overhead lines. must be taken into account. conductor to earth.2 Insulation Level In order to design the insulation level of overhead line. on one pole having polarity opposite to overvoltages. [89] 70 . and for the other pole. the ground wires must be erected along the entire route. Tower-Head Air Gap Air-gap breakdown voltage and insulator flashover voltage are related to atmospheric conditions (pressure.1. temperature. DC-Line Lightning Protection Although the hazard caused by lightning-strike overvoltage on the extra high-voltage DC overhead lines is less severe than that on the AC lines. under different operating modes. the striking distance factor will obviously decrease. the rate of lightning shielding failure will increase. the external insulation discharge voltage. Owing to the horizontal arrangement for two poles. two ground (lightning) wires are also arranged horizontally. Therefore. under the circumstance of the same average height (or very slight difference) for ground wires and conductors. must be corrected according to different atmospheric conditions. humidity). Dealing with the effect of working voltage. Under the same lightning current and tower height. the voltages on the tower-head air-gap and insulatorbunch are the sum of lightning-impulse voltage and DC working-voltage. under the standard atmospheric condition. With the increase of working voltage. when towers and ground wires are struke by lightning. In the design of the overhead-line external insulation. Dealing with the effect of working voltage. the striking distance will not be approximately equal between the lead to ground wires and the lead to conductors. with the increase of working voltage. the line lightning-strike endurances are different in an HVDC transmission line. the voltages are the difference. a bipolar HVDC overhead line is of natural imbalance insulation characteristics under lightning-strike. 2. For an HVDC overhead line. so as to improve the reliability of system operation. light weight and excellent pollution-proof performance. Consequently. Composite insulators are of high strength. in places close to the sea the number of insulator elements was increased to reduce salt pollution problems.9. toughened glass insulators and composite insulators had been employed in the commissioned HVDC overhead lines. composite insulators are unlikely to damage and are not required to clean or maintain. the materials of contamination tests and the distribution situations of pollution areas must be collected to carry out the different levels of contamination. the price of composite insulators is cheaper than that of porcelain or glass insulators. the DC-voltage withstand level on switching surge or impulse flashover performance has to be taken into considerations. Owing to the hydrophobic status of composite insulators. for the flashover voltage.70% higher than that of porcelain or glass insulators. different creepage distances can be obtained easily. Toughened glass insulators and porcelain insulators are mostly employed in the world. In order to suitably arrange the insulation level of DC overhead lines according to contamination situation.3 Insulator Types Porcelain insulators. the effect of un-uniform pollution distribution (along the top and bottom surfaces) on the composite insulator is smaller than that on the porcelain or glass insulator. For example. Moreover.1. and composite insulators are mainly employed in the areas of contamination and inconvenient cleaning.1. [90] Owing to simple shaping technology. the pollution flashover voltage of composite insulators is 60% . under the same creepage distance and pollution degree. The DC pollution flashover gradient under hydrophobic status is 153% higher than that under hydrophilic status according to the research conditions.4 Insulator Numbers For DC overhead lines the number of insulator elements is usually determined by the normal voltage under contamination circumstances. 71 . under the circumstance of the unchanged insulator-bunch length. If the pollution levels are very low. 9. 1. OPGW used as ground wires. On one hand. when the short-circuit faults occur on the transmission lines. it is necessary to install ground wires. Most of DC overhead lines are the bipolar lines erected on the same tower and the positive-polarity and negative-polarity conductors are arranged on two sides of steel tower. manufacture and installation. On the other hand. OPGW (optic fibre ground wire) has already employed. complicated terrain and various atmospheres. The design principles used in AC overhead lines also determine the dimension of DC overhead lines and hence the tower designs of DC overhead lines are very similar to those of AC overhead lines.5 Steel Tower DC overhead lines are classified into monopolar lines and bipolar lines.9.6 Ground Wire In order to ensure the safety operation and to prevent the trip fault caused by direct-strike lightning. in order to satisfy the requirements of telecommunications. In order to avoid the temperature beyond permissible value.1. In recent years. must take fairly effect as a safeguard against lightning. 9. 72 . the tower types employed in HVDC schemes are required to consider comprehensively in accordance with long route. as the temperature increases dramatically. the consideration of thermal stability must be taken into account. Under normal circumstances. but more tower types can increase the investment of design. so that the characteristic of sag of span of OPGW is very similar with that of the other ground wire erected on the same tower. when selecting OPGW. When the shortcircuit current is extremely high. more tower types employed in an HVDC scheme can consume much less steels. it may cause fibre optics to be damaged. the short-circuit current will cause the temperatures of OPGW and the other ground wire to increase. Therefore. In 2000.1 Application and Development DC cable can transmit bulk power over long distances.2. Therefore. [92] The 700 MW. the 2800 MW. In twenty years.2. due to the transient fault caused by converter. AC cable transmission only uses relatively low voltages over very short distances. so as to counteract excessive high voltages at the middle or end of lines. In order to avoid the conductor-core overheat within AC cables. DC cable is mainly employed as submarine cable and underground cable. and DC cables have already been used in many schemes. and using DC cable lines is rather feasible. 580 km long NorNed link is the longest submarine cable transmission commissioned in 2007. it is impossible to employ AC transmission in practice. the shunt reactors must be installed along the route. Consequently. ± 450 kV.2 Cable Insulation When reversing the power-flow direction. [91] The 600 MW. In addition.9. unlike for overhead transmissions. For relative long-distance submarine cables. the power of AC cables increases in proportion to the voltage of AC cables. the peak value of transient overvoltages may reach to twice working voltage.2 Cable Line 9. 450 kV. 9. it gives rise to instantaneous oscillation overvoltage added to DC voltage. Under the most severe condition. the application of DC transmission has made considerable progress. DC cable must withstand fast reversal of voltage polarity. ± 500 kV Kii-channel undersea scheme was commissioned with highest DC voltage in the world. [93] Owing to thermal limitations. the power transmitted by AC cables is much less than the natural power. Therefore. and the charging current within AC cables increases with the distance and with the square of the voltage. 73 . the current direction maintains unchanged while the voltage polarity changes. 250 km long Baltic cable connecting Sweden and Germany was commissioned in 1994. 1. Mass-Impregnated Cable The great bulk of cables are of the mass-impregnated type in earlier installations. and the losses of DC-cable insulation are much less than those of AC-cable insulation under the same voltage. XLPE is a fourth type. for DC-cable insulation the thermal instability has become less important. convenient manufacture and maintenance and low cost. so that the structure of DC cable is very similar to that of AC cable. oil filled. the effective value of the voltage is also the peak value of the voltage. 9. There are three types of DC cable. The advantages of mass-impregnated cable are simple structure.Although the structures of DC cables are very similar to those of ordinary AC cables. the breakdown voltage of AC-cable insulation is related to the working time. the mass-impregnated cable is likely to use for long-distance submarine installation.3 Cable Types The development of DC cable took the successful experience from AC cable. the working conditions of DC-cable insulation are much excellent than those of AC-cable insulation. often referred to as solid cables.1 shows the development of the maximum power per cable and the maximum operating voltages of the mass-impregnated cables. In addition. For DC cables. Figure 9. Consequently.2. [94] 74 . so far studied and employed widely. Due to no need for supplying oil and cooling effect of seawater. gas pressurized and mass impregnated. but there is no such problem on the DC cable. 3. 4.Figure 9. Oil-Filled Cable Oil-filled cables were generally installed and developed across land (without intermediate pressure/feeding stations). Due to extreme high manufacture working stresses and sealing effect. oil-filled cables can be used as submarine cables. Gas-Pressurized Cable Gas-pressurized cables can withstand relatively high insulation working stresses. gas-pressurized cables have not been manufactured for DC use since the 1960s. Crosslinked Polyethylene (XLPE) Cable 75 . oil-filled cables can withstand higher insulation working stresses than mass-impregnated cables. For example. thereby likely using for long-distance submarine installation and great depth installation. As the technical problem for supplying oil over long-distance settles down.1 Maximum power per cable and operating voltage of realized HVDC submarine mass-impregnated cables [94] 2. Oil-filled cables can provide more excellent technical features than other types of cables. (2) For rated DC voltage. Conductor Core The conductor core of the DC cable is usually made of copper and the cross-sectional area of the conductor core depends on rated current. 9.2.4 Cable Structures 1. the working stress on the conductor’ s surface must be lower than the permissible value. (4) Under rated current. permissible voltage drop. XLPE cables are suitably used as submarine cables.XLPE has become an accepted insulation now as an alternative to oil filled cables. the conductor’ temperature must be lower than the permissible s value. sea water permeation after fault must be highly considered. Insulation Layer The insulation layer thickness of the DC cable must satisfy simultaneously the following requirements: (1) For rated DC voltage. short-circuit capacity and so on. When selecting the structure of conductor core. (3) The insulation layer thickness must withstand the impulse test voltage. the working stress on the outer casing must be lower than the permissible value. 2. at no-load condition. Recently only steel-wire armoured XLPE cables are widely paid attention in the world. Owing to simple structure and robustness. at full-load condition. Outer Protective Layer 76 . 3. especially for submarine cables. Corrosion-Proof Layer Owing to the effect of the leakage current and other currents (used sea water as return circuit). Metallic Sheath In order to ensure the reliability and flexibility of the metallic sheath. [95] 77 . The structure of outer protective layer mainly depends on environmental corrosion and mechanical damage. 6. the plastic sheath is usually added to prevent electrolysis corrosion.2. considerable mechanical stresses are generated on the cable. A typical design is the 250 kV DC cable of the Skagerrak scheme laid at a depth of 550 m. and the sheath losses are not existed in the DC cable. 5. the electrolysis corrosion will occur on the metallic sheath and reinforcing layer. 4. until today DC cables usually employ the galvanized sheath. several layer metallic tapes must be added as the reinforcing layer. For oil-filled or gas-pressurized cables. illustrated in Figure 9. owing to the dead weight of submarine cables.There is no induced voltage on the metallic sheath and armour. Consequently. steel tapes or armoured steel wires are covered outside the corrosion-proof layer in accordance with specific circumstances. while submarine cables are easily attacked by sea animals. When carrying and laying submarine cables. Submarine cables usually employ steel wires armoured cables. Armour In order to prevent mechanical damages. earth return provides substantially low resistance and low power losses correspondingly.1 Insulation Level The current flows into earth via earth electrode and electrode line under normal operating conditions.Figure 9. Using earth return can develop the HVDC transmission system gradually in accordance with the requirement of transmission capacity. when one pole or converter is shut down. the half-capacity power can still be transmitted by using the other pole and earth return.2 The cross section of the double-armoured DC cable [95] 9. From the terminals of converter station to earth electrode.3 Earth Electrode Line An earth electrode line transfers direct current into earth. thereby using earth (or seawater) as the return conductor with cheap costs and low losses. the voltage potentials reduce linearly along the electrode line. 9. Even under monopolar operating conditions. Compared to the metallic return with the same distance. In a bipolar HVDC transmission system. 78 .3. thereby causing the voltage drop on the electrode line. the voltage potentials on the electrode line are only several thousands of voltages. In accordance with the above situations. but also satisfy the transmission requirement. thereby causing the low voltage drop on the line. The earth electrode line and earth electrode are usually used to fix the neutral potential of converter station.9. 79 . 3. the conductor cross-section of earth electrode lines only depends on thermal stability conditions under the most severe operating mode.3. 2. The current flowing through electrode lines is only 1% at rated current. The current only flows through the conductor and earth electrode.2 Conductor Cross-section The earth electrode line has several characteristics below. such conductors can not only save the capital costs. 1. Therefore. The earth electrode line is usually tens of kilometers. thereby producing the energy loss termed corona loss. radio interference and audible noise must be controlled in the appropriate extent. termed DC-line corona. radio interference and audible noise. The critical degree of corona discharge is directly related to the electric-field strength of the conductor surface.e. The initial corona electric-field strength of small-diameter conductor is higher than that of large-diameter conductor. 10. atmospheric ionization following visual discharge. corona loss. The electric-field strength of the conductor surface. is termed the corona critical electric-field strength or the initial corona electric-field strength. [97] According to the testing. corona loss. The movement of electrically charged ions forms the corona current around the DC transmission line. i. which leads to the line corona. [96] In order to design the DC lines.1 Corona If the potential gradient of conductor surface exceeds a specific critical value. 80 . DC electric-field effect.Chapter 10 Transmission Line Environmental Effects As a significant technology issue. the increased corona loss in the DC line is much less than that in the AC line. the DC-line corona is of the following characteristics: [98] (1) In the rainy days. will occur in the vicinity of DC-line conductor and the electrically charged ions caused by conductor corona will move towards the conductor of the opposite polarity or deviate from the conductor of the same polarity. Therefore. DC electric-field effect. the entire space is fulfilled with electrically charged ions. the environmental effects must be taken into considerations and the electromagnetic environment is directly related to the corona characteristics of transmission line. especially the surface maximum electric-field strength. In addition. the corona loss of each pole is 1. (4) Under the specific voltage. the electric-field may be strengthened further by external factors in terms of the weather. 10. Figure 10. [99] The displacement velocity of positive and negative ions. and the lowest electric-field strength normally occurs at the symmetrical center of bipolar conductors.2 Electric-Field Effect If the electric-field gradient of the conductor surface exceeds the initial corona electricfield gradient. the DC-line corona loss usually increases. is at the same scale with the wind velocity. (3) In the specific range (0 – 10 m/s) of wind speed. 81 . even in the very slow wind velocity (1m/s).5 times that of the monopolar line. under the electric field. ionization will occur in the atmosphere close to the conductor surface and the space charges caused by ionization will move along the electric-flux direction. no matter in the bipolar or monopolar operating mode. the corona loss of the positive pole is approximately identical to that of the negative pole. with increasing the wind speed. with the increase of the number of bundle conductor. seasonal variations and relative humidity. The electric-field strength under DC line mainly depends on the critical degree of conductor corona discharge. no matter in the rainy or sunny days. the distribution of the electric-field will be distorted.(2) If the electric-field strength of the conductor surface maintains a certain value. for the bipolar lines.5 – 2. The distribution shown in Figure 10. the DC-line corona will increase.1 shows the distribution diagram of the electricfield strength under the 450 kV overhead line. The highest electric-field strength occurs directly under the overhead conductor.1 is the most ideal circumstances without wind. both with monopolar and bipolar transmissions. Therefore. (5) Under the specific voltage. Figure 10.1 The electric field of monopolar and bipolar 450 kV overhead lines [99] At the same voltage level, the electric-field strength under DC line is higher than the electric-field strength under AC line. Under normal operations, without the induced phenomenon of capacitor coupling, for AC and DC electric-fields, the same strengths produce the different effects. The electric-field and ionic current density are related to the electric-field strength of the conductor surface and the corona initial electric-field strength. With the specific geometric size of lines, higher the electric-field strength of the conductor surface or lower the corona initial electric-field strength, higher the electric-field and ionic current density. Therefore, either lowering the electric-field strength of the conductor surface or increasing the corona initial electric-field strength can reduce the electric-field and ionic current density. 10.3 Radio Interference Under the normal operating voltage, the DC-line conductors always produce a certain degree of corona discharge with the ionic current in the vast space. Furthermore, it gives rise to the radio interference in the vicinity of DC lines. The corona discharge process is of pulsating characteristics, thereby producing the current and voltage pulses on the DC-line conductors. 82 For the conductor of negative polarity, the corona discharge points are generally welldistributed on the entire conductor surface and the repeatedly emerged pulses are of almost identical amplitudes with low values. Compared with the conductor of positive polarity, the radio signals are interfered slightly by the corona discharge from the conductor of negative polarity. For the conductor of positive polarity, the corona discharge points are randomly distributed on the entire conductor surface and the continuous discharge points are mostly located on the drawbacks of the conductor surface. The discharge pulses are of high amplitudes and irregular distribution. The corona discharge from the conductor of positive polarity is the principal source of radio interference. As the distance away from DC line increases, the radio interference caused by conductor corona gradually attenuates. For bipolar DC lines, the conductor of positive polarity is the main source and the symmetric center of radio interference, which attenuates transversely from the symmetric center to both sides. With the increase of frequency, the DC-line radio interference gradually reduces. The spectrum characteristics of the DC line are very similar to those of the AC line. With the increase of the humidity, the DC-line radio interference tends to reduce, and with increasing the temperature, the DC-line radio interference tends to increase. But the pressure variations have no obvious influences on the radio interference. (1) The DC-line interference level in rainy days is approximately 3dB lower than that in sunny days. Compared with the AC line, the DC-line interference level is obviously different under different weather conditions. (2) Wind causes the DC-line radio interference to increase; especially the most severe consequence is caused by the wind direction flowing from negative polarity to positive polarity. [100] (3) In the late-autumn and early-winter seasons, owing to fairly low temperature and quite high atmospheric humidity, the DC-line radio interference is relatively low. In the 83 winter and early-autumn seasons, the radio interference is close to the average value. In the summer season, dust, insect and bird’ droppings always stick on the conductors s and the wind velocity is considerable high, consequently, owing to considerable high temperature and rather low atmospheric humidity, the DC-line radio interference is the highest level. [100] 10.4 Audible Noise As voltage levels increase gradually, in order to design DC line properly, audible noise caused by corona discharge must be considered as an important factor. Since serious audible noise often makes the residents close to lines annoyed, when designing and constructing the DC lines, audible noise must be limited within a suitable range. For DC lines, the corona of the positive-polarity conductor is the main source of audible noise and the audible noise attenuates transversely towards both sides of DC lines. However, the symmetry axis of audible noise is not the center of bipolar lines, but the positive-polarity conductor. With the increase of the distance, the audible-noise attenuation is much slow than the radio-interference attenuation. As the distance increases one time, the audible-noise attenuation is approximately close to 2.6dB (A). [100] Audible noise caused by AC-line corona is composed of two parts, one part is the wide frequency band noise (primary part in AC noise) caused by positive polarity injection discharge and the other part is the pure tone (multiple of fundamental frequency) caused by the back and forth movement of charged ion in the vicinity of conductor, due to periodicvoltage variation. In sunny days, AC-line audible noise is fairly small. In little rainy, fog and snow days, dripping water sticks on the conductor surface, thereby causing considerable audible noise. DC-line audible noise in rainy days is less than that in sunny days and the noise in snow days is slightly different with the noise in sunny days. Therefore, when designing the DC line, the audible noise in sunny days must be considered primarily. [100] 84 i.2 Earth Electrode Operational Features Except for obvious merits. For monopole with metallic return.1 Earth Electrode Effects For two-terminal HVDC transmission systems commissioned. 85 . thermodynamic effect and electrochemical effect. bipole with two terminals grounded.e. using ground return also brings the adverse effects caused by considerable direct current. (1) A direct-current field can change the ground magnetic field nearby earth electrodes. there is no current in the ground and earth electrodes are only used to clamp the neutral-point electricpotential. the electromagnetic effect may bring the following consequences. due to one point grounded or ungrounded. bipole with one terminal grounded and bipole with two terminals ungrounded. but also to provide the path for direct current. Therefore. thereby influencing the magnetic compass. thereby rising ground potential and producing surface step voltage and touch potential. bipole with two terminals ungrounded and bipole with one terminal grounded. a constant directcurrent field arises in the soil of electrode site. the main circuit modes can be classified into monopole with ground return. For monopole with ground return and bipole with two terminals grounded. • Electromagnetic Effect As considerable direct current injects into earth via earth electrode.Chapter 11 Earth Electrode 11. 11. monopole with metallic return. earth electrodes are used not only to clamp the neutral-point electric-potential. electromagnetic effect. 11. For a human-body resistance of 1000 . excellent electric-conduction and heat-conduction performances. which are close to electrode site. earth 86 . in order to ensure the perfect thermal stability performance during the operations. Due to the effect of direct current. for land electrodes (including coastal electrodes). the moisture in the soil will evaporate. considerable heat-capable coefficiency. Surface step voltage and touch potential nearby electrode site may influence human being and animals. the maximum safe current flowing through the body is recommended as the limit value of 5 mA. but also in the underground metallic equipments or power system grounded. If an earth electrode is very close to the converter station.3 Electrode Site Selection In accordance with the operational features of earth electrode and the circumstances of current distribution in the ground. Therefore. [101] • Thermodynamic Effect Earth electrodes are buried in different soils. which can cause shock currents. selecting a suitable electrode site must satisfy the following conditions. termed step voltage. electro-corrosion occurs not only in earth electrode. usually 8 – 50 km. and thus the conduction performance of the soil will become worse and even loose operational capability. (3) Land electrodes create potential differences at the earth surface. the temperature of earth electrode will increase.(2) Rising ground potential may give rise to adverse effects on the underground pipelines. [102] • Electrochemical Effect Due to direct current flowing through earth. especially at the temperature reaching to a certain extent. armoured cables and electrical equipments grounded. thereby obtaining different resistances. [103] (1) An earth electrode is located a certain distance away from the converter station. the soil around electrode site must have sufficient humidity. (5) Earth surface used to bury earth electrode must be sufficiently large flat area. thereby influencing the safety operation of power system equipments and corroding the grounding grid. and excellent operational features of earth electrode. (2) Earth electrodes must be placed in the wide terrain with excellent conductivity (low soil resistivity). usually larger than 10 km. the soil resistivity is generally under 100 m. (3) Soil must have sufficient water to maintain humidity. (6) An appropriate electrode site must provide convenient route and cheap investment for electrode line. lower the surface step voltage and ensure the safety and stability operation. thereby providing benefits i. so as to avoid the corrosion in the underground metallic equipments or the unnecessary grounding investment for electrical equipments.e. 87 . The soil of earth surface (close to earth electrode) must be of excellent thermal characteristics. especially in the vicinity close to the electrode site. even under the worst situation that considerable direct current flows through earth electrode over a long period. i.current easily injects into the grounding grid of the converter station. (4) There are no important and complicated underground metallic equipments around electrode site.e. convenient installation and operation. the investment of electrode line will increase and the neutral-point electric-potential in the converter station will be excessive high. in order to reduce the cost of earth electrode. so as to reduce the size of earth electrode. high heat-conductivity and large heat-capacity. the electrode site must be sufficient distance away from some important AC substations. If an earth electrode is too far away from the converter station. In addition. and especially is well suited to the conditions of low-resistivity surface layer soil. Figure 11. wide ground and smooth terrain.1 The cross section through a horizontal land electrode [104] The bottom of the vertical land electrode is few tens of meters deep in common. thereby causing less 88 . and thus the current can flow into the deep-layer ground directly through the vertical land electrode. According to different conditions for electrode sites.11.4 Earth Electrode Design Earth electrodes commissioned in the world can be classified into land electrodes and sea electrodes. A horizontal land electrode is usually laid at the depth of few meters owing to low resistivity of surface layer soil.1. 1. In accordance with bury methods. Therefore. Land Electrode Land earth electrodes mainly use electrolyte in soil as conducting medium. a horizontal land electrode has the advantages of convenient installation and low cost. shown in Figure 11. land electrodes and sea electrodes are arranged into different patterns respectively. in some cases the depth of up to few hundreds of meters. land earth electrodes can be classified into horizontal land electrodes and vertical land electrodes. In accordance with the arrangement modes. in order to avoid the impact from wave and ice. so as to obtain the minimum grounding resistance. the submarine electrode shown in Figure 11. Most coastal electrodes are arranged linearly along the coastline.2 The vertical electrode at the Southern Cahora Bassa HVDC station [104] 2. The conducting elements of the coastal electrode must be enclosed by robust protective equipments. sea electrodes are classified into coastal electrodes and submarine electrodes. The conducting elements of the submarine electrode are laid in the seawater. If only using cathodic operation. and the dedicated supporting and protective equipments are used to strengthen conducting elements and prevent wave and ice from impacting. or the electrode site limited by land use. shown in Figure 11. Due to the 89 . Sea Electrode Sea electrodes mainly use sea water as conducting medium and sea water has much better conductivity than land. A vertical land electrode is commonly well suited to the electrode site with high surface-layer soil resistivity and fairly low deep-layer soil resistivity.influence on environment.3 provides a rather economic solution. Figure 11.2. thereby causing relatively low potential and potential gradient on the earth surface. The first deep hole ground electrode had been installed in the Baltic cable project. it is feasible to build a compact seashore or seawater ground electrode. each of earth electrodes must be designed in accordance with anodic requirements. 90 . thereby reducing loss. using relatively short earth electrode line.3 The submarine electrode (cathodic operation) [104] 11. Under such a condition. reducing interferences and the risk of lightning strike. a deep hole ground electrode can reach into the earth layer of quite low soil resistivity. An ordinary ground electrode requires substantial land areas and must be placed at the location of relatively low soil resistivity. Figure 11.5 Earth Electrode Development 1. The advantages of deep hole ground electrode are: the location of ground electrode is fairly close to the converter station.polarity change caused by power-flow reversal. improving the feasibility of HVDC monopolar operation. Deep Hole Ground Electrode In a submarine HVDC transmission or nearby a converter station close to sea. easily seeking the appropriate location of earth electrode. If the substations are located within the vicinity of earth electrodes current-field. if there are transformers with neutral-point grounded. Common Earth Electrode For planning bulk hydropower outgoing transmission schemes.6 Influence of Earth Electrode Current While considerable direct current injects into ground via earth electrode. the electrode site selection and common earth electrode design must satisfy relatively high requirements. multiple sending-terminal converter stations are located very closely. for two or more HVDC systems feeding into the same AC network. thereby causing the significant difficulties in site selection of earth electrode. a common earth electrode not only reduces electrode-site land uses. due to complex terrain especially in a mountainous area. However. If considerable direct current flows 91 . the design scheme of common earth electrode by multiple HVDC systems is required. partial earth currents flows along and through these metallic equipments towards remote places. owing to these metallic equipments providing much perfect conduction paths rather than earth soil. Direct current flows into one substation (the transformer neutral-point) and flows out of the other substation (the transformer neutral-point). the common earth electrode shared by multiple converter stations requires reliable and available dispatching and communication. [105] In some countries. the electric-potential differences are generated among substations within the current-field. both earth electrode resistance and electrode line resistance must be restricted within a certain range. and thus it may cause unfavourable consequences to system operation and transformer. 11. underground metallic pipelines and armoured cables close to electrode site. the transformers (above 110 kV) neutral-points are mostly grounded directly. This design scheme has obvious advantages in reducing influences of earth currents on environment. Meantime.2. Aiming at such a circumstance. but also lowers overall cost. multiple receiving-terminal converter stations are placed electrically close to each other in the highdensity load center. the appropriate electrode site selection and electrode design can enhance the system safety and reliability. a constant direct current field is formed in the vicinity of electrode site. (3) The isolation measurements are: rationally configuring the grounding points and grounding modes of AC-system transformers. (1) The interception measurements from source are: rotationally arranging the HVDC system operating modes and reducing the circuit modes of monopole ground-return.through the transformers windings. owing to inaccuracy measurements. the following measurements must be considered. [107] (2) Influence on electromagnetism-induced voltage transformer. noise of the transformer to increase. While direct ground current flows through the grounding grid (mat) of power system. While direct current flows into electromagnetism-induced transformer. it may give rise to the following harmful effects on power systems. 92 . it may cause the temperature. it may cause the corresponding relay protective devices to maloperation. loss. Since transformer-core magnetism saturation occurs. (3) Electro-corrosion. [106] (1) Transformer-core magnetism saturation. In order to reduce the effect of earth current. it may give rise to electro-corrosion in the grounding grid. (2) The conduction measurements are: building reliable current path or dedicated line and reducing the grounding resistance of earth electrode. Since harmonics must be eliminated in accordance with more strict criteria and the complicated algorithm can be implemented with the excellent performance of digital signal processor. the functions of overvoltage protection and state monitor can be integrated into light-triggered thyristor as well. numerous control points and considerable data processing capacity. thereby enhancing the reliability and reducing the cost. In addition. concerning with different 93 . i. especially on the DC side of the converter station. due to the merits of long distance (more than 1500 km). fully hot redundancy. better fault diagnostic and on-line monitor function. Because considerable components for electrically triggered unit can be cancelled. so as to significantly lower the cost. the number of thyristors in series will reduce substantially. the HVDC transmission will be employed with extensive foreground. large capacity (more than 5 GW). economic and environmentally point of view. the number of control cubicle installed in the control room has reduced significantly. higher performance and lower price. The controller based on digital signal processor has been developed dramatically and thus the control system is of high reliability. from a technical. light-triggered thyristor is of the advantages. Lots of HVDC schemes have been commissioned in the world and the present experience can be referenced during the construction. Moreover. However. the future development trend for the HVDC transmission is to employ the modular.e. Due to fully digitalization. Each air-insulated valve can be installed inside the outdoor container and thus there is no need to build large valve hall. standard and duplicated design. the controller based on digital signal processor can also provide more advanced control algorithm. thereby simplifying the design and lowering power loss.Chapter 12 Conclusion As some countries are planning to exploit the natural resources (hydro and low-grade coal fields). In order to compete with the HVAC transmission. lower maintenance. active filters will play an important role. With the development of larger-diameter thyristor. flexible control and convenient dispatching. Therefore. can be replaced by deep hole ground electrode or common earth electrode. according to the practical arrangement of transmission lines. heavily icing region. heavily polluted contamination. Although the environmental effects of transmission lines can be limited under the allowable level. rainy). the environmental effects must be measured on the specific case. Due to complex terrain like mountainous area or large-scale load center with highpopulation density.geographical and climatic circumstances (high altitude. corona characteristics and overvoltage and insulation co-ordination must be carried out in each case respectively. an ordinary earth electrode. which requires substantial land areas. 94 . the environmental criterion is related to the entire cost. snow. J. Delivery.aspx [11]. The Institution .Jos Arrillaga. Fisher. Donahue.: ‘ High voltage DC transmission: a power electronics workhorse’ . IEEE Spectr.aspx Baltic Cable HVDC project. Sweden 95 . 8.93. J. Kingsnorth-Beddington-Willesden HVDC scheme. pp.87. Ludvika.‘ 500 kV Chandrapur-Padghe HVDC bipole project’ ABB Power Technologies AB.wikipedia. Fenno-Skan. Railing. Kontek (1995) [6]. 1998.‘ The Three Gorges – Guangdong HVDC link’ ABB Power Technologies AB.A.abb. UK.: “ Performance testing of the Sandy Pond HVDC converter terminal” IEEE Transactions on Power . AB. Power .aspx [4]. pp.A.Jos Arrillaga.. Ludvika.wikipedia. Ludvika.References [1]. .63-72 [2].G. P.abb.: ‘ High Voltage Direct Current Transmission’ 2nd Edition. of Electrical Engineers. Sweden [7]. of Electrical Engineers. Cross-Channel link.org/wiki/HVDC_Kingsnorth [14]..fi/cawp/gad02181/c1256d71001e0037c12568330063c686. The Institution . January 1993 [15]. Sweden [8]. No. Vol. N. Ludvika. English Channel. pp. Sellindge SE England and Bonningues-les-Calais Northern France. System.wikipedia.D.: ‘ High Voltage Direct Current Transmission’ 2nd Edition. ‘ Vizag II HVDC Back-to-back interconnection’ ABB Power Technologies . and Tatro.com/cawp/gad02181/d6a991f0cba1c22cc1256d8800402514. ± .org/wiki/HVDC_Cross-Channel [12]. D.‘ The Three Gorges – Changzhou HVDC Power Project’ ABB Power Technologies . AB. Ludvika. Konti-Skan (1965) [3]. Sweden and Germany. B. 1998. Gezhouba-Shanghai. Great Britain and France [13]. Sweden [10]. Grid System –HVDC. 1. Sweden [9].. [5]. April 1996.‘ SwePol Link HVDC Power Transmission’ ABB Power Technologies AB. P. Arrillaga.. HVDC station [23]. outdoor valves [25]. pp. Operation and Management.H.H. Dennis A. Y. J. [29]. particularly during forward recovery’ IEE Conf.158-63 . 2007.. 1998. Jos Arrillaga. compact converter stations [26]. Bjorklund. October 1996. Woodford. J. W.L. December 1993.: ‘ Direct current transmission’ Wiley Interscience. J.285-286. Publ. PAS-99. Kimbark.: ‘ control The and protection of thyristors in the English Terminal Cross-Channel valves.: ‘ Recent and nd Future Trends in HVDC Converter Station Design’ IEE 2 International Conference . E. pp. S. ‘ ETT vs.. link [21].: ‘ Flexible Power Transmission The HVDC Options’ John Wiley & Sons Ltd. N.: ‘ High Voltage Direct Current Transmission’ 2nd edition.: ‘ Modern HVDC thyristor valves’ ABB Power Systems.R.: ‘ High Voltage Direct Current Transmission’ 2nd edition.abb.. G. Liu. Liu.227 station layout for a bipolar . Bernt Abrahamsson and Olaf Saksvik.C. Vol.: ‘ HVDC transmission and the environment’ Power Engineering Journal.[16]. and D. J. transmission’ 1981.: ‘ Flexible Power Transmission The HVDC Options’ John Wiley & Sons Ltd. Schmidt.729-737. Ballad. Asplund. Henrik Stomberg.C. 18 March 1998 [28]. pp. Manitoba. Haddock. M. Y.H. [27]. Watson. and Kolbeck. pole of a converter station [22]. Reeve: ‘ Multiterminal HVDC Power Systems’ IEEE Trans. N. B.. .: ‘ HVDC Transmission’ Manitoba HVDC Research . The . and Stromberg. ‘ Brazil – Argentina Interconnection I & II’ ABB Power Technologies AB. 2007.R. Institution of Electrical Engineers. Carlsson.. pp. Canada.226 typical circuit diagram for one . pp. pp. 1998. Sweden .138-146 [20]. Institution of Electrical Engineers. March/April 1980 [18]. Jos Arrillaga. Power Systems/HVDC. J. L. York.283-284. 1971. B. Watson. Fiegl. Woodhouse.233-235 structure of the HVDC .. pp. 205 on ‘ . space related consequences [19]. and Rowe.com/hvdc 96 .: ‘ Flexible Power Transmission The HVDC Options’ John Wiley & Sons Ltd.. Centre. . pp. on Advances in Power System Control. Winnipeg.A. N. R3T 3Y6. pp.) www. Thyristor and variable static equipment for A. Watson.209– .R. Arrillaga.. The . Sweden [17]. Y. J. Ludvika. 2007. H. H.L. Ludvika. 210. Arrillaga. G. New . Liu. Hong Kong [24]. LTT for HVDC (A technical comparison between Electrically triggered thyristor and Light triggered thyristor for HVDC applications. : ‘ High Voltage Direct Current Engineering Technology Reactive Power Compensation and AC side Filtering’ Central China Electric Power . 1996:281– 284.296-318 [40]. pp. tunable AC filter [42]. China and Siemens . N. 1971.: ‘ Design and Realization of Tian – Guang HVDC Project’ State Power South Company. Ake Carlson. J. China (Chinese) [43]. Siemens Converter Transformers.[30]. Sweden [31]. pp. Rao Hong. Thorvaldsson..energyportal.W.259. . Chen Yiping.R. Guo. B.: ‘ specific requirements on HVDC converter transformers’ . AG PTD. Arrillaga. .siemens. ‘ HVDC smoothing reactor’ABB. Kimbark.com/cawp/GAD02181/C1256D71001E0037C125683400432 [34]. Watson. Guangdong.: ‘ HVDC and FACTS Controllers Applications of Static Converters in Power Systems’ Kluwer Power Electronics and Power Systems Series. ‘ Harmonic Filter and Reactive Compensation for HVDC’ CIGRE WG AC .: “ Joint operation HVDC/SVC” IEE Conference on AC and DC Power Transmission Publication. pp. across the DC link [41].aspx [33]. Transmission and Distribution Committee of the IEEE Power Engineering Society. Electra No. Watson.com/static/hq/de/products_solutions/9110_70515_converter%20transfor mers.: ‘ Direct current transmission’ Wiley Interscience. 1979 [35]. Saetlire. E. Zhao WanJun. China (Chinese) [37].: ‘ Flexible Power Transmission The HVDC Options’ John Wiley & Sons Ltd. Y.276. Center.: “ Survey of Three Gorges Power Grid (TGPG)” IEEE 2000. E. 14-03.63. et al.262.. B. New York. Sood. 2004. dynamic voltage regulation . [39]. . Part 1: AC-DC interaction phenomena’ Dec 1997 . pp.: “ Analysis of the influence of HVDC reactive power control on the voltage regulation of AC system” CSG Power Dispatching and Communication .H.R. Liu. Arrillaga. . 2007. Hao. Design Institution. Guangzhou. harmonic cross-modulation . [38]. Ludvika. Peng Baoshu. 2007.: ‘ Flexible Power Transmission The HVDC Options’ John Wiley & Sons Ltd. N. Germany 97 .H. ABB Transformers AB. Guangzhou.abb. J. Liu. Arnlov. ‘ IEEE Guide for planning DC lines terminating at AC system locations having low short-circuit capacities. Yu Jiangou. 912. Vijay K. [36].html [32].. Y. 2002 [45]. Stanley. 3. D. Jul 1989.[44]. on .1195-1204 [47]. Watson. Christchurch.J. 1599-1606 98 . Conference. Christchurch. Jos Arrillaga and Neville R. Jos Arrillaga and N. Huang H. no.. New Zealand. India [51].178-179. Power Delivery. edition.: ‘ Power System Harmonics’ University of . Anders. Watson. Transaction on Power Delivery. pp. 1989. Christchurch. psophometric weighting recommended by Internation Telegraph and Telephone Consultative Committee (CCITT) [52]..: ‘ Power system harmonics’ second .J. Conference on Power Systems Conf. Petersson. Zhang. Shore. 2002 IEE-PES/CSEE International . Delivery. Wenyan. and Asplund. [49]. 4(3)... 2003.P.: ‘ IEEE Guide for analysis and definition of DC side harmonic performance of HVDC transmission systems’ . Sweden . R. Paris. New Zealand. Lin. Sadek K.. [46]. Tao Yu. pp. edition. Bartzsch C. Sweden and ABB Limited.L. et al: ‘ Active DC filter for HVDC system – a test installation in the Konti-Skan DC link at Lindome converter station’ IEEE Trans. Stefan. Sun JiaJun.: ‘ Design and performance of AC filters for 12-pulse HVDC schemes’ IEE Conf. New Zealand. and Peterson.: ‘ Analysis of DC harmonics using the three-pulse model for the Intermountain Power Project HVDC Transmission’ IEEE . and Brewer. Jos Arrillaga and Neville R..: ‘ Power system harmonics’ second . pp. C. 8. 2003.179-181.: ‘ three-pulse A model of DC side harmonic flow in HVDC systems’ IEEE Transaction on Power . Watson. pp. New Zealand. Canterbury. [55]. K.. 158– 1977 .255. Anderson. A. Dickmander.. 2003.H. University of Canterbury. 4(2).. C-message and psophometric weighting factors [54]. pp. University of Canterbury. [50]. Watson. Zhou XiaoQian. pp.178. vol. Christchurch. J. University of Canterbury. France [56]. Price. Gunnarsson. Huo JiAn.L. Publ.: ‘ Design features of the Three Gorges-Changzhou ±500 KV HVDC project’ CIGRE 2000 .: ‘ Triple-tuned harmonic filters – design principle and operating experience’ Proc. G. N.: ‘ – DC Harmonic Filters for Three AC Gorges – Changzhou ± 500 kV HVDC Project DC Harmonic Filters’ ABB Power . IEEE STD 1124-2003. edition.: ‘ Power system harmonics’ second . 154. Liu ZeHong. 61. 2003. Systems. July 1993. Jos Arrillaga and Neville R.1945-1954 [48]. Bernt Bergdahl and Rebati Dass.: “ Active filters in HVDC transmissions” ABB Power Technologies.L. Cmessage weighting recommended by the Edison Electric Institute and the Bell Telephone System [53]. G. G. pp.. Jiang. Canelhas. 100. Jos Arrillaga. 3356-3364.75-80. pp. HVDC controls [62]. The . active DC side filters . Vol. Vijay K. 205 on ‘ .723-732 [65]. pp. pp.: ‘ Power System Stability and Control’ Chapter 10 High.C. Sood. pp. PAS-87. Kundur. starting.” IEEE Trans. pp.: ‘ refined HVDC control system’ Trans. Watson. Institution of Electrical Engineers. pp.121. Rashwan. N.C. Jos Arrillaga.67. pp.859-65 .. 1998. W. Institution of Electrical Engineers. G. Vijay K. Sood. stopping. 2004. May 1991 [69].: ‘ HVDC and FACTS Controllers Applications of Static Converters in Power Systems’ Kluwer Power Electronics and Power Systems Series. and D. pp.67. M. 2. and Liss.: ‘ High Voltage Direct Current Transmission’ 2nd Edition.223-26 [70].. J. J. July 1981. Institution of Electrical Engineers. The . transmission’(London. pp.. 1998. harmonic instability [63].. “ HVDC Controls for System Dynamic Performance. IEEE Committee Report. [59]. [68].: ‘ The phase-locked oscillator – a new control system for controlled static converters’ Trans. .R. No.different control levels [60].H. Arrillaga.: ‘ High Voltage Direct Current Transmission’ 2nd Edition.. 1998. 2004. current margin control method [66]. PWRS-6. The . Jos Arrillaga. PAS-100. converter control basic philosophy [61].”IEEE Trans. Y. DC-side active cancellation [58]. Prabha. The . 1970. Voltage Direct-Current Transmission.D. Jos Arrillaga. 2007.: ‘ HVDC and FACTS Controllers Applications of Static Converters in Power Systems’ Kluwer Power Electronics and Power Systems Series. PAS-89. [64]. A . pp. Ekstrom. Chand.: ‘ HVDC and FACTS Controllers Applications of Static Converters in Power Systems’ Kluwer Power Electronics and Power Systems Series. Thyristor and variable static equipment for A. 1968. Ainsworth. Liu. Institution of Electrical Engineers. “ Dynamic Performance Characteristics of North American HVDC Systems for Transient and Dynamic Stability Evaluations.81.: ‘ High Voltage Direct Current Transmission’ 2nd Edition. pp.[57]. . Publ. IEEE.. Sood.: ‘ Nelson River HVDC system-operating experience’ IEE Conf. A. Vijay K.124. IEEE. 743-752.281. (3) pp. Vol. .K. IEEE Committee Report. hierarchical power control at the New Zealand link 99 . 2004.: ‘ Flexible Power Transmission The HVDC Options’ John Wiley & Sons Ltd. pp.521-523. 1981). pp. 1998.: ‘ High Voltage Direct Current Transmission’ 2nd edition. and Tishinski.M. power flow reversal [67]. J. Jos Arrillaga. Oyama.. Guangdong. The . Yamashita.02.. PAS-100.D. pp. Jos Arrillaga.217. Turner. Kristmundsson. G. 1998. pp. pp. DC line fault [81].: ‘ High Voltage Direct Current Transmission’ 2nd Edition. and Arnold.: ‘ High Voltage Direct Current Transmission’ 2nd Edition. China (Chinese) [83]. Sun Caixin. and Chen. Prabha. Guangzhou. pp. S. Institution of Electrical Engineers. 1980. AC system unsymmetrical faults [77]. (4).. J. possible locations of internal AC-DC short-circuit faults in typical 12-pulse thyristor converter [72]. and Carroll. Zhang Zhijin. 1998. June 1982. August .X.M. Zehui.534-535..: ‘ High Voltage Direct Current Transmission’ 2nd Edition. Guangzhou.1864-70 . Institution of Electrical Engineers..: ‘ Study on Commutation Failure in an HVDC Inverter’ International Conference on Power System Technology. Kundur.. China (Chinese) [79]. X. Arrillaga. pp.P. G.P. Jos Arrillaga. Apparatus and Systems. pp. I.S.: ‘ High Voltage Direct Current Transmission’ 2nd Edition.: ‘ Recovery from temporary h. Institution of Electrical Engineers. line faults’ Trans.v. K.121-128 [74]. Honda.: ‘ Present Situation and Prospect of Research on Flashover Characteristics of Polluted Insulators’ Chongqing . J. December 1996. 1998. Zou. C. GZ .208. Li. Chongqing.215 converter DC power following a threephase fault at the inverter end [76].: ‘ Analysis on the Tian-Guang DC line protection against high-impedance ground faults’ CSG EHV Transmission Company.: ‘ Analysis of and advice on setting adaptation for the station power system of Gui-Guang HVDC project’ CSG EHV Power Transmission Company. Chen.59-85 . Jos Arrillaga. 1998. pp. 1990. [80]. Bureau.. The .[71]. pp. M.1363-1368 100 . Bureau.216. Voltage Direct-Current Transmission. University. D.c. Ohshima. CIGRE WG 14. 1998. The . Liu.: ‘ effect of AC system frequency The spectrum on commutation failure in HVDC inverters’ IEEE Transactions on Power . M. staged AC fault of the New Zealand HVDC hybrid link [78]. (169). The .C. IEEE. Dong and Wu..: ‘ Power System Stability and Control’ Chapter 10 High. 5(2). Guangdong. China (Chinese) [82]. Jiang Xingliang. Kojima..: ‘ Commutation failure in HVDC transmission systems due to AC system faults’ Electra.: ‘ Life performance of zinc oxide elements under DC voltage’ IEEE Transactions on Power .d. pp. Delivery.503-506 [75]. M. Zheng. Heffernan. pp. Institution of Electrical Engineers. M. [73]. GZ . High Voltage Division. ‘ Submarine Cable Link – The Baltic Cable HVDC Sweden/Germany’ ABB Power Technologies AB. Siemens AG. SU Zhiyi. of Electrical Engineers. Ludvika. the environmental effects of overhead transmission lines 101 . LI Hua. Ludvika. Engineering. RAO Hong. 1998. [92]. Jos Arrillaga. Sweden . Taizo.187.: ‘ High Voltage Direct Current Transmission’ The Institution .: ‘ Simulation study on lightning-strike endurance of HVDC transmission line towers’ College of Electrical and Electronic . Connection [93]. Inc.184. cross section of the double-armoured DC cable [96]. Huazhong University of Science and Technology.: ‘ High Voltage Direct Current Transmission’ 2nd Edition. 1998. Jos Arrillaga. pp. Wuhan.188.584-592 [86].233. 1998. pp. Guangdong. pp. 1998. LI Xiaolin. Jos Arrillaga. of Electrical Engineers. Guangzhou.. . ZHOU Jun.188.266. maximum power per cable and operating voltage of realised HVDC submarine mass-impregnated cables [95]. pp. pp. the choice of conductors of overhead lines [89]. China (Chinese) [91]. Jos Arrillaga.: ‘ High Voltage Direct Current Transmission’ 2nd Edition.[84]. line current and voltage recorded at the inverter during missing pulse condition in a rectifier bridge [85]. ‘ NorNed HVDC transmission link –the longest underwater high-voltage The cable in the world’ ABB Power Technologies AB. LUO Bing. lines’ IEEE Transaction on Power Apparatus and System. pp. Research Center. 1970. 1998. Hasegawa. China and China Electric Power Research Institute. [88]. Sweden [94].: ‘ High Voltage Direct Current Transmission’ The Institution . Grid System-HVDC. Beijing. The . Edward W.: ‘ High Voltage Direct Current Transmission’ 2nd Edition. Power Transmission and Distribution.: ‘ Construction and operation experience of large capacity DC transmission system in Japan’ The Kansai Electric Power CO.229-230. Kimbark: ‘ Transient overvoltages caused by monopolar ground fault on bipolar d. pp. . The . . Jos Arrillaga. deblocking with full rectifier voltage against an open inverter end [87].: ‘ High Voltage Direct Current Transmission’ The Institution . of Electrical Engineers. Hubei. Jos Arrillaga. YE Huisheng. Institution of Electrical Engineers. .: ‘ flashover The characteristic of DC composite insulators in un-uniform pollution’ CSG Technology . HE Junjia. Institution of Electrical Engineers.: ‘ High Voltage Direct Current Transmission – Proven Technology for Power Exchange’ Paper 30 .c. pp. China (Chinese) [90]. [99]. C. S. Fiegl. and Kolbeck. K. CUI Xiuyu. 2003. Asia and Pacific. electric field of .: ‘ HVDC Power Transmission Systems Technology and System Interactions’ pp. electrodes . Padiyar.. . Fiegl.. and Distribution Conference and Exhibition. London.200-206 . 2005. Ltd. S.: ‘ HVDC transmission and the environment’ Power Engineering Journal. G. [105]. Guangzhou.. 1960. Di zheng. Zou Yun.T.M. pp. Haidian District.: “ High Voltage Direct Current Transmission –Proven Technology for Power Exchange” Paper 36-37 . New York. Hingorani. pp.: ‘ Analysis of the impact of ground DC on AC transformer’ .[97].114-115 . Wan Da. B. Adamson. China (Chinese) [107]. 1971. and Kolbeck. (Chinese) [101].G. [103]. Siemens AG Power Transmission and Distribution.: ‘ Analysis on influence of ground electrode current in HVDC on AC power network’ China Electric Power Research . China (Chinese) 102 . WANG Mingxin and ZHANG Qiang. Schmidt. Institute.205. Kimbark.: ‘ Direct current transmission’ Wiley Interscience. Guangdong Electric Power Dispatching and Communication Co. monopolar and bipolar 450 kV overhead lines [100]. IEEE Transaction on Power Delivery. pp.: ‘ HVDC transmission and the environment’ Power Engineering Journal.. [98].R. Xicheng District. Beijing. N. E.: ‘ High voltage direct current power transmission’ Garrawy Ltd.. Schmidt..208.W.: ‘ analysis and handling of the impact of The ground current in DC transmission on power grid equipment’ IEEE/PES Transmission . pp. October 1996. C.: ‘ Electromagnetic environment problem brought about by ±500kV DC transmission project in China’ China Electric Power Research Institute .: ‘ heating around the ground electrode of Soil an HVDC system by interaction of electrical thermal and electroosmotic phenomena’ . October 1996. Dalian. J.. China and State Grid Corporation of China. [102]. [106].443-445 [104]. Villas. and Portela. Guangdong. B.E. Beijing. China. 18(3). Wu. G. GuiFang.
https://www.scribd.com/document/60834690/52064169-HVDC-Master-Thesis
CC-MAIN-2017-09
en
refinedweb
How Companies are Reinventing their Idea to Launch Methodologies - Cordelia Sherman - 1 years ago - Views: Transcription 1 How Companies are Reinventing their Idea to Launch Methodologies By Robert G. Cooper This article appeared in Research Technology Management March April 2009, Vol 52, No 2, pp Stage Gate International Stage Gate is a registered trademark of Stage Gate Inc. Innovation Performance Framework is a trademark of Stage Gate Inc. 2 HOW COMPANIES ARE REINVENTING THEIR IDEA TO LAUNCH METHODOLOGIES Next-generation Stage-Gate systems are proving more fl exible, adaptive and scalable. Robert G. Cooper OVERVIEW: The Stage-Gate system introduced in the mid-1980s has helped many fi rms drive new products to market. But leaders have adjusted and modifi ed the original model considerably and built in many new best practices. They have made the system more fl exible, adaptive and scalable; they have built in better governance; integrated it with portfolio management; incorporated accountability and continuous improvement; automated the system; bolted on a proactive front-end or discovery stage; and fi nally, adapted the system to include open innovation. All of these improvements have rendered the system faster, more focused, more agile and leaner, and far better suited to today s rapid pace of product innovation. KEY CONCEPTS: Stage-Gate, next-generation Stage- Gate, idea-to-launch process, best practices. The Stage-Gate process has been widely adopted as a guide to drive new products to market ( 1,2 ). The original Stage-Gate model, introduced in the mid-1980s, was based on research that focused on what successful project teams and businesses did when they developed winning new products. Using the analogy of North American Robert Cooper is emeritus professor at the DeGroote School of Business, McMaster University, Hamilton, Ontario, Canada. He is also ISBM Distinguished Research Scholar at Penn State s Smeal College of Business Administration and president of the Product Development Institute. fi eld of innovation management, he is a Fellow of the Product Development Management Association, and creator of the Stage-Gate new product process used by many fi rms. He received his Ph.D. in business administration from the University of Western Ontario. ; football, Stage-Gate is the playbook that the team uses to drive the ball down the field to a touchdown; the stages are the plays, and the gates are the huddles. The typical Stage-Gate system is shown in Figure 1 for major product development projects. With so many companies using the system, invariably some firms began to develop derivatives and improved approaches; indeed, many leading firms have built in dozens of new best practices, so that today s stage-andgate processes are a far cry from the original model of 20 years ago. Here are some of the ways that companies have modified and improved their idea-to-launch methods as they have evolved to the next-generation Stage- Gate system ( 3 ). Focus on Effective Governance Making the gates work Perhaps the greatest challenge that users of a stageand-gate process face is making the gates work. As go the gates, so goes the process, declared one executive, noting that the gates in her company s process were ineffectual. In a robust gating system, poor projects are spotted early and killed; projects in trouble are also detected and sent back for rework or redirect put back on course. But as quality control check points, the gates aren t effective in too many companies; gates are rated one of the weakest areas in product development with only 33 percent of firms having tough, rigorous gates throughout the idea-to-launch process ( 4 ). Gates with teeth A recurring problem is that gates are either non-existent or lack teeth. The result is that, once underway, projects are rarely killed at gates. Rather, as one senior manager exclaimed, Projects are like express trains, speeding down the track, slowing down at the occasional station [gate], but never stopping until they reach their ultimate destination, the marketplace. March April 3 Figure 1. Many fi rms use a Stage-Gate system to drive development projects to commercialization. Shown here is a fi ve-stage, fi ve-gate process typically used for major new product projects. Such a model provides a guide to project teams, suggesting best-practice activities within stages, and defi ning essential information or deliverables for each gate. Gatekeepers meet at gates to make the vital Go/Kill and resource commitment decisions. Example: In one major high-tech communications equipment manufacturer, once a project passes Gate 1 (the idea screen), it is placed into the business s product roadmap. This means that the estimated sales and profits from the new project are now integrated into the business unit s financial forecast and plans. Once into the financial plan of the business, of course, the project is locked-in: there is no way that the project can be removed from the roadmap or killed. In effect, all gates after Gate 1 are merely rubber stamps. Management in this firm missed the point that the idea-to-launch process is a funnel, not a tunnel, and that gates after Gate 1 are also Go/Kill points; this should not be a one-gate, five-stage process! In too many firms, like this example, after the initial Go decision, the gates amount to little more than a project update meeting or a milestone check-point. As one executive declared: We never kill projects, we just wound them! Thus, instead of the well-defined funnel that is so often used to shape the new product process, one ends up with a tunnel where everything that enters comes out the other end, good projects and bad. Yet management is deluded into believing they have a functioning Stage-Gate process. In still other companies, the gate review meeting is held and a Go decision is made, but resources are not committed. Somehow management fails to understand that approval decisions are rather meaningless unless a check is cut and the project leader and team leave the gate meeting with the resources they need to advance their project. Instead, projects are approved, but resources are not a hollow Go decision, and one that usually leads to too many projects in the pipeline and projects taking forever to get to market. If gates without teeth and hollow gates describe your company s gates, then it s time for a rethink. Gates are not merely project review meetings or milestone checks! Rather, they are Go/Kill and resource allocation meetings: Gates are where senior management meets to decide whether the company should continue to invest in the project based on latest information, or to cut one s losses and bail out of a bad project. And gates are a resource commitment meeting where, in the event of a Go decision, the project leader and team receive a commitment of resources to move their project forward. Example ( 5 ): Cooper Standard Automotive (no relation to the author) converted its gates into decision factories. Previously management had failed to make many Kill decisions, with most gates merely automatic Go s. The result was a gridlocked pipeline with over 50 major projects, an almost-infinite time-to-market, and no or few launches. By toughening the gate meetings making them rigorous senior management reviews with solid data available and forcing more kills, management 48 Research. Technology Management 4 dramatically reduced the number of projects passing each gate. The result was a reduction today to eight major, high-value projects, time-to-market down to 1.6 years, and five major launches annually. Leaner and simpler gates Most companies new product processes suffer from far too much paperwork delivered to the gatekeepers at each gate. Deliverables overkill is often the result of a project team that, because they are not certain what information is required, prepare an overly comprehensive report, and in so doing, attempt to bullet-proof themselves. The fault can also be the design of the company s idea-to-launch system itself, which often includes elaborate templates that must be filled out for every gate. While some of the information that gating systems demand may be interesting, often much of it is not essential to the gate decision. Detailed explanations of how the market research was done or sketches of what the new molecule looks like, add no value to the decision. Restrict the deliverables and their templates to the essential information needed to make the gate decisions: Example ( 6 ): Lean gates is a positive feature of Johnson & Johnson Ethicon Division s Stage-Gate process. Previously, the gate deliverables package was a 30-to- 90-page presentation, and a lot of work for any project team to prepare. Today, it s down to the bare essentials: one page with three back-up slides. The expectation is that gatekeepers arrive at the gate meeting knowing the project, having read and understood the deliverables package prepared by the project team (the gate meeting is not an educational session to bring a poorly prepared gatekeeping group up to speed). Senior management is simply informed at the gate review about the risks and the commitments required. Finally, there is a standardized presentation format. The result is that weeks of preparation work have been saved. Example ( 7 ): One of the compelling features of Procter & Gamble s latest release of SIMPL (its Stage-Gate process) is much leaner gates a simpler SIMPL. Previously, project teams had decided which deliverables they would prepare for gatekeepers. Desirous of showcasing their projects and themselves, the resulting deliverables package was often very impressive but far too voluminous. As one astute observer remarked, it was the corporate equivalent of publish or perish. The deliverables package included up to a dozen detailed attachments, plus the main report. In the new model, the approach is to view the gates from the decision-makers perspective. In short, what do the gatekeepers need to know in order to make the Go/Kill decision? The gatekeepers requests boiled down to three key items: The greatest challenge stage-andgate users face is making the gates work. Have you done what you should have are the data presented based on solid work? What are the risks in moving forward? What are you asking for? Now the main gate report is no more than two pages, and there are four required attachments, most kept to a limit of one page. The emphasis in lean gates is on making expectations clear to project teams and leaders that they are not required to prepare an information dump for the gatekeepers. The principles are that: Information has a value only to the extent it improves a decision; and The deliverables package should provide the decisionmakers only that information they need to make an effective and timely decision. Page restrictions, templates with text and field limits, and solid guides are the answer favored by progressive firms. Who are the gatekeepers? Many companies also have trouble defining who the gatekeepers are. Every senior manager feels he or she should be a gatekeeper, and so the result is too many gatekeepers more of a herd than a tightly-defined decision group and a lack of crisp Go/Kill decisions. Defining governance roles and responsibilities is an important facet of Stage- Gate. At gates, the rule is simple: The gatekeepers are the senior people in the business who own the resources required by the project leader and team to move forward. For major new product projects, the gatekeepers should be a cross-functional senior group the heads of Technical, Marketing, Sales, Operations and Finance (as opposed to just one function, such as Marketing or R&D, making the call). Because resources are required from March April 5 many departments, the gatekeeper group must involve executives from these resource-providing areas so that alignment is achieved and the necessary resources are in place. Besides, a multi-faceted view of the project leads to better decisions than a single-functional view. And because senior people s time is limited, consider beginning with mid-management at Gate 1, and for major projects, ending up with the leadership team of the business at Gates 3, 4 and 5 in Figure 1. For smaller, lowerrisk projects, a lower-level gatekeeping group and fewer gates usually suffices. Fostering the right behavior A recurring complaint concerns the behavior of senior management when in the role of gatekeepers. Some of the bad gatekeeping behaviors consistently seen include: Executive pet projects receiving special treatment and by-passing the gates (perhaps because no one had the courage to stand up to the wishes of a senior person a case of the emperor wearing no clothes ). Gate meetings cancelled at the last minute because the gatekeepers are unavailable (yet they complain the loudest when projects miss milestones). Gate meetings held, but decisions not made and resources not committed. Key gatekeepers missing the meeting and not delegating their authority to anyone. Gate meeting decisions by executive edict the assumption that one person knows all. Using personal Go/Kill criteria (rather than robust and transparent decision-making criteria). Gatekeepers are members of a decision-making team. And decision teams need rules of engagement. Senior people often implement Stage-Gate in the naïve belief that it will shake up the troops and lead to much different The new approach is to view gates from the decision-makers perspective. behavior in the ranks. But quite the opposite is true: the greatest change in behavior takes place at the top! The leadership team of the business must take a close look at their own behaviors often far from ideal and then craft a set of gatekeeper rules of engagement and commit to live by these. Table 1 lists a typical set. Portfolio Management Built In Portfolio management should dovetail with your Stage- Gate system ( 8 ). Both decision processes are designed to make Go/Kill and resource allocation decisions, and hence ideally should be integrated into a unified system. There are subtle differences between portfolio management and Stage-Gate, however: Gates are an evaluation of individual projects in depth and one-at-a-time. Gatekeepers meet to make Go/Kill and resource allocation decisions on an on-going basis (in real time) and from beginning to end of a project (Gate 1 to Gate 5 in Figure 1). By contrast, portfolio reviews are more holistic, looking at the entire set of projects, but obviously less in-depth per project than gates do. Portfolio reviews two to four times per year are the norm ( 9 ). They deal with such issues as achieving the right mix and balance Table 1. Rules of the Game: Sample Set from a Major Flooring-Products Manufacturer. All projects must pass through the gates. There is no special treatment or bypassing of gates for pet projects. Once a gate meeting date is agreed (calendars checked), gatekeepers must make every effort to be there. If the team cannot provide deliverables in time for the scheduled gate, the gate may be postponed and rescheduled, but timely advance notice must be given. If a gatekeeper cannot attend, s/he can send a designate who is empowered to vote and act on behalf of that gatekeeper (including committing resources). Gatekeepers can attend electronically (phone or video conference call). Pre-gate decision meetings should be avoided by gatekeepers don t prejudge the project. There will be new data presented and a Q&A at the gate meeting. Gatekeepers should base their decisions on the information presented and use the scoring criteria. Decisions must be based on facts, not emotion and gut feel! A decision must be made the day of the gate meeting (Go/Kill/Hold/Recycle). The project team must be informed of the decision face to face, and reasons why. When resource commitments are made by gatekeepers (people, time or money), every effort must be made to ensure that these commitments are kept. Gatekeepers must accept and agree to abide by these Rules of the Game. 50 Research. Technology Management 6 of projects in the portfolio, project prioritization, and whether the portfolio is aligned with the business s strategy. Besides relying on traditional financial criteria, here are methods that companies use to improve portfolio management within Stage-Gate ( 10 ); 1. Strategic buckets to achieve the right balance and mix of projects The business s product innovation and technology strategy drives the decision process and helps to decide resource allocation and strategic buckets. Using the strategic buckets method, senior management makes a priori strategic choices about how they wish to spend their R&D resources. The method is based Table 2. A Typical Scorecard for Gate 3, Go to Development: An Effective Tool for Rating Projects ( 10 ). Factor 1: Strategic Fit and Importance Alignment of project with our business s strategy. Importance of project to the strategy. Impact on the business. Factor 2: Product and Competitive Advantage Product delivers unique customer or user benefits. Product offers customer/user excellent value for money (compelling value proposition). Differentiated product in eyes of customer/user. Positive customer/user feedback on product concept (concept test results). Factor 3: Market Attractiveness Market size. Market growth and future potential. Margins earned by players in this market. Competitiveness - how tough and intense competition is (negative). Factor 4: Core Competencies Leverage Project leverages our core competencies and strengths in: technology production/operations marketing distribution/sales force. Factor 5: Technical Feasibility Size of technical gap (straightforward to do). Technical complexity (few barriers, solution envisioned). Familiarity of technology to our business. Technical results to date (proof of concept). Factor 6: Financial Reward versus Risk Size of financial opportunity. Financial return (NPV, ECV, IRR). Productivity Index (PI). Certainty of financial estimates. Level of risk and ability to address risks. Projects are scored by the gatekeepers (senior management) at the gate meeting, using these six factors on a scorecard (0 10 scales). The scores are tallied and displayed electronically for discussion. The Project Attractiveness Score is the weighted or unweighted addition of the six factor scores (averaged across gatekeepers), and taken out of 100. A score of 60/100 is usually required for a Go decision. on the premise that strategy becomes real when you start spending money. So make those spending decisions! Most often, resource splits are made across project types (new products, improvements, cost reductions, technology developments, etc.), by market or business area, by technology (base, pacing, embryonic) or by geography. Once these splits are decided each year, projects and resources are tracked. Pie charts reveal the actual split in resource (year to date) versus the target split based on the strategic choices made. These pie charts are reviewed at portfolio reviews to ensure that resource allocation does indeed mirror the strategic priorities of the business. The method has proven to be an effective way to ensure that the right balance and mix of projects is achieved in the development pipeline that the pipeline is not overloaded with small, short-term and low-risk projects. 2. Scorecards to make better Go/Kill and prioritization decisions Scorecards are based on the premise that qualitative criteria or factors are often better predictors of success than financial projections. The one thing we are sure of in product development is that the numbers are always wrong, especially for more innovative and step-out projects. In use, management develops a list of about 6 8 key criteria, known predictors of success (Table 2). Projects are then scored on these criteria right at the gate meeting by senior management. The total score becomes a key input into the Go/Kill gate decision and, along with other factors, is used to rank or prioritize projects at portfolio review meetings. A number of firms (for example, divisions at J&J, P&G, Emerson Electric and ITT Industries) use scorecards for early-stage screening (for Gates 1, 2 and 3 in Figure 1). Note that different scorecards and criteria are used for different types of projects. 3. Success criteria at gates Another project selection method for use at gates, and one employed with considerable success at firms such as P&G, is the use of success criteria : Specific success criteria for each gate relevant to that stage are defined for each project. Examples include: expected profitability, launch date, expected sales, and even interim metrics, such as test results expected in a subsequent stage. These criteria, and targets to be achieved on them, are agreed to by the project team and management at each gate. These success criteria are then used to evaluate the project at successive gates ( 11 ). For example, if the project s estimates fail on any agreed-to criteria at successive gates, the project could be killed. March April 7 4. The Productivity Index helps prioritize projects and allocate resources This is a powerful extension of the NPV (net present value) method and is most useful at portfolio reviews to prioritize projects when resources are constrained. The Productivity Index is a financial approach based on the theory of constraints ( 12 ): in order to maximize the value of your portfolio subject to a constraining resource, take the factor that you are trying to maximize e.g., the NPV and divide it by your constraining resource, for example the person-days (or costs) required to complete the project: Productivity Index PI = Forecasted NPV = Person-Days to Complete Project Forecasted NPV Cost to Complete Project Then rank your projects according to this index until you run out of resources. Those projects at the top of the list are Go projects, are resourced, and are accelerated to market. Those projects beyond the resource limit are placed on hold or killed. The method is designed to maximize the value of your development portfolio while staying within your resource limits. Make the System Lean, Adaptive, Flexible and Scalable A leaner process Over time, most companies product development processes have become too bulky, cumbersome and bureaucratic. Thus, smart companies have borrowed the concept of value stream analysis from lean manufacturing, and have applied it to their new product process in order to remove waste and inefficiency. A value stream is simply the connection of all the process steps with the goal of maximizing customer value ( 13 ). In NPD, a value stream represents the linkage of all value-added and non-value-added activities associated with the creation of a new product. The value stream map is used to portray the value stream or product development process, and helps to identify both value-added and non-value-added activities; hence, it is a useful tool for improving your process ( 14 ). or Most companies product development processes have become too bulky and bureaucratic. In employing value stream analysis, a task force creates a map of the value stream your current idea-to-launch process for typical development projects in your business. All the stages, decision points and key activities are mapped out, with time ranges for each activity and decision indicated. Once the value stream is mapped, the task force lowers the microscope on the process and dissects it. All procedures, activities and tasks, required deliverables, documents and templates, committees and decision processes are examined, looking for problems, time-wasters and non-value-added activities. Once these are spotted, the task force works to remove them. Example : In one B2B company, field trials were found to be a huge time waster, taking as much as 18 months and often having to be repeated because they failed. A value stream analysis revealed this unacceptable situation, and a subsequent root cause analysis showed that there were huge delays largely because field trials could be done only when the customer undertook a scheduled plant shut-down in this case a paper machine, costing in excess of $100 million and further there was little incentive for the customer to agree to a field trial, especially one that did not work. The lack of early involvement of technical people (the first phases of the project were handled largely by sales and business development people) meant that technical issues were often not understood until too late in the project and after commitments had been made to the customer. Solutions were sought and included: first field trials on a pilot paper machine (several universities rented time on these in their pulp and paper institutes); involving technical people from the beginning of the project; and offering the customer incentives such as limited exclusivity and preferential pricing. Value stream analysis can result in leaner gates, a topic mentioned earlier, but it goes well beyond gates and looks for efficiency improvements in all facets of the process as noted in the example. The result of a solid value stream analysis invariably is a much more streamlined, less bulky idea-to-launch system. An adaptable, agile process Stage-Gate has also become a much more adaptable innovation process, one that adjusts to changing conditions and fluid, unstable information. The concept of spiral or agile development is built in, allowing project teams to move rapidly to a final product design through a series of build test feedback and revise iterations ( 15 ). Spiral development bridges the gap between the 52 Research. Technology Management 8 need for sharp, early and fact-based product definition before development begins versus the need to be flexible and to adjust the product s design to new information and fluid market conditions as development proceeds. Spiral development allows developers to continue to incorporate valuable customer feedback into the design even after the product definition is locked-in before going into Stage 3. Spiral development also deals with the need to get mock-ups in front of customers earlier in the process (in Stage 2 rather than waiting until Stage 3). A fl exible process Stage-Gate is a flexible guide that suggests best practices, recommended activities and likely deliverables. No activity or deliverable is mandatory. The project team has considerable discretion over which activities they execute and which they choose not to do. The project team presents its proposed go-forward plan what needs to be done to make the project a success at each gate. At these gates, the gatekeepers commit the necessary resources, and in so doing, approve the go-forward plan. But note that it is the project team s plan, not simply a mechanistic implementation of a standardized process. Another facet of flexibility is simultaneous execution. Here, key activities and even entire stages overlap, not waiting for perfect information before moving forward. For example, it is acceptable to move activities from one stage to an earlier one and, in effect, overlap stages. Example: At Toyota, the rule is to synchronize processes for simultaneous execution ( 16 ). Truly effective concurrent engineering requires that each subsequent function maximizes the utility of the stable information available from the previous function as it becomes available. That is, development teams must do the most they can with only that portion of the design data that is not likely to change. Each function s processes are designed to move forward simultaneously, building on stable data as they become available. Simultaneous execution usually adds risk to a project. For example, the decision to purchase production equipment before field trials are completed, thereby avoiding a long order lead-time, may be a good application of simultaneous execution. But there is risk too that the project may be cancelled after dedicated production equipment is purchased. Thus, the decision to overlap activities and stages is a calculated risk, but it must be calculated. That is, the cost of delay must be weighed against the cost and probability of being wrong. Scaled to suit different risks Stage-Gate has become a scalable process, scaled to suit very different types and risk levels of projects from very risky and complex platform developments through to lower-risk extensions and modifications, and even to handle simple sales force requests ( 17 ). When first implemented, there was only one version of Stage-Gate in a company, typically a five-stage, fivegate model. But some projects were too small to put through the full five-stage model, and so circumvented it. The problem was that these smaller projects line extensions, modifications, sales-force requests while individually small, collectively consumed the bulk of resources. Thus, a contradictory situation existed whereby projects that represented the majority of development resources were outside the system. Each of these projects big and small has risk, consumes resources, and thus must be managed, but not all need to go through the full five-stage process. The process has thus morphed into multiple versions to fit business needs and to accelerate projects. Figure 2 shows some examples: Stage-Gate XPress for projects of moderate risk, such as improvements, modifications and extensions; and Stage-Gate Lite for very small projects, such as simple customer requests. Multiple versions for platform/technology development projects There is no longer just Stage-Gate for new product projects. Other types of projects platform developments, process developments, or exploratory research projects compete for the same resources, need to be managed, and thus also merit their own version of a stage-and-gate process. For example, ExxonMobil Chemical has designed a three-stage, three-gate version of its Stage-Gate process to handle upstream research projects ( 18 ); while numerous other organizations have adopted a four-stage, four-gate system to handle fundamental research, technology development or platform projects (more on this topic later). Add a Robust Post-Launch Review ( 19 ): H aving performance metrics in place that measure how well a specific new product project performed. For example, were the product s profits on target? Was it launched on time? Establishing team accountability for results, with all members of the project team fully responsible for March April 9 performance results when measured against these metrics. Building-in learning and improvement, namely, when the project team misses the target, focus on fixing the cause rather than putting a band-aid on the symptom, or worse yet, punishing the team. Example ( 20 ): At Emerson Electric, the traditional postlaunch reviews were absent in most divisions new product efforts. But in the new release of Emerson s idea-to-launch process (NPD 2.0), a post-launch review is very evident. Here, project teams are held accountable for key financial and time metrics that were established and agreed to much earlier in the project. When gaps or deficiencies between forecasts and reality are identified, root causes for these variances are sought and continuous improvement takes place. Emerson benefits in three way. First, estimates of sales, profits and time-to-market are much more realistic now that project teams are held accountable for their attainment. Second, with clear objectives, the project team can focus and work diligently to achieve them expectations are clear. Finally, if the team misses the target, causes are sought and improvements to the process are made so as to prevent a recurrence of the cause closed loop feedback and learning. It works much the same way at Procter & Gamble ( 21 ): Winning in the marketplace is the goal. In many firms, too much emphasis is on getting through the process, that Next-generation Stage-Gate systems build-in a rigorous post-launch review. is, getting one s project approved or preparing deliverables for the next gate. In the past, P&G was no different. By contrast, this principle emphasizes winning in the marketplace as the goal, not merely going through the process. Specific success criteria for each project are defined and agreed to by the project team and management at the gates; these success criteria are then used to evaluate the project at the post-launch review, and the project team is held accountable for achieving results when measured against these success criteria. Figure 2. The next-generation Stage-Gate is scalable. Use Stage-Gate Full, XPress and Lite for different project types. Major new product projects go through the full fi ve-stage process (top); moderate risk projects (extensions, modifi cations and improvements) use the XPress version (middle); and sales-force and marketing requests (very minor changes) use the Lite process (bottom). 54 Research. Technology Management 10. Example (22): EXFO Engineering boasts a solid Stage- Gate system coupled with a strong portfolio management process. EXFO has added an additional gate in its process Gate 5 whose purpose is to ensure the proper closing of the project (Launch is Gate 4.1 in this company s numbering scheme). At this final gate meeting, management ascertains that all the outstanding issues (manufacturing, quality, sales ramp-up, and project) have been addressed and closed. Feedback is presented based on a survey of initial customers; the project postmortem is reviewed, which highlights the project s good and bad points; and the recommendations for improvement from the team are examined. Typically, Gate 5 occurs about three months after initial product delivery to customers. Additionally, sales performance and profitability (ROI) of the project are monitored for the first two years of the product s life. Build-In a Discovery Stage To Feed Innovation Funnel Feeding the innovation funnel with a steady stream of a new product ideas and opportunities has become the quest in many companies as they search for the next blockbuster new product. Traditionally, the idea has been shown as a light-bulb at the beginning of the new product process, with ideas assumed to happen magically or serendipitously. No longer. Now, progressive firms such as P&G, Swarovski AG, ITT Industries, and Emerson Electric, have replaced the light-bulb with a new and proactive Stage 0 called Discovery (see Figure 1). Discovery encompasses some of the following activities: Fundamental research and technology development Organizations like ExxonMobil Chemical, Timex, Donaldson, and Sandia Labs recognize that technology development projects where the deliverable is new knowledge, a new technical capability, or even a technology platform are quite different in terms of risk, uncertainty, scope, and cost from the typical new product project found in the Stage-Gate model of Figure 1. Moreover, these technology development projects are often the platform that spawns a number of new product (or new process) development projects and hence acts as a trigger or feed to the new product process. Thus, such organizations have modified the front end of their Stage-Gate process and in effect bolted on a technology development process that then feeds the new product process, as shown in Figure 3 ( 23 ). The Stage-Gate TD process is technologically driven and features quite different stages with more opportunity for experimentation and iterating back, and the system relies on less financial and more strategic Go/Kill criteria at the gates. Other Discovery stage elements In addition to technology development projects, progressive firms have redefined Discovery to include many other ideation activities, including: Voice-of-customer methods, such as ethnographic research ( 24 ), site visits with depth interviews, customer focus groups to identify customer points of pain, and lead-user analysis ( 25 ). Strategically driven ideation, including crafting a product innovation strategy for the business in order to delineate the search fields for ideation, exploiting disruptive technologies ( 26 ), peripheral visioning ( 27 ), competitive analysis, and patent mining. Stimulating internal ideation, such as installing elaborate systems to capture, incubate and enhance internal ideas from employees, much as Swarovski has done ( 28 ). Open innovation as a source of external ideas, outlined next. Make Your Process an Open System Stage-Gate now accommodates open innovation, handling the flow of ideas, IP, technology, and even fully developed products into the company from external sources, and also the flow outward ( 29 ). Kimberly Clark, Air Products & Chemicals, P&G, and others have modified their Stage- Gate processes built in the necessary flexibility capability and systems in order to enable this network of partners, alliances and vendors from idea generation right through to launch. For example, P&G s SIMPL 3.0 version of its system is designed to handle externally-derived ideas, IP, technologies, and even fully developed products. Innovation via partnering with external firms and people has been around for decades joint ventures, venture groups, licensing arrangements and even venture nurturing. Open innovation is simply a broader concept that includes not only these traditional partnering models, but all types of collaborative or partnering activities, and with a wider range of partners than in the past. March April 11 Figure 3. The Technology Development Process handles fundamental science, technology development, and technology platform projects. It typically spawns multiple commercial projects which feed the new product process at Gates 1, 2 or 3. Note that the TD Process (top) is very fl exible: it is iterative and features loops within stages and potentially to previous stages. Gates rely less on fi nancial criteria and more on strategic criteria ( 23 ). In the traditional or closed innovation model, inputs come from internal and some external sources customer inputs, marketing ideas, marketplace information, or strategic planning inputs. Then, the R&D organization proceeds with the task of inventing, evolving and perfecting technologies for further development, immediately or at a later date ( 30 ). By contrast, in open innovation, companies look insideout and outside-in, across all three aspects of the innovation process, including ideation, development and commercialization. In doing so, much more value is created and realized throughout the process (see Figure 1): Discovery stage: Here, not only do companies look externally for customer problems to be solved or unmet needs to be satisfied, but to inventors, start-ups, small entrepreneurial firms, partners, and other sources of available technologies that can be used as a basis for internal or joint development. Development stage: Established companies seek help in solving technology problems from scientists outside the corporation, or they acquire external innovations that have already been productized. They also out-license internallydeveloped intellectual property that is not being utilized. Launch or commercialization stage: Companies sell or out-license already-developed products where more value can be realized elsewhere; or they in-license they acquire already commercialized products that provide immediate sources of new growth for the company. Automate Your Stage-Gate System Progressive companies recognize that automation greatly increases the effectiveness of their new product processes. With automation, everyone from project leaders to executives finds the process much easier to use, thereby enhancing buy-in. Another benefit is information management: the key participants have access to effective displays of relevant information what they need to advance the project, cooperate globally with other team members on vital tasks, help make the Go/Kill decision, or stay on top of a portfolio of projects. Examples of certified automation software for Stage-Gate are found in Ref. 31. The Path Forward This article has outlined new approaches that firms have built into their next-generation Stage-Gate systems. If your idea-to-launch system is more than five years old, if it s burdened with too much make work and bureaucracy, or if it s getting a bit creaky and cumbersome, the time is ripe for a serious overhaul. Design 56 Research. Technology Management 12 your innovation process for today s innovation requirements a faster, leaner, more agile, and more focused system. Reinvent your process to build-in the latest thinking, approaches and methods outlined above and move to the next-generation Stage-Gate system. References and Notes 1. Stage-Gate is a registered trademark of the Product Development Institute Inc ( ), and the term was coined by the author. 2. PDMA and APQC studies show that about 70 percent of product developers in North America use a Stage-Gate or similar system. See: The PDMA Foundation s 2004 Comparative Performance Assessment Study (CPAS), Product Development & Management Association, Chicago, IL. Also: Cooper, R.G., S.J. Edgett and E.J. Kleinschmidt, New Product Development Best Practices Study: What Distinguishes the Top Performers. Houston: APQC (American Productivity & Quality Center), Parts of this article are based on previous publications by the author. See: Cooper, R.G. and S.J. Edgett, Lean, Rapid and Profi table New Product Development, Product Development Institute, www. stage-gate.com, 2005; Cooper, R.G., The Stage-Gate Idea-to-Launch Process Update, What s New and NexGen Systems, J. Product Innovation Managements 25, 3, May 2008, pp ; and: Cooper, R.G., NexGen Stage-Gate What Leading Companies Are Doing to Re-Invent Their NPD Processes, PDMA Visions, XXXII, No 3, Sept. 2008, pp See APQC study ref. 2; also: Cooper, R.G., S.J. Edgett and E.J. Kleinschmidt, Benchmarking Best NPD Practices-2: Strategy, Resources and Portfolio Management Practices, Research-Technology Management 47, 3, May-June 2004, pp Osborne, S. Make More and Better Product Decisions For Greater Impact. Proceedings, Product Development and Management Association Annual International Conference, Atlanta, GA, Oct Belair, G. Beyond Gates: Building the Right NPD Organization. Proceedings, First International Stage-Gate Conference, St. Petersburg Beach, FL, Feb Private discussions with M. Mills at P&G; used with permission. 8. Cooper, R.G., S.J. Edgett and E.J. Kleinschmidt. Optimizing the Stage-Gate Process: What Best Practice Companies Do Part II. Research-Technology Management 45, 6, Nov.-Dec , pp Edgett, S. (subject matter expert). Portfolio Management: Optimizing for Success, Houston: APQC (American Productivity & Quality Center), These portfolio tools are explained in: Cooper, R.G. and S.J. Edgett, Ten Ways to Make Better Portfolio and Project Selection Decisions, PDMA Visions, XXX, 3, June 2006, pp ; also Cooper, R.G., S.J. Edgett and E.J. Kleinschmidt, Portfolio Management for New Products, 2 nd edition. New York, NY: Perseus Publishing, Cooper, R.G. and M. Mills. Succeeding at New Products the P&G Way: A Key Element is Using the Innovation Diamond. PDMA Visions, XXIX, 4, Oct. 2005, pp The Productivity Index method is proposed by the Strategic Decisions Group (SDG). For more information, refer to Matheson, D., Matheson, J.E. and Menke, M.M., Making Excellent R&D Decisions, Research- Technology Management, Nov.-Dec. pp , 1994; and Evans, P., Streamlining Formal Portfolio Management, Scrip Magazine, February, Fiore, C. Accelerated Product Development. New York, NY: Productivity Production Press, 2005, p For more information on value stream mapping, plus examples, see: Cooper & Edgett, ref. 3, ch Spiral development is described in Cooper, R.G. and S.J. Edgett. Maximizing Productivity in Product Innovation. Research- Technology Management, March-April 2008, pp Morgan, J. Applying Lean Principles to Product Development. Report from SAE International Society of Mechanical Engineers, Cooper, R.G. Formula for Success. Marketing Management Magazine (American Marketing Association), March-April 2006, pp ; see also Cooper & Edgett ref Cohen, L.Y., P.W. Kamienski and R.L. Espino. Gate System Focuses Industrial Basic Research. Research-Technology Management, July- August 1998, pp See Cooper & Edgett RTM article in ref Ledford, R.D. NPD 2.0, Innovation, St. Louis: Emerson Electric, 2006, p. 2: and NPD 2.0: Raising Emerson s NPD Process to the Next Level, Innovation, St. Louis: Emerson Electric, 2006, pp Cooper and Mills, ref Bull, S. Innovating for Success: How EXFO s NPDS Delivers Winning New Product. Proceedings, First International Stage-Gate Conference, St. Petersburg Beach, FL, Feb Cooper, R.G. Managing Technology Development Projects - Different Than Traditional Development Projects, Research- Technology Management, Nov.-Dec. 2006, pp R.G. Cooper and S.J. Edgett. Ideation for Product Innovation: What Are the Best Sources? PDMA Visions, XXXII, 1, March 2008, pp More on lead user analysis in Von Hippel, E. Democratizing Innovation, MIT Press, Cambridge MA, 2005; and Thomke, S. and E. Von Hippel. Customers As Innovators: A New Way to Create Value, Harvard Business Review, April 2002, pp Christensen, C.M. The Innovator s Dilemma. New York: Harper Collins, Day, G. and P. Shoemaker. Scanning the Periphery. Harvard Business Review, Nov. 2005, pp Erler, H. A Brilliant New Product Idea Generation Program: Swarovski s I-Lab Story, Second International Stage-Gate Conference, Clearwater Beach, FL, Feb Chesbrough, H. Open Innovation: The New Imperative for Creating and Profi ting from Technology. Cambridge, MA: Harvard Business School Press, This section is based on material in Cooper and Edgett, ref. 3, ch Docherty, M. Primer on Open Innovation : Principles and Practice, PDMA Visions, XXX, No. 2, April A number of software products have been certified for use with Stage-Gate. See March April 13 Receive the latest articles, research, and tips on product innovation Visit to join the Stage-Gate International Knowledge Community Connect With Us to Build Your Innovation Capability United States Corporate Head Office Andean, and Entrepreneurship. What is meant by entrepreneurship? The. principles of. >>>> 1. What Is Entrepreneurship? >>>> 1. What Is? What is meant by entrepreneurship? The concept of entrepreneurship was first established in the 1700s, and the meaning has evolved ever since. Many simply equate it with starting one... Six Sigma Black Belts: What Do They Need to Know? This paper was presented during the Journal of Quality Technology Session at the 45th Annual Fall Technical Conference of the Chemical and Process Industries Division and Statistics Division of the American fs viewpoint fs viewpoint 02 15 19 21 27 31 Point of view A deeper dive Competitive intelligence A framework for response How PwC can help Appendix Where have you been all my life? How the financial Starting up. Achieving success with professional business planning Achieving success with professional business planning Contents The New Venture Business Plan Competition 4 Preface 7 Acknowledgements 9 About this manual 11 Part 1: a company - how companies grow 17 building a performance measurement system building a performance measurement system USING DATA TO ACCELERATE SOCIAL IMPACT by Andrew Wolk, Anand Dholakia, and Kelley Kreitz A Root Cause How-to Guide ABOUT THE AUTHORS Andrew Wolk Widely recognized A simple guide to improving services NHS CANCER DIAGNOSTICS HEART LUNG STROKE NHS Improvement First steps towards quality improvement: A simple guide to improving services IMPROVEMENT. PEOPLE. QUALITY. STAFF. DATA. STEPS. LEAN. PATIENTS. A Framework for Success for All Students A Framework for Success for All Students Collected Papers from the Technical Support Team for the Schools for a New Society Initiative and Carnegie Corporation of New York A Framework for Success for All Retaining and Developing High Potential Talent. Promising Practices in Onboarding, Employee Mentoring & Succession Planning Retaining and Developing High Potential Talent Promising Practices in Onboarding, Employee Mentoring & Succession Planning TABLE OF CONTENTS Executive Summary 3 Diversity and Inclusion 4 Building a Talent Basic Marketing Research: Volume 1 Basic Marketing Research: Volume 1 Handbook for Research Professionals Official Training Guide from Qualtrics Scott M. Smith Gerald S. Albaum Copyright 2012, Qualtrics Labs, Inc. ISBN: 978-0-9849328-1-8 Take Action. Focus On Prevention Make A Difference Take Action Reach Out Red Ribbon Your Community Focus On Prevention U.S. DEPARTMENT OF HEALTH AND HUMAN SERVICES Substance Abuse and Mental Health Services Administration EMC ONE: A Journey in Social Media Abstract EMC ONE: A Journey in Social Media Abstract This white paper examines the process of establishing and executing a social media proficiency strategy at EMC Corporation. December 2008 Copyright 2008 EMC Leading Business by Design Why and how business leaders invest in design Leading Business by Design Why and how business leaders invest in design ARE YOU MISSING SOMEONE? Design Council Design Council champions great design. For us that means design which improves lives and Work the Net. A Management Guide for Formal Networks Work the Net A Management Guide for Formal Networks Work the Net A Management Guide for Formal Networks Contact Michael Glueck Programme Co-ordinator Natural Resource Management Programme D-108 An
http://docplayer.net/33058-How-companies-are-reinventing-their-idea-to-launch-methodologies.html
CC-MAIN-2017-09
en
refinedweb
In my java application I want to set the color and also the behavior when it is selected. For this I wrote a custom implementation of the TableCellRenderer and it is working as I want. But there is something I'm still confused about... Here is the implementation of the TableCellRenderer public class AccountMovementTableCellRenderer extends JLabel implements TableCellRenderer{ @Override public Component getTableCellRendererComponent(JTable table, Object value, boolean isSelected, boolean hasFocus, int row, int column){ //My implementation here... return this; } } JTable AccountMovementTableCellRenderer accountMovementCellRenderer = new AccountMovementTableCellRenderer(); entryTable = new JTable(entryModel){ private static final long serialVersionUID = 1L; @Override public TableCellRenderer getCellRenderer(int row, int column){ return accountMovementCellRenderer; } }; The Component that TableCellRenderer.prepareRenderer returns is reused to render the contents of the JTable - in your case your renderer extends JLabel (you can could have just extended DefaultTableCellRenderer) - this JLabel is used to paint the contents of the JTable. The prepareRenderer method is used to customize the JLabel for each cell before rendering. To quote Oracle's tutorial on the JTables.
https://codedump.io/share/CWzsfa33CkWo/1/swing-jtable-with-custom-tablecellrenderer
CC-MAIN-2017-09
en
refinedweb
0 I'm new to recursion and am wondering how the output of the code below is 6. So from what I can understand, there are 2 base/anchor cases which return 0 or 1? Is this correct? And the recursive/inductive case is the f() calling f(), but I'm confused on what it returns. What is the +2 do to what the function returns? Im confused on that part mainly. #include<iostream> #include<string> using namespace std; int f(char ch1, char ch2) { if(ch1>ch2) return 0; if(ch1+1==ch2) return 1; else return f(ch1+1,ch2-1) + 2; } int main() { cout << f('a', 'e')<< endl; system("pause"); return 0; }
https://www.daniweb.com/programming/software-development/threads/318517/simple-recursion-question
CC-MAIN-2017-09
en
refinedweb
I’ve finally decided to download the new "Orcas" CTP of Visual Studio, which includes support for the all new C# 3.0. Up until now I only read about it, so I thought it was about time I tried it myself. And it’s awesome. So, here’s my "hello world" program. public void MyFirstProgram(Func<string,string> doSomethingWithString) { var result = doSomethingWithString("dlroW olleH"); Console.WriteLine(result); } public void Main() { MyFirstProgram(str => new string(str.Reverse().ToArray())); } This useless program demonstrates a few of the new C# 3.0 features. The first thing you can notice about MyFirstProgram method, is the use of the "var" keyword. Is this javascript or C#? What’s going on? Well, don’t worry. C# has not gone all dynamically-typed on us. It’s still very much statically typed, but it includes a new feature which allows us to specify local variables (and local variables only) without specifying their types. The compiler infers the type from by looking at the initializer for it. So this won’t compile, since the compiler won’t know how to infer the variable type: var noInit; Neither will this compile: var x = 5; x = "5"; Remember, this is not javascript. Once the compiler has decided about the type of the local variable, it can’t be changed again. The second weird thing you’ve probably noticed is this weird syntax: "str => …". This is actually an easier way to create an anonymous delegate. It’s definitely easier than what you had to write in C# 2.0, which is… delegate(string str) { return new string(str.Reverse().ToArray(); } So str is the delegate parameter, but as you can see, we don’t have to specify its type. The compiler infers the type of the parameter using the method to which we pass the delegate. In our case, it’s this: public void MyFirstProgram(Func<string,string> doSomethingWithString) Func is a new built in generic delegate type, which has a few overloads. We’re using the one in which we have one return value and one parameter, and our generic type is string. So, this method actually accepts a method, whose both return value and single parameter are strings. The cool thing is, that even though the "str => …" does not specify the type of str, there is intellisense support for it in the new Visual Studio. So not only the compiler got a lot smarter in C# 3.0 – Visual Studio followed suit. I really like the step the language has taken towards functional programming. The lambda expressions ("=>" syntax) really remind me of the Lisp variants of programming languages, and I can’t wait to do more cool stuff with it. Still, the whole thing probably wouldn’t have worked without great intellisense. The next thing you might notice, is that String doesn’t have a Reverse() method. You might think that in the new .NET Framework 3.5 the String class has changed, but as far as I can tell, it’s exactly the same as in .NET 2.0. So where did this "Reverse" came from? Well, another new feature of C# 3.0, and an integral part of the Linq project, is extension methods. This feature allows you to extend classes that you can’t or don’t want to inherit from. This includes built in .NET types as string, int, or in our case – the IEnumerable<T> interface. So Reverse is not a String method, what it really is, is an extension to the IEnumerable<T> interface. Since string implements IEnumerable<char>, it gets, "for free" a whole bunch of new methods which you can use. That means that if you’re writing a collection, you don’t have to inherit from some base-class to get lots of useful methods, if you only implement IEnumerable<T> you’ll get methods like Reverse, Sum, Average, and many more. To see how this works, let’s "go to definition" on our reverse method. We’ll see this: namespace System.Linq { public static class Enumerable { ...... public static IEnumerable<TSource> Reverse<TSource>(this IEnumerable<TSource> source); .... } } As you can see, we were sent to the static Enumerable class with a lots of static methods, which include parameters such as "this IEnumerable<TSource>". This is the syntax for creating an extension to a certain class. Let’s have a look at the Reflector to better understand what’s going on (I actually had to go and use an older version of the Reflector to show you what’s under. The new Reflector got all smart on me and displays the code almost identically as its C# origin). 1 [CompilerGenerated] 2 private static Func<string, string> <>9__CachedAnonymousMethodDelegate1; 3 4 public void MyFirstProgram(Func<string, string> doSomethingWithString) 5 { 6 string text1 = doSomethingWithString("dlroW olleH"); 7 Console.WriteLine(text1); 8 } 9 10 public void Main() 11 { 12 if (Class1.<>9__CachedAnonymousMethodDelegate1 == null) 13 { 14 Class1.<>9__CachedAnonymousMethodDelegate1 = new Func<string, string>(Class1.<Main>b__0); 15 } 16 this.MyFirstProgram(Class1.<>9__CachedAnonymousMethodDelegate1); 17 } 18 19 [CompilerGenerated] 20 private static string <Main>b__0(string str) 21 { 22 return new string(Enumerable.ToArray<char>(Enumerable.Reverse<char>(str))); 23 } 24 For the purpose of this post, I will ignore the way the compiler handles anonymous delegates (the reason for the CachedAnonymousDelegate1 thingy and the auto-generated b_0 method). First look at line 6: the "var" keyword is gone, and all you see is a declaration of a string local variables. Now look at line 22: Our "Reverse" method shows its true form… A call to the Enumerable.Reverse static method. In fact, as you can tell, there’s no new CLR coming with Orcas. All these new features are strictly compilation features, so only the C# compiler got smarter – the underlying CLR is still at version 2.0. For more information about extension methods, check out ScottGu’s post. Anyway, I intend to dig in some more in C# 3.0 – especially in Linq, which I have yet to really explore. So more on that later.
http://blogs.microsoft.co.il/dorony/2007/03/23/my-first-c-30-program/
CC-MAIN-2017-09
en
refinedweb
how would i go about converting a string to a double? Printable View how would i go about converting a string to a double? 1. Use the atof function: 2. Use stringstreams:2. Use stringstreams:Code: #include <iostream> #include <string> #include <cstdlib> using namespace std; int main() { string str("19.874"); double d; // Use atof to convert string to double d = atof(str.c_str()); // Output double value cout << d << endl; return 0; } Code: #include <iostream> #include <string> #include <sstream> using namespace std; int main() { string str("19.874"); stringstream sstr(str); double d; // Extract double value from stringstream sstr >> d; // Output double value cout << d << endl; return 0; } this is how you can do it in C not C++ hope it helps [edit1] took me a little while to find the correct link. if you saw this with the old link, sorry to confuse you. here is the index to the site with good details if all else fails google it. [/edit1] thanks for that... also how do you convert a double to a string? using <sstream>: Code: #include <sstream> using namespace std; string makeString(double d) { ostringstream ss; ss<<d<<flush; return ss.str(); } here is another link i found that might be of assistance to you. all i did again was run a search on google to find it. it uses stringstream to convert from and to the formats your looking for.
http://cboard.cprogramming.com/cplusplus-programming/63524-converting-string-int-double-printable-thread.html
CC-MAIN-2014-42
en
refinedweb
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards Spherical equatorial coordinate system, in degree or in radian. This one resembles the geographic coordinate system, and has latitude up from zero at the equator, to 90 at the pole (opposite to the spherical(polar) coordinate system). Used in astronomy and in GIS (but there is also the geographic) template<typename DegreeOrRadian> struct cs::spherical_equatorial { // ... }; Either #include <boost/geometry/geometry.hpp> Or #include <boost/geometry/core/cs.hpp>
http://www.boost.org/doc/libs/1_48_0/libs/geometry/doc/html/geometry/reference/cs/cs_spherical_equatorial.html
CC-MAIN-2014-42
en
refinedweb
- Hierarchy wire display toggle? - global proc and memory - getting the location of a locator in an expression - strPos function (sharing) - Skin ToggleHold Attribute - need help about a command please... - Emitter rate - who can help me ?i have a question.. - Total Listing created Maya Windows - sqrt function problem in maya 8... - Working on calculator script - help please - Copy and Paste functions ??? - Fatal eror issue with shelfButton - default lambert.... - Moving an object along hilly surface - Moving an emitter along hilly surface - textfields updating on change - Moving pivot for selected faces - Using SWF files to personalize your custom UI's - selecting uvs by mel - reverse animation obj-to-camera - Vertex Weighting Tool - skinCluster.bindPreMatrix - animCurveTA connections - transferable custom marking menus - Key Viewport Change?? - layouts - Assigning one locator trans. to another - Assigning one locator trans. to another - Area render script - MEL for hotkey with MM - Controlling expression evaluation per frame - soft constraint (bi-directional) 1.2.0 - find joint orient axis.. - Base and tip color attributes for objects ??? - ewertb.com down... - Checking if an object is selected - Where to get help for MEL except from cgtalk and highend3d ??? - maya API: registering custom nodes with Autodesk - MAnimControl::stop(); - Get pixel color information from texture - Mel assistent needed ... - Avanced Twist Controls.. - Render Target Node? - Mel Noob needs help with a couple scripts :) - Find Axis Direction - How to get the read only state?? - Mel beginner - help please - uvlink syntax - Rotation plane - unparent comand?? - How to get the size of an image imported in maya as texture? - Anyone know why the move command must be executed twice? - FG off on Mult Objects - 3d-math: matrices/spaces - Point On Surface return - Maya8: What happened to the old select shell command?? - Assigning Shaders to render layers... - One problem stopping me from finishing my script - help please - Memory Leaks With Local Procs? - openGL text in viewport - edgeloop selection detection - getting [0] from an array - How to assign hotkeys to commands from scripts like MjPolyTools? - Can i call windows programs - like UltraEdit for example with a mel script? - New and undocumented? - menuItem for shelfButton - convertLightmap and AO? - Which particle id connects to the arrayMapper? - searching array for same number and return index. - Someone can solve this ? - How to check if vertices or edges or faces are selected? - Help with animCurve command please... - Can't understand dirname command... - how to do average ? - camera view - How to force a fps without using prefs? - textscrolllist, what the hell happened to -selectedItem - Help Me.... - mag command ? - Render Info - fprint a fileImport command? - sequencing of scripts - Novice question - Mel, ma, mb...Please help me - Looking for the Maya file that calls AETemplate Procedures - only deleting detachSurface1 input node ? - only deleting detachSurface1 input node ? - date and time return code ... - selecting graph points script help - run script on startup - Returning XYZ coords of a specific face. - HOW: 100 image sequences on 100 sprites? - attaching a color chooser to displayRGBColor - NEWBIE: How to read the name of a shader? - how to get UV infomation insdie Deformer node? - MEL -> API -> MEL -> API... - import into existing heirarchy? - Query Mouse Location - sorry: Layer Visibility - loop - move component tool - easy question on GUI building - How to set up object?! - seed not working in creation expression - faces together MEL - Middle Mouse Viewport Toggle - userSetup and maya 8 - Given a material name, find whats attached to the color channel - API Custom Node creation - Unselecting a radioButton - Modeling Script - Custom Menu - how to execute mel script dynamically - Simply starting Mel (help - begginer) - Quick Newbie question..... - menuItem + popupMenu issues - Custom UI Question - A smart undo? - melfunctions 0.2 - Find nearest vertices fast without comparing all of them - how to load script into script editor via mel? - Help with Array and interpolation - Adding springs using MEL takes much more memory!? - Reset RadioButtons - Output Window - Accessing alpha-numeric chars by index - Button creation in API - how to get the Border of uv "shells"... - New To MEL: Line Numbers! Help!! - UV coords to local space face coords? - Search and delete hypergraph node? - Gnomon's mel scripts - making mesh unselectable.. - locking size of textScrollList in FormLayout - a customed node that calculates distance! - From textScrollList to scrollField - Italicize specific lines of a textScrollList? - Quaternion Rotation and MEL - Absolute value - Write to image? Is it possible? - stroke...pressure. Not working - help - Consistent playblast size - API: creating and adding springs - Moving objects instead of shapenodes into SelectionSets - Passing string arrays between global procs? - Disco Dance Floor - disconnectAttr? - FileTextureManager on Mac: Help please! - I can't store a vector resulting from a command - solid choice menu needed - Plug-in cannot be unloaded because it is still in use - How do I make a script autostart? - API: How to get faces containing the vertex? - Expressions: namespaces and MEL-commands does not cooperate - Wait for a plugin to load - Maya API - Timeline Q ... - changing the color of text - Global ProcTastic! - vertex selection - MEL ordering problem - compound attributes - pre-render MEL in mental ray - API: compute problems - setting AdvancedTwist World-Up Obj with MEL? - connectControl and enumerated attribute not talking - rowSpacing comand - API: Noob here. Where the heck is windows.h? - WireFrameOnShaded - paint color/attribute: query stroke direction - export layer to file? - how to make a float or any control animatable? - how to control more atributes with one control? - Help me understand scripted panels please ... - Refresh BIN TAB - problems with my own "skin deformer" - Frame rate - finding and setting local vertex position - Query Menu Items In radioMenuItemCollection - how to find exact same model ni the scene? - How to find corner vertex in a poly plane - API: Custom Constraint Icon? - Need MEL to open a data file and create a scene - Precision with Curves - Find a pixel location on a textured sphere - Find Directory Maya Is Installed In - Please help with a script of mine - How to activate maya's create menu with MEL - How to create locator with a name specified? - How to get the progress window to work while maya is busy? - Isolate Select - turn on AutoLoad NewObjects - Help to create an Expression for Node Connections - connect attribute to multiple objects - 1k mel competition - Simple Distance Between Problem - Shelf Button Name Lost On Restart - Exporting animation data from maya to text files - Why won't my button fill the window - Reloading AEtemplates - Query joint name to use in text field - API: Reading vertex positions via attribute vrts - Expressions within scripts??? - Query attribute value...?? - reset joint axis - API: Where does the MStatus Status changes? Mystery inside! - Accessing a dynamic variable - overriding convert to file texture resolution limit - API : Distance Manipulator SCALE ? - Compiler for Maya 7 - Want info on dag and transformation matrices - correct rain spash balloons' creation translateY location - How to name the result of the annotate command? - Any chance MEL components placing to become visual in future? like in Delphi? - API: A problem to get Normals in world space - Getting unique MTypeID's from Autodesk - Very simple (but annoying) divide problem... - vectorArray Problem - Bring back "Mel How To" - Copying Uv's To Uv Sets For Multiple Objects - Invoke Marking Menu without LMB click? - Auto Excute Script on viewport change - API setDependentsDirty question: compound attrs - expression - relative scaling of objects - Changing Values with MEL - Implicit Object Reference in MEL Shading Expression - Noise / Turbulance on rotation - cycling visibility on a set of nodes - API Q: calling support functions from the compute method - UI snapShot?! - Mel, API, Python . . . - MEL game, Helping each other out - GI_Joe or alternative - query color? - mocap server for joystick - UI Color (under Linux) - Expression documentation inconsistencies - Number Logic... - Cannot find procedure "maya" error?? - Simple slider question - Mel Script to Track Time Worked? - Baking Hud info into Rendered Images - Maya unable to detect visual C++ 2005. Please help - Closest point on plane? - Problems loading image onto iconTextButton - HowTo: Get older MelScripts working with Maya8? - How to connect float attr to enum attr type? - API devkit - global proc - select all - mesh numbering - polyColorPerVertex - Mental Ray Batch Bake
http://forums.cgsociety.org/archive/index.php/f-89-p-15.html
CC-MAIN-2014-42
en
refinedweb
Mohd Tariq wrote: > Hi Senthil, > > I am building a C++ Application with HP aCC 6.10 compiler. During build it > gave error about iterators "#2135" and later we found that iterators are > not supported by HP aCC6.10.'s STL. > > So now I have been trying to use third party STL that we got from >. I have build that library > successfully. > I am using "+nostl" option to link it to 3rd party STL. > > To check we have written a small cpp program which prints "Hello". The > following are the compiler options that we gave to compile this file: > > aCC -v +DD64 +W829,641 -D_RWSTD_USE_CONFIG -D_RWSTDDEBUG +z -mt > -Wc,-ansi_for_scope,on I don't think you need the ansi_for_scope option, it's on by default. +nostl -I/opt/aCC/include_std > -I/opt/aCC/include_std/iostream_compat -I/home/stdcxx-4.1.3/include > -I/home/build1/nls -I/usr/include -I/usr -L/usr/lib/hpux64 > -L/home/build1/lib -L/opt/langtools/lib/hpux64 -L/opt/aCC/lib/hpux64 > -L/home/build1/rwtest -l:librwtest11D.a -l:libstd11D.so -l: You don't need to link with librwtest unless you're building the stdcxx test suite. Also, you shouln't be linking with the versioned library below. Use the link instead. > libstd11D.so.4.1.3 test.cpp You're compiling and linking in the same step (which is okay). When compiling and linking in two steps, dependent libraries must be listed after the object file on the link line. I suspect the same is going to be true when doing both in one step. > > We are using +nostl option as mentioned in the acc programming guide which > would suppress all default -L and -I options. The options mentioned > above in > bold are to include and link the third party libraries. > > The sample program is: > > #include<iostream> > using namespace std; > int main(){ > cout<<"Hello"; > return 0; > } > > > We are getting errors during the linking stage. The errors are: > > ld: Unsatisfied symbol "std::__rw_std_streams" in file test.o > ld: Unsatisfied symbol "std::ios_base::_C_unsafe_clear(int,int)" in file > test.o > ld: Unsatisfied symbol "_HPMutexWrapper::lock(void*)" in file test.o This suggests that you have a dependency on the HP C++ Standard Library. That would be bad (you can't mix two implementations of the same library in the same program). Martin > ld: Unsatisfied symbol "virtual table of __rw::__rw_thread_error" in file > test.o > ld: Unsatisfied symbol "type info of __rw::__rw_thread_error" in file > test.o > ld: Unsatisfied symbol "_HPMutexWrapper::unlock(void*)" in file test.o > > And when i compiled a sample program which contails templates it gave the > same error as before. > Please anyone help me in this regard, > Thanks and Regards >
http://mail-archives.apache.org/mod_mbox/incubator-stdcxx-dev/200612.mbox/%3C45802B3C.9030800@roguewave.com%3E
CC-MAIN-2014-42
en
refinedweb
<br><br><div class="gmail_quote">On Thu, Mar 25, 2010 at 4:25 PM, Thu, Mar 25, 2010 at 12:29 PM,;"> OK, here are my initial code comments:<div><br></div><div>*?</div> </blockquote><div><br></div></div><div>I think Web.Routes is a fine name. I'll make it happen. In the rest of this post I refer to things by the old names, but I do intend to change the module names and rename the package to web-routes.</div> <div> <div> </div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"><div>* I like the PathInfo class and to/fromPathSegments. Perhaps we should bundle that with the decode/encodePathInfo into a single module?</div> </blockquote><div><br></div></div><div>I put PathInfo in a separate module because I am a little dubious of classes these days. I find it a bit annoying that you can only have one PathInfo instance per type. And I think it helps show that using PathInfo is not actually required. But, in practice, I think having less modules is probably a good thing in this case, since it does not affect the dependency chain at all. Just because I *can* put every function in it's own module doesn't mean I should. ;) Also, we probably do want people to provide PathInfo instances, even if they don't have to..</div> <div> <div><br></div></div></div></blockquote><div>I also am beginning to share a mistrust of classes; I think I went a little too overboard on them on a few previous packages (namely, convertible-text) and am now having a reaction in the opposite direction. I'm sure one day I'll find the Golden Path...<br> <br></div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"><div class="gmail_quote"><div><div></div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"> <div>* I'd like to minimize dependencies as much as possible for the basic package. The two dependencies I've noticed are Consumer and applicative-extras. I think the type signatures would be clearer *without* those packages included, eg:</div> <div><br></div><div> fromPathSegments :: [String] -> Either ErrMsg a</div></blockquote><div><br></div></div><div>Except that is not a usable type. fromPathSegments may consume, some, but not all of the path segments. Consider the type:</div> <div><br></div><div>data SiteURL = Foo Int Int</div><div><br></div><div>fromPathSegments is going to receive the path segments:</div><div><br></div><div>["Foo","1","2"]</div><div><br></div><div> If you wrote a parser by hand, you would want it to look a little something like:</div><div><br></div><div> do string "Foo"</div><div> slash</div><div> i <- fromPathSegments</div><div> slash</div> <div> j <- fromPathSegments</div><div> eol</div><div> return (Foo i j)</div><div> </div><div>The key concept here is that when you call fromPathSegments to get the first argument of Foo you need to know how many of the path segments were consumed / are remaining, so you can pass only those segments to the second fromPathSegments.</div> <div><br></div><div>So you really need a type like:</div><div><br></div><div><div> fromPathSegments :: [String] -> (Either ErrMsg a, [String])</div><div><br></div><div>which outputs the unconsumed path segments.</div> <div><br></div></div></div></blockquote>><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"> <div class="gmail_quote"><div><div></div><div>But this is obviously a ripe target for a monad of some sort -- trying keep track of the unconsumed portions by hand seems like it would asking for trouble... </div><div><br> </div> <div>The Consumer monad takes care of that and provides the functions you would expect such as, next, peek, and poke. And it seems nice to be able to use Monad, MonadPlus, Applicative, Alternative, etc, for composing fromPathSegments into larger parsers ?</div> <div><br></div><div>But, perhaps there is a better choice of monad, or a better way of dealing with the problem? Or maybe it's not really a problem?</div><div><br></div><div>I think Failing is a pretty nifty data-type for dealing with errors. But perhaps it is not a big win here.. The #1 thing that makes Failing better than (Either [String] a) is it's Applicative instance. Specifically, Failing will accumulate and return all the errors which have occurred, not the just first failure (which is the behavior for Applicative (Either e)).</div> <div><br></div><div>So for example, let's say you are doing are trying to lookup a bunch of keys from the query string. The key / value pairs in the query string are typically independent of each other. So let's say you do:</div> <div><br></div><div> (,) <$> lookup "foo" <*> lookup "bar"</div><div><br></div><div>but neither of those keys exist. With Either you will only get the error 'could not find "foo"'. But with Failing you will get the error 'could not find "foo". could not find "bar"'. It is nice to get a report of all the things that are broken, instead of getting only one error at a time, fixing it, and then getting another error, etc.</div> <div><br></div><div>However, I am not sure if this property is all that useful which urlt. If you are trying to parse a url like:</div><div><br></div><div> (string "Foo" *> Foo) <$> fromPathSegments <*> fromPathSegments</div> <div><br></div><div>And the parsing of "Foo" fails.. then there is no use in finding out if the other segments parse ok -- because they are likely to be garbage. Maybe it failed because it got the string "FOo" instead of "Foo", but more likely it got something completely unrelated like, /bar/c/2.4.</div> <div><br></div><div>So, perhaps Either is a better choice even with out considering dependencies... I think that Applicative / Alternative instances for Either are only defined in transformers in the Control.Monad.Error module -- which is a bit annoying. But we don't actually need those to implement urlt itself. </div> <div><br></div><div>This brings up another detail though. </div><div><br></div><div>the fromPathSegments / Consumer stuff is basically implementing a parser. Except, unlike something like parsec, we do not keep track of the current position for reporting errors. I wonder if we should perhaps use a slightly richer parser environment. Within a web app, once you got your to/from instances debugged, you will never get a parse error, so having great error messages is not essential. But, for other people linking to your site it could be potentially helpful. Though, it seems like the current error messages out to be sufficient given how short the urls are..</div> <div><br></div></div></div></blockquote><div>I don't think fancy error reporting will help here. More to the point: we could always layer a fancy parser on top of a simpler typeclass. For that matter, the same argument can be made for Failing and Consumer.<br> <br></div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"><div class="gmail_quote"><div><div></div></div><div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"> <div>I'm not certain what exactly the type of ErrMsg should be here; I don't really have a problem using [String], which would be close to the definition of Failing.</div> <div><br></div><div>* I think it's very important to allow users to supply customized 404 pages. Essentially, we need to augment handleWai (possibly others) with a (ErrMsg -> Application) parameter.</div></blockquote> <div><br></div></div><div>Yeah, there are (at least) two possibilities, add an extra param for the handler. Or bubble the error up to the top:</div><div><br></div><div><div>handleWai_1 :: (url -> String) -> (String -> Failing url) -> String -> ([ErrorMsg] -> Application) -> ((url -> String) -> url -> Application) -> Application</div> <div>handleWai_1 fromUrl toUrl approot handleError handler =</div><div> \request -></div><div> do let fUrl = toUrl $ stripOverlap approot $ S.unpack $ pathInfo request</div><div> case fUrl of</div><div> (Failure errs) -> handleError errs request</div> <div> (Success url) -> handler (showString approot . fromUrl) url request</div><div> </div><div>handleWai_2 :: (url -> String) -> (String -> Failing url) -> String -> ((url -> String) -> url -> Application) -> (Request -> IO (Failing Response))</div> <div>handleWai_2 fromUrl toUrl approot handler =</div><div> \request -></div><div> do let fUrl = toUrl $ stripOverlap approot $ S.unpack $ pathInfo request</div><div> case fUrl of</div><div> (Failure errs) -> return (Failure errs)</div> <div> (Success url) -> fmap Success $ handler (showString approot . fromUrl) url request </div><div><br></div><div>The second choice is perhaps more flexible. Which do you prefer? In the first option, the handleError function could be a Maybe value -- and if you supply Nothing you get some default 404 page?</div> <div><br></div></div></div></blockquote><div>I personally prefer the first option exactly as you describe it, but you're also correct that the second is more flexible. If anyone else reading this thread would prefer the second, speak now or forever hold your peace ;).<br> <br></div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"><div class="gmail_quote"><div><div></div></div><div>In happstack we have a third possiblity. The ServerMonad is an instance of MonadPlus so we can throw out the error message and just call mzero:</div> <div><br></div><div><div>implSite :: (Functor m, Monad m, MonadPlus m, ServerMonad m) => String -> FilePath -> Site url String (m a) -> m a</div> <div>implSite domain approot siteSpec =</div><div> do r <- implSite_ domain approot siteSpec</div><div> case r of</div><div> (Failure _) -> mzero</div><div> (Success a) -> return a</div><div><br> </div><div>implSite_ :: (Functor m, Monad m, MonadPlus m, ServerMonad m) => String -> FilePath -> Site url String (m a) -> m (Failing a)</div><div>implSite_ domain approot siteSpec =</div><div> dirs approot $ do rq <- askRq</div> <div> let pathInfo = intercalate "/" (rqPaths rq)</div><div> f = runSite (domain ++ approot) siteSpec pathInfo</div><div> case f of</div> <div> (Failure errs) -> return (Failure errs)</div><div> (Success sp) -> Success <$> (localRq (const $ rq { rqPaths = [] }) sp)</div><div><br></div><div>then we can do:</div> <div><br></div><div> msum [ implSite "domain" "approot" siteSpec</div><div> , default404</div><div> ]</div><div><br></div><div>if implSite calls mzero, then the next handler (in this case default404) is tried. </div> </div><div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"> <div>* It might be nice to have "type WaiSite url = Site url String Application". By the way, are you certain you want to allow parameterization over the pathInfo type?</div></blockquote><div><br></div></div><div> I'm not certain I don't want to allow it... I have a vague notion that I might want to use Text sometimes instead of String. Though if I was really committed to that then I should make toPathInfo and fromPathInfo parameterized over pathInfo as well... So perhaps I will axe it from Site for now. I need to change the name of that type and it's record names too I think.</div> <div> <div> </div></div></div></blockquote><div>Referring to the fear of typeclasses mentioned above: I'd like to avoid MPTCs even more so. In fact, as I look at it, each extra parameter we add creates more potential for incompatible components. For instance, I can see an argument being made to use extensible exceptions for the fromPathSegments return type, but I'd rather keep things standard with [String] than create more division.<br> <br></div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"><div class="gmail_quote"><div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"> <div>The only packages that I feel qualified to speak about then are urlt and urlt-wai, and my recommendation would be:</div> <div><br></div><div>urlt contains decode/encodePathInfo, PathInfo class and related functions, Site and related functions. If you agree on allowing the parameterization of 404 errors, then also provide a default 404 error.</div> </blockquote><div> </div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"><div>urlt-wai contains WaiSite, handleWai and related functions.</div> </blockquote><div><br></div> </div><div>Yeah, that is what I was thinking. urlt would contain what is currently in;</div><div><br></div><div>URLT.Base</div><div>URLT.PathInfo</div><div>URLT.HandleT</div><div>URLT.Monad</div><div>URLT.QuickCheck</div> <div><br> </div><div>QuickCheck module does not actually depend on QuickCheck, which is nice because QC1 vs QC2 is a big problem right now. </div><div><div> </div></div></div></blockquote><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"> <div class="gmail_quote"><div><div></div><div>It might also be nice to include:</div><div><br></div><div> <a href="" target="_blank">URLT.TH</a> </div> </div><div><br></div><div><div>with depends on template-haskell. But I am not sure that depending on template-haskell is an issue because template-haskell comes with ghc6, and the code in <a href="" target="_blank">URLT.TH</a> already handles the breakage that happened with TH 2.4.</div> <div><br></div></div></div></blockquote><div>I have a different motive for keeping the TH code out: it seems like all of the other pieces of code should be relatively stable from early on, while the TH code (and quasi-quoting, and regular) will probably have some major changes happening for a while. It would be nice to have a consistent major release number for long periods of time on the core.<br> <br></div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"><div class="gmail_quote"><div><div></div></div><div>If I switch to Either instead of Failing I believe the dependencies would be:</div> <div><br></div><div> base, Consumer, template-haskell, network, utf8-string</div><div><br></div><div>urlt-wai would just include:</div> <div><br></div><div>URLT.Wai</div><div><br></div></div></blockquote><div>Sounds great. Let me know when this is available for review. If you want me to do any of the merging/renaming, I have some time now (I arrived in southern California at 3:30 in the morning...).<br> <br>Michael <br></div></div>
http://www.haskell.org/pipermail/web-devel/attachments/20100326/254ea8a9/attachment.html
CC-MAIN-2014-42
en
refinedweb
This section details how to go about using the dynamic features of Groovy such as implementing the GroovyObject interface and using ExpandoMetaClass, an expandable MetaClass that allows adding of methods, properties and constructors. GroovyObject ExpandoMetaClass Compile-time metaprogramming is also available using Compile-time Metaprogramming - AST Transformations You can invoke a method even if you don't know the method name until it is invoked: class Dog { def bark() { println "woof!" } def sit() { println "(sitting)" } def jump() { println "boing!" } } def doAction( animal, action ) { animal."$action"() //action name is passed at invocation } def rex = new Dog() doAction( rex, "bark" ) //prints 'woof!' doAction( rex, "jump" ) //prints 'boing!' You can also "spread" the arguments in a method call, when you have a list of arguments: def max(int i1, int i2) { Math.max(i1, i2) } def numbers = [1, 2] assert max( *numbers ) == 2 This also works in combination of the invocation with a GString: someObject."$methodName"(*args)
http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=209322003
CC-MAIN-2014-42
en
refinedweb
There’s a new buzzword going around the Developer’s ear. Have you heard it yet? It’s called SOA. It is an acronym for Service Oriented Architecture. This new concept is not really new; it is old in its definition; however the marketing departments of all the big wig companies are re-singing its tune. Microsoft is singing its SOA pop song with .NET building blocks, WSE, ASP.NET and XML. IBM is using another artist called On Demand, and others are following suit as well. This is a new way in thinking especially for a Component architect. Let’s think about it for a second. In Object Oriented Programming from a Windows developer perspective, we designed basic COM applications and COM components around a basic three tier approach. The three Layers: Our basic design was simple in concept, if a class had a code that interacted with the user interface such as a Windows application, or a web page, then we wrote the code on the user interface side. We wrote the code in a Windows Form, or in DHTML or some other user interface component. On the other hand, if we had a logic that dealt with business rules, processes, day to day logic, that was the real problem we were trying to solve, we’d place that code in one to many separate components within our business layer, and install that file on a business logic server such as MTS/COM+ services. Lastly, any logic that dealt with accessing a data source, such as a file, database, server and etc., we’d place that on the server or in a component that ran on the server like using stored procedures. In OOP, we have dealt with many sometimes frustrating terms and concepts. We had to learn Abstraction, Inheritance, Implementation, Encapsulation, Interfaces, Composition, Aggregation and other terms. OOP became so complex that we created diagrams to visually represent all of these terms and conditions called UML. We invested a lot of time and effort, learning this paradigm, yet, we still ran into one major subtly, one major flaw: Interoperability. COM did not communicate with EJB, ISAM databases did not communicate with COM, CRM systems did not communicate with EJB, nor COM. These technologies did not allow for a smooth communication pattern automatically. We had to finesse and “tweak” each technology to be able to link these disparate designed systems together. The effort and hard work to make this possible caused many strains, and morphing of technologies. Even to the point where some technologies had many add-on features that are a 180 degree turn from its original designed architecture. Just look at the ADO from its first versions to its current one, version 2.8 or so, and the most important change from COM technology to .NET. This eventually led to the inevitable. A new paradigm in software design: SOA. So you mentioned all these songs the big wig companies are singing, what are they all about? SOA is a new way in designing systems. We now think of a system as a well designed suite of components that is entirely based on message communication patterns of what a component does (service). It is an idea to center the design of a system(s) on an “orchestrated” suite of services through message communication. These services talk to each other by passing XML forms of messages back and forth to each other; this is the focal point. An SOA System. The image to the left shows a standard depiction of a service, with three prongs sticking out from a triangle. These stubs or points are known as “Endpoints” or “touchpoints”. They are the portals that allow the XML messages to come into and be sent out of some network using any protocol. This is quite the opposite of Distributed COM, where we were forced to use a proprietary protocol, and a special port to send network packets of data across the wire. We could talk about services and how to architect them and all the new benefits that SOA offers, however the main point of this introduction is to talk about how these services talk to one another: messages. The basis of a message is XML. XML is the format, and within this format we have a very specific format called Simple Object Access Protocol (SOAP). SOAP messages contain all the data needed to perform some unit of work. It may also contain security specifics like username and password, certificate information, and other secure concepts such as encryption and hashing. SOAP messages give different platforms such as UNIX a way to communicate with others like Microsoft. SOA, and SOAP solves our interoperability problems. The data being passed is all text, and all platforms understand text. Knowing this, the industry has comprised a set of templates, or models of ways by which these messages can pass back and forth. These models are known as Message Exchange Patterns. There are many Message Exchange Patterns (MEP). There is the first implementation of SOA called XML web services which uses the Request/Response (Remote Procedure Call) MEP. There is also the Dead Letter Channel pattern, where a message is sent to a service, and any error that occurs during the processing of the message is sent to a special “node” or Channel. These errors, more often referred to as SOAP Faults are then queued on to a stack. A client application can then retrieve the messages from this queue. There is the Message Router pattern where a message is routed to another service(s) based on its content, or security credentials. There is the Message Splitter pattern, which splits or combines messages and sends them to other destinations, and most notably, there is the Publisher-Subscriber pattern. The Publisher-Subscriber pattern is where a message comes into service to notify it that it wants to listen for “messages that a publisher broadcasts to its listeners”. A Client application sends a “subscription” request message about who it is, and where it can receive these “responses” from the publisher. The application, be it physically installed on the Client or the Server, can then run, and wait on the Publisher service to generate these responses and send it back to its subscribers. Client Application makes a Subscription Request. The Publisher service would then execute its logic and eventually loop through its collection of subscribers and send the message on. The publisher service may even send a “Fire and Forget” type of message to all the subscribers. This is because the Publisher service may not necessarily be concerned with who gets the message successfully, such as message confirmation. Publisher service sends a copy of the message to its subscribers. With Microsoft’s implementation of the community standard: WSA, we can implement the Publisher Subscriber model of SOA. Microsoft implements this standard using an Add-on tool called Web Service Enhancements (WSE). Microsoft is currently in version 2.0 service pack 2 of the WSE toolkit, which is the version we’ll use to implement this model. WSE gives us many classes and technologies we can use inside of a .NET based application. The two classes we’ll focus on are the SoapReceiver class and the SoapSender class. SoapReceiver SoapSender A SoapReceiver is a class that inherits/implements from the IHttpHandler interface (see my article on using WSE with SimpleHandlerFactory). This class provides all the functionality you need to receive SOAP messages. To use this class, just create a custom class and inherit from the SoapReceiver class. This class asks that you override the Receive method. This is the method that receives the SOAP message from a SoapSender class. Here is the signature of the Receive method: IHttpHandler SimpleHandlerFactory Receive protected override void Receive ( SoapEnvelope envelope ) The Receive method, takes in a WSE SoapEnvelope class as its argument, which is the SOAP message passed in by the SoapSender. The SoapEnvelope class inherits from the System.Xml.XmlDocument class and contains many properties and instance methods that allow a developer to read and parse the XML SOAP message being sent in. SoapEnvelope System.Xml.XmlDocument A SoapSender is a class that inherits from the abstract SoapPort class. This class basically corresponds to a Filter that allows you modify the input of the SOAP message and the output of the message. This base class allows you to control the sending and receiving of the SOAP message. To use the SoapSender class, create an instance of this class and set its Destination property (an URI) either through its constructor or by setting its property explicitly. Next, call the Send or BeginSend methods Synchronously and Asynchronously respectively, and send a SoapEnvelope class to the Destination. SoapPort Destination Send BeginSend The demo application is separated into two separate projects. A Publisher Windows Application that hosts the Publisher Service, and the Client Subscription Application that hosts the Client Subscription Response Service. The Publisher application is broken up into two pieces: The Publisher application is a basic Windows Forms application that displays subscribers through a ListBox in real time subscribing to the Publisher, and unsubscribing from the Publisher. It also contains a TextBox that gives the Publisher the ability to publish an article or data to all the listed subscribers. When the Publisher clicks on the Publish Article button, a file is created on the server, and then a copy of its contents is sent to all the subscribers. ListBox TextBox The publisher class is a custom class that inherits from the SoapReceiver class. It overrides the Receive method and checks for a SOAPAction on the SoapEnvelope. It parses the SoapAction to determine if the message being sent in is a Subscription request, or a Unsubscription request. Also, while the Publisher application continues to run, if the Publisher decides to publish an article, it is then sent using a SoapSender to all the listening Subscribers. publisher SOAPAction SoapAction using System; using Microsoft.Web.Services2; using Microsoft.Web.Services2.Messaging; using Microsoft.Web.Services2.Addressing; using System.Web.Services.Protocols; using System.Xml; using System.Collections; using System.IO; using System.Collections.Specialized; namespace ArticlePublisherApp { internal class Literals { static Literals() { Literals.LocalhostTCP = "soap.tcp://" + System.Net.Dns.GetHostName() + ":"; } internal readonly static string LocalhostTCP; } public delegate void NewSubscriberEventHandler(string subscriberName, string ID, Uri replyTo ); public delegate void RemoveSubscriberEventHandler( string ID); /// <SUMMARY> /// Summary description for Publisher. /// </SUMMARY> public class Publisher : SoapReceiver { public event NewSubscriberEventHandler NewSubscriberEvent; public event RemoveSubscriberEventHandler RemoveSubscriberEvent; public Publisher() { _subscribers = new Hashtable(); fsw = new FileSystemWatcher(); System.Configuration.AppSettingsReader configurationAppSettings = new System.Configuration.AppSettingsReader(); string folderWatch = ((string)(configurationAppSettings.GetValue("Publish." + "PublishFolder", typeof(string)))); try { fsw = new System.IO.FileSystemWatcher(folderWatch); } catch { throw new Exception("Directory '" + folderWatch + "' referenced does not exist. " + "Change the fileName variable or create this directory in " + "order to run this demo."); } fsw.Filter = "*.txt"; fsw.Created += new FileSystemEventHandler(fsw_Created); fsw.Changed += new FileSystemEventHandler(fsw_Created); fsw.EnableRaisingEvents = true; } protected void OnNewSubscriberEvent(string Name, string ID, Uri replyTo) { if (NewSubscriberEvent != null) NewSubscriberEvent(Name, ID, replyTo); } protected void OnRemoveSubscriberEvent(string ID) { if (RemoveSubscriberEvent != null) RemoveSubscriberEvent(ID); } private void AddSubscriber(string ID, Uri replytoAddress, string Name) { SoapSender ssend = new SoapSender(replytoAddress); SoapEnvelope response = new SoapEnvelope(); response.CreateBody(); response.Body.InnerXml = String.Format("<?xml:namespace prefix=x />" + "<x:AddSubscriber xmlns:x=\"urn:ArticlePublisherApp:Publisher\">" "<NOTIFY>Name: {0} ID: {1}</NOTIFY></x:AddSubscriber>", Name, ID); Action act = new Action("response"); response.Context.Addressing.Action = act; ssend.Send(response); _subscribers.Add ( ID, new Subscriber(Name,replytoAddress, ID) ); OnNewSubscriberEvent(Name, ID, replytoAddress); } private void RemoveSubscriber(string ID, Uri replytoAddress) { if (_subscribers.Contains(ID) ) { _subscribers.Remove(ID); SoapSender ssend = new SoapSender(replytoAddress); SoapEnvelope response = new SoapEnvelope(); response.CreateBody(); response.Body.InnerXml = String.Format("<x:RemoveSubscriber xmlns:x=\"" + "urn:ArticlePublisherApp:Publisher\">" + "<NOTIFY>ID: {0} Removed</NOTIFY>" + "</x:RemoveSubscriber>", ID); Action act = new Action("response"); response.Context.Addressing.Action = act; ssend.Send(response); OnRemoveSubscriberEvent(ID); } } protected override void Receive( SoapEnvelope envelope ) { //Determine Action if no SoapAction throw exception Action act = envelope.Context.Addressing.Action; if (act == null) throw new SoapHeaderException("Soap Action must be set", new XmlQualifiedName()); string subscriberName = String.Empty ; string subscriberID = String.Empty; switch (act.ToString().ToLower()) { case "subscribe": //add new subscriber subscriberName = envelope.SelectSingleNode ( "//name").InnerText ; subscriberID = System.Guid.NewGuid().ToString(); AddSubscriber(subscriberID, envelope.Context.Addressing.From.Address.Value, subscriberName); break;; case "unsubscribe": subscriberID = envelope.SelectSingleNode("//name") .InnerText ; RemoveSubscriber(subscriberID, envelope.Context.Addressing.From.Address.Value); break; default: break; } } private void fsw_Created(object sender, System.IO.FileSystemEventArgs e) { Uri uriThis = new Uri (Literals.LocalhostTCP + "9090/Publisher" ); // Send each subscriber a message foreach(object o in _subscribers) { DictionaryEntry de = (DictionaryEntry)o; Subscriber s = (Subscriber)_subscribers[de.Key]; SoapEnvelope responseMsg = new SoapEnvelope (); FileStream fs = new FileStream(e.FullPath ,FileMode.Open, FileAccess.Read , FileShare.ReadWrite ); StreamReader sr = new StreamReader(fs); string strContents = sr.ReadToEnd() ; sr.Close(); fs.Close(); // Set the From Addressing value responseMsg.Context.Addressing.From = new From ( uriThis ); responseMsg.Context.Addressing.Action = new Action( "notify"); responseMsg.CreateBody(); responseMsg.Body.InnerXml = "<x:ArticlePublished xmlns:x=\"" + "urn:ArticlePublisherApp:Publisher\">" + "<NOTIFY><FILE>" + e.Name + "</FILE><CONTENTS>" + strContents + "</CONTENTS></NOTIFY></x:ArticlePublished>"; // Send a Response Message SoapSender msgSender = new SoapSender (s.ReplyTo ); msgSender.Send ( responseMsg ); } } internal StringCollection GetSubscribers() { StringCollection coll = new StringCollection(); foreach(Subscriber s in _subscribers) { coll.Add(String.Format("Name - {0}\t ID - {1}\t Reply To Uri {2}", s.Name, s.ID, s.ReplyTo.ToString())); } return coll; } private Hashtable _subscribers; private FileSystemWatcher fsw; } public class Subscriber { public string Name; public Uri ReplyTo; public string ID; public Subscriber(string name, Uri replyTo, string id) { Name = name; ReplyTo = replyTo; ID = id; } } } The Client Subscriber application is also broken up into two pieces: Subscriber The Client Subscriber application is a basic Windows Forms application that contains a MainMenu, and a StatusBar, along with a ReadOnly TextBox. The MainMenu has a MenuItem that has the ability to send a Subscription request to the Publisher, along with an Unsubscription Request to stop subscribing to the Publisher. The StatusBar displays the Register ID of the Client when registered. The ReadOnly TextBox displays any article that is published from the Publisher, at any given time the Publisher decides to publish the article/data. The Subscriber class is a custom class that inherits from the SoapReceiver class. It overrides the Receive method and checks for a SoapAction header on the SoapEnvelope. It parses the SoapAction to determine if the message being sent back from the Publisher is a simple response to the subscription or unsubscription request, or a notify message to let the Subscriber Form know that an article is being sent from the Publisher. To really test the application out, start multiple instances of the Client Subscription application. MainMenu StatusBar MenuItem using Microsoft.Web.Services2.Messaging; using Microsoft.Web.Services2.Addressing; using System.Web.Services.Protocols; using System.Xml; namespace ClientSubscriptionApp { public delegate void ResponseFromServerEventHandler( string Response); public delegate void SubscriptionNotificationEventHandler( string Notification); public class SubscriberNotification : SoapReceiver { public event ResponseFromServerEventHandler ResponseFromServerEvent; public event SubscriptionNotificationEventHandler SubscriptionNotificationEvent; public SubscriberNotification() { } protected void OnResponseFromServer (string Response) { if (ResponseFromServerEvent != null) ResponseFromServerEvent(Response); } protected void OnSubscriptionNotification(string Notification) { if (SubscriptionNotificationEvent != null) SubscriptionNotificationEvent(Notification); } protected override void Receive(Microsoft.Web.Services2.SoapEnvelope envelope) { string sResponse = string.Empty; Action act = envelope.Context.Addressing.Action; if (act == null) throw new SoapHeaderException("Soap Action must be present", new XmlQualifiedName()) ; switch (act.Value.ToLower() ) { case "response": sResponse = envelope.SelectSingleNode("//notify").InnerText ; OnResponseFromServer(sResponse); break; case "notify": sResponse = envelope.SelectSingleNode("//notify").InnerText ; OnSubscriptionNotification(sResponse); break; default : break; } } } } Happy coding! If you like this SOA tune… Stay tuned for BizTalk Server 2004 articles.
http://www.codeproject.com/Articles/9218/SOA-The-Subscriber-Publisher-Model-Introduction-an
CC-MAIN-2014-42
en
refinedweb
Steep rise in inputs and uncertainty over water availability are among factors More and more small and marginal farmers are selling their meagre landholdings to become agricultural workers. This is how agriculturists, policy-makers and economists explain the finding in the Census for Tamil Nadu: Between 2001 and 2011, the strength of cultivators declined and the number of agricultural workers went up. In the 10-year period, there was a fall of about 8.7 lakh in the number of cultivators and a rise of nearly 9.7 lakh among farm workers. With agriculture remaining unprofitable generally, many cultivators are forced to give up farming and consequently sell their lands. Uncertainty over water availability, steep rise in inputs, particularly fertilizers, and inadequate procurement price for food grains are among the factors that drive out farmers from their basic calling. According to the State Planning Commission’s 12th Five Year Plan document, the overall average size of landholding had come down from 0.83 hectares in 2005-06 to 0.80 hectares in 2010-11. “What is ironical is that when the scope for agriculture is shrinking, the number of agricultural workers is on the rise,” says K. Balakrishnan, president of the Tamil Nadu Vivasayigal Sangam and Communist Party of India (Marxist) MLA from Chidambaram. Farmers not getting fair compensation in times of floods or droughts and cumbersome procedures associated with crop insurance are other reasons that make the farming community have second thoughts over continuing with agriculture. S. Janakarajan, professor, Madras Institute of Development Studies, and a seasoned expert on agrarian issues, refers to the trend of agricultural land being purchased in a big way by institutions of higher education and companies that are putting up thermal power plants. “This is happening in the Cauvery delta,” says Prof. Janakarajan, who has just carried out field surveys in eastern parts of the delta, particularly in the Nagapattinam-Vedaranayam belt. Pointing out that the big picture is extremely disturbing, he says that pull and push factors are in operation against farming. While the push factor pertains to the distress conditions in which agriculturists are placed, the pull factor refers to “greater opportunities,” as viewed by farmers, in urban areas, for their livelihood. According to him, the most important finding of the Census – the urban boom in Tamil Nadu – means conversion of rural poverty into urban poverty. However, a senior policy-maker, who had a considerable stint in the State Agriculture Department in the last 10 years, sees the trend differently. “What we are witnessing is economic transition. When an economy matures, the contribution of the primary sector to the overall economy becomes less and less. At one stage, it will stabilise.” What everyone acknowledges is that given the level of urbanisation in the State, many farm workers are no longer dependent solely on farming for livelihood. For some months in a year, they get into non-farming activities such as construction. In fact, another policy-maker says there should be enough avenues for non-farm income for the agriculturists so that they do not find themselves in economic distress in times of successive spells of drought. As regards the Census finding on the increase in the strength of farm workers, not many are willing to agree with it. The policy-maker says that be it in the Cauvery delta or in Cuddalore-Villupuram belt, the dearth of workers has been the general complaint. S. Ranganathan, general secretary of the Cauvery Delta Farmers’ Welfare Association, says there is a perceptible fall in the number of labourers even in the delta over the years. With vast improvement in connectivity, the practice of people in rural parts of the region going to faraway places for livelihood is no longer uncommon. A substantial workforce in the Tirupur knitwear industry is from the delta, he points out. Keywords: agricultural workers, 12th Five Year Plan, Cauvery delta It is quite hard to digest the reality of our farmers. But Northern parts of our nation are the worst. Thanks to the politicians who propagate saying they work for the welfare & development of people. Well the main issue of distress is said clearly which is WATER. When farmers have associations, why not they all work collectively in water management. The Farmers association comes forward only during the compensation & when it comes to development...... its a very big question mark. Agriculture is our backbone of our nation and it is self sufficient. Distress is due to the disability of knowledge which leads to destruction. Already our fertile land are being spoilt using harsh fertilizers and now...... The State Govt's of our nation should collective work for the agricultural development and should maintain. If not our nation would import food grains which our Govt. did few years back for Rs.1500 Crores. INDIA can feed the whole world but if this attitude continues.....? While the agriculture producers are always incurring loss, then how come the middlemen and others depending on agriculture products are able to make huge profits. Those buying from the farmer, running Hotels, running departmental stores & retails shops are making huge profits and leading a wonderful life. Why alone, the producer of agri commodity is facing the loss? Is anyone from this country to highlight this issue, including the media? No, never this will be highlighted. It is better to be a beggar than to be a farmer. According to the experts opinion push and pull effect is working strongly in agri-sector for various factors like, First is the second generation has progressed well in education, and they want to move to the greener pasture, second point is the boom of realeaste across the country has forced directly or indirectly the small farmers to sell their land. Third point is that the input cost has increased in many folds that are not equal to the harvest price. The fourth and the last factor is the huge power crises in our state has not only forced agriculturist to became a labour in other sector, its happing for many small entrepreneur to close the shop and force them to move towards the metros in search of job. Unless the policy makers take any concert step to make available the basic amenities like , housing, better education, good healthcare and proper employment facilities in every villages this scenarios will not change rather it will more deteriorate. Please Email the Editor
http://www.thehindu.com/news/national/tamil-nadu/more-small-farmers-selling-land-turning-workers-experts/article4775733.ece
CC-MAIN-2014-42
en
refinedweb
Tutorials Forum Career Development Articles Reviews Jobs Practice Tests Projects Code Converter Interview Experience | New Members | IT Companies | Peer Appraisal | | Revenue Sharing | New Posts | Social | Forums » .NET » ASP.NET » Text To Speech in ASP.Net And save it into .wav file Posted Date: 02 Apr 2013 Posted By:: Prasad Member Level: Bronze Member Rank: 6674 Points : 2 Responses: 2 Text To Speech in ASP.Net And save it into .wav file only in webforms not in winforms Tweet Are you looking for a way to convert Text to Speech using ASP.NET ? then read this thread to learn how to convert it Responses #710230 Author: Prasad kulkarni Member Level: Diamond Member Rank: 6 Date: 02/Apr/2013 Rating: Points : 3 Microsoft has Speech SDK 5.1. You can download and can use it with asp.net. check link below Check following link which gives you more idea about it Thanks Koolprasd2003 Editor, DotNetSpider MVM Microsoft MVP [ASP.NET/IIS] #710316 Author: sambath Member Level: Silver Member Rank: 717 Date: 03/Apr/2013 Rating: Points : 4 Introduction Voice over IP (VoIP) VoIP (or Voice over Internet Protocol) refers to voice transmission over the Internet. In other words it refers to telephony over the Internet. Its main advantage against traditional telecommunication is that it is based on the existing technical structure. Due to this fact, a VoIP communication system is far less expensive than a traditional telecommunication system with similar functionality. VoIP technology uses Session Initiation Protocol (SIP) mainly due to its easy implementation. Since SIP can be implemented in the easiest way it overcomes all other protocols that could be used in VoIP. Text-to-speech Text to speech functionality means that the system converts normal (not mathematical or technical) texts into voice. These systems are based on the so called Speech synthesis. These systems structure the voice from the given input text on the basis of sample data and the characteristics of human voice. Sample programs This tutorial includes two sample programs in Visual Basic that demonstrate the utilization of text to speech functionality in VoIP. The first sample program is called "TextToSpeechSDK_Sample" (Figure 1). It has a text field and it converts texts entered in this text field into voice. Then you can play the voice and/or save it as a wav file. The other program is called "SDK_VoicePlayerSample" (Figure 2). It is able to play the previously saved voice during a VoIP call. After it plays the wav file it finishes the call. TextToSpeechSDK_Sample It is quite simple to develop an application that is only responsible for converting an input text into voice, especially if you use a recent speech API. In this sample program the tools offered by Microsoft have been used. In this way the source of the program can be made via a few lines. The tool can be found on System.Speech namespace. For conversion only a SpeechSynthesizer object is needed that will make the text to speech conversion. Figure 1 - Text To Speech SDK Sample 1. Private Sub buttonSave_Click(ByVal sender As Object, ByVal e As EventArgs) Handles buttonSave.Click 2. If Not String.IsNullOrEmpty(Me.textBox1.Text) Then 3. Using file As SaveFileDialog = New SaveFileDialog 4. file.Filter = "Wav audio file|*.wav" 5. file.Title = "Save an Wav audio File" 6. If (file.ShowDialog = DialogResult.OK) Then 7. Me.synthetizer.SetOutputToWaveFile(file.FileName, New SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono)) 8. Me.synthetizer.Speak(Me.textBox1.Text) 9. Me.synthetizer.SetOutputToDefaultAudioDevice() 10. MessageBox.Show("File saving completed.") 11. End If 12. End Using 13. End If 14. End Sub The conversion and saving can be performed by the code above. Essentially, the whole program requires only a few lines of code. It can be seen from the source code that a file is opened for writing then the output of the mentioned speech.dll SpeechSynthesizer object is set to this file. To let the other application process the saved sound I used 800 Mhz sampling frequency and 16 bit rate with mono audio channel. The conversion is made via the Speak method ofSpeechSynthesizer object with the help of a given input text(Me.synthetizer.Speak(Me.textBox1.Text)) that will appear on the oupt, which was set onSpeechSynthesizer. SDK_VoicePlayerSample Basically this program is a simplified VoIP SIP softphone (Figure 2). Since this softphone is for demonstration it only has dialing and call initializing functionality. In case of successful call, the application plays the sound, which was saved by the other program, during the phone call and then it ends the call. For making it simple Ozeki VoIP SIP SDK has been used for this softphone. Ozeki VoIP SIP SDK then also can be used for creating a more complex VoIP phone with much more functions even with a few lines of source code. Figure 2 - SDK Voice Player Sample Running the program When the application is started, it automatically tries to register to the SIP PBX based on the given parameters. In case of successful registration, an "online" caption appears on the display that denotes the ready-to-call status of the program. For making a call you only need to dial the phone number and click the call button. Then open the previously recorded wav file. If the wav file can be loaded successfully, the program starts the call towards the dialed party. When the call is established the program plays the sound data to the other party with the help of a timer. After playing the whole sound data the program ends the call and you can start a new call. Source code For being simple, the program misses to use any design samples and other conventions. Therefore the full source code can be found in FormSoftphone.cs file that is related to the interface. By using Ozeki VoIP SIP SDK only a few objects and the handling of their events are needed to create the whole functionality of a complex softphone. The following objects have been used in this program: 1. Private phoneCall As IPhoneCall 2. Private mediaTimer As MediaTimer 3. 4. Private phoneLine As IPhoneLine 5. Private phoneLineInformation As PhoneLineInformation 6. Private softPhone As ISoftPhone 7. Private wavReader As ozWaveFileReader ISoftphone IphoneLine. There can be more telephone lines which mean that you can develop a multi line phone. For simplicity this example only uses one telephone line.. MediaTimer It is a Timer that ensures a more accurate timing than Microsoft .Net Timer. ozWaveFileReader It extends the Microsoft.Net Stream type to simplify the reading and processing of Wav audio files. After the program is started, it automatically registers to a previously specified SIP PBX server. This is made via the InitializeSoftPhone() method that is called in the 'Load' event handler of the interface. 1. Private Sub InitializeSoftPhone() 2. Try 3. Me.softPhone = SoftPhoneFactory.CreateSoftPhone("192.168.91.42", 5700, 5760, 5700, Nothing) 4. Me.phoneLine = Me.softPhone.CreatePhoneLine(New SIPAccount(True, "oz891", "oz891", "oz891", "oz891", "192.168.91.212", 5060)) 5. AddHandler Me.phoneLine.PhoneLineInformation, New EventHandler(Of VoIPEventArgs(Of PhoneLineInformation))(AddressOf Me.phoneLine_PhoneLineInformation) 6. Me.softPhone.RegisterPhoneLine(Me.phoneLine, Nothing) 7. Me.mediaTimer = New MediaTimer 8. Me.mediaTimer.Period = 20 9. AddHandler Me.mediaTimer.Tick, New EventHandler(AddressOf Me.mediaTimer_Tick) 10. Catch ex As Exception 11. MessageBox.Show(String.Format("You didn't give your local IP adress, so the program won't run properly." & ChrW(10) & " {0}", ex.Message), String.Empty, MessageBoxButtons.OK, MessageBoxIcon.Hand) 12. End Try 13. End Sub Where the softphone object has been instanced with the network parameters of the running computer. Please note that if you do not update these parameters (do not modify the IP address) according to your own computer, the program will not be able to register onto the given SIP PBX. The parameters of the object are the follows: IP address of the local computer, the minimum port to be used, the maximum port to be used, the port that is assigned to receive SIP messages. Create a phoneLine with a SIP account that can be a user account of your corporate SIP PBX or a free SIP provider account. In order to display the status of the created phoneline, subscribe to its 'phoneLine.PhoneLineInformation' event. Then you only need to register the created 'phoneLine' onto the 'softPhone'. In this example only one telephone line is registered but of course multiple telephone lines can also be registered and handled with Ozeki VoIP SIP SDK. After the phoneline registration was successful the application is ready to load sound data and to call a specified phone number. Making an outgoing call Outgoing calls can be made by entering the phone numbers to be called and clicking the 'Call' button. 1. Private Sub buttonPickUp_Click(ByVal sender As Object, ByVal e As EventArgs) Handles button13.Click 2. If (String.IsNullOrWhiteSpace(Me.labelDialingNumber.Text)) Then 3. MessageBox.Show("You haven't given a phone number.") 4. Return 5. End If 6. If ((Me.phoneCall Is Nothing)) Then 7. If ((Me.phoneLineInformation <> phoneLineInformation.RegistrationSucceded) AndAlso (Me.phoneLineInformation <> phoneLineInformation.NoRegNeeded)) Then 8. MessageBox.Show("Phone line state is not valid!") 9. Else 10. Using openFileDialog As OpenFileDialog = New OpenFileDialog 11. openFileDialog.Multiselect = False 12. openFileDialog.Filter = "Wav audio file|*.wav" 13. openFileDialog.Title = "Open a Wav audio File" 14. If (openFileDialog.ShowDialog = DialogResult.OK) Then 15. Me.wavReader = New ozWaveFileReader(openFileDialog.FileName) 16. Me.phoneCall = Me.softPhone.CreateCallObject(Me.phoneLine, Me.labelDialingNumber.Text, Nothing) 17. Me.WireUpCallEvents() 18. Me.phoneCall.Start() 19. End If 20. End Using 21. End If 22. End If 23. End Sub By clicking the 'Call' button the file loader window appears. In this window you can select the wav audio file that you want to play into a phone call. The call is made via the IPhoneCall object by Ozeki VoIP SIP SDK. In this way you need to create such a call object in the registered phoneline of the softphone. To make a successful call you need to subscribe to some events (Me.WireUpCallEvents()). With the help of these events the application will receive information about the changes that occur during the call. After subscribing to these events you only need to invite the Start() method on the IPhoneCall object that represents the call. As a result the call starts to be established. 1. Private Sub WireUpCallEvents() 2. AddHandler Me.phoneCall.CallStateChanged, New EventHandler(Of VoIPEventArgs(Of CallState))(AddressOf Me.call_CallStateChanged) 3. AddHandler Me.phoneCall.CallErrorOccured, New EventHandler(Of VoIPEventArgs(Of CallError))(AddressOf Me.call_CallErrorOccured) 4. End Sub CallStateChanged event is for displaying the changes of the call status. The call statuses can be the follows (Setup, Ring, Incall, Completed, Rejected). 1. Private Sub call_CallStateChanged(ByVal sender As Object, ByVal e As VoIPEventArgs(Of CallState)) 2. Me.InvokeGUIThread(Sub() 3. Me.labelCallStatus.Text = e.Item.ToString 4. End Sub) 5. Select Case e.Item 6. Case CallState.InCall 7. Me.mediaTimer.Start() 8. Exit Select 9. Case CallState.Completed 10. Me.mediaTimer.Stop() 11. Me.phoneCall = Nothing 12. Me.InvokeGUIThread(Sub() 13. Me.labelDialingNumber.Text = String.Empty 14. End Sub) 15. Exit Select 16. Case CallState.Cancelled 17. Me.phoneCall = Nothing 18. Exit Select 19. End Select 20. End Sub In this case only 'Incall', 'Cancelled' and 'Completed' call states are important. After initializing the call, it gets into 'Setup' state. When the called party accepts the call we receive a notification about it via 'CallStateChanged' event that returns the new state in parameters. If the new state is 'InCall' (so the telephone was picked up) the wav audio file starts to be sent to the other party. Since the participants of VoIP communication send out a defined amount of sound data within a defined time period, I do not send out the whole sound data at once. With the help of a MediaTimer 320 byte of sound data is sent out in every 20ms periods via the 'SendMediaData'method of of the call object: 1. Private Sub mediaTimer_Tick(ByVal sender As Object, ByVal e As EventArgs) 2. If (Not Me.wavReader Is Nothing) Then 3. Dim data As Byte() = New Byte(320 - 1) {} 4. If (Me.wavReader.Read(data, 0, 320) = 0) Then 5. Me.phoneCall.HangUp() 6. Else 7. Me.phoneCall.SendMediaData(VoIPMediaType.Audio, data) 8. End If 9. End If 10. End Sub 11. If the application plays the whole file, it hang-ups the call via the (Me.phoneCall.HangUp())call. call.CallErrorOccured event notifies about the reasons that prevents the establishment. Such reason is, for example, when the called party is busy, the call was rejected, the called number does not exist or it is unavailable. Post Reply This thread is locked for new responses. Please post your comments and questions as a separate thread . If required, refer to the URL of this page in your new post . Tweet Return to Discussion Forum Start new thread Active Members Today Sekhar Babu (9) naveensanagase... (6) saravanan (6) Last 7 Days Anil Kumar ... (124) Asheej T K (52) Prasad kulkarn... (51) more... Awards & Gifts .NET Jobs .NET Articles .NET Forums Articles Rss Feeds Forum Rss Feeds Talk to Webmaster Tony John Online Members praveen DHARMENDRA KUMAR More... Advertise
http://www.dotnetspider.com/forum/324319-Text-To-Speech-in-ASPNet-And-save-it-into-wav-file.aspx
CC-MAIN-2014-42
en
refinedweb
I: or: module Main where import Data.Binary import Data.Int import System.Random import qualified Data.ByteString.Lazy as BL encodeFileAp f = BL.appendFile f . encode path = "Results.data" n = 20*1024*1024 :: Int getBlockSize :: BL.ByteString -> Int64 getBlockSize bs = round $ (fromIntegral $ BL.length bs) / (fromIntegral n) fillFile :: StdGen -> Int -> IO () fillFile _ 0 =return () fillFile gen i = do let (x, gen') = random gen :: (Double, StdGen) encodeFileAp path x fillFile gen' (i-1) processFile :: BL.ByteString -> Int64 -> Int -> Double -> Double processFile bs blockSize 0 sum = sum processFile bs blockSize i sum = let tmpTuple = BL.splitAt blockSize bs x = decode $ fst $! tmpTuple in processFile (snd tmpTuple) blockSize (i-1) $! sum + x main = do fillFile (mkStdGen 42) n results <- BL.readFile path putStrLn $ show $ processFile results (getBlockSize results) n 0
http://www.haskell.org/pipermail/haskell-cafe/2009-September/066326.html
CC-MAIN-2014-42
en
refinedweb
To get an idea, during Sonar analysis, your project is scanned by many tools to ensure that the source code conforms with the rules you’ve created in your quality profile. Whenever a rule is violated… well a violation is raised. With Sonar you can track these violations with violations drill down view or in the source code editor. There are hundreds of rules, categorized based on their importance. Ill try, in future posts, to cover as many as I can but for now let’s take a look at some common security rules / violations. There are two pairs of rules (all of them are ranked as critical in Sonar ) we are going to examine right now. 1. Array is Stored Directly ( PMD ) and Method returns internal array ( PMD ) These violations appear in the cases when an internal Array is stored or returned directly from a method. The following example illustrates a simple class that violates these rules. public class CalendarYear { private String[] months; public String[] getMonths() { return months; } public void setMonths(String[] months) { this.months = months; } } To eliminate them you have to clone the Array before storing / returning it as shown in the following class implementation, so noone can modify or get the original data of your class but only a copy of them. public class CalendarYear { private String[] months; public String[] getMonths() { return months.clone(); } public void setMonths(String[] months) { this.months = months.clone(); } } 2. Nonconstant string passed to execute method on an SQL statement (findbugs) and A prepared statement is generated from a nonconstant String (findbugs) Both rules are related to database access when using JDBC libraries. Generally there are two ways to execute an SQL Commants via JDBC connection : Statement and PreparedStatement. There is a lot of discussion about pros and cons but it’s out of the scope of this post. Let’s see how the first violation is raised based on the following source code snippet. Statement stmt = conn.createStatement(); String sqlCommand = 'Select * FROM customers WHERE name = '' + custName + '''; stmt.execute(sqlCommand); You’ve already noticed that the sqlcommand parameter passed to execute method is dynamically created during run-time which is not acceptable by this rule. Similar situations causes the second violation. String sqlCommand = 'insert into customers (id, name) values (?, ?)'; Statement stmt = conn.prepareStatement(sqlCommand); You can overcome this problems with three different ways. You can either use StringBuilder or String.format method to create the values of the string variables. If applicable you can define the SQL Commands as Constant in class declaration, but it’s only for the case where the SQL command is not required to be changed in runtime. Let’s re-write the first code snippet using StringBuilder Statement stmt = conn.createStatement(); stmt.execute(new StringBuilder('Select FROM customers WHERE name = ''). append(custName). append(''').toString()); and using String.format Statement stmt = conn.createStatement(); String sqlCommand = String.format('Select * from customers where name = '%s'', custName); stmt.execute(sqlCommand); For the second example you can just declare the sqlCommand as following private static final SQLCOMMAND = insert into customers (id, name) values (?, ?)'; There are more security rules such as the blocker Hardcoded constant database password but I assume that nobody is still hardcodes passwords in source code files… In following articles I’m going to show you how to adhere to performance and bad practice rules. Until then I’m waiting for your comments or suggestions. Happy coding and don’t forget to share! Reference: Fixing common Java security code violations in Sonar from our JCG partner Papapetrou P. Patroklos at the Only Software matters blog.
http://www.javacodegeeks.com/2012/09/fixing-common-java-security-code.html
CC-MAIN-2014-42
en
refinedweb
One thing that came up in the retrospectives as a good thing a few times in my previous team was working in war teams. So what is a war team? To us it meant that whenever we ran into some kind of problem, a difficult bug, a big design challenge or similar we format war teams. Sometimes the war team was two people and sometimes it was the whole team. The war team focused on solving that single problem and nothing else. So what we said in the retrospects was that working in these teams was better than when everybody was working on their own tasks. I guess the explanation was that it was a great satisfaction in solving a difficult problem and also it is great working together with others. And this was in an environment where the team was sitting together in an open landscape and not in separate rooms. What that taught us was to do as much pairing as possible when completing tasks. To be honest, it is hard to teach old dogs to sit and I guess that is why it came back in the retrospects from time to time. So what made me think about this today was that I stumbled over this old article about war rooms. It compares teams where each team member sits in their own cubicle compared to teams sitting together. I think that is the basic start to make any team more productive (more suggestions here). But once you have your team in the same room you want the team members to scramble around a few tasks rather than working on different things. As the article about war rooms point out; the team might say they don't like it but once they try it, they will probably love it... When I wrote this the other day it made me think of another thing involving the memcmp function and the VC compiler. In the code I've seen over the years where memcmp was used it was always to find out if an area was identical or not. So the code typically looked something like this: 1: if (memcmp(a, b) == 0) 2: doSomething(a); 3: else 4: doSomething(b); However another very common pattern is to write code like this: 1: if (!memcmp(a, b)) 2: doSomething(b); 3: else 4: doSomething(a); Turns out the latter results in faster code when compiled (at least with the VC compiler)! How is that? Turns out the compiler recognizes that you're not really interested in if memory block a is lower or greater than block b and the key to do this is the not operator. So by using the not operator the compare operation does not need to find the first difference and calculate which block is less than the other. The compiler instead generates code that just checks if the memory blocks are equal or not, which is much faster. Ten years ago I worked on a project where we did a lot of fancy things with a number of databases. I learned a lot during that project but I also heard a funny story a coworker told me:... The basic problem was that the developers never tested the database nor the accounting system with enough data to simulate several years of accounting information. You don't want to do that. There are also a number of other things you probably don't want to do. This article (register for a free account to read) covers a few good things to look out for. For example you there is usually a bad idea to do "IF (SELECT COUNT(*) FROM ... WHERE ...) > 0" when "IF EXISTS (SELECT 1 FROM ... WHERE ...)" works just as fine without the potential problem actually counting the number of rows. I was working with some code a couple of weeks ago and I stumbled over this "interesting" "pattern"... 1: public class SomeApiWrapper 2: { 3: private string _method = ""; 4: List<string> _arguments = new List<string>(); 5: private string _instance = ""; 6: private string _result = ""; 7: 8: public string Method 9: { 10: set 11: { 12: _method = value; 13: ExecuteIfReady(); 14: } 15: } 16: 17: public string Instance 18: { 19: set 20: { 21: _instance = value; 22: ExecuteIfReady(); 23: } 24: } 25: 26: public string Argument 27: { 28: set 29: { 30: _arguments.Add(value); 31: } 32: } 33: 34: public string Result 35: { 36: get 37: { 38: if (_result.Length > 0) 39: return _result; 40: throw new InternalErrorException("Method not executed."); 41: } 42: } 43: 44: private void ExecuteIfReady() 45: { 46: if (_instance.Length > 0 && _method.Length > 0) 47: _result = SomeApi.Execute(_instance, _method, _arguments); 48: } 49: } So what is happening here? Well, first of all you need to set the arguments, the method and the instance properties and then you just get the result. Setting the properties magically executes the method and stores the result as soon as you've provided enough information for the method call to be completed. An interesting twist however is that you cannot set the arguments last since the method is executed when you have set method and instance properties. the ordering between these two are however irrelevant... Please don't do this. Ever. Properties should not have side effects in general and especially not side effects that are dependent of the order in which they are called. That is only confusing at best. If you read what I wrote the other day you might be confused because a DSL created using properties will have side effects. Well, no rule without an exception! Also consider properties that are lazy loaded (i.e. they do a costly retrieval of values only when asked but then remembers that value). That is also going to be OK. But the pattern you see above is not. Preemptive comment: No this pattern was not seen in any code created by a Microsoft employee...? So I've mentioned this topic before; choosing the right words to send the message you want. But this time I just want to mention a good read on the topic of using words. The short version is: That's all for now.
http://blogs.msdn.com/b/cellfish/archive/2010/02.aspx?PostSortBy=MostRecent&PageIndex=1
CC-MAIN-2014-42
en
refinedweb
08 September 2006 19:19 [Source: ICIS news] TORONTO (ICIS news)--Southwest Ontario has no immediate prospects for attracting new chemical investments to replace the loss of Dow Chemical’s plants at the Sarnia petrochemicals hub, an official said on Friday. ?xml:namespace> While Dow’s presence in ?xml:namespace> Dow said last week it will close all of its manufacturing plants in Sarnia by the end of 2008 due the unavailability of affordable feedstock following the suspension of ethylene shipments on the Dow's is closing plants making low density polyethylene (ldPE), polystyrene, acrylate latex and propylene oxide derivatives. The company had closed its epoxy resin plant in 2004 and last year it closed a polyethylene wax facility. Mallay said the impacts of the latest closures in He added that Dow’s closures would have been more devastating 10 years ago. The region has diversified beyond chemicals and petrochemcials into automotive parts, call centre services, bio-products and other sectors, he said. He noted the recent start-up of Suncor’s 200m litre/year ethanol plant, Mallay said SLEP will work with Dow to find potential investors for the idled plants. He said there are no concrete prospects for any new chemicals investments at Mallay said he did not expect the BP, which operates the Cochin pipeline via two subsidiaries, suspended ethylene shipments in March, citing an incident of stress corrosion cracking and ethylene’s high vapour pressure. But a BP official told ICIS news earlier this week the company plans to resume ethylene shipments after 2007 if there is demand. Excluding Dow’s
http://www.icis.com/Articles/2006/09/08/1090041/interview-dow-closures-hit-ontario-chem-hub-hard.html
CC-MAIN-2014-42
en
refinedweb
Type: Posts; User: antlet88 It's written in this form. Just the value and then hitting enter to the next line. 15 2.34 2.43 2.30 2.29 2.41 2.42 2.33 Hey it worked. I found the setting exclude from project and it builds just fine. I mean the program doesn't actually work but at least it runs. Haha. I think the numbers from the text file aren't... Thanks everyone who didn't just yell at me for formatting. I'm sure once I become better at programming those things will come a little more natural. I'll try those recommendations tonight.... Okay, you're coming off kind of aggressive...Or maybe I'm getting the wrong impression. As I said, this is literally my first class in programming. I've never done it before. I just can't figure... Sorry about that!!! I'll try it here. Ummm. No I don't think I compile the .txt file. It's just saved in the folder and then called upon in the coding. #include <iostream> #include <fstream>... 1>------ Build started: Project: Lab 5, Configuration: Debug Win32 ------ 1> data.txt 1>c:\users\anthony\documents\visual studio 2010\projects\lab 5\lab 5\data.txt(1): error C2059: syntax error :...
http://forums.codeguru.com/search.php?s=ab3103b920032321a633bfc7604665be&searchid=5376127
CC-MAIN-2014-42
en
refinedweb
Details - Type: Bug - Status: Resolved (View Workflow) - Priority: Minor - Resolution: Fixed - - Labels:None - Environment:Jenkins v 1.357 Windows 7 Windows service - Similar Issues: Description I was watching the build page, and saw that the following text to the upper right "Build has been executing for null on master" I was not expected "null" in the message Attachments Activity Oleg: Really? 1.537 is at least somewhat recent (1.532.3 released on April 11)… FWIW I'm seeing this on 1.532.3 with Build Flow occasionally, hence the question. Unless you feel that 1.537 is too obsolete, I'd reopen this. Jenkins v 1.357 BTW, it could be just a typo. If you see the issue for 1.532, lets reopen it Resolving as Incomplete after ~2 months without response to comment asking for additional information. I am seeing this issue on version 1.559 for my multi-configuration (matrix) project. Started 6 min 42 sec ago Build has been executing for null on Michael: Which of the Matrix builds shows this message? The parent? A regular configuration's build? Is there no node name specified (i.e. 'Build has been executing for null on' rather than 'Build has been executing for null on master' as described above)? It is the parent. The configuration build seem to have the correct description. Sorry I accidentally clipped off the host last time. The on "HOSTNAME" portion of the string is correct. Started 46 sec ago Build has been executing for null on b261806 I'm almost sure I found the cause in Run.getExecutor(). Workaround should be -Dhudson.model.Hudson.flyweightSupport=false, because it affects only Flyweight tasks that run on One-off executors. Also seeing this on 1.571 hosted on a FreeBSD 9.1 machine. Started 3 hr 21 min ago Build has been executing for null on xlicenses May be related to null time refereces in the build progress column plugin, which shows Started null ago Estimated time remaining null (May because the actual parent job has a non-null time for Started ago.) Code changed in jenkins User: Daniel Beck Path: core/src/main/java/hudson/model/Run.java Log: [FIXED JENKINS-20307] Consider OneOffExecutors in Run.getExecutor() Code changed in jenkins User: Daniel Beck Path: core/src/main/java/hudson/model/Run.java Log: Merge pull request #1348 from daniel-beck/ JENKINS-20307 [FIXED JENKINS-20307] Consider OneOffExecutors in Run.getExecutor() Compare: Integrated in jenkins_main_trunk #3651 [FIXED JENKINS-20307] Consider OneOffExecutors in Run.getExecutor() (Revision 9379d1fefc49fbe8cf11bb96290a9924a6eb38cc) Result = SUCCESS daniel-beck : 9379d1fefc49fbe8cf11bb96290a9924a6eb38cc Files : - core/src/main/java/hudson/model/Run.java I ran the following script to run the jenkins jobs import hudson.console.HyperlinkNote import java.util.concurrent.CancellationException import com.tikal.jenkins.plugins.multijob.MultiJobProject import hudson.model.* import hudson.AbortException job = hudson.model.Hudson.instance.items.each { job -> try { def running = job.lastBuild.building def disabled = job.isDisabled() def numbuilds = job.builds.size() lastbuild = job.builds[0] if (numbuilds == 0) else if (running) { println "${job.name} is already running. Not launching" } else if (disabled) { println "${job.name} is disabled...Cannot be kicked off" } else { def buildParameter = job.builds[0].getAction(ParametersAction.class) if (buildParameter == null) } } catch(Exception e) } The jobs shows blinking green button but they are not running. Message : Build has been executing for null on master bump I am also seeing this on Centos 6.4, Jenkins 1.537.
https://issues.jenkins.io/browse/JENKINS-20307?focusedCommentId=203892&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2021-31
en
refinedweb
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode. Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript). On 01/03/2017 at 05:24, xxxxxxxx wrote: Hello I test new DrawHUDText with strange results - something wrong with clipping. I add null object, add script-tag and write this into tag(Niklas tip or somebody at forum) : import c4d from c4d import plugins class ObjectData(plugins.ObjectData) : def Draw(self) : bd = doc.GetActiveBaseDraw() bd.DrawHUDText(100, 100, "Test") bd.SetMatrix_Matrix(op.GetObject(), c4d.Matrix()) return True def main() : td = ObjectData() td.Draw() Maybe did i miss something in code? I test in active environment of c4d, not plugin body, for decoration of scripts, assets On 02/03/2017 at 01:33, xxxxxxxx wrote: Hello, honestly, pretty much everything in your script is wrong. You cannot perform viewport draw operations whenever you want. One can only draw in the viewport when Cinema is drawing the viewport, you can add to that drawing process. This is done by implementing special "Draw" functions of various plugin classes like ObjectData.Draw() or TagData.Draw(). But you have to implement these functions with the exact signature, you cannot invent some new "Draw" function. You cannot call this "Draw" function yourself, it has to be called by Cinema. See the Draw Manual and BaseDraw Manual. So if you want to test DrawHUDText() you can create a ObjectData or TagData plugin, implement the "Draw" function of these classes and create and instance of this object or tag to test it. An example for an ObjectData plugin that draws something is the Py-DoubleCircle.pyp. If you want to create an instance of an object plugin you must not create an instance of the plugin class. You must call BaseObject() with the ID of you plugin class to create the corresponding BaseObject as seen in e.g. Py-LiquidPainter.pyp. best wishes, Sebastian On 02/03/2017 at 03:20, xxxxxxxx wrote: Hello Thank you Sebastian. I will check out
https://plugincafe.maxon.net/topic/9988/13452_testing-of-drawhudtext
CC-MAIN-2021-31
en
refinedweb
Hello, readers! In this article, we will be focusing on NumPy Linear Algebraic functions in Python. So, let us get started! 🙂 The NumPy module offers us various functions to deal with and manipulate data. It enables us to create and store data in an array data structure. Moving ahead, it offers us various functions to analyze and manipulate the data values. List of NumPy Linear Algebraic functions 1. Matrix functions offered by NumPy module With NumPy module, we can perform the linear algebraic matrix functions on the array structure. In the course of this topic, we would be having a look at the below functions– - Rank of the matrix: We can calculate the rank of the array using numpy.linalg.matrix_rank() function. - Determinant: The numpy.linalg.det() function helps us calculate the determinant of the array treating it as a matrix. - Inverse: The inv() function enables us to calculate the inverse of the array. - Exponent: Using numpy.linalg.matrix_power() function, we can raise a power value to the matrix and fetch the results. Example: In the below example, we have created an array using numpy.array() function. Further, we have performed the above mentioned linear algebraic operations on the array and printed the results. import numpy x = numpy.array([ [2, 8, 7], [6, 1, 1], [4, -2, 5]]) print("Rank: ", numpy.linalg.matrix_rank(x)) det_mat = numpy.linalg.det(x) print("\nDeterminant: ",det_mat) inv_mat = numpy.linalg.inv(x) print("\nInverse: ",inv_mat) print("\nMatrix raised to power y:\n", numpy.linalg.matrix_power(x, 8)) Output: Rank: 3 Determinant: -306.0 Inverse: [[-0.02287582 0.17647059 -0.00326797] [ 0.08496732 0.05882353 -0.13071895] [ 0.05228758 -0.11764706 0.1503268 ]] Matrix raised to power y: [[ 85469036 43167250 109762515] [ 54010090 32700701 75149010] [ 37996120 22779200 52792281]] 2. Eigen value with NumPy Array NumPy Linear Algebraic functions have the linalg class that has eigh() function to calculate the eigenvalue from the array elements passed to it. Have a look at the below syntax! Syntax: numpy.linalg.eigh(array) The eigh() function returns the eigenvalues as well as the eigenvectors of a complex or a real symmetric matrix. Example: from numpy import linalg as li x = numpy.array([[2, -4j], [-2j, 4]]) res = li.eigh(x) print("Eigen value:", res) Output: Eigen value: (array([0.76393202, 5.23606798]), array([[-0.85065081+0.j , 0.52573111+0.j ], [ 0. -0.52573111j, 0. -0.85065081j]])) 3. Dot Product With NumPy Linear Algebraic functions, we can perform dot operations on scalar as well as multi-dimensional values. It performs scalar multiplication for single dimensional vector values. For multi-dimensional arrays/matrices, it performs matrix multiplication on the data values. Syntax: numpy.dot() Example: import numpy as np sc_dot = np.dot(10,2) print("Dot Product: ", sc_dot) vectr_x = 1 + 2j vectr_y = 2 + 4j vctr_dot = np.dot(vectr_x, vectr_y) print("Dot Product: ", vctr_dot) Output: Dot Product: 20 Dot Product: (-6+8j) 4. Solving Linear equations with NumPy module With NumPy Linear Algebraic functions, we can even perform the calculations and solve the linear algebraic scalar equations. The numpy.linalg.solve() function solves for the array values with the equation ax=b. Example: import numpy as np x = np.array([[2, 4], [6, 8]]) y = np.array([2, 2]) print(("Solution of linear equations:", np.linalg.solve(x, y))) Output: ('Solution of linear equations:', array([-1., 1.])) Conclusion Feel free to comment below, in case you come across any question. For more such posts related to Python programming, stay tune with us. Till then, happy learning!! 🙂
https://www.askpython.com/python/numpy-linear-algebraic-functions
CC-MAIN-2021-31
en
refinedweb
: class Security { private $_isHttps = true; private $_isAdmin = true; private $_totalLoginsPerDay = 19; . . . public function can_upload_file() { if($this->_isHttps == false) return 0; if($this->_isAdmin == false) return 0; if($this->_totalLoginsPerDay > 100) return 0; return 1; } } $login = new Security(); echo $login->can_upload_file(); After refactoring: class Security { private $_isHttps = true; private $_isAdmin = true; private $_totalLoginsPerDay = 199; . . . public function can_upload_file() { return $this->check_security_status(); } private function check_security_status() { if($this->_isHttps == false || $this->_isAdmin == false || $this->_totalLoginsPerDay > 100) return 0; else return 1; } } $login = new Security(); echo $login->can_upload_file(); 10 thoughts to “Refactoring 1: Consolidating Conditional Expressions” this works and a bit shorter. private function check_security_status() { return ($this->_isHttps === false || $this->_isAdmin === false || $this->_totalLoginsPerDay > 100); }.. Where are your unit tests? Any refactorings require unit tests. If you want to make sure you don’t change the behavior (logic) of your conditionals you need some unit tests. I understand the importance of unit tests as mentioned in my earlier post ‘Refactoring: An introduction for PHP programmers’ but have left it out on purpose. “…The important part is to test your code after refactoring.” In this quote from your initial post do you write tests first, then refactor? Or do you write code then tests? Always write tests first and then refactor.
https://www.codediesel.com/software/refactoring-1-consolidate-conditional-expression/
CC-MAIN-2021-31
en
refinedweb
79336/how-do-i-set-custom-html-attributes-in-django-forms I have a Django form that is part of page. Lets say I have a field: search_input = forms.CharField(_(u'Search word'), required=False) I can access it only in template via {{ form.search_input }}. How to set custom HTML attrs (such as name and value)? I would like to find flexible solution, that would allow me to add any needed custom attributes to all types of fields. But using attrs gives me (if used with CharField): __init__() got an unexpected keyword argument 'attrs' Hello @kartik, You can change the widget on the CharField to achieve the effect you are looking for. search_input = forms.CharField(_(u'Search word'), required=False) search_input.widget = forms.TextInput(attrs={'size': 10, 'title': 'Search',}) Hope it helps!! Thank You!! from django.tests import TestCase class MyTests(TestCase): ...READ MORE can you tell me the procedure for ...READ MORE Hi, You can use dedicated hooks(decorators) called before ...READ MORE I have the following code below I ...READ MORE Hi, there is only that way and ...READ MORE Hi all, with regard to the above ...READ MORE Hello @kartik, To turn off foreign key constraint ...READ MORE Hello @kartik, Let's say you have this important ...READ MORE Hii Kartik, check uwsgi + django source code and pass static param ...READ MORE Hello @kartik, Use Sample.objects.filter(date__range=["2020-01-01", "2020-01-31"]) Or if you are just ...READ MORE OR At least 1 upper-case and 1 lower-case letter Minimum 8 characters and Maximum 50 characters Already have an account? Sign in.
https://www.edureka.co/community/79336/how-do-i-set-custom-html-attributes-in-django-forms?show=79337
CC-MAIN-2021-31
en
refinedweb
From: Peter Dimov (pdimov_at_[hidden]) Date: 2003-01-05 10:23:05 From: "Joel de Guzman" <djowel_at_[hidden]> > From: "Peter Dimov" <pdimov_at_[hidden]> > [...] > > ref(x)(...) can mean two different things, both reasonable. One is to simply > > return x. The other is to return x(...). The convention we have adopted so > > far in bind and function is to treat ref as if ref(x)(...) returns x(...). > > This has nothing to do with spirit using bind, function, or lambda. It's > > about the semantics of ref. > > > > In fact, if you use ref(b) as above, you now have no way to express the > > other operation, make if_p store a reference to the function object: > > > > if_p(ref(f)) > > [ > > ] > > > > This is necessary when f has state or cannot be copied. > > Ok, I understand. Anyway, do you have a suggestion? Perhaps > what I need then is a low-fat var(x) and val(x). > > Thoughts? My thoughts with my "independent observer/low-tolerance library user" hat on are: * A good lambda library should already have a low-fat, low-dependency var(x), something like template<class E> struct basic_lambda_expression {}; // tag template<class T> class var_expression: public basic_lambda_expression< var_expression<T> > { public: typedef T & result_type; // obvious implementation }; * Two good lambda libraries sharing the same boost:: namespace will share basic_lambda_expression<> and recognize each other's lambdas. ;-) Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2003/01/41782.php
CC-MAIN-2021-31
en
refinedweb
Asked by: Search Query in SearchBar Working in IOS But Not in Android Question - User382837 posted I created a sample Pet List with a Search View at the very top. The Search View would help user filter results of the List. The implementation works in IOS but I get an error in Android. in IOS: in Android: Here is How I implemented it: 1. I created a view model with the SearchItem as one of the properties public class PetsViewModel : INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; private IList _pets; private IList originalList; private int _current; private string _searchTerm; public string SearchTerm { get => _searchTerm; set { if (_searchTerm != value) { _searchTerm = value; RaisePropertyChanged(); if (string.IsNullOrWhiteSpace(_searchTerm)) {// if search string is null List to original list Pets = originalList; } else {// show data that has filtered data var data = Pets.Where(i => i.Name.Contains(_searchTerm)); Pets = (IList<Pet>)data; } } } } Tuesday, March 26, 2019 5:37 PM in XAML I used binding with the SearchBar SearchBar Placeholder="Name or Keyword" Text="{Binding SearchTerm, Mode=TwoWay}" All replies - User382871 posted I've reproduced the issue on Android. You can refer to it. page.xaml.cs ``` public partial class MainPage : ContentPage { List country = new List { "India", "pakistan", "Srilanka", "Bangladesh", "Afghanistan" }; public MainPage() { InitializeComponent(); list.ItemsSource = country; } private void Search_bar_TextChanged(object sender, TextChangedEventArgs e) { var keyword = search_bar.Text; if (keyword.Length >= 1) { var suggestion = country.Where(c => c.ToLower().Contains(keyword.ToLower())); list.ItemsSource = suggestion; list.IsVisible = true; } else { //list.IsVisible = false; } } } page.xamlWednesday, March 27, 2019 9:03 AM - User382837 posted Actually, I did the same implementation as you by moving the implementation to an Event. I wanted to see if it was possible to put it in the View ModelWednesday, March 27, 2019 9:17 AM - User382871 posted Have a look at the code. page.xaml.cs ``` public partial class Page1 : ContentPage { ObservableCollection list = new ObservableCollection (); List<string> list1 = new List<string>(); public Page1 () { InitializeComponent (); list.Add(new People { Name="jack" }); list.Add(new People { Name = "jarvan" }); peopleList.ItemsSource = list; } private void Search_bar_TextChanged(object sender, TextChangedEventArgs e) { if (string.IsNullOrWhiteSpace(e.NewTextValue)) { peopleList.ItemsSource = list; } else { peopleList.ItemsSource = list.Where(i => i.Name.Contains(e.NewTextValue)); } } } page.xaml model.cspublic class People { public string Name { get; set; } public People() { } } ```Friday, March 29, 2019 2:11 AM
https://social.msdn.microsoft.com/Forums/en-US/1e3de658-365b-47f9-ba31-02145dc0983e/search-query-in-searchbar-working-in-ios-but-not-in-android?forum=xamarinforms
CC-MAIN-2021-31
en
refinedweb
time_series_add_data_at NameName time_series_add_data_at — Add data to all monitors in a time series SynopsisSynopsis #include "misc/time_series.h" | void **time_series_add data to all monitors in a time series. The data will be added to the bucket corresponding to a given time stamp. - ts - the time series. - data - the data to be added. - whence - the time stamp when data should be added. If 0 or in a future time, then current time is to used. If too old to be within a monitor windows, the request will be skiped for that monitor. void.
https://www.sparkpost.com/momentum/3/3-api/apis-time-series-add-data-at/
CC-MAIN-2021-31
en
refinedweb
#include <Wt/Mail/Message> A mail message. This class represents a MIME-compliant mail message. The message can have a plain text body and an optional HTML body, which when present is encoded as an MIME multipart/alternative. It is recommended to send the same contents both in a plain text and an HTML variant. Recipient names, names, and body text may contain unicode text. Default constructor. Creates an empty message. You need to add at least a sender and a recipient to create a valid email message. Adds an attachment. Ownership of the data stream is not transferred; you should keep this object valid until the message has been sent using Client::send() or written using write(). Adds a header value. A header is added, even if a header with the same name already was present. Adds an HTML body. The text should be an HTML version of the plain text body. Adds a recipient. A mail can have multiple recipients. Returns a header value. Returns 0 if no header with that name is found. Returns the HTML body. Returns the recipients. Returns the reply-to mailbox. Sets the plain text body. This is the plain text mail contents. Sets a date. According to RFC 2822, the date should express local time. Sets a header value. If a header with that value was already defined, it is replaced with the new value. Otherwise, the header is added. Returns the subject. Writes the message to the stream. This writes the message as a MIME 1.0 message to the output stream.
https://webtoolkit.eu/wt/wt3/doc/reference/html/classWt_1_1Mail_1_1Message.html
CC-MAIN-2021-31
en
refinedweb
Heart Rate Monitor example for the BLE API using nRF51822 native mode drivers Dependencies: BLE_API mbed nRF51822 X_NUCLEO_IDB0XA1 BLE_HeartRate implements the Heart Rate Service which enables a collector device (such as a smart phone) to connect and interact with a Heart Rate Sensor. For the sake of simplicity and portability, the sensor in this case has been abstracted using a counter which counts up to a threshold and then recycles. The code can be easily extended to use the real heart rate sensor. Apps on the collector device may expect auxiliary services to supplement the HRService. We've therefore also included the Device Information Service and the Battery Service. BLE_API offers the building blocks to compose the needed GATT services out of Characteristics and Attributes, but that can be cumbersome. As a shortcut, it is possible to simply instantiate reference services offered by BLE_API, and we'll be taking that easier route. The user is encouraged to peek under the hood of these 'services' and be aware of the underlying mechanics. It is not necessary to use these ready-made services. Like most non-trivial services, the heart-rate service is connection oriented. In the default state, the application configures the Bluetooth stack to advertise its presence and indicate connectability. A Central/Master device is expected to scan for advertisements from peripherals in the vicinity and then initiate a connection. Once connected, the peripheral stops advertising, and communicates periodically as a server using the Attribute Protocol. Walkthrough of the code Let's see how this magic is achieved. We'll be pulling out excerpts from main.cpp where most of the code resides. You'll find that the entire system is event driven, with a single main thread idling most of its time in a while loop and being interrupted by events. An important startup activity for the application is to setup the event callback handlers appropriately. The first thing to notice is the BLEDevice class, which encapsulates the Bluetooth low energy protocol stack. BLEDevice #include "BLEDevice.h" BLEDevice ble; void disconnectionCallback(Gap::Handle_t handle, Gap::DisconnectionReason_t reason) { ble.startAdvertising(); // restart advertising } int main(void) { ble.init(); ble.onDisconnection(disconnectionCallback); ... ble.startAdvertising(); while (true) { ... ble.waitForEvent(); ... } } There is an init() method that must be called before using the BLEDevice object. The startAdvertising() method is called to advertise the device's presence allowing other devices to connect to it. onDisconnect() is a typical example of setting up of an event handler. With onDisconnect(), a callback function is setup to restart advertising when the connection is terminated. The waitForEvent() method should be called whenever the main thread is 'done' doing any work; it hands the control over to the protocol and lets you save power. So when will waitForEvent() return? Basically whenever you have an application interrupt, and most typically that results in some event callback being invoked. In this example there is a Ticker object that is setup to call a function every second. Whenever the ticker 'ticks' the periodicCallback() is invoked, and then waitForEvent() returns, resuming the execution in main. Interrupt to trigger periodic; } int main(void) { led1 = 1; Ticker ticker; ticker.attach(periodicCallback, 1); ... It is worth emphasizing that the periodicCallback() (or any other event handler) is called in interrupt context; and should not engage in any heavy-weight tasks to avoid the system from becoming unresponsive. A typical workaround is to mark some activity as pending to be handled in the main thread; as done through 'triggerSensorPolling'. BLEDevice offers APIs to setup GAP (for connectability) and GATT (for services). As has been mentioned already, GATT services may be composed by defining Characteristics and Attributes separately (which is cumbersome), or in some cases by simply instantiating reference services offered by BLE_API. The following illustrates how straightforward this can be. You are encouraged to peek under the hood of these implementations and study the mechanics. Service setup /* Setup primary service. */ uint8_t hrmCounter = 100; HeartRateService hrService(ble, hrmCounter, HeartRateService::LOCATION_FINGER); /* Setup auxiliary services. */ BatteryService battery(ble); DeviceInformationService deviceInfo(ble, "ARM", "Model1", "SN1", "hw-rev1", "fw-rev1", "soft-rev1"); Setting up GAP mostly has to do with configuring connectability and the payload contained in the advertisement packets. Advertiser setup ble.accumulateAdvertisingPayload(GapAdvertisingData::BREDR_NOT_SUPPORTED | GapAdvertisingData::LE_GENERAL_DISCOVERABLE); ble.accumulateAdvertisingPayload(GapAdvertisingData::COMPLETE_LIST_16BIT_SERVICE_IDS, (uint8_t *)uuid16_list, sizeof(uuid16_list)); ble.accumulateAdvertisingPayload(GapAdvertisingData::GENERIC_HEART_RATE_SENSOR); ble.accumulateAdvertisingPayload(GapAdvertisingData::COMPLETE_LOCAL_NAME, (uint8_t *)DEVICE_NAME, sizeof(DEVICE_NAME)); ble.setAdvertisingType(GapAdvertisingParams::ADV_CONNECTABLE_UNDIRECTED); ble.setAdvertisingInterval(1600); /* 1000ms; in multiples of 0.625ms. */ The first line (above).
https://os.mbed.com/teams/Bluetooth-Low-Energy/code/BLE_HeartRate/
CC-MAIN-2021-31
en
refinedweb
When it comes to programming, I have a belt and suspenders philosophy. Anything that can help me avoid errors early is worth looking into. The type annotation support that's been gradually added to Python is a good example. Here's how it works and how it can be helpful. Introduction The first important point is that the new type annotation support has no effect at runtime. Adding type annotations in your code has no risk of causing new runtime errors: Python is not going to do any additional type-checking while running. Instead, you'll be running separate tools to type-check your programs statically during development. I say "separate tools" because there's no official Python type checking tool, but there are several third-party tools available. So, if you chose to use the mypy tool, you might run: $ mypy my_code.py and it might warn you that a function that was annotated as expecting string arguments was going to be called with an integer. Of course, for this to work, you have to be able to add information to your code to let the tools know what types are expected. We do this by adding "annotations" to our code. One approach is to put the annotations in specially-formatted comments. The obvious advantage is that you can do this in any version of Python, since it doesn't require any changes to the Python syntax. The disadvantages are the difficulties in writing these things correctly, and the coincident difficulties in parsing them for the tools. To help with this, Python 3.0 added support for adding annotations to functions (PEP-3107), though without specifying any semantics for the annotations. Python 3.6 adds support for annotations on variables (PEP-526). Two additional PEPs, PEP-483 and PEP-484, define how annotations can be used for type-checking. Since I try to write all new code in Python 3, I won't say any more about putting annotations in comments. Getting started Enough background, let's see what all this looks like. Python 3.6 was just released, so I’ll be using it. I'll start with a new virtual environment, and install the type-checking tool mypy (whose package name is mypy-lang).: $ virtualenv -p $(which python3.6) try_types $ . try_types/bin/activate $ pip install mypy-lang Let's see how we might use this when writing some basic string functions. Suppose we're looking for a substring inside a longer string. We might start with: def search_for(needle, haystack): offset = haystack.find(needle) return offset If we were to call this with anything that's not text, we'd consider it an error. To help us avoid that, let's annotate the arguments: def search_for(needle: str, haystack: str): offset = haystack.find(needle) return offset Does Python care about this?: $ python search1.py $ Python is happy with it. There's not much yet for mypy to check, but let's try it: $ mypy search1.py $ In both cases, no output means everything is okay. (Aside: mypy uses information from the files and directories on its command line plus all packages they import, but it only does type-checking on the files and directories on its command line.) So far, so good. Now, let's call our function with a bad argument by adding this at the end: If we tried to run this, it wouldn't work: $ python search2.py Traceback (most recent call last): File "search2.py", line 4, in <module> search_for(12, "my string") File "search2.py", line 2, in search_for offset = haystack.find(needle) TypeError: must be str, not int In a more complicated program, we might not have run that line of code until sometime when it would be a real problem, and so wouldn't have known it was going to fail. Instead, let's check the code immediately: $ mypy search2.py search2.py:4: error: Argument 1 to "search_for" has incompatible type "int"; expected "str" Mypy spotted the problem for us and explained exactly what was wrong and where. We can also indicate the return type of our function: def search_for(needle: str, haystack: str) -> str: offset = haystack.find(needle) return offset and ask mypy to check it: $ mypy search3.py search3.py: note: In function "search_for": search3.py:3: error: Incompatible return value type (got "int", expected "str") Oops, we're actually returning an integer but we said we were going to return a string, and mypy was smart enough to work that out. Let's fix that: def search_for(needle: str, haystack: str) -> int: offset = haystack.find(needle) return offset And see if it checks out: $ mypy search4.py $ Now, maybe later on we forget just how our function works, and try to use the return value as a string: x = len(search_for('the', 'in the string')) Mypy will catch this for us: $ mypy search5.py search5.py:5: error: Argument 1 to "len" has incompatible type "int"; expected "Sized" We can't call len() on an integer. Mypy wants something of type Sized -- what's that? More complicated types The built-in types will only take us so far, so Python 3.5 added the typing module, which both gives us a bunch of new names for types, and tools to build our own types. In this case, typing.Sized represents anything with a __len__ method, which is the only kind of thing we can call len() on. Let's write a new function that'll return a list of the offsets of all of the instances of some string in another string. Here it is: from typing import List def multisearch(needle: str, haystack: str) -> List[int]: # Not necessarily the most efficient implementation offset = haystack.find(needle) if offset == -1: return [] return [offset] + multisearch(needle, haystack[offset+1:]) Look at the return type: List[int]. You can define a new type, a list of a particular type of elements, by saying List and then adding the element type in square brackets. There are a number of these - e.g. Dict[keytype, valuetype] - but I'll let you read the documentation to find these as you need them. mypy passed the code above, but suppose we had accidentally had it return None when there were no matches: def multisearch(needle: str, haystack: str) -> List[int]: # Not necessarily the most efficient implementation offset = haystack.find(needle) if offset == -1: return None return [offset] + multisearch(needle, haystack[offset+1:]) mypy should spot that there's a case where we don't return a list of integers, like this: $ mypy search6.py $ Uh-oh - why didn't it spot the problem here? It turns out that by default, mypy considers None compatible with everything. To my mind, that's wrong, but luckily there's an option to change that behavior: $ mypy --strict-optional search6.py search6.py: note: In function "multisearch": search6.py:7: error: Incompatible return value type (got None, expected List[int]) I shouldn't have to remember to add that to the command line every time, though, so let's put it in a configuration file just once. Create mypy.ini in the current directory and put in: [mypy] strict_optional = True And now: $ mypy search6.py search6.py: note: In function "multisearch": search6.py:7: error: Incompatible return value type (got None, expected List[int]) But speaking of None, it's not uncommon to have functions that can either return a value or None. We might change our search_for method to return None if it doesn't find the string, instead of -1: def search_for(needle: str, haystack: str) -> int: offset = haystack.find(needle) if offset == -1: return None else: return offset But now we don't always return an int and mypy will rightly complain: $ mypy search7.py search7.py: note: In function "search_for": search7.py:4: error: Incompatible return value type (got None, expected "int") When a method can return different types, we can annotate it with a Union type: from typing import Union def search_for(needle: str, haystack: str) -> Union[int, None]: offset = haystack.find(needle) if offset == -1: return None else: return offset There's also a shortcut, Optional, for the common case of a value being either some type or None: from typing import Optional def search_for(needle: str, haystack: str) -> Optional[int]: offset = haystack.find(needle) if offset == -1: return None else: return offset Wrapping up I've barely touched the surface, but you get the idea. One nice thing is that the Python libraries are all annotated for us already. You might have noticed above that mypy knew that calling find on a str returns an int - that's because str.find is already annotated. So you can get some benefit just by calling mypy on your code without annotating anything at all -- mypy might spot some misuses of the libraries for you. For more reading:
https://www.caktusgroup.com/blog/2017/02/22/python-type-annotations/
CC-MAIN-2018-39
en
refinedweb
javascript - Javascript Tutorial - java script - javascript array JavaScript Arrays: - Acces process objects arrays or json - Append something to an array - Array contains a value - Array elements at beginning - Best way to find if an item is in a javascript array - Check if object is array - Copying array by value in javascript - Create a two dimensional array in javascript - Deleting array elements in javascript delete vs splice - Difference between array and while declaring a javascript array - Empty an array - Extend an existing javascript array with another array - Finding object by id in an array of javascript objects - For each over an array in javascript - Get the last item in an array - How to insert an item into an array at a specific index - How to randomize a javascript array - How to short circuit array foreach like calling break - Insert an item into an array at a specific index - Loop through an array - Merge two arrays in javascript - Mergeflatten an array of arrays in javascript - Remove duplicate from javascript array - Remove empty elements from an array in - Sort array of objects by string property value in javascript - Sorting an array of javascript objects - Unique values in an array JavaScript Date: - Add days to javascript date - Compare two dates with javascript - Detecting an invalid date date instance in javascript - Documentation on formatting a date in javascript - Format a javascript date - The right json date format - Get the current year in javascript JavaScript with AngularJS: - Data binding work in angular js - How do we bind to list of checkbox values with angular js - How does data binding work in angular js - Nuances of scope prototypal prototypical inheritance in angular js - Scope variable using angularjs - Thinking in angular js JavaScript Variables: - Check if a variable is a string - How can we check if a javascript variable is function type - Javascript check if variable exists is definedinitialized - Let and var - Read environment variables in node js - Scope of the variable in javascript - Static variable in javascript - Variable in a regular expression JavaScript Operators: JavaScript Functions: - Exists function for jquery - Find out caller function in javascript - Function overloading - How to measure time taken by a function to execute - In node js how do we include functions from my other files - Is there a better way to do optional function parameters in javascript - Javascript plus sign in front of function name - Jquery remove style added with css function - Jsl int is suddenly reporting use the function form of use strict - Set a default parameter value for a javascript functions - Standard function to check for null - Call and apply JavaScript Objects: - Add options to a select from as js object with jquery - Check does an object has property in javascript - Check if a value is an object in javascript - Checking if a key exists in a javascript object - Clone a javascript object - Convert js object to json string - Converting an object to a string - Display a javascript object - Efficiently count the number of keysproperties of an object - Enumerate the properties of a javascript object - How do i loop through or enumuerate a javascript object - How do we remove a key from a javascript object - Iterate through object properties - Length of a javascript object - List the properties of javascript object - Merge properties of two javascript objects dynamically - Null an object - Object comparison in javascript - Safely turning a json string into an object - Storing objects in html5 local storage - The name of an objects type - Undefined object property - New keyword in javascript - This keyword - Javascript new keyord JavaScript Scope: JavaScript Events: - Event binding on dynamically - Event bubbling and capturing - Event keypress - Getting the id of the element that fired an event - How to find event listeners on a dom node when debugging or from the javascript code - How to prevent buttons from submitting forms - Javascript jquery event binding with fire bug - Select box JavaScript Strings: - Check for an empty string in javascript - Convert a string into an integer - Convert javascript string to lower case - Convert string to boolean in javascript - Creating multiline string in javascript - Encode a string to base 64 in javascript - First letter of a string uppercase - Formating numbers as dollars currency string - Generate random string characters in javascript - Is there a built in way in javascript to check if a string is a valid number - Javascript case insensitive string comparison - Javascript chopslicetrim off last character in string - Name as a string - Query string values in javascript - Replace all occurrences of a string in javascript - String with another string - Sub string - Substring in javascript - Trim a string in javascript JavaScript Numbers: - Can we hide the html 5 number inputs spin box - Conversion a float number to a whole number - Generate random number between two numbers in javascript - Generating random whole numbers in javascript in a specific range - How do we check that a number is float or integer - Print a number with commas as thousands separators in javascript - What is javascripts highest integer value that a number can go to without losing precision - Validate decimal numbers - How to perform integer division and get the remainder we javascript jQuery: - Abort ajax requests using jquery - Add table row in jquery - Change the href for a hyperlink using jquery - Change the selected value of a drop down list with jquery - Check if element exists in jquery - Creating a div element in jquery - Disable enable an input with jquery - Document ready equivalent without jquery - How to check a radio button with jquery - How to detect pressing enter or keyboard using jquery - How to remove all css classes using jquery - Include jquery in the javascript console - Inner html of a div using jquery - Jquery ajax file upload - Jquery disable enable submit button - Jquery document create element equivalent - Jquery get selected element tag name - Jquery get selected option from dropdown - Jquery get specific option tag text - Jquery jquery min map is triggering - Jquery library on google apis - Jquery to perform a synchronous rather than asynchronous ajax request - Make an ajax call without jquery - Preloading images with jquery - Pure javascript equivalent to jquerys ready - Radio button is selected via jquery - Redirect request after a jquery ajax call - Refresh a page with jquery - Scroll to the top of the page using javascriptjquery - Select an element by name with jquery - Selecting and manipulating css pseudo elements such as before and after using jquery - Serializing to json in jquery - Setting checked for a checkbox with jquery - Using jquery to center a div on the screen JavaScript JSON: - Json responses - Json using node js - Jsonp all about - Pretty print json using javascript - What is jsonp all about - Update each dependency in package json - Parse int JavaScript Timing: - Convert a unix timestamp to time in javascript - How can we pass a parameter to a set timeout callback - Settimeout or setinterval - Timestamp in javascript - Why is settimeout fn zero sometimes useful - Stop set interval call in javascript JavaScript Loops: JavaScript DOM: - Current url with javascript - Encode url in javascript - Get current url in javascript - How can we check for a hash in a url using javascript - How to get browser to navigate to url in javascript - Modify the url without reloding the page - How to redirect to another webpage - Preview an image before it is uploaded - Dom element focus JavaScript Ajax: - Safari on ios six caching ajax - Why does my java script get a no access control allow origin header is present on the requested resource error when postman does not - How to get the value from the get parameters - Javascript post request for a form submit - Post query parameters - Valid email address in javascript JavaScript HTML/CSS: - Capture html canvas as gif jpg png pdf - Html text input allows only numeric input - Insert html into view - Offsetting an html anchor to adjust for fixed header duplicate - Retrieve the position x y of an html element - Script tags in html markup - Using htmlfivecanvasjavascript to take in browser screenshots - Apply css to half of a character - Css before javascript - How to apply important using css - Reload cached css js - Change an element class with javascript - Get selected text from a drop down list using query - Get selected text from a drop down list - Get the children of the this selector - Get the selected value from a dropdown list using javascript - How to detect a click outside an element - Remove a particular element - Scroll to bottom of div - Self closing script tags JavaScript Advanced: - Access the correct this inside a callback - Add a class - Add a key value - Best way to detect a mobile device - Can one controller call another - Command line arguments - Commonly accepted best practices around code organization in javascript - Constants in javascript - Convert character to asciwe code in javascript - Copy to the clipboard in javascript - Create a file in memory for user to download not through server - Data binding - Decimal to hex - Declare a namespace in javascript - Disable same origin policy in chrome - Dollar sign - Element visibility - Encodeuri encodeuricomponent - Endswith in javascript - Enums in javascript - Exclamation mark - For in - Graph visualization libe rary in javascript - Guid uuid in javascript - Hash or javascriptvoid - Hidden element - How can we obfuscate javascript - How does access control allow origin header work - How does facebook disable the browsers integrated developer tools - How does javascript prototype work - How do javascript closures work - How do we completely uninstall node js and reinstall from beginning mac os x - How do we redirect with javascript - How encoding lost when attribute resd from input field - How to check for undefined in javascript - How to clear the canvas for redrawing - How to detect javascript is disable - Invoking javascript code in an iframe from the parent page - Javascript equivalent to printf format - Javascript unit test tools for tdd - Javascript version of sleep - Javascript void - Matched groups in a javascript - N asynchronous call - Prototype - Purpose of node js module exports - Redux - Relationship between commonjs amd and requirejs - Remove element by id - Round to at most two decimal places - Set a javascript breakpoint in code in chrome - Smile javascript - Trello access the users clipboard - Trigger a button click with javascript on the enter key in a text box - Turning off eslint rule for a specific line - Typescript - Undefined or null - Upload files asynchronously - Use strict - What is the explanation for these bizarre javascript behaviours mentioned in the wat talk for codemash - When to use double or single quotes in javascript - When to use node js - Why does javascript only work after opening developer tools in ie once JavaScript Vs Other: - Difference between angular route and angular ui router - Difference between bower and npm - Difference between double equal and triple equal in javascript - Difference between grunt npm and bower - Difference between null and undefined in javascript - Difference between substr and substring - Difference between tilt and caret in package json - Difference between titde and caret in package json - Differences between lodash and underscore - Event prevent default vs return false - Module exports vs exports in node js - Prop vs attr - Service provide vsfactory - Use of prototype vs this we javascript - Var functionname function vs function functionname - Window onload vs document ready
https://www.wikitechy.com/tutorials/javascript/javascript-tutorial
CC-MAIN-2018-39
en
refinedweb
ZDI-CAN-3766: Mozilla Firefox Clear Key Decryptor Heap Buffer Overflow Remote Code Execution Vulnerability RESOLVED FIXED in Firefox 48 Status () P1 normal People (Reporter: abillings, Assigned: gerald) Tracking ({csectype-bounds, sec-high}) Bug Flags: Firefox Tracking Flags (firefox46 wontfix, firefox47 wontfix, firefox48+ fixed, firefox49+ fixed, firefox-esr38 wontfix, firefox-esr4548+ fixed, firefox50+ fixed) Details (Whiteboard: [adv-main48+][adv-esr45.3+]) Attachments (2 attachments) Created attachment 8754923 [details] ZDI POC We received the following report from Trending Micro's Zero Day Initiative (ZDI): ZDI-CAN-3766: Mozilla Firefox ClearKeyDecryptor Heap Buffer Overflow Remote Code Execution Vulnerability -- CVSS ----------------------------------------- 6.8, AV:N/AC:M/Au:N/C:P/I:P/A:P -- ABSTRACT ------------------------------------- Trend Micro's Zero Day Initiative has identified a vulnerability affecting the following products: Mozilla Firefox -- VULNERABILITY DETAILS ------------------------ Tested against Firefox 45.0.2 on Windows 8.1 ``` (984.bac): Access violation - code c0000005 (first chance) First chance exceptions are reported before any exception handling. This exception may be expected and handled. eax=0253170d ebx=0252effd ecx=0000270d edx=00002710 esi=0252f000 edi=05174ffb eip=672af26a esp=063efab8 ebp=05174ff8 iopl=0 nv up ei pl nz ac pe cy cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00010217 clearkey!memcpy+0x2a: 672af26a f3a4 rep movs byte ptr es:[edi],byte ptr [esi] 0:004> kv ChildEBP RetAddr Args to Child 063efabc 672a36fb 05174ff8 0252effd 00002710 clearkey!memcpy+0x2a (FPO: [3,0,2]) (CONV: cdecl) [f:\dd\vctools\crt\crtw32\string\i386\memcpy.asm @ 188] 063efb04 672a366e 0252eff8 00000004 00000000 clearkey!ClearKeyDecryptor::Decrypt+0x5c (FPO: [3,10,0]) (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\media\gmp-clearkey\0.1\clearkeydecryptionmanager.cpp @ 182] 063efb20 672a5e2a 02200fa8 02200fcc 063efb90 clearkey!ClearKeyDecryptionManager::Decrypt+0x3b (FPO: [2,0,4]) (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\media\gmp-clearkey\0.1\clearkeydecryptionmanager.cpp @ 138] 063efb48 672ab4ec 02200fa8 5ca1ff99 03109450 clearkey!VideoDecoder::DecodeTask+0x84 (FPO: [Non-Fpo]) (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\media\gmp-clearkey\0.1\videodecoder.cpp @ 167] 063efb50 5ca1ff99 03109450 5b9cfd3d 063efc88 clearkey!gmp_task_args_m_1<VideoDecoder *,void (__thiscall VideoDecoder::*)(VideoDecoder::DecodeData *),VideoDecoder::DecodeData *>::Run+0xe (FPO: [0,0,0]) (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\media\gmp-clearkey\0.1\gmp-task-utils-generated.h @ 133] 063efb58 5b9cfd3d 063efc88 0311b8e0 063efbc0 xul!mozilla::gmp::Runnable::Run+0xb (FPO: [0,0,4]) (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\dom\media\gmp\gmpplatform.cpp @ 41] 063efbc0 5b9cf5d1 063efc88 063efc88 03104b08 xul!MessageLoop::DoWork+0x19c (FPO: [Non-Fpo]) (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\ipc\chromium\src\base\message_loop.cc @ 459] 063efc08 5b9cfff3 063efc88 0ff0bb96 5bca3c39 xul!base::MessagePumpDefault::Run+0x2e (FPO: [Non-Fpo]) (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\ipc\chromium\src\base\message_pump_default.cc @ 35] 063efc40 5b9d0039 03104b1c 00000001 7503ef00 xul!MessageLoop::RunHandler+0x20 (FPO: [SEH]) (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\ipc\chromium\src\base\message_loop.cc @ 228] 063efc60 5c026784 5bca3c39 063efd5c 03119610 xul!MessageLoop::Run+0x19 (FPO: [Non-Fpo]) (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\ipc\chromium\src\base\message_loop.cc @ 202] 063efd44 5bca3c42 770f4198 03104b08 770f4170 xul!base::Thread::ThreadMain+0x382b3d (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\ipc\chromium\src\base\thread.cc @ 175] 063efd48 770f4198 03104b08 770f4170 ba9c818e xul!`anonymous namespace'::ThreadFunc+0x9 (FPO: [1,0,0]) (CONV: stdcall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\ipc\chromium\src\base\platform_thread_win.cc @ 27] 063efd5c 77722cb1 03104b08 100f92e3 00000000 KERNEL32!BaseThreadInitThunk+0x24 (FPO: [Non-Fpo]) 063efda4 77722c7f ffffffff 7774e75f 00000000 ntdll!__RtlUserThreadStart+0x2b (FPO: [SEH]) 063efdb4 00000000 5bca3c39 03104b08 00000000 ntdll!_RtlUserThreadStart+0x1b (FPO: [Non-Fpo]) 0:004> !lmi clearkey Loaded Module Info: [clearkey] Module: clearkey Base Address: 672a0000 Image Name: C:\Program Files\Mozilla Firefox\gmp-clearkey\0.1\clearkey.dll Machine Type: 332 (I386) Time Stamp: 57070eac Thu Apr 07 18:51:40 2016 Size: 32000 CheckSum: 33f5d Characteristics: 2122 Debug Data Dirs: Type Size VA Pointer CODEVIEW 82, 297a8, 27fa8 RSDS - GUID: {94D222EF-9D7F-45FF-81C0-C0F16ADA8872} Age: 2, Pdb: c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\obj-firefox\media\gmp-clearkey\0.1\clearkey.pdb ?? 14, 2982c, 2802c [Data not mapped] CLSID 4, 29840, 28040 [Data not mapped] Image Type: MEMORY - Image read successfully from loaded memory. Symbol Type: PDB - Symbols loaded successfully from image header. z:\export\symbols\clearkey.pdb\94D222EF9D7F45FF81C0C0F16ADA88722\clearkey.pdb Compiler: Linker - front end [0.0 bld 0] - back end [12.0 bld 30723] Load Report: private symbols & lines, source indexed z:\export\symbols\clearkey.pdb\94D222EF9D7F45FF81C0C0F16ADA88722\clearkey.pdb 0:004> lmvm clearkey start end module name 672a0000 672d2000 clearkey (private pdb symbols) z:\export\symbols\clearkey.pdb\94D222EF9D7F45FF81C0C0F16ADA88722\clearkey.pdb Loaded symbol image file: C:\Program Files\Mozilla Firefox\gmp-clearkey\0.1\clearkey.dll Image path: C:\Program Files\Mozilla Firefox\gmp-clearkey\0.1\clearkey.dll Image name: clearkey.dll Timestamp: Thu Apr 07 18:51:40 2016 (57070EAC) CheckSum: 00033F5D ImageSize: 00032000 File version: 45.0.2.5941 Product version: 45.0.2.5941 File flags: 0 (Mask 3F) File OS: 4 Unknown Win32 File type: 2.0 Dll File date: 00000000.00000000 Translations: 0000.04b0 CompanyName: Mozilla Foundation ProductName: Firefox InternalName: Firefox OriginalFilename: clearkey.dll ProductVersion: 45.0.2 FileVersion: 45.0.2 FileDescription: 45.0.2 LegalCopyright: License: MPL 2 LegalTrademarks: Mozilla Comments: Mozilla 0:004> vertarget Windows 8 Version 9600 UP Free x86 compatible Product: WinNt, suite: SingleUserTS kernel32.dll version: 6.3.9600.17415 (winblue_r4.141028-1500) Machine Name: Debug session time: Mon Apr 18 12:07:56.520 2016 (UTC - 7:00) System Uptime: 0 days 2:05:09.672 Process Uptime: 0 days 0:00:57.841 Kernel time: 0 days 0:00:00.046 User time: 0 days 0:00:00.031 ``` -- CREDIT --------------------------------------- This vulnerability was discovered by: Anonymous working with Trend Micro's Zero Day Initiative ---- The readme on the POC states: Firefox 45.0.1 ClearKeyDecryptor::Decrypt() (media/gmp-clearkey/0.1/ClearKeyDecryptionManager.cpp) heap overflow. The problem is that we fully control aBufferSize, ClearBytes and CipherBytes arrays. To debug you will need to attach to plugin-container.exe (clearkey.dll should be loaded). How to test: 1) copy all files from this directory to www dir 2) open index.html 3) alert will pop up, attach to plugin-container.exe 4) back to firefox, press OK to continue Priority: -- → P1 Created attachment 8756939 [details] [diff] [review] 1274637-detect-oob-copy-attempts-in-clearkey-decryptor.patch Detect OOB copy attempts in clearkey decryptor. This detects the issue quite late in the decryption process, on the child side. Chris, should we investigate ways to prevent this issue from the parent side? (This would protect other GMPs against this attack, in case they don't check for it either.) Assignee: nobody → gsquelart Attachment #8756939 - Flags: review?(cpearce) Comment on attachment 8756939 [details] [diff] [review] 1274637-detect-oob-copy-attempts-in-clearkey-decryptor.patch [Security approval request comment] How easily could an exploit be constructed based on the patch? Not that obvious (to me). The attacker would have to trace back where the parameters come from, then tailor a video file to give tricky values there. I'm not so sure there's a remote-code-execution possibility, as the target buffer is on the heap. Also, this is in the GMP, which runs inside a sandbox, making it harder to do real harm. Do comments in the patch, the check-in comment, or tests included in the patch paint a bulls-eye on the security problem? The comments and code point at the reading-past-the-end issue, not about the writing part, so it's hiding the problem a bit. Which older supported branches are affected by this flaw? All of them (code landed in FF35, Sept 2014) Do you have backports for the affected branches? If not, how different, hard to create, and risky will they be? It applies cleanly to aurora and beta. Easy to rebase for release and ESRs, if needed. How likely is this patch to cause regressions; how much testing does it need? I'd like to say zero chance of regression, as it's a simple test followed by an early return. No special testing needed apart from checking that the POC now fails gracefully. *? Flags: needinfo?(cpearce) Attachment #8756939 - Flags: sec-approval? This needs a security rating before you know if it needs sec-approval to go in. That said, it has missed the 47 window and, even if approved, wouldn't go in until June 21 (two weeks into the next cycle). (In reply to Al Billings [:abillings] from comment #3) > This needs a security rating before you know if it needs sec-approval to go > in. That wiki page says: > For security bugs with no sec- severity rating assume the worst and follow the rules for sec-critical. > [...] > if the bug has a patch *and* is sec-high or sec-critical, the developer should set the sec-approval flag to '?' on the patch And > If you have a patch and the bug is a hidden core-security bug with no rating then either: > 1. request sec-approval (to be safe) and wait for a rating, And > If developers are unsure about a bug and it has a patch ready, just mark the sec-approval flag to '?' and move on. All this tells me I was allowed to request sec-approval before knowing the exact sec-rating. Did I misread? If I were to rate the bug, I'd probably go with sec-high: I'm not so sure about the code execution possibility, but even if it was possible, it would be contained in a plugin in a sandbox, making it harder to cause more harm. status-firefox46: --- → affected status-firefox47: --- → affected status-firefox48: --- → affected status-firefox49: --- → affected status-firefox-esr38: --- → affected status-firefox-esr45: --- → affected (In reply to Gerald Squelart [:gerald] (may be slow to respond) from comment #2) > *? I think it's unlikely that the Adobe GMP is also vulnerable to this; their DRM robustness requirements should have required them to *not* use a copy of our decrypt loop here in their own decrypt code. In fact, IIRC they don't support decrypt-only mode, so that code path shouldn't be hit even if they copied it. Flags: needinfo?(cpearce) (In reply to Gerald Squelart [:gerald] (PTO until 2016-06-13) from comment #4) > All this tells me I was allowed to request sec-approval before knowing the > exact sec-rating. Did I misread? My bad. I'd forgotten I'd put that in there. I'll make this sec-high. This is too late for 47 in any case (since we're about to make final builds) so this won't be able to check in until June 21, two weeks into the next cycle. Keywords: sec-high Whiteboard: [checkin on 6/21] status-firefox46: affected → wontfix status-firefox47: affected → wontfix status-firefox-esr38: affected → wontfix tracking-firefox49: --- → + tracking-firefox-esr45: --- → 48+ tracking-firefox48: --- → + We'll want branch patches for affected branches once it goes into trunk. status-firefox50: --- → affected tracking-firefox50: --- → + The attached patch still applies cleanly to central, aurora, beta, and esr45, so it's ready to go. It is okay to land this now. Keywords: checkin-needed Keywords: checkin-needed Whiteboard: [checkin on 6/21] Status: NEW → RESOLVED Last Resolved: 2 years ago status-firefox50: affected → fixed Resolution: --- → FIXED Target Milestone: --- → mozilla50 Group: media-core-security → core-security-release Hi Gerald, could you please nominate the patch for uplift to Beta, Aurora and ESR45? I could do it for you but I'd like to get your thoughts on risk, test coverage and whether this is easily exploitable or not. Thanks! Flags: needinfo?(gsquelart) Comment on attachment 8756939 [details] [diff] [review] 1274637-detect-oob-copy-attempts-in-clearkey-decryptor.patch Approval Request Comment [Feature/regressing bug #]: Clearkey media playback. [User impact if declined]: Potential RCE in sandboxed plugin. [Describe test coverage new/current, TreeHerder]: Locally tested by running the POC; landed in Nightly for a week. [Risks and why]: I'd like to say none, this is a simple past-the-end test resulting in early function exit with an error code. [String/UUID change made/needed]: None. [ESR Approval Request Comment] If this is not a sec:{high,crit} bug, please state case for ESR consideration: User impact if declined: sec-high. Fix Landed on Version: 50. Risk to taking this patch (and alternatives if risky): No risks that I can see. String or UUID changes made by this patch: None. See for more info. (In reply to Ritu Kothari (:ritu) from comment #12) > Hi Gerald, could you please nominate the patch for uplift to Beta, Aurora > and ESR45? I could do it for you but I'd like to get your thoughts on risk, > test coverage and whether this is easily exploitable or not. Thanks! This patch applies as-is on Aurora, Beta, and ESR45. As I wrote above, I personally don't see any risk with this patch. As to the exploitability, I think it would be difficult to exploit, as the overwritten buffer is in heap space (not stack); Also it is in a plugin running in a sandbox, so should the issue be exploitable, it would (hopefully) require a lot of work to be able to cause any real harm outside of the video playback. Flags: needinfo?(gsquelart) Attachment #8756939 - Flags: approval-mozilla-esr45? Attachment #8756939 - Flags: approval-mozilla-beta? Attachment #8756939 - Flags: approval-mozilla-aurora? Comment on attachment 8756939 [details] [diff] [review] 1274637-detect-oob-copy-attempts-in-clearkey-decryptor.patch Sec-high issue, Aurora49+, Beta48+, ESR45+ Attachment #8756939 - Flags: approval-mozilla-esr45? Attachment #8756939 - Flags: approval-mozilla-esr45+ Attachment #8756939 - Flags: approval-mozilla-beta? Attachment #8756939 - Flags: approval-mozilla-beta+ Attachment #8756939 - Flags: approval-mozilla-aurora? Attachment #8756939 - Flags: approval-mozilla-aurora+ status-firefox48: affected → fixed status-firefox49: affected → fixed status-firefox-esr45: affected → fixed Flags: qe-verify+ Alias: CVE-2016-2837 Whiteboard: [adv-main48+][adv-esr45.3+]. Flags: qe-verify+ Flags: needinfo?(mwobensmith) Flags: needinfo?(gsquelart) Mihai, you are correct. Sorry about that. Marking qe-verify- as a result. If Gerald or anyone else has a way to verify this change without debugging, let us know. Flags: needinfo?(mwobensmith) → qe-verify- (In reply to Mihai Boldan, QA [:mboldan] from comment #16) >. I've just tried on Mac OS X, and I can see a difference: - With an unpatched FF 47, after opening the POC's index.html and pressing OK, a drop-down notification bar says "The Clearkey plugin has crashed" (most probably due to the unchecked memory copy trying to write outside of its permitted bounds). - With patched FF 50, 49b, and 48a, there is no such notification, the plugin does not crash. The crash may be dependent on the OS and other environment factors. Would the QA team be able to try on different platforms? Flags: needinfo?(gsquelart) I managed to reproduce this issue following the STR from Comment 0, using Firefox 47.0.1 and on Windows 10 x64. I've tested this issue on Firefox 45.3.0 ESR, Firefox 48.0, Firefox 49.0b1, Firefox 50.0a2 (2016-08-08) and on Firefox 51.0a1 (2016-08-08), across platforms [1 ]and I confirm that the notification is no longer displayed and the plugin does not crash. [1]Windows 10 x64, Mac OS X 10.11.1, Ubuntu 16.04x64 Group: core-security-release Keywords: csectype-bounds
https://bugzilla.mozilla.org/show_bug.cgi?id=1274637
CC-MAIN-2018-39
en
refinedweb
It's not the same without you Join the community to find out what other Atlassian users are discussing, debating and creating. Hi Friends! There is a way to transition parent when all sub-tasks change the status. But is it a way to transition parent issue when simple checkbox field with one value are marked in all sub-tasks? In other words I need to transition parent issue when the checkbox customfield in the last sub-task was updated. Hi friend, Actually the fast track is not an option for you because this will fast track the issue where the update happen, so in your case the subtask. You want a custom listener where will listens for Issue Updated events and if the updated issue is a subtask then it should go to the parent, get all it's subtasks and if all of them have selected at least the value you want then it will transit the parent. Therefore you custom listener, that listens for an Issue Updated event and for the project/s you want. Now, your script (and for a JIRA v7) will be import com.atlassian.jira.component.ComponentAccessor import com.atlassian.jira.issue.Issue import com.atlassian.jira.issue.IssueInputParameters import com.atlassian.jira.user.ApplicationUser import com.atlassian.jira.workflow.TransitionOptions def change = event?.getChangeLog()?.getRelated("ChildChangeItem").find {it.field == "CheckBox A"} if (! change) { // there was an update issue update but it was not the checkbox therefore do nothing return } Issue issue = issue def flag = true def ACTION_ID = 21 // replace with the action id that you want to transit the parent // is not a subtask therefore do nothing if (! issue.isSubTask()) return def cwdUser = ComponentAccessor.getJiraAuthenticationContext().getLoggedInUser() // iterate over the subtasks if at least on of the subtasks has notn the value 'Yes' checked then return false issue.getParentObject().subTaskObjects?.each {subtask -> def cf = ComponentAccessor.getCustomFieldManager().getCustomFieldObjectByName("CheckBox A") def cfValue = subtask.getCustomFieldValue(cf) if (! ("Yes" in cfValue*.value) || !cfValue) { log.debug ("Yes option is not selected for subtask ${subtask.key} or there is no option at all") flag = false } } // it it reaches here means all the subtasks have selected ONLY the value yes for the custom field CheckBox A if (flag) transitIssue(issue.parentObject, ACTION_ID, cwdUser) // this means that all the subtasks have the value 'Yes' checked, therefore return true and make the transition def transitIssue(Issue issue, int actionId, ApplicationUser cwdUser) { def issueService = ComponentAccessor.getIssueService() IssueInputParameters issueInputParameters = issueService.newIssueInputParameters() issueInputParameters.setSkipScreenCheck(true) def transitionOptions= new TransitionOptions.Builder() .skipConditions() .skipPermissions() .skipValidators() .build() def transitionValidationResult = issueService.validateTransition(cwdUser, issue.id, actionId, issueInputParameters, transitionOptions) if (transitionValidationResult.isValid()) { return issueService.transition(cwdUser, transitionValidationResult).getIssue() } log.error "Transition of issue ${issue.key} failed. " + transitionValidationResult.errorCollection } For you information the checkboxes can have more than one value. So in the script above will check if at least on of the selected values (for each subtask) is Yes then it will transit the parent. Hope that helps. Regards, Thanos Hi Thanos! Many thanks for your incredible script Tomorrow I will check it and come back with the answer. Hi Thanos! I still was not able to completely check your great script. Moreover I found a workaround by using of Automation wich allows to transit an issue automaticly by checking a checkbox and then I use post-function from my description. But I have marked your reply as an answer because I think it will work and someone can use it in the future! Thanks See you You would need to write a customer listener using script runner for doing that. Check here for some examples: -Ravi Hi Ravi. Thanks for the answer! So I think I should use Fast-track transition an issue listener. In this case what do I need to choose as event when sub-task is updated? Which condition helps me to check that all e.g. customfields_10800 have checkboxes? Sorry I am not good at script.
https://community.atlassian.com/t5/Jira-Core-questions/Transition-Parent-Issue-when-a-some-customfield-in-all-sub-task/qaq-p/274735
CC-MAIN-2018-39
en
refinedweb
I would like to truncate the dataloader to run epochs faster for testing purposes. My attemp so far was using itertools islice: from itertools import islice # Testing mode if test: trainloader = islice(trainloader, 20) However, this gives problems since iterators has no len method for example, and I need to be changing code depending of if I am testing or not. Is there any “official” way of dealing with PyTorch Dataloaders? And btw, are PyTorch Dataloaders Python Iterable type? Thanks!
https://discuss.pytorch.org/t/truncate-data-loader-for-quicker-testing/25197
CC-MAIN-2018-39
en
refinedweb
Here are several ways you can take screenshots and edit the screenshots by adding text, arrows etc. Instructions and mentioned When I switched from Windows to Ubuntu as my primary OS, the first thing I was worried about was the availability of screenshot tools. Well, it is easy to utilize the default keyboard shortcuts in order to take screenshots but with a standalone tool, I get to annotate/edit the image while taking the screenshot. In this article, we will introduce you to the default methods/tools (without a 3rd party screenshot tool) to take a screenshot while also covering the list of best screenshot tools available for Linux. Method 1: The default way to take screenshot in Linux Do you want to capture the image of your entire screen? A specific region? A specific window? If you just want a simple screenshot without any annotations/fancy editing capabilities, the default keyboard shortcuts will do the trick. These are not specific to Ubuntu. Almost all Linux distributions and desktop environments support these keyboard Let’s take a look at the list of keyboard shortcuts you can utilize: PrtSc – Save a screenshot of the entire screen to the “Pictures” directory. Shift + PrtSc – Save a screenshot of a specific region to Pictures. Alt + PrtSc – Save a screenshot of the current window to Pictures. Ctrl + PrtSc – Copy the screenshot of the entire screen to the clipboard. Shift + Ctrl + PrtSc – Copy the screenshot of a specific region to the clipboard. Ctrl + Alt + PrtSc – Copy the screenshot of the current window to the clipboard. As you can see, taking screenshots in Linux is absolutely simple with the default screenshot tool. However, if you want to immediately annotate (or other editing features) without importing the screenshot to another application, you can use a dedicated screenshot tool. Method 2: Take and edit screenshots in Linux with Flameshot Feature Overview - Annotate (highlight, point, add text, box in) - Blur part of an image - Crop part of an image - Upload to Imgur - Open screenshot with another app Flameshot is a quite impressive screenshot tool which arrived on GitHub last year. If you have been searching for a screenshot tool that helps you annotate, blur, mark, and upload to imgur while being actively maintained unlike some outdated screenshot tools, Flameshot should be the one to have installed. Fret not, we will guide you how to install it and configure it as per your preferences. To install it on Ubuntu, you just need to search for it on Ubuntu Software center and get it installed. In case you want to use the terminal, here’s the command for it: sudo apt install flameshot If you face any trouble installing, you can follow their official installation instructions. After installation, you need to configure it. Well, you can always search for it and launch it, but if you want to trigger the Flameshot screenshot tool by using PrtSc key, you need to assign a custom keyboard shortcut. Here’s how you can do that: - Head to the system settings and navigate your way to the Keyboard settings. - You will find all the keyboard shortcuts listed there, ignore them and scroll down to the bottom. Now, you will find a + button. - Click the “+” button to add a custom shortcut. You need to enter the following in the fields you get: Name: Anything You Want Command: /usr/bin/flameshot gui - Finally, set the shortcut to PrtSc – which will warn you that the default screenshot functionality will be disabled – so proceed doing it. For reference, your custom keyboard shortcut field should look like this after configuration: Method 3: Take and edit screenshots in Linux with Shutter Feature Overview: - Annotate (highlight, point, add text, box in) - Blur part of an image - Crop part of an image - Upload to image hosting sites Shutter is a popular screenshot tool available for all major Linux distributions. Though it seems to be no more being actively developed, it is still an excellent choice for handling You might encounter certain bugs/errors. The most common problem with Shutter on any latest Linux distro releases is that the ability to edit the screenshots is disabled by default along with the missing applet indicator. But, fret not, we have a solution to that. You just need to follow our guide to fix the disabled edit option in Shutter and bring back the applet indicator. After you’re done fixing the problem, you can utilize it to edit the screenshots in a jiffy. To install shutter, you can browse the software center and get it from there. Alternatively, you can use the following command in the terminal to install Shutter in Ubuntu-based distributions: sudo apt install shutter As we saw with Flameshot, you can either choose to use the app launcher to search for Shutter and manually launch the application, or you can follow the same set of instructions (with a different command) to set a custom shortcut to trigger Shutter when you press the PrtSc key. If you are going to assign shutter -f Method 4: Use GIMP for taking screenshots in Linux Feature Overview: - Advanced Image Editing Capabilities (Scaling, Adding filters, color correction, Add layers, Crop, and so on.) - Take a screenshot of the selected area If you happen to use GIMP a lot and you probably want some advance edits on your screenshots, GIMP would be a good choice for that. You should already have it installed, if not, you can always head to your software center to install it. If you have trouble installing, you can always refer to their official website for installation instructions. To take a screenshot with GIMP, you need to first launch it, and then navigate your way through File->Create->Screenshot. After you click on the screenshot option, you will be greeted with a couple of tweaks to control the screenshot. That’s just it. Click “Snap” to take the screenshot and the image will automatically appear within GIMP, ready for you to edit. Method 5: Taking screenshot in Linux using command line tools This section is strictly for terminal lovers. If you like using the terminal, you can utilize the GNOME screenshot tool or ImageMagick or Deepin Scrot– which comes baked in on most of the popular Linux distributions.. gnome-screenshot -d -5 ImageMagick ImageMagick should be already pre-installed on your system if you are using Ubuntu, Mint, or any other popular Linux distribution. In case, it isn’t there, you can always install it by following the official installation instructions (from source). In either case, you can enter the following in the terminal: sudo apt-get install imagemagick After you have it installed, you can type in the following commands to take a screenshot: To take the screenshot of your entire screen: import -window root image.png Here, “image.png” is your desired name for the screenshot. To take the screenshot of a specific area: import image.png Deepin Scrot Wrapping Up So, these are the best screenshot tools available for Linux. Yes, there are a few more tools available (like Spectacle for KDE-based distros), but if you end up comparing them, the above-mentioned tools will outshine them. In case you find a better screenshot tool than the ones mentioned in our article, feel free to let us know about it in the comments below. Also, do tell us about your favorite screenshot tool! Hi Everyone, My favorite screenshot tool is called “Ksnip”. DamirPorobic/ksnip: Ksnip is a Qt based Linux screenshot tool that provides many annotation features for your screenshots. Phil (phd21) Flameshot is wonderful. I didn’t know it. Thank you so much for the recommendation.
https://itsfoss.com/take-screenshot-linux/
CC-MAIN-2018-39
en
refinedweb
I have simply added an extension to the EmptyStackException in Java to show an error message. I have put this file under the same package, but I get the error: "The constructor EmptyStackException(String) is undefined" EmptyStackException.java: package mypackage; public class EmptyStackException extends RuntimeException{ private static final long serialVersionUID = 1L; public EmptyStackException(String message){ super(message); } } Should be something very simple. But I got stuck! Any help is appreciated. You have defined your own EmptyStackException, but Java supplies java.util.EmptyStackException, and its only constructor takes no arguments, which means your code that you think uses your own EmptyStackException actually is using java.util.EmptyStackException. To extend EmptyStackException, define your own: public MyEmptyStackExceptionWithMessage extends EmptyStackException { In the constructor, call super() and store your own message. It looks like you're trying to replace the existing EmptyStackException class with one of your own; this is always a very bad idea (and in many Java environments, not possible.) Probably the compile error is arising because although you're trying to construct an instance of your class, the Java compiler is actually trying to instantiate java.util.EmptyStackException which indeed has no one-argument constructor. Basically, if you need to define your own exception types, choose names that aren't already used by a platform class -- you'll avoid a lot of hassle. And if your goal is truly to replace the Java API class: give up. It's not going to happen.
http://www.dlxedu.com/askdetail/3/35fcb0f7b3e419705fe6a70109c809be.html
CC-MAIN-2018-39
en
refinedweb
Windows 10 Development - Store The benefit of Windows Store for developers is that you can sell your application. You can submit your single application for every device family. The Windows 10 Store is where applications are submitted, so that a user can find your application. In Windows 8, the Store was limited to application only and Microsoft provides many stores i.e. Xbox Music Store, Xbox Game Store etc. In Windows 8, all these were different stores but in Windows 10, it is called Windows Store. It is designed in a way where users can find a full range of apps, games, songs, movies, software and services in one place for all Windows 10 devices. Monetization Monetization means selling your app across desktop, mobile, tablets and other devices. There are various ways that you can sell your applications and services on Windows Store to earn some money. You can select any of the following methods − The simplest way is to submit your app on store with paid download options. The Trails option, where users can try your application before buying it with limited functionality. Add advertisements to your apps with Microsoft Advertising. Microsoft Advertising When you add Ads to your application and a user clicks on that particular Ad, then the advertiser will pay you the money. Microsoft Advertising allows developers to receive Ads from Microsoft Advertising Network. The Microsoft Advertising SDK for Universal Windows apps is included in the libraries installed by Visual Studio 2015. You can also install it from visualstudiogallery Now, you can easily integrate video and banner Ads into your apps. Let us have a look at a simple example in XAML, to add a banner Ad in your application using AdControl. Create a new Universal Windows blank app project with the name UWPBannerAd. In the Solution Explorer, right click on References Select Add References, which will open the Reference Manager dialog. From the left pane, select Extensions under Universal Windows option and check the Microsoft Advertising SDK for XAML. Click OK to Continue. Given below is the XAML code in which AdControl is added with some properties. "> <UI:AdControl </StackPanel> </Grid> </Page> When the above code is compiled and executed on a local machine, you will see the following window with MSN banner on it. When you click this banner, it will open the MSN site. You can also add a video banner in your application. Let us consider another example in which when the Show ad button is clicked, it will play the video advertisement of Xbox One. Given below is the XAML code in which we demonstrate how a button is added with some properties and events. "> <Button x: </StackPanel> </Grid> </Page> Given below is the click event implementation in C#. using Microsoft.Advertising.WinRT.UI; using Windows.UI.Xaml; using Windows.UI.Xaml.Controls; // The Blank Page item template is documented at namespace UWPBannerAd { /// <summary> /// An empty page that can be used on its own or navigated to within a Frame. /// </summary> public sealed partial class MainPage : Page { InterstitialAd videoAd = new InterstitialAd(); public MainPage() { this.InitializeComponent(); } private void showAd_Click(object sender, RoutedEventArgs e) { var MyAppId = "d25517cb-12d4-4699-8bdc-52040c712cab"; var MyAdUnitId = "11388823"; videoAd.AdReady += videoAd_AdReady; videoAd.RequestAd(AdType.Video, MyAppId, MyAdUnitId); } void videoAd_AdReady(object sender, object e){ if ((InterstitialAdState.Ready) == (videoAd.State)) { videoAd.Show(); } } } } When the above code is compiled and executed on a local machine, you will see the following window, which contains a Show Ad button. Now, when you click on the Show Ad button, it will play the video on your app.
https://www.tutorialspoint.com/windows10_development/windows10_development_store.htm
CC-MAIN-2018-39
en
refinedweb
Available with Standard or Advanced license. Summary Removes a feature class from a topology. Usage Removing a feature class from a topology also removes all the topology rules associated with that feature class. Removing a feature class from a topology will require the entire topology to be validated. Syntax RemoveFeatureClassFromTopology_management (in_topology, in_featureclass) Code sample The following stand-alone script demonstrates how to use the RemoveFeatureClassFromTopology function in the Python window. import arcpy arcpy.RemoveFeatureClassFromTopology_management("C:/Datasets/TestGPTopology.gdb/LegalFabric/topology", "Parcel_line") The following stand-alone script demonstrates how to use the RemoveFeatureClassFromTopology function. # Name: RemoveClassFromTopology_Example.py # Description: Removes a feature class from participating in a topology # Import system modules import arcpy topo = "C:/Datasets/TestGPTopology.gdb/LegalFabric/topology" fc = "Parcel_line" arcpy.RemoveFeatureClassFromTopology_management(topo, fc) Environments This tool does not use any geoprocessing environments. Licensing information - ArcGIS Desktop Basic: No - ArcGIS Desktop Standard: Yes - ArcGIS Desktop Advanced: Yes
http://pro.arcgis.com/en/pro-app/tool-reference/data-management/remove-feature-class-from-topology.htm
CC-MAIN-2018-39
en
refinedweb
Creating Your First Elm App: From Authentication to Calling an API (Part 1) Creating Your First Elm App: From Authentication to Calling an API (Part 1). TL;DR:. In part two, we'll add authentication using JSON Web Tokens. The full code is available at this GitHub repository. All JavaScript app developers are likely familiar with this scenario: we implement logic, deploy our code, and then in QA (or worse, production) we encounter a runtime error! Maybe it was something we forgot to write a test for, or it's an obscure edge case we didn't foresee. Either way, when it comes to business logic in production code, we often spend post-launch with the vague threat of errors hanging over our heads. Enter Elm: a functional, reactive front-end programming language that compiles to JavaScript, making it great for web applications that run in the browser. Elm's compiler presents us with friendly error messages before runtime, thereby eliminating runtime errors. Why Elm? Elm's creator Evan Czaplicki positions Elm with several strong concepts, but we'll touch on two in particular: gradual learning and usage-driven design. Gradual learning is the idea that we can be productive with the language before diving deep. As we use Elm, we are able to gradually learn via development and build up our skillset, but we are not hampered in the beginner stage by a high barrier to entry. Usage-driven design emphasizes starting with the minimimum viable solution and iteratively building on it, but Evan points out that it's best to keep it simple, and the minimum viable solution is often enough by itself. If we head over to the Elm site, we're greeted with an attractive featureset highlighting "No runtime exceptions", "Blazing fast rendering", and "Smooth JavaScript interop". But what does this boil down to when writing real code? Let's take a look. Building an Elm Web App In the first half of this two-part tutorial, we're going to build a small Elm application that will call an API to retrieve random Chuck Norris quotes. In doing so, we'll learn Elm basics like how to compose an app with a view and a model, how to update application state, and how to implement common real-world requirements like HTTP. In part two of the tutorial, we'll add the ability to register, log in, and access protected quotes with JSON Web Tokens. If you're familiar with JavaScript but new to Elm the language might look a little strange at first—but once we start building, we'll learn how the Elm Architecture, types, and clean syntax can really streamline development. This tutorial is structured to help JavaScript developers get started with Elm without assuming previous experience with other functional or strongly typed languages. Setup and Installation The full source code for our finished app can be cloned on GitHub here. We're going to use Gulp to build and serve our application locally and NodeJS to serve our API and install dependencies through the Node Package Manager (npm). If you don't already have Node and Gulp installed, please visit their respective websites and follow instructions for download and installation. Note: Webpack is an alternative to Gulp. If you're interested in trying a customizable webpack build in the future for larger Elm projects, check out elm-webpack-loader. We also need the API. Clone the NodeJS JWT Authentication sample API repository and follow the README to get it running. Installing and Configuring Elm App To install Elm globally, run the following command: npm install -g elm Once Elm is successfully installed, we need to set up our project's configuration. This is done with an elm-package.json file: // elm-package.json { "version": "0.1.0", "summary": "Build an App in Elm with JWT Authentication and an API", "repository": "", "license": "MIT", "source-directories": [ "src", "dist" ], "exposed-modules": [], "dependencies": { "elm-lang/core": "4.0.1 <= v < 5.0.0", "elm-lang/html": "1.0.0 <= v < 2.0.0", "evancz/elm-http": "3.0.1 <= v < 4.0.0", "rgrempel/elm-http-decorators": "1.0.2 <= v < 2.0.0" }, "elm-version": "0.17.0 <= v < 0.18.0" } We'll be using Elm v0.17 in this tutorial. The elm-version here is restricted to minor point releases of 0.17. There are breaking changes between versions 0.17 and 0.16 and we can likely expect the same for 0.18. Now that we've declared our Elm dependencies, we can install them: elm package install Once everything has installed, an /elm-stuff folder will live at the root of your project. This folder contains all of the Elm dependencies we specified in our elm-package.json file. Build Tools Now we have Node, Gulp, Elm, and the API installed. Let's set up our build configuration. Create and populate a package.json file, which should live at our project's root: // package.json ... "dependencies": {}, "devDependencies": { "gulp": "^3.9.0", "gulp-connect": "^4.0.0", "gulp-elm": "^0.4.4", "gulp-plumber": "^1.1.0", "gulp-util": "^3.0.7" } ... Once the package.json file is in place, install the Node dependencies: npm install Next, create a gulpfile.js file: // gulpfile.js var gulp = require('gulp'); var elm = require('gulp-elm'); var gutil = require('gulp-util'); var plumber = require('gulp-plumber'); var connect = require('gulp-connect'); // File paths var paths = { dest: 'dist', elm: 'src/*.elm', static: 'src/*.{html,css}' }; // Init Elm gulp.task('elm-init', elm.init); // Compile Elm to HTML gulp.task('elm', ['elm-init'], function(){ return gulp.src(paths.elm) .pipe(plumber()) .pipe(elm()) .pipe(gulp.dest(paths.dest)); }); // Move static assets to dist gulp.task('static', function() { return gulp.src(paths.static) .pipe(plumber()) .pipe(gulp.dest(paths.dest)); }); // Watch for changes and compile gulp.task('watch', function() { gulp.watch(paths.elm, ['elm']); gulp.watch(paths.static, ['static']); }); // Local server gulp.task('connect', function() { connect.server({ root: 'dist', port: 3000 }); }); // Main gulp tasks gulp.task('build', ['elm', 'static']); gulp.task('default', ['connect', 'build', 'watch']); The default gulp task will compile Elm, watch and copy files to a /dist folder, and run a local server where we can view our application at. Our development files should be located in a /src folder. Please create the /dist and /src folders at the root of the project. Our file structure now looks like this: Syntax Highlighting There's one more thing we should do before we start writing Elm, and that is to grab a plugin for our code editor to provide syntax highlighting and inline compile error messaging. There are plugins available for many popular editors. I like to use VS Code with vscode-elm, but you can download a plugin for your editor of choice here. With syntax highlighting installed, we're ready to begin coding our Elm app. Chuck Norris Quoter App We're going to build an app that does more than echo "Hello world". We're going to connect to an API to request and display data and in part two, we'll add registration, login, and make authenticated requests—but we'll start simple. First, we'll display a button that appends a string to our model each time it's clicked. Once we've got things running, our app should look like this: Let's fire up our Gulp task. This will start a local server and begin watching for file changes: gulp Note: Since Gulp is compiling Elm for us, if we have compile errors they will show up in the command prompt / terminal window. If you have one of the Elm plugins installed in your editor, they should also show up inline in your code. HTML We'll start by creating a basic index.html file: <!-- index.html --> <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Chuck Norris Quoter</title> <script src="Main.js"></script> <link rel="stylesheet" href=""> <link rel="stylesheet" href="styles.css"> </head> <body> </body> <script> var app = Elm.Main.fullscreen(); </script> </html> We're loading a JavaScript file called Main.js. Elm compiles to JavaScript and this is the file that will be built from our compiled Elm code. We'll also load the Bootstrap CSS and a local styles.css file for a few helper overrides. Finally, we'll use JS to tell Elm to load our application. The Elm module we're going to export is called Main (from Main.js). CSS Next, let's create the styles.css file: /* styles.css */ .container { margin: 1em auto; max-width: 600px; } blockquote { margin: 1em 0; } .jumbotron { margin: 2em auto; max-width: 400px; } .jumbotron h2 { margin-top: 0; } .jumbotron .help-block { font-size: 14px; } Introduction to Elm We're ready to start writing Elm. Create a file in the /src folder called Main.elm. The full code for this step is available in the source repository on GitHub: Main.elm - Introduction to Elm Our file structure should now look like this: If you're already familiar with Elm you can skip ahead. If Elm is brand new to you, keep reading: we'll introduce The Elm Architecture and Elm's language syntax by thoroughly breaking down this code. Make sure you have a good grasp of this section before moving on; the next sections will assume an understanding of the syntax and concepts. import Html exposing (..) import Html.App as Html import Html.Events exposing (..) import Html.Attributes exposing (..) At the top of our app file, we need to import dependencies. We expose the Html package to the application for use and then declare Html.App as Html. Because we'll be writing a view function, we will expose Html.Events and Html.Attributes to use click and input events, IDs, classes, and other element attributes. Everything we're going to write is part of The Elm Architecture. In brief, this refers to the basic pattern of Elm application logic. It consists of Model (application state), Update (way to update the application state), and View (render the application state as HTML). You can read more about The Elm Architecture in Elm's guide. main : Program Never main = Html.program { init = init , update = update , subscriptions = \_ -> Sub.none , view = view } is a type annotation. This annotation says " main : Program Never main has type Program and should Never expect a flags argument". If this doesn't make a ton of sense yet, hang tight—we'll be covering more type annotations throughout our app. Every Elm project defines main as a program. There are a few program candidates, including beginnerProgram, program, and programWithFlags. Initially, we'll use main = Html.program. Next, we'll start our app with a record that references an init function, an update function, and a view function. We'll create these functions shortly. subscriptions may look strange at first. Subscriptions listen for external input and we won't be using any in the Chuck Norris Quoter so we don't need a named function here. Elm does not have a concept of null or undefined and it's expecting functions as values in this record. This is an anonymous function that declares there are no subscriptions. Here's a breakdown of the syntax. \ begins an anonymous function. A backslash is used because it resembles a lambda (λ). _ represents an argument that is discarded, so \_ is an anonymous function that doesn't have arguments. -> signifies the body of the function. subscriptions = \_ -> ... in JS would look like this: // JS subscriptions = function() { ... } (What would an anonymous function with an argument look like? Answer: \x -> ...) Next up are the model type alias and the init function: {- MODEL * Model type * Initialize model with empty values -} type alias Model = { quote : String } init : (Model, Cmd Msg) init = ( Model "", Cmd.none ) The first block is a multi-line comment. A single-line comment is represented like this: -- Single-line comment Let's create a type alias called Model: type alias Model = { quote : String } A type alias is a definition for use in type annotations. In future type annotations, we can now say Something : Model and Model would be replaced by the contents of the type alias. We expect a record with a property of quote that has a string value. We've mentioned records a few times, so we'll expand on them briefly: records look similar to objects in JavaScript. However, records in Elm are immutable: they hold labeled data but do not have inheritance or methods. Elm's functional paradigm uses persistent data structures so "updating the model" returns a new model with only the changed data copied. Now we've come to the init function that we referenced in our main program: init : (Model, Cmd Msg) init = ( Model "", Cmd.none ) The type annotation for init means " init returns a tuple containing record defined in Model type alias and a command for an effect with an update message". That's a mouthful--and we'll be encountering additional type annotations that look similar but have more context, so they'll be easier to understand. What we should take away from this type annotation is that we're returning a tuple (an ordered list of values of potentially varying types). So for now, let's concentrate on the init function. Functions in Elm are defined with a name followed by a space and any arguments (separated by spaces), an =, and the body of the function indented on a new line. There are no parentheses, braces, function or return keywords. This might feel sparse at first but hopefully you'll find the clean syntax speeds development. Returning a tuple is the easiest way to get multiple results from a function. The first element in the tuple declares the initial values of the Model record. Strings are denoted with double quotes, so we're defining { quote = "" } on initialization. The second element is Cmd.none because we're not sending a command (yet!). {- UPDATE * Messages * Update case -} type Msg = GetQuote update : Msg -> Model -> (Model, Cmd Msg) update msg model = case msg of GetQuote -> ( { model | quote = model.quote ++ "A quote! " }, Cmd.none ) The next vital piece of the Elm Architecture is update. There are a few new things here. First we have type Msg = GetQuote: this is a union type. Union types provide a way to represent types that have unusual structures (they aren't String, Bool, Int, etc.). This says type Msg could be any of the following values. Right now we only have GetQuote but we'll add more later. Now that we have a union type definition, we need a function that will handle this using a case expression. We're calling this function update because its purpose is to update the application state via the model. The update function has a type annotation that says " update takes a message as an argument and a model argument and returns a tuple containing a model and a command for an effect with an update message." This is the first time we've seen -> in a type annotation. A series of items separated by -> represent argument types until the last one, which is the return type. The reason we don't use a different notation is to indicate the return has to do with currying. In a nutshell, currying means if you don't pass all the arguments to a function, another function will be returned that accepts whatever arguments are still needed. You can learn more about currying elsewhere. The update function accepts two arguments: a message and a model. If the msg is GetQuote, we'll return a tuple that updates the quote to append "A quote! " to the existing value. The second element in the tuple is currently Cmd.none. Later, we'll change this to execute the command to get a random quote from the API. The case expression models possible user interactions. The syntax for updating properties of a record is: { recordName | property = updatedValue, property2 = updatedValue2 } Elm uses = to set values. Colons : are reserved for type definitions. A : means "has type" so if we were to use them here, we would get a compiler error. We now have the logic in place for our application. How will we display the UI? We need to render a view: {- VIEW -} view : Model -> Html Msg view model = div [ class "container" ] [ h2 [ class "text-center" ] [ text "Chuck Norris Quotes" ] , p [ class "text-center" ] [ button [ class "btn btn-success", onClick GetQuote ] [ text "Grab a quote!" ] ] -- Blockquote with quote , blockquote [] [ p [] [text model.quote] ] ] The type annotation for the view function reads, " view takes model as an argument and returns HTML with a message." We've seen Msg a few places and now we've defined its union type. A command Cmd is a request for an effect to take place outside of Elm. A message Msg is a function that notifies the update method that a command was completed. The view needs to return HTML with the message outcome to display the updated UI. The view function describes the rendered view based on the model. The code for view resembles HTML but is actually composed of functions that correspond to virtual DOM nodes and pass lists as arguments. When the model is updated, the view function executes again. The previous virtual DOM is diffed against the next and the minimal set of updates necessary are run. The structure of the functions somewhat resembles HTML, so it's pretty intuitive to write. The first list argument passed to each node function contains attribute functions with arguments. The second list contains the contents of the element. For example: button [ class "btn btn-success", onClick GetQuote ] [ text "Grab a quote!" ] This button's first argument is the attribute list. The first item in that list is the class function accepting the string of classes. The second item is an onClick function with GetQuote. The next list argument is the contents of the button. We'll give the text function an argument of "Grab a quote!" Last, we want to display the quote text. We'll do this with a blockquote and p, passing model.quote to the paragraph's text function. We now have all the pieces in place for the first phase of our app! We can view it at. Try clicking the "Grab a quote!" button a few times. Note: If the app didn't compile, Elm provides compiler errors for humans in the console and in your editor if you're using an Elm plugin. Elm will not compile if there are errors! This is to avoid runtime exceptions. That was a lot of detail, but now we're set on basic syntax and structure. We'll move on to build the features of our Chuck Norris Quoter app. Calling the API Now, we're ready to fill in some of the blanks we left earlier. In several places, we claimed in our type annotations that a command Cmd should be returned, but we returned Cmd.none instead. Now we'll replace those with the missing command. When this step is done, our application should look like this: Clicking the button will call the API to get and display random Chuck Norris quotes. Make sure you have the API running at so it's accessible to our app. Once we're successfully getting quotes, our source code will look like this: Main.elm - Calling the API The first thing we need to do is import the dependencies necessary for making HTTP requests: import Http import Task exposing (Task) We'll need Http and Task. A task in Elm is similar to a promise in JS: tasks describe asynchronous operations that can succeed or fail. Next, we'll update our init function: init : (Model, Cmd Msg) init = ( Model "", fetchRandomQuoteCmd ) Now instead of Cmd.none we have a command called fetchRandomQuoteCmd. A command is a way to tell Elm to do some effect (like HTTP). We're commanding the application to fetch a random quote from the API on initialization. We'll define the fetchRandomQuoteCmd function shortly. {- UPDATE * API routes * GET * Messages * Update case -} -- API request URLs api : String api = "" randomQuoteUrl : String randomQuoteUrl = api ++ "api/random-quote" -- GET a random quote (unauthenticated) fetchRandomQuote : Platform.Task Http.Error String fetchRandomQuote = Http.getString randomQuoteUrl fetchRandomQuoteCmd : Cmd Msg fetchRandomQuoteCmd = Task.perform HttpError FetchQuoteSuccess fetchRandomQuote We've added some code to our update section. First, we'll store the API routes. The Chuck Norris API returns unauthenticated random quotes as strings, not JSON. Let's create a function called fetchRandomQuote. The type annotation declares that this function is a task that either fails with an error or succeeds with a string. We can use the Http.getString method to make the HTTP request with the API route as an argument. HTTP is something that happens outside of Elm. A command is needed to request the effect and a message is needed to notify the update that the effect was completed and to deliver its results. We'll do this in fetchRandomQuoteCmd. This function's type annotation declares that it returns a command with a message. Task.perform is a command that tells the runtime to execute a task. Tasks can fail or succeed so we need to pass three arguments to Task.perform: a message for failure ( HttpError), a message for success ( FetchQuoteSuccess), and what task to perform ( fetchRandomQuote). HttpError and FetchQuoteSuccess are messages that don't exist yet, so let's create them: -- Messages type Msg = GetQuote | FetchQuoteSuccess String | HttpError Http.Error -- Update update : Msg -> Model -> (Model, Cmd Msg) update msg model = case msg of GetQuote -> ( model, fetchRandomQuoteCmd ) FetchQuoteSuccess newQuote -> ( { model | quote = newQuote }, Cmd.none ) HttpError _ -> ( model, Cmd.none ) We add these two new messages to the Msg union type and annotate the types of their arguments. FetchQuoteSuccess accepts a string that contains the new Chuck Norris quote from the API and HttpError accepts an Http.Error. These are the possible success/fail results of the task. Next, we add these cases to the update function and declare what we want returned in the (Model, Cmd Msg) tuple. We also need to update the GetQuote tuple to fetch a quote from the API. We'll change GetQuote to return the current model and issue the command to fetch a random quote, fetchRandomQuoteCmd. FetchQuoteSuccess's argument is the new quote string. We want to update the model with this. There are no commands to execute here, so we will declare the second element of the tuple Cmd.none. HttpError's argument is Http.Error but we aren't going to do anything special with this. For the sake of brevity, we'll handle API errors when we get to authentication but not for getting unauthenticated quotes. Since we're discarding this argument, we can pass _ to HttpError. This will return a tuple that sends the model in its current state and no command. You may want to handle errors here on your own after completing the provided code. It's important to remember that the update function's type is Msg -> Model -> (Model, Cmd Msg). This means that all branches of the case statement must return the same type. If any branch does not return a tuple with a model and a command, a compiler error will occur. Nothing changes in the view. We altered the GetQuote onClick function logic, but everything that we've written in the HTML works fine with our updated code. This concludes our basic API integration for the first half of this tutorial. Try it out! In part two, we'll tackle adding users and authentication. Aside: Reading Compiler Type Errors If you've been following along and writing your own code, you may have encountered Elm's compiler errors. Though they are very readable, type mismatch messages can sometimes seem ambiguous. Here's a small breakdown of some things you may see: String -> a A lowercase variable a means "anything could go here." The above means "takes a string as an argument and returns anything." [1, 2, 3] has a type of List number: a list that only contains numbers. [] is type List a: Elm infers that this is a list that could contain anything. Elm always infers types. If we've declared type definitions, Elm checks its inferences against our definitions. We'll define types upfront in most places in our app. It's best practice to define the types at the top-level at a minimum. If Elm finds a type mismatch, it will tell us what type it has inferred. Resolving type mismatches can be one of the larger challenges to developers coming from a loosely typed language like JS (without Typescript), so it's worth spending time getting comfortable with this. Recap and Next Steps We've covered installing and using the Elm language and learned how to create our first app. We've also integrated with an external API through HTTP. You should now be familiar with Elm's basic syntax, type annotation, and compiler errors. If you'd like, take a little more time to familiarize with Elm's documentation. The Elm FAQ is another great resource from the Elm developer community. In the second half of this tutorial (soon to come), we'll take a deeper dive into authenticating our Chuck Norris Quoter app using JSON Web Tokens. Stay tuned! Kim Maida , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/creating-your-first-elm-app-from-authentication-to
CC-MAIN-2018-39
en
refinedweb
. License: For the Field(s) parameter, sorting by the Shape field or by multiple fields is only available with an Desktop Advanced license. Sorting by any single attribute field (excluding Shape) is available at all license levels. Syntax Sort_management (in_dataset, out_dataset, sort_field, {spatial_sort_method}) Code sample The following Python window script demonstrates how to use Sort to order features by the values of a field. import arcpy from arcpy import env env.workspace = "C:/data/city.gdb" arcpy.Sort_management("crime", "crime_Sort", [["DATE_REP", "ASCENDING"]]) The following Python script demonstrates how to use Sort in a stand-alone script. # Name: Sort_example2.py # Description: Sorts wells by location and well yield. # Import system modules import arcpy # Set workspace environment arcpy) Environments Licensing information - ArcGIS Desktop Basic: Limited - ArcGIS Desktop Standard: Limited - ArcGIS Desktop Advanced: Yes
http://pro.arcgis.com/en/pro-app/tool-reference/data-management/sort.htm
CC-MAIN-2018-39
en
refinedweb
Often you'll need additional configuration for your functions, such as third-party API keys or tuneable settings. The Firebase SDK for Cloud Functions offers built-in environment configuration to make it easy to store and retrieve this type of data for your project. Set environment configuration for your project To store environment data, you can use the firebase functions:config:set command in the Firebase CLI. Each key can be namespaced using periods to group related configuration together. Keep in mind that only lowercase characters are accepted in keys; uppercase characters are not allowed. For instance, to store the Client ID and API key for "Some Service", you might run: firebase functions:config:set someservice.key="THE API KEY" someservice.id="THE CLIENT ID" Retrieve current environment configuration To inspect what's currently stored in environment config for your project, you can use firebase functions:config:get. It will output JSON something like this: { "someservice": { "key":"THE API KEY", "id":"THE CLIENT ID" } } This functionality is based on the Google Cloud Runtime Configuration API. Access environment configuration in a function Some configuration is automatically provided under the reserved firebase namespace. Environment configuration is made available inside your running function via functions.config(). To use the configuration above, your code might look like this: const functions = require('firebase-functions'); const request = require('request-promise'); exports.userCreated = functions.database.ref('/users/{id}').onWrite(event => { let email = event.data.child('email').val(); return request({ url: '', headers: { 'X-Client-ID': functions.config().someservice.id, 'Authorization': `Bearer ${functions.config().someservice.key}` }, body: {email: email} }); }); Use environment configuration to initialize a module Some Node modules are ready without any configuration. Other modules need extra configuration to initialize correctly. We recommend you store this configuration in environment configuration variables rather than hard-coding it. This helps you keep your code much more portable, which lets you open source your application or easily switch between production and staging versions. For example, to use the Slack Node SDK module, you might write this: const functions = require('firebase-functions'); const IncomingWebhook = require('@slack/client').IncomingWebhook; const webhook = new IncomingWebhook(functions.config().slack.url); Prior to deploying, set the slack.url environment config variable: firebase functions:config:set slack.url= Additional Environment Commands firebase functions:config:unset key1 key2removes the specified keys from the config firebase functions:config:clone --from <fromProject>clones another project's environment into the currently active project. Automatically populated environment variables There are environment variables that are automatically populated in the functions runtime and in locally emulated functions, including: process.env.GCLOUD_PROJECT: Provides the Firebase project ID process.env.FIREBASE_CONFIG: Provides the following Firebase project config info: { databaseURL: '', storageBucket: 'projectId.appspot.com', projectId: 'projectId' } This configuration is applied automatically when you initialize the Firebase Admin SDK with no arguments. If you are writing functions in JavaScript, initialize like this: const admin = require('firebase-admin'); admin.initializeApp(); If you are writing functions in TypeScript, initialize like this: import * as functions from 'firebase-functions'; import * as admin from 'firebase-admin'; import 'firebase-functions'; admin.initializeApp(); If you need to initialize the Admin SDK with the default project configuration using service account credentials, you can load the credentials from a file and add them to FIREBASE_CONFIG like this: serviceAccount = require('./serviceAccount.json'); const adminConfig = JSON.parse(process.env.FIREBASE_CONFIG); adminConfig.credential = admin.credential.cert(serviceAccount); admin.initializeApp(adminConfig);
https://firebase.google.com/docs/functions/config-env?hl=ru
CC-MAIN-2018-39
en
refinedweb
Prints basic Graph statistics to standard output or to a file named OutFNm. Additional extensive statistics which is computationally more expensive is computed when Fast is False. Parameters: A Snap.py graph or a network. Graph description. Do not provide an empty string “” for this parameter, it might cause your program to crash. Optional file name for output. If not specified, output is printed to standard output. Do not provide an empty string “” for this parameter, it might cause your program to crash. To print to standard output on Mac OS X or Linux, provide “/dev/stdout” as a file name. This method does not work on Windows. Optional flag specifing whether basic (True) or extended (False) statistics should be printed. Currently, it is not possible to have extended statistics printed out to standard output on Windows, since OutFNm must be non-empty, if specified. Return value: The following example shows how to calculate graph statistics for random graphs of type TNGraph, TUNGraph, and TNEANet: import snap # print extended statistics to file 'info-pngraph.txt' Graph = snap.GenRndGnm(snap.PNGraph, 100, 1000) snap.PrintInfo(Graph, "Python type PNGraph", "info-pngraph.txt", False) # print basic statistics to file 'info-pungraph.txt' UGraph = snap.GenRndGnm(snap.PUNGraph, 100, 1000) snap.PrintInfo(UGraph, "Python type PUNGraph", "info-pungraph.txt") # print basic statistics to standard output Network = snap.GenRndGnm(snap.PNEANet, 100, 1000) snap.PrintInfo(Network, "Python type PNEANet")
https://snap.stanford.edu/snappy/doc/reference/PrintInfo.html
CC-MAIN-2018-39
en
refinedweb
I am prototyping some C# 3 collection filters and came across this. I have a collection of products: public class MyProduct { public string Name { get; set; } public Double Price { get; set; } public string Description { get; set; } } var MyProducts = new List<MyProduct> { new MyProduct { Name = "Surfboard", Price = 144.99, Description = "Most important thing you will ever own." }, new MyProduct { Name = "Leash", Price = 29.28, Description = "Keep important things close to you." } , new MyProduct { Name = "Sun Screen", Price = 15.88, Description = "1000 SPF! Who Could ask for more?" } }; Now if I use LINQ to filter it works as expected: var d = (from mp in MyProducts where mp.Price < 50d select mp); And if I use the Where extension method combined with a Lambda the filter works as well: var f = MyProducts.Where(mp => mp.Price < 50d).ToList(); Question: What is the difference, and why use one over the other?
http://ansaurus.com/question/5194-c-30-where-extension-method-with-lambda-vs-linqtoobjects-to-filter-in-memory-collections
CC-MAIN-2018-39
en
refinedweb
- NAME - SYNOPSIS - DESCRIPTION - METHODS - SEE ALSO - AUTHOR NAME App::CPANTS::Lint - front-end to Module::CPANTS::Analyse SYNOPSIS use App::CPANTS::Lint; my $app = App::CPANTS::Lint->new(verbose => 1); $app->lint('path/to/Foo-Dist-1.42.tgz') or print $app->report; # if you need raw data $app->lint('path/to/Foo-Dist-1.42.tgz') or return $app->result; # if you need to look at the details of analysis $app->lint('path/to/Foo-Dist-1.42.tgz'); print Data::Dumper::Dumper($app->stash); DESCRIPTION App::CPANTS::Lint is a core of cpants_lint.pl script to check the Kwalitee of a distribution. See the script for casual usage. You can also use this from other modules for finer control. METHODS new Takes an optional hash (which will be passed into Module::CPANTS::Analyse internally) and creates a linter object. Available options are: - verbose Makes Module::CPANTS::Analyse verbose. False by default. - core_only If true, the lintmethod (see below) returns true even if extrametrics (as well as experimentalmetrics) fail. This may be useful if you only care Kwalitee rankings. False by default. - experimental If true, failed experimentalmetrics are also reported (via reportmethod). False by default. Note that experimentalmetrics are not taken into account while calculating a score. - save If true, output_reportmethod writes to a file instead of writing to STDOUT. - dump, yaml, json If true, reportmethod returns a formatted dump of the stash (see below). - search_path If you'd like to use extra metrics modules, pass a reference to an array of their parent namespace(s) to search. Metrics modules under Module::CPANTS::Kwalitee namespace are always used. lint Takes a path to a distribution tarball and analyses it. Returns true if the distribution has no significant issues (experimental metrics are always ignored). Otherwise, returns false. Note that the result doesn't always match with what is shown at the CPANTS website, because there are metrics that are only available at the site for various reasons (some of them require database connection, and some are not portable enough). Returns a report string that contains the details of failed metrics (even if lint method returns true) and a Kwalitee score. If dump (or yaml, json) is set when you create an App::CPANTS::Lint object, report returns a formatted dump of the stash. result Returns a reference to a hash that contains the details of failed metrics and a Kwalitee score. Internal structure may change without notice, but it always has an "ok" field (which holds a return value of lint method) at least. stash Returns a reference to a hash that contains the details of analysis (stored in a stash in Module::CPANTS::Analyse). Internal structure may change without notice, but it always has a "kwalitee" field (which holds a reference to a hash that contains the result of each metric) at least. score Returns a Kwalitee score. output_report Writes a report to STDOUT (or to a file). report_file Returns a path to a report file, which should have the same distribution name with a version, plus an extension appropriate to the output format. (eg. Foo-Bar-1.42.txt, Foo-Bar-1.42.yml etc) SEE ALSO AUTHOR Kenichi Ishigaki, <ishigaki@cpan.org> This software is copyright (c) 2014 by Kenichi Ishigaki. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
https://metacpan.org/pod/App::CPANTS::Lint
CC-MAIN-2016-26
en
refinedweb
#include <db.h> int DB_ENV->set_data_dir(DB_ENV *dbenv, const char *dir); This interface has been deprecated. You should use DB_ENV->add_data_dir() and DB_ENV->set_create_dir() instead. Set the path of a directory to be used as the location of the access method database files. Paths specified to the DB->open() function will be searched relative to this path. Paths set using this method are additive, and specifying more than one will result in each specified directory being searched for database files. If any directories are specified, database files will always be created in the first path specified. If no database directories are specified, database files must be named either by absolute paths or relative to the environment home directory. See Berkeley DB File Naming for more information. The database environment's data directories may also be configured using the environment's DB_CONFIG file. The syntax of the entry in that file is a single line with the string "set_data_dir", one or more whitespace characters, and the directory name. Note that if you use this method for your application, and you also want to use the db_recover or db_archive utilities, then you should create a DB_CONFIG file and set the "set_data_dir" parameter in it. The DB_ENV->set_data_dir() method configures operations performed using the specified DB_ENV handle, not all operations performed on the underlying database environment. The DB_ENV->set_data_dir() method may not be called after the DB_ENV->open() method is called. If the database environment already exists when DB_ENV->open() is called, the information specified to DB_ENV->set_data_dir() must be consistent with the existing environment or corruption can occur. The DB_ENV->set_data_dir() method returns a non-zero error value on failure and 0 on success. The dir parameter is a directory to be used as a location for database files. This directory must currently exist at environment open time. When using a Unicode build on Windows (the default), this argument will be interpreted as a UTF-8 string, which is equivalent to ASCII for Latin characters. The DB_ENV->set_data_dir() method may fail and return one of the following non-zero errors: If the method was called after DB_ENV->open() was called; or if an invalid flag value or parameter was specified.
http://docs.oracle.com/cd/E17276_01/html/api_reference/C/envset_data_dir.html
CC-MAIN-2016-26
en
refinedweb
java.lang.Object com.bea.netuix.laf.genes.mutators.PathResolvercom.bea.netuix.laf.genes.mutators.PathResolver public class PathResolver PathResolver performs path resolution for Look and Feel resources. It provides functionality similar to HtmlPathResolver for genes. Resource paths specified as gene values are resolved based on the search path of the specified (as "path-type") resource type. For example: <gene name="icon"> <value>icon.gif</value> <mutator name="path-resolver"> <arg name="path-type">images</arg> </mutator> </gene>The resulting gene value will be the resolved path of the icon.gif image based on the configured skin image search path. HtmlPathResolver, "Skin and Skeleton configuration element <render-dependencies>", Serialized Form public PathResolver() public String mutate(String mutatorName, String currentValue, Gene gene, Map<String,String> args, MutationContext context) mutatein interface Mutator mutatorName- Ignored currentValue- Parital resource path to resolve args- See above gene- The gene being mutated context- The context for mutation
http://docs.oracle.com/cd/E26806_01/wlp.1034/e14255/com/bea/netuix/laf/genes/mutators/PathResolver.html
CC-MAIN-2016-26
en
refinedweb
public class InvalidTargetObjectTypeException extends Exception The serialVersionUID of this class is 1190536278266811217L.TargetObjectTypeException() public InvalidTargetObjectTypeException(String s) s- String value that will be incorporated in the message for this exception. public InvalidTargetObjectTypeException(Exception e, String s) e- Exception that we may have caught to reissue as an InvalidTargetObjectTypeException. The message will be used, and we may want to consider overriding the printStackTrace() methods to get data pointing back to original throw stack. s- String value that will be incorporated in message for this.
http://docs.oracle.com/javase/7/docs/api/javax/management/modelmbean/InvalidTargetObjectTypeException.html?is-external=true
CC-MAIN-2016-26
en
refinedweb
Star Fox: Assault Arwing/Wolfen Riding FAQ by TheUnruly1 Version: 1.3 | Updated: 08/29/07 | Search Guide | Bookmark Guide | | / _| | | | ___| |_ __ _ _ __| |_ _____ ____ _ ___ ___ __ _ _ _| | |_ / __| __/ _` | '__| _/ _ \ \/ / _` / __/ __|/ _` | | | | | __| \__ \ || (_| | | | || (_) > < (_| \__ \__ \ (_| | |_| | | |_ |___/\__\__,_|_| |_| \___/_/\_\__,_|___/___/\__,_|\__,_|_|\__| ____ _ ___ _ _ _ ____ ____ ____ ____ Star Fox Assault |__/ | | \ | |\ | | __ |___ |__| | | Vehicle Riding FAQ | \ | |__/ | | \| |__] | | | |_\| Copyright 2007 TheUnruly1. =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= ____ _____ _____ ____ |__/ |____ |___| | | | \ | | | |_\| TABLE OF CONTENTS =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= 1. INTRODUCTION & OTHERS (inttuon) 2. HOW TO RIDE (htrdmns) 3. WEAPONS (wepsess) 4. RIDER COMBAT (rdvsveh) 5. HOW TO BEAR A RIDER (htbaruo) 6. CHARACTERS TO USE (chartus) 7. CONCLUSION Best viewed in Courier New 10 point. =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= ____ _____ _____ ____ |__/ |____ |___| | | 1. INTRODUCTION (inttuon) | \ | | | |_\| =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= To quote HAXage: =-_-==-_-==-_-==-_-==-_-==-_-==-_-= "That's right, riding. Mission 4, mission 7. Only this time, you don't have the Plasma Cannon, you fall off, and your friend doesn't go as slow as Wolf/Falco. And there's always that thought that they might flip you off accidentally, or your opponent might actually do some DAMAGE to you, unlike those dullard Aparoids that fly at you. So! Who wants to try?" =-_-==-_-==-_-==-_-==-_-==-_-==-_-= And that sums it up very well indeed. It's actually considered by some as a "cheat strategy", but if every little secret in a game was a cheat strategy, well, then competitive SSBM wouldn't be too much fun, would it? =D Of course, that is but one example. __________________________________ / \ --------- ABOUT ME --------- --------- --------- \__________________________________/ Alrighty then, I'll start. My name's Will, and I go by TheUnruly1. I make no claim of being a good player, although HAXage (whom I taught, and true to Star Wars spirit far outgrew my tactical training wheels), in a character profile referred to me in his FAQ as a "Krystal vet". As a matter of fact, I doubt this very statement at the cost of my pride. I am an average player, and I forever will be, in this game. The only thing true of that statement is that my main character is Krystal. (I like Barriers.) I also think of new, fun ways to attack my opponent when the chips are down. As a matter of fact, this manner of thinking helped me instigate a set of common rules about Riding, which is just one of those dangerous things. Anyways, I'm excited, so I'm moving on. __________________________________ / \ --------- WHY RIDE? --------- --------- --------- \__________________________________/ The reason one would ride is quite simple - to use a form of SFA combat that actually lets you and your teammate work with synergy. Such a team has the advantage of double cover - that is, the rider covers the machine he rides on, and the Arwing pilot keeps the rider safe. Strafes low to the ground from off an Arwing's wing can be easily fatal to pilots on the ground. A fly-by Gatling Gunning of a Landmaster can prove very satisfying indeed in killing something that usually counters an Arwing. Unless you're loaded with Homing Launchers, it takes a great aim and reflex(es) to be an effective rider, and that's discounting the fact that your bearer has to have these as well. __________________________________ / \ --------- SOLO RIDES --------- --------- --------- \__________________________________/ Riding on your own is a weird concept, but the way it works is you get out of your Arwing briefly by pressing Z so you can ride, firing your weapon, and then getting back in. The downside, as compared to team riding, is that while you are outside your Arwing your vehicle will just move in a straight line. However, it can be a good tool for quickly dispatching enemy aircraft if you have the right weaponry. Great sniping tool, too. The OTHER way to use a solo ride is...bailing out - if in trouble, press Z and then press Y. Nothing more. You will fall out through the neon lilac jetstream of your previous vehicle and hope your opponent doesn't notice you did. =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= ____ _____ _____ ____ |__/ |____ |___| | | 2. HOW TO RIDE (htrdmns) | \ | | | |_\| =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= First off, I've only mentioned in passing how to actually GET RIDING. So, I will now take the time to detail all the major points of riding. __________________________________ -------- THE PREPARATIONS --------- --------__________________________________--------- You should always make sure you have a clear route up and a clear six, because you don't want people ripping you apart as you scramble for the plane. First and foremost, you need the designated rider (no one's drunk here, don't fear) on the wing of choice. Now how much health do your wings have? 59. This makes riding on the fuselage of the plane viable too... on a Wolfen, it's advised, and for an Arwing it's silly (considering you have those two massive wings). Getting up on the wing is tough for Falco, Krystal and sometimes Fox, which makes many people name the higher jumpers in the game the best riders. If you have problems, take a run at it, and try, try, again. (until you are finally silenced after the 10 seconds it takes for the enemy to seek and destroy you.) After that, just get the designated bearer into the plane, and-HEY, COME BACK, YOU DROPPED ME! __________________________________ -------- THE TAKEOFF --------- --------__________________________________--------- The takeoff, as we all agree, is the time where most riders fall off, because the bearer rushes the ascent (but this is about HOW TO RIDE, HOW TO BEAR will be covered later). Anyways, now is when you begin your R-button-holding spree - there's no point in moving much, unless your wing's about to die and you need to hop over to the fuselage. Just hang on, cover the plane's six, and save the real combat for after you have gained a lot of altitude. __________________________________ -------- STAYING ON --------- --------__________________________________--------- During a fight, DON'T PANIC and do not jump off. You act as much needed support for your friend's aircraft. KEEP R HELD DOWN if you're about to fall. If your wing is about to go, you can do one of two things. The first, the simpler but riskier one, is to hug the hull of the Arwing and move to the very edge of the wing (not on the tip, you dorfus, but on the edge where it connects to the plane). When a wing blows, everything but this little tip is removed. You will then be staying on rather dangerously, and could easily be shot off as the unbalanced aircraft teeters and totters. The upside of this is you still get the same features - less chance of hitting yourself - as you would on a full wing (kinda). A reader, whose GameFAQs name is nivlac91, sent me this about the Arwing's wing break area: ============. The other way, the safer but harder one, is to get your bearer to stop and jump from the wing to the fuselage/hull. This will then allow you to ride the fuselage just like solo riding, but with a movable aircraft. Wolfens, bad luck: your wings get almost completely blown off, and riding on your fuselage is the best option anyways, because your wings are so stubby. __________________________________ -------- RIDING A WOLFEN --------- --------__________________________________--------- Wolfens are a whole new game - in riding, you could say Wolfens are high risk, low return (something you don't want a mode of transport to be, normally). If you are merely sitting on one of the thin Wolfen wings that are menaces to board, you might just walk off without even knowing you let go of R. The fuselage is the only way to go, because as aforementioned, those wings are hard as a week-old kaiser bun to get on and stay on. There is, however a "safe spot" that is near the back: right where either of the upper wings joins with the hull, kind of wedged inbetween the wing and the upper brake on the back of the plane. You're good there, but you can't shoot to many different angles from there; you can only shoot out as if you were a fixed gun on the side of the Wolfen. After this two- paragraph explanation, a simple point is made - the wings, the back, and even the safe spot are worse options than just the fuselage. Also, Wolfens can't stop, and their wings are short, tiny, and pointy, making them very hard to Speedboard (will be explained later) or jump from wing to fuselage, if you got on them already. =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= ____ _____ _____ ____ |__/ |____ |___| | | 3. WEAPONS (wepsess) | \ | | | |_\| =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= __________________________________ / \ --------- GOOD WEAPONS TO USE --------- --------- --------- \__________________________________/ Well, what weapons does one use when riding? There are two characteristics that need attention: **The weapon won't harm you. This is rather obvious, but we don't want to use Sensor Bombs when riding, do we? **Good control over the weapon is possible, or is offset by a large usefulness. The last bit was added in for the Gatling Gun. Therefore, I have a list of weapons that are useful for riding. The stats showing how many hits the weapons take to break a wing are to help you know when you're going to lose your footing (or when you'll snatch it from your opponent). And by the way, Planes take 1/16 of their HP in damage from hitting something (tested), and so do wings (guess). __________________________________ -------- HOMING LAUNCHER --------- --------__________________________________--------- Ammo per Item: 10 Max: 99 Lock-on Blow Off a Wing? 2 Hits Well, this is probably the best riding weapon there is. It locks on, comes in bulk, and does a crapload of damage to Landmasters. Not only that, but it's insured that it blows up away from where you are, and it's excellent at killing other aircraft/riders. It's almost like giving the Arwing a second Charge Shot, which is obviously good at dispatching Pilots too. __________________________________ -------- GATLING GUN --------- --------__________________________________--------- Ammo Per Item: 100 Max: 999 Blow Off a Wing? A mere 4 bullets GATLING GUN? WHAT? It has horrible range and sporadic firing! Why would you ride with one? Well, the reason we use the Gatling Gun, is of course to do tons of damage, kill vehicles in paltry numbers of seconds and whatnot. This makes it a godsend against Landmasters that you can use to swoop and destroy on the annoying tanks. The GGun also serves as your primary pilot killer, although the Homing Launcher can also take this job if you need range. It's good at tearing the wings off other rider-bearing planes as well if you can get close. __________________________________ -------- MISSILE LAUNCHER --------- --------__________________________________--------- Ammo Per Item: 3 Max: 5 Blow Off a Wing? 2 Hits Okay, okay, the Homing Launcher's the best pick for going after aircraft, but it's just too cool launching a guided missile off a plane and owning a Landmaster with it. Missile Launchers are mostly luxury items, so don't expect them to be your main weapon. __________________________________ -------- SNIPER & DEMON SNIPER --------- --------__________________________________--------- Ammo Per Item: 10 for Sniper Rifle, 5 for Demon Sniper Max: 99 for both Blow Off a Wing? Both 1 hit These weapons are complete BEASTS when unleashed in a riding setting. The Sniper, on one hand, is awesome against Pilots if you aim (extremely) well, and will take an Arwing down in 2 hits (Landmaster with 3). It is your oyster, and open to whatever purpose you want. The Demon, on the other hand, does only one thing: break stuff. Aim at selected target. Insure you will hit, as ammo is precious. Induce fear into your opponent as you ready the shot. Up in the air like this, you couldn't miss. You fire, annihilating the vehicle and leaving a hapless Pilot to be finished off with the weapon of your choice. That's the Demon Sniper, and that's how it usually ends up. __________________________________ -------- GRENADE --------- --------__________________________________--------- Ammo Per Item: 5 Max: 99 Blow Off a Wing? 2 Grenades, if you are actually dumb enough to let that happen Explodes after 5 1/2 seconds Yeah, there are oodles of grenade pros out there, saying "they're excellent and do tons of damage!" They are right, but for the average FAQ reader and myself, Grenades are a tool that is difficult to use. Sure, it's very hard to hit an Arwing with 'em effectively, so why in the hell am I recommending them? They can be used in bulk, and drop the hammer on Landmasters. That's about it though. __________________________________ / \ --------- WHAT NOT TO USE --------- --------- --------- \__________________________________/ Wear whatever you like, fellas, but consider ignoring these weapons when choosing your main kicks before taking off. __________________________________ -------- MACHINE GUN --------- --------__________________________________--------- Ammo Per Item: 200 Max: 999 Blow Off a Wing? 12 bullets "But it's a lot more accurate!" That's very true. However, when we consider the purpose of using it or the GGun while riding (gunning down vehicles or anything at close range) we can easily see that the Gatling Gun does its job 3 times better (4 times if we're talking Pilots) because it does much more damage per bullet. Do, however, get a Machine Gun ready if you get shot down, so you can last longer on foot. __________________________________ -------- BLASTER --------- --------__________________________________--------- Ammo Per Item: Start with weapon Max: N/A (infinite ammo) Blow Off a Wing? 2 full Charge Shots The Blaster violates the very first rule. Y'know, the one we said was obvious? **The weapon won't harm you. This is rather obvious, but we don't want to use Sensor Bombs when riding, do we? The Blaster will hit yourself with the large chargeup often enough (less so when firing from the wings, however) to cause damage. This is most particularly noticed when you are firing down at something. However, the fully Charged Blaster does quite a significant amount of damage, and it's definitely possible to have a great riding run using your Blaster to own Landmasters - I just don't recommend it because of the charge time that wastes your valuable airtime - yet another reason why high jumpers make the best riders. __________________________________ -------- SENSOR BOMB --------- --------__________________________________--------- Ammo Per Item: 5 Bombs Max: 99 Blow Off A Wing? 2 Hits (or maybe 1) The only practical use of these while riding is doing a plane-to-plane transfer (will be explained), dropping them and transferring back, and who the hell is good enough to regularly do that? =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= ____ _____ _____ ____ |__/ |____ |___| | | 4. RIDER COMBAT (rdvsveh) | \ | | | |_\| =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= __________________________________ / \ --------- UNITY AND ALL THAT --------- --------- --------- \__________________________________/ When you're riding, there's one thing that should be the first one in your mind: YOUR UNITY between you and your plane CANNOT BE BROKEN. Why? If you get shot off, lose a wing, get mauled but miraculously survive, whatever, it's a hassle to get back together. You are then left not with a unified vehicle but with an aircraft and a pilot, separate. The sturdier you stand, the greater chance of survival your unified vehicle has. The longer you last, the more you can whittle your opponent down. This means no going off into your own little world, but instead shooting down whatever threatens your plane. Your opponent, likely thinking that if your plane goes down, you go down, will break the bond by wrecking the plane. Thus, you protect your plane - the bonding force - over saving your own life. If the plane goes down, you effectively both go down. __________________________________ / \ --------- AGAINST THE LANDMASTER --------- --------- --------- \__________________________________/ Landmasters pose huge problems for any aircraft. With an extra set of guns, the threat is rather diminished, because you can use stuff like Grenades, Homing Launchers, and most of all Gatling Guns. Using a swoop attack that involves the Arwing pointing down and firing, you can simultaneously charge Landmasters with Arwing fire and Rider fire. This will likely destroy the Master before it can even hit you with a second Charge Shot, allowing you to go heal yourself of pain. MAKE SURE you are close before you start GGunning - you don't want to waste precious ammo and have to go get some more. Alternatively, you can annoy them by gaining lots of altitude, briefly swooping down into the range of the Homing Launcher, firing, and boost away. Doing this means that in about 3 hits, you'll wreck the Master. But you get the basic idea - since the tank is grounded, use your altitude advantage to keep yourself safe, and attack in bursts. __________________________________ / \ --------- AGAINST AN AIRCRAFT --------- --------- --------- \__________________________________/ Well, what do Pilots normally use against Arwings and stuff? Homing Launchers. This same principle applies here too, because several of your other weapons (GGun, Grenades) are unusable. Homing Launchers will always be the anti-air weapon of choice, but since you're flying at their level and all, Snipers (and definitely Demon ones) become a very viable weapon. The reason for this phenomenon is because being up an the air with a plane bearing you eliminates the flaw of trying to Snipe an Arwing from the ground - it can't evade you as well anymore. Height no longer will work as an escape, and neither will Boosting off. Previously, it was like this... ----------------------------------------------------- |--------| | ARWING | <--------------< *MISS!* |--------| B O O S T S ^ H ^ O ^ T ^ |--------| | SNIPER | |--------| ------------------------------------------------------- Now, it's like this... ------------------------------------------------------- |--------| B O O S T | ARWING | <--------------< |--------| |--------|<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<| RIDER | *BOOM* SNIPER SHOT |--------| -------------------------------------------------------- That explain it? Snipers are one of the weapons of choice for riding against aircraft. Missile Launchers aren't practical against Arwings, because not only are they luxury items that you will rarely find, they do almost twice as much damage to Landmasters. Still, if there are no Landmasters around, go ahead. Another thing about aircraft to use for your advantage: planes take a while to turn around, so if you get on their six you can start hammering them. If the aircraft tries to mix it up by doing a Loop, put on the brakes and try either of two approaches. For one, you could probably get off 2 free Homers. Or, if you're looking for the more complete ownage type attack, you could try and Snipe it as it comes out of the Loop...hard to pull off, but deadly. __________________________________ / \ --------- AGAINST PILOTS --------- --------- --------- \__________________________________/ Pilots are kinda hard to fight, because the height advantage actually HINDERS you. Not only that, but they're small as a pinhead, so Sniping is hard. What's the solution? Go really low. This too is dangerous because it leaves you open to GGuns and the like, which are the same kind of weapons that you should be using on them. Thus, it's basically whoever goes down first. Or it would be, but I'm forgetting that the Arwing has attacks too. Arwing swoop attacks are fatal to Pilots, let alone with an extra set of guns on your back. Use it. This does NOT mean that you should do an almost vertical Dive-Bomb; that's a nice way to lose your rider. Instead, you need to boost in and start from a long way away. After, pull out quickly (to avoid GGun damage), circle around in a large arc and repeat. So, let's recap, the main idea with Pilots is to use your noodle and your vehicle's mobility to outclass the munchkins. If Homing Launchers come your way, which they undoubtedly will, seek to destroy the guy (next time you dive) in one round of battle because you don't have the luxury of time (you'll get Homered). This is plenty easy to accomplish if you can get your Arwing bearer some upgraded lasers. Then the one Homer that they get on you won't pose a long term problem and can be shrugged off with a ring. Well, if you wanted to be cheap, you could use Homing Launchers every now and then while staying high so he couldn't hit you. But let's have fun, right? =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= ____ _____ _____ ____ |__/ |____ |___| | | 5. HOW TO BEAR A RIDER (htbaruo) | \ | | | |_\| =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= "I'm awake, I'm awake!" Okay, ladies and gents, this is the section teaching you how to be a big, predatory furry animal commonly seen as dangerous to wandering hikers. So, this whole time we've been discussing the Rider, but never the Bearer, right? Bearers are still half the job, and must not be ignored. Besides helping with combat, Bearers do plenty of things. __________________________________ / \ --------- MOVES AND STRATEGY --------- --------- --------- \__________________________________/ Your powerful ally, riding on your wing, is your main asset and is what gives you an edge. This person is also a weapon to be deployed - upon mutual agreement, there are several tactics to be employed letting you divide and conquer, or even just conquer. __________________________________ -------- SPEEDBOARDING --------- --------__________________________________--------- Speedboarding can be defined as two things. One is slower, the other one truly is SPEED-boarding. Anyways, they both entail a rider boarding an aircraft without it having to land. The first method involves your ARWING (not Wolfen, it won't work) braking to a stop and letting the rider jump on, just like landing (but not actually landing). This is hard except for Peppy/Slippy, really, and will usually end up making you take some damage. This is handy if you don't have the time to land, but daredevils have another option. The rider jumps, you fly towards the rider, and pick them up. Sounds insane, right? It's wicked fast if you do it, and you won't be wasting a moment in getting the rider on your wing. Again, good for high jumpers. __________________________________ -------- THE CATAPULT --------- --------__________________________________--------- WHEEEE! This is always fun to do and never gets old. It's a perfect way to either drop your little spy via airplane into enemy territory or to send him flying to new heights. Hell, you can even make it up Katina Tower or the Spire in Titania without booster packs. Get your rider on a wing and simply barrel roll pointing the control stick to the wing opposite the one they're on. WHEEE! You just sent your rider flying as the wing rolled upwards. Often done accidentally by reflex; TheUnruly1 is not responsible for injuries caused by mad riders after doing this by accident. When done right, though, it's pretty cool. __________________________________ -------- MIXING IT UP --------- --------__________________________________--------- Be sure to switch positions once in a while, it surprises your opponent - especially if you have very different sets of weaponry. Get one person to pack heavy weapons and missiles, the other one packs GGun and Grenades. The switch can be performed simply by the plane pilot pressing Z and then the rider pressing Z: AT LEVEL FLIGHT. (Although, sometimes it is fun having two riders on one plane at once. Can you say...battle on the planetop?) For fun, you could even drop your rider onto an enemy plane for a little sabotage. You may think that packing Motion Sensors is a good idea, but they will only hurt yourself. (The Sensor Bombs often fly off the plane anyway.) For this kind of battle, the best weapon is the GGun, because you're confined to the space of the plane. Just point down, hold A and watch the plane die. __________________________________ -------- TRANSFERS --------- --------__________________________________--------- Here is, everyone, the ultimate in gosu-skill indulgence; the plane-to-plane transfer. This is, in fact, the way to pull off the above paragraph. A plane flies high above another, which is slightly behind it. Jump while holding the control stick forward, and you just jumped from plane to plane. Can be perfected for use in almost any environment. You can also do something I call a catapult transfer by basic term, and a more real-sounding thing for specific term. When done with a Landmaster I call it a Defense Turret. In this, you are flying above a Master, braking to a halt and Catapulting your rider down to the vehicle who gets inside posthaste. I named it such because this is exactly what you're doing. You are quickly and efficiently deploying an Arwing counter to destroy someone who's following you. So, I call it a catapult transfer/Turret because I don't like the ring of "Quick Deployment of Rider to Arwing Countering Vehicle by Barrel Roll". A catapult transfer can also be done with an Arwing, but this is basically just doing a fast P2P transfer. __________________________________ -------- PLAN Bs --------- --------__________________________________--------- Sometimes, for the rider's sake, ya gotta know when to quit. If your plane is smokin' purple, it's good to not go down together. Get the rider off you and go suicidal. Although you probably already knew that, there are signs you can read as to WHEN to start thinking of a Plan B. 1. Your wings are gone. Not very rider-friendly, and your plane will teeter and totter a little. If it's just the wings, chances are you can keep going, but it's likely your body will be shot up too. Get a new plane for either of you. If it's just wings, then go find a laser upgrade...they heal your wings, strangely enough. Then you're good to go. Not one of the real problems that can stab you in the back, but it happens a lot. 2. Your rider has no Barriers and little health. Barriers are lifesavers when that Arwing flies at you shooting Rapid-fire lasers at your wing. If you get on without a Barrier to begin with, it's a little troublesome. But you can get over it. When the rider's health is low and with no barrier for backup, they're as good as gone from the enemy's next attack. The solution? Bail him out onto the ground by a vehicle. This way he can have another chance at life. 3. Your rider beigns to run out of weapons. DO NOT LET THIS SHUT YOU DOWN. The enemy will love you for that. Say your rider runs out of GGun. Does he still have ANYTHING to use against the enemy? Grenades, say? If not, then he always has the Blaster, right? Although if your only option is the Blaster, there's a far better solution; pack weapons yourself and let him fly the jalopy. If you wanna read about that, look above in "Mixing It Up". __________________________________ -------- MACHINE GUN FIRE --------- --------__________________________________--------- The infamous MG (and GGun) pose problems for the Riding system possibly more than anything else, except maybe the Homing Launcher. You can't Barrel Roll, Loop or U-Turn to avoid it, so how do you safeguard your rider from MG or GG fire? Let's fly away. Well, that only works if the person you're fighting is on the ground. Your Arwing attacks have longer range than an MG, and even more so for a GG. Strafing runs beginning from far away work well, and Homers will save your butt by killing them from long range. Of course, you could always outgun them, but you'll be licking your wounds after that. __________________________________ -------- THIS IS MY BOOMSTICK! --------- --------__________________________________--------- Haha...a category about Bombs. As you may already know, one of these badasses will wreck both you and your cargo in about a second. The way to go against this is switch your positions, since you have more health from being covered in the plane. This might be the time where you... __________________________________ -------- GET ANOTHER PLANE --------- --------__________________________________--------- It happens! No one lives forever in SFA/life! Should the purple smoke arise, bail and find a new prize, be it plane or Landmasters. Get both people out of the plane immediately. The way to do it: The Pilot flies above a vehicle, brakes, and Catapults the rider down to it, who gets in immediately. (Also known as a Defense Turret if done with Landmasters. HA! MEMORY TEST!) The guy still left in then seeks out another fresh vehicle OR a Ring/Platinum Star to refresh the one he's in if you've got the time. __________________________________ -------- LANDMASTER RIDING --------- --------__________________________________--------- WHAT? Yes, you can do it, have someone standing atop your Landmaster being an extra set of guns. It may not be as useful as Arwing riding, and it may be hazardous to the rider's health now and then but it can be done. It saves both your butts if you're plane's going down and you would otherwise both need to Defense Turret or do something like that. It's definitely easier to jump on top of one. The key of Master riding is to stay away from 1: the cannon, and 2: the wheels. The cannon is for obvious reasons, and the wheel is so you won't get hurt as if you were run over when you touch them. The perfect spot is on top of one of the two protrusions on the back of the Landmaster, on either side. You're behind the cannon and above the wheels. Weapons? Since you're slow and low, MGs or GGuns are the weapons of choice. If not, Snipers work well (just not as well as from a plane), Grenades are of course more viable and the blaster could also work. The awesome Homing Launcher, as you know, works anywhere. ---------------------------- Now you see the Bearer's huge role? Maybe this FAQ should be called the Bearing FAQ. =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= ____ _____ _____ ____ |__/ |____ |___| | | 6. CHARACTERS TO USE (chartus) | \ | | | |_\| =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= An OPINION BASED list on who is the best rider in the game, who is 2nd best, who is... ------------- --> Slippy Toad and Peppy Hare #1 ------------- WHAT THE HELL? People think this toad is the worst player in the game, but alas, he is one of two best riders, and I'm gonna prove it. Is Peppy better? No. Slip's jump is but one less than Peppy's, and this still lets him do boardings with ease compared to others. His running is a little better, too. Slippy is small, and can't be seen easily, although the reticules take this advantage away often. Your enemy might not realize you were there if you position yourself behind something on the plane, though. The speedy chargeup time for the blaster is again not as spectacular as Peppy's but still gets the job done if you ever have to use it. There are two more trump cards I haven't mentioned yet. The first? Slippy OWNS at Defense Turrets with his five stars in Landmaster. The last, everyone, is the health - just look, bigger than anyone's but the cheapo that is Wolf. That'll help you survive a lot, since Slippy can survive two Grenades/three Homers/ten GGun bullets/two Bombs. An awesome asset for a Rider. If skilled, Slippy surpasses all else on top of a plane with his only disadvantage being the low crosshair size. Health bar sizes might be overrated statistics that don't matter to skilled pros, but even discounting the fact that most of you aren't, it sure helps in riding to have lots of it. Then again, so are crosshair sizes... =D Peppy seems to have no disadvantage at this, since he makes it all seem so damn easy. A five star jump, the fastest blaster charge in the game and the largest of any Pilot crosshair size means even someone with horrible reflexes can play well as Peppy as a rider. You can board with ease by tapping Y and soaring sky-high. Defense Turrets and other maneuvers like that can be done well, but not as well as with Slippy. And there's the two best. ------------- --> Wolf O'Donnell #2 ------------- Wolf is Falco with a crapload of health, an even faster run and a slightly better jump. Thus, 'tis general consensus he is really cheap, and people don't like him. I guess it makes sense why he's #2, then, right? Wolf can even live out a Sniper shot if someone hits you while you're riding. (which means in a very stupid way Wolf will last as long as the freakin' PLANE ITSELF when it comes to Snipers.) ------------- --> Fox McCloud #3 ------------- A team of two Foxes can pull off any maneuver they like, because all across the screen their stats are even. A Defense Turret could work as well as an abrupt position switch, since they have high score in Arwing, Pilot and Master alike. Not much more to say here than that, this position is earned for being able to do anything with a four-star crosshair size. ------------- --> Krystal and Falco Lombardi #4 ------------- Now that I've finished my Slippy rant, I don't have too much more to say. Maybe I'm biased here, but Krystal is my main character above all else. Her stats suck, but that's alright, 'cause she's got Barriers to spare, an invaluable resource during an all out assault (heh...Assault is the name of the game, after all). She thus has the best chance of survival if shot down. But tell me this, is it really bias putting your favorite character in FOURTH place and not first? Falco does not have the health for riding...and also not the jump. Neither does Krystal, but this is more blatantly extreme with Falco. He may be one of the better characters (and hands down the best bearer character) in the game, but not under these circumstances. He just gets killed a little too easy. Mind you, this is if you actually manage to board with his (and Krystal's too) weak jump. =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= ____ _____ _____ ____ |__/ |____ |___| | | 6. CONCLUSION | \ | | | |_\| =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= Question answering time: HOW THE HELL DID I DISCOVER THIS? Well, it started with a friend of mine who introduced me to the game. On the second ever time I played it, you could guess I wasn't that great...I tried to bail out of my plane in mid-flight, to see what would happen. I joyfully found I was still astride my plane, with a machine gun in hand. I then said "Look! Here comes something new!" and shot up his plane with my MG. He turned to my screen, saw me standing atop my Arwing with a machine gun finishing him off and asked "How did you do THAT?" I explained. Something like that. I then looked around online and saw I wasn't the only one doing this. Now, I wrote a guide to it. That's all the questions, look me up at the-unruly-1@hotmail.com for more questions. Include something about Star Fox Assault or Riding in the title, or through the spam filter you'll go on my direct order. See you at my next guide! TUn1 =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= ____ _____ _____ ____ |__/ |____ |___| | | 7. COMMENTS | \ | | | |_\| =-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-==-_-= Sender: nivlac91. FAQ Display Options: Printable Version
http://www.gamefaqs.com/gamecube/561297-star-fox-assault/faqs/48975
CC-MAIN-2016-26
en
refinedweb
Opened 8 years ago Closed 8 years ago Last modified 5 years ago #7076 closed (fixed) qs.exclude(something=123) should not exclude objects with something=None Description from django.db import models class A(models.Model): str = models.CharField(null=True, blank=True, max_length=20) def __unicode__(self): return self.str or 'None' In [1]: from test import models In [2]: a = models.A.objects.create(str='a') In [3]: b = models.A.objects.create(str=None) In [4]: models.A.objects.exclude(str='a') Out[4]: [] In [5]: models.A.objects.filter(str__isnull=True).exclude(str='a') Out[5]: [] In [6]: models.A.objects.all() Out[6]: [<A: a>, <A: None>] Expected output: Out[4]: [<A: None>] Out[5]: [<A: None>] Tested with trunk & queryset-refactor. Attachments (1) Change History (9) comment:1 Changed 8 years ago by gav - Keywords qsrf-cleanup added - Needs documentation unset - Needs tests unset - Patch needs improvement unset Changed 8 years ago by emulbreh comment:2 Changed 8 years ago by emulbreh - Has patch set - Needs documentation set - Needs tests set - Owner changed from nobody to anonymous - Patch needs improvement set - Status changed from new to assigned comment:3 Changed 8 years ago by anonymous - Owner anonymous deleted - Status changed from assigned to new - Triage Stage changed from Unreviewed to Accepted comment:4 Changed 8 years ago by jacob - milestone set to 1.0 comment:5 Changed 8 years ago by mtredinnick Thanks for the patch, but ... one way to tell that a patch isn't quite correct is when the existing test suite doesn't pass when it's applied. :-) This patch breaks the exclude(field__in=[]) case. I'll fix it before committing and add some tests for the issues in this ticket, but please check the test suite and include tests for any new functionality. It makes things easier. comment:6 Changed 8 years ago by mtredinnick - Resolution set to fixed - Status changed from new to closed comment:7 Changed 8 years ago by mtredinnick comment:8 Changed 5 years ago by jacob - milestone 1.0 deleted Milestone 1.0 deleted I can reproduce this (r7573 + postgres). It could be solved easyly by making __exact NULL-safe: But I couldn't find something comparable for sqlite and oracle (except for (a=b OR (a IS NULL AND b IS NULL))) - and I don't know if anything depends on NULL == 1=NULL == NOT (1=NULL) and would break with FALSE == 1<=>NULL != NOT(1<=>NULL) == TRUE. While trying to come up with a patch I found some dead code that should handle the issue for related tables (see patch, missing [JOIN_TYPE] in if final > 1 ...) and just threw in a similar weakening for the unjoined case. This will be a backwards incompatible change.
https://code.djangoproject.com/ticket/7076
CC-MAIN-2016-26
en
refinedweb
System.Console.Haskeline Contents Description A (REPL) is the following: import System.Console.Haskeline main :: IO () main = runInputT defaultSettings loop where loop :: InputT IO () loop = do minput <- getInputLine "% " case minput of Nothing -> return () Just "quit" -> return () Just input -> do outputStrLn $ "Input was: " ++ input loop Synopsis - data InputT m a - runInputT :: MonadException m => Settings m -> InputT m a -> m a -) - outputStr :: MonadIO m => String -> InputT m () - outputStrLn :: MonadIO m => String -> InputT m () - data Settings m = Settings { - defaultSettings :: MonadIO m => Settings m - setComplete :: CompletionFunc m -> Settings m -> Settings m - data Prefs - readPrefs :: FilePath -> IO Prefs - defaultPrefs :: Prefs - runInputTWithPrefs :: MonadException m => Prefs -> Settings m -> InputT m a -> m a - runInputTBehaviorWithPrefs :: MonadException m => Behavior -> Prefs -> Settings m -> InputT m a -> m a - data Interrupt = Interrupt - withInterrupt :: MonadException m => InputT m a -> InputT m a - handleInterrupt :: MonadException m => m a -> m a -> m a - module System.Console.Haskeline.Completion - module System.Console.Haskeline.MonadException Settings Application-specific customizations to the user interface. Constructors Instances defaultSettings :: MonadIO m => Settings mSource A useful default. In particular: defaultSettings = Settings { complete = completeFilename, historyFile = Nothing, autoAddHistory = True } setComplete :: CompletionFunc m -> Settings m -> Settings mSource User preferences Prefs allow the user to customize the terminal-style Instances readPrefs :: FilePath -> IO PrefsSource Read Prefs from a given file. If there is an error reading the file, the defaultPrefs will be returned. defaultPrefs :: PrefsSource The default preferences which may be overwritten in the .haskeline file.. Ctrl-C handling The following functions provide portable handling of Ctrl-C events. These functions are not necessary on GHC version 6.10 or later, which processes Ctrl-C events as exceptions by default. withInterrupt :: MonadException m => InputT m a -> InputT m aSource Arguments
http://hackage.haskell.org/package/haskeline-0.6.3.2/docs/System-Console-Haskeline.html
CC-MAIN-2016-26
en
refinedweb
IRC log of er on 2007-08-08 Timestamps are in UTC. 14:03:03 [RRSAgent] RRSAgent has joined #er 14:03:03 [RRSAgent] logging to 14:03:09 [JohannesK] Zakim, this will be ERT 14:03:09 [Zakim] ok, JohannesK; I see WAI_ERTWG()10:00AM scheduled to start 3 minutes ago 14:03:29 [JohannesK] Agenda+ Revised simplified approach for HTTP-in-RDF 14:03:35 [JohannesK] Meeting: ERT WG 14:04:16 [JohannesK] Agenda: 14:04:29 [JohannesK] Chair: JohannesK 14:04:46 [Zakim] WAI_ERTWG()10:00AM has now started 14:04:52 [JohannesK] Regrets: SAZ, CarlosI 14:04:53 [Reinhard] Reinhard has joined #er 14:04:54 [Zakim] + +49.224.114.aaaa 14:05:19 [Zakim] +CarlosV 14:05:50 [Zakim] +Reinhard 14:06:03 [JohannesK] Zakim, CarlosV is really JohannesK 14:06:03 [Zakim] +JohannesK; got it 14:06:12 [JohannesK] Zakim, +49.224.114.aaaa is really CarlosV 14:06:12 [Zakim] +CarlosV; got it 14:07:34 [JohannesK] Zakim, take up agendum 1 14:07:34 [Zakim] agendum 1. "Revised simplified approach for HTTP-in-RDF" taken up [from JohannesK] 14:07:57 [JohannesK] Scribe: JK 14:08:09 [JohannesK] Scribenick: JohannesK 14:12:47 [JohannesK] scribe: RR 14:13:59 [Zakim] -CarlosV 14:14:21 [Reinhard] JK: http:methodName has to be declared and additionally its possible to use http:method 14:14:29 [Zakim] +JohannesK.a 14:14:45 [JohannesK] Zakim, JohannesK.a is really CarlosV 14:14:45 [Zakim] +CarlosV; got it 14:15:52 [Reinhard] JK: a similar approach is taken with http:MesaggeHeader 14:17:04 [Reinhard] JK: status codes are externally defined 14:19:24 [Reinhard] JK: http:Content is superclass of http:TextContent, http:XMLContent and http:Base64Content 14:20:48 [Reinhard] JK: Each of them can be used for the message body 14:21:23 [Reinhard] CV: is this a redundancy with earl:content? 14:22:43 [Reinhard] CV: the problem is, that http:TextContent, http:XMLContent and http:Base64Content are in the http-namespace 14:23:21 [Reinhard] JK: http:Content makes sense, but the sublclasses should be in a different namespace 14:23:57 [Reinhard] CV: Proposal: Leave http:Content here and move SubClasses to EARL 14:29:09 [Zakim] -Reinhard 14:29:11 [Zakim] -JohannesK 14:29:14 [Zakim] -CarlosV 14:29:16 [JohannesK] Zakim, close agendum 1 14:29:16 [Zakim] WAI_ERTWG()10:00AM has ended 14:29:17 [Zakim] Attendees were Reinhard, JohannesK, CarlosV 14:29:19 [Zakim] agendum 1, Revised simplified approach for HTTP-in-RDF, closed 14:29:20 [Zakim] I see nothing remaining on the agenda 14:29:29 [JohannesK] RRSAgent, make minutes 14:29:29 [RRSAgent] I have made the request to generate JohannesK 14:29:40 [JohannesK] Zakim, bye 14:29:40 [Zakim] Zakim has left #er 14:30:38 [JohannesK] RRSAgent, bye 15:59:50 [shadi] shadi has joined #er 16:00:06 [shadi] rrsagent, make logs world 16:00:11 [shadi] rrsagent, make minutes 16:00:11 [RRSAgent] I have made the request to generate shadi 16:00:14 [shadi] rrsagent, make logs world 16:00:21 [shadi] rrsagent, bye 16:00:21 [RRSAgent] I see no action items
http://www.w3.org/2007/08/08-er-irc
CC-MAIN-2016-26
en
refinedweb
MongoDB::Async - Asynchronous Mongo Driver for Perl This driver uses Coro and EV. It switches to another Coro thread while receiving response from server. Changes: MongoDB::Async::Pool - pool of persistent connects Added ->data method to MongoDB::Async::Cursor. Same as ->all, but returns array ref Please report bugs to nyaknyan@gmail.com use MongoDB::Async; use Coro; use EV; use Coro::EV; my $connection = MongoDB::Async::Connection->new(host => 'localhost', port => 27017); my $database = $connection->foo; my $collection = $database->bar; my $id = $collection->insert({ some => 'data' }); my $data = $collection->find_one({ _id => $id }); # or you can run threads my $data; async{ $data = MongoDB::Async::Connection->new->testdb->testcol->find({ _id => ["somebigdata", ....]})->data; }; use Coro::AnyEvent; async{ while(! $data ){ print "waiting for data...\n"; Coro::AnyEvent::sleep(1); } print "done\n"; }; async{ # parallel query MongoDB::Async::Connection->new->testdb->testcol->save({ ... }); }; EV::loop; This is the Perl driver for MongoDB, a document-oriented database. This section introduces some of the basic concepts of MongoDB. There's also a; isn't required, but it::Async::Connection->new("host" => "localhost:27017"); As this is the default, we can use the equivalent shorthand: $conn = MongoDB::Async::Connection->new; Connecting is relatively expensive, so try not to open superfluous connections. There is no way to explicitly disconnect from the database. When $conn goes out of scope, the connection will automatically be closed and cleaned up. The classes are arranged in a hierarchy: you cannot create a MongoDB::Async::Collection instance before you create MongoDB::Async::Database instance, for example. The full hierarchy is: MongoDB::Async::Connection -> MongoDB::Async::Database -> MongoDB::Async::Collection This is because MongoDB::Async::Database has a field that is a MongoDB::Async::Connection and MongoDB::Async::Collection has a MongoDB::Async::Database field. When you call a MongoDB::Async::Collection function, it "trickles up" the chain of classes. For example, say we're inserting $doc into the collection bar in the database foo. The calls made look like: $collection->insert($doc) Calls MongoDB::Async::Database's implementation of insert, passing along the collection name ("foo"). $db->insert($name, $doc) Calls MongoDB::Async::Connection's implementation of insert, passing along the fully qualified namespace ("foo.bar"). $connection->insert($ns, $doc) MongoDB::Async::Connection does the actual work and sends a message to the database. These functions should generally not be used. They are very low level and have nice wrappers in MongoDB::Async::Collection. my ($insert, $ids) = MongoDB::Async::write_insert("foo.bar", [{foo => 1}, {bar => -1}, {baz => 1}]); Creates an insert string to be used by MongoDB::Async::Connection::send. The second argument is an array of hashes to insert. To imitate the behavior of MongoDB::Async::Collection::insert, pass a single hash, for example: my ($insert, $ids) = MongoDB::Async::write_insert("foo.bar", [{foo => 1}]); Passing multiple hashes imitates the behavior of MongoDB::Async::Collection::batch_insert. This function returns the string and an array of the the _id fields that the inserted hashes will contain. my ($query, $info) = MongoDB::Async::write_query('foo.$cmd', 0, 0, -1, {getlasterror => 1}); Creates a database query to be used by MongoDB::Async::Connection::send. $flags are query flags to use (see MongoDB::Async:::Async::Connection::recv to get the database response to the query. my ($update) = MongoDB::Async::write_update("foo.bar", {age => {'$lt' => 20}}, {'$set' => {young => true}}, 0); Creates an update that can be used with MongoDB::Async::Connection::send. $flags can be 1 for upsert and/or 2 for updating multiple documents. my ($remove) = MongoDB::Async::write_remove("foo.bar", {name => "joe"}, 0); Creates a remove that can be used with MongoDB::Async::Connection::send. $flags can be 1 for removing just one matching document. my @documents = MongoDB::Async::read_documents($buffer); Decodes BSON documents from the given buffer MongoDB main website Core documentation MongoDB::Async::Tutorial, MongoDB::Async::Examples
http://search.cpan.org/~nyaknyan/MongoDB-Async-0.45/lib/MongoDB/Async.pm
CC-MAIN-2016-26
en
refinedweb
I am cutting and pasting a long Word doc into Coffee Cup. The doc. file has paragraphs indicated only by an indentation on the first line. There is no double return (white space) between adjacent paragraphs. In Coffee Cup, even though the indent is lost, paragraphs can still be determined based upon the unequal length of the last line in one paragraph and the first line in the next. Is there anyway this, or another method, can be used to "auto-insert" <p> </p> tags; eg. not having to enter hundreds (thousands) manually for an entire book?
http://www.codingforums.com/printthread.php?t=281189
CC-MAIN-2016-26
en
refinedweb
EJB 2.1 The Timer Service By Richard Monson-Haefel 01 Oct 2002 | TheServerSide.com Column 2: EJB 2.1 Web Services (Part 1) Business systems frequently make use of scheduling systems, which can be configured to run other programs at specified times. In many cases, scheduling systems run applications that generate reports, reformat data, or do audit work at night. In other cases, scheduling systems provide callback APIs that can alert subsystems of temporal events such as due dates, deadlines, etc. Scheduling systems often run "batch jobs" (a.k.a. "scheduled jobs"), which perform routine work automatically at a prescribed time. Users in the UNIX world frequently run scheduled jobs using cron, a simple but useful scheduling system that runs programs listed in a configuration file. Other job-scheduling systems include the OMG's COS Timer Event Service, which is a CORBA API for timed events, as well as commercial products. Scheduling systems are also common in workflow applications, systems that manage document processing that typically spans days or months, and involves many systems and lots of human intervention. In workflow applications scheduling is employed for auditing tasks that periodically take inventory of the state of an application, invoice, sales order, or whatever to ensure that everything is proceeding as scheduled. The scheduling system maintains timers, and delivers events to alert applications and components when a specified date and time is reached, or when some period has expired. In the EJB world, there has been a general interest in scheduling systems that can work directly with enterprise beans. Some products exist to support scheduling systems in J2EE, like Sims Computing's Flux® and BEA's WebLogic Time Services, but until EJB 2.1 there has not been a standard J2EE scheduling system. Enterprise JavaBeans 2.1 introduces a standardized, but limited, scheduling system of its own called the Timer Service. The Java 2 Platform, Standard Edition includes the class java.util.Timer, which allows threads to schedule tasks for future execution in a background thread. This facility can be very useful for a variety of applications, but it's too limited to be used in enterprise computing. Note, however, that the scheduling semantics of java.util.Timer are very similar to those of the EJB Timer Service. The Timer Service is a facility of the EJB container system that provides a timed-event API, which can be used to schedule timers for specified dates, periods, and intervals. A timer is associated with the enterprise bean that set it, and calls that bean's ejbTimedout() method when it goes off. The rest of this article describes the EJB Timer Service API, its use with entity, stateless session, and message-driven beans, and provides some criticism and suggested improvements of the Timer Service. Timer Service API The Timer Service enables an enterprise bean to be notified when a specific date has arrived, or when some period of time has elapsed, or at recurring intervals. The TimedObject Interface To use the Timer Service, an enterprise bean must implement the javax.ejb.TimedObject interface, which defines a single callback method, ejbTimeout(): package javax.ejb; public interface TimedObject { public void ejbTimeout(Timer timer) ; } When the scheduled date and time is reached, or the specified period or interval has elapsed, the container system invokes the enterprise bean's ejbTimeout() method. The enterprise bean can then do any processing it needs to in response to the timeout, such as run reports, audit records, modify the states of other beans, etc. The TimerService Interface An enterprise bean schedules itself for a timed notification using a reference to the TimerService, which it obtains from the EJBContext. The TimerService allows a bean to register itself for notification on a specific date, or after some period of time, or at recurring intervals. The following code shows how a bean would register for notification exactly 30 days from now. // Create a Calandar object that represents the time 30 days from now. Calendar time = Calendar.getInstance(); // the current time. time.add(Calendar.DATE, 30); // add 30 days to the current time. Date date = time.getTime(); // Create a timer that will go off 30 days from now. EJBContext ejbContext = // ...: get EJBContext object from somewhere. TimerService timerService = ejbContext.getTimerService(); timerService.createTimer( date, null); The above example creates a Calendar object that represents the current time, then increments this object by 30 days so that it represents the time 30 days from now. The code then obtains a reference to the container's TimerService and calls the TimerService.createTimer() method, passing it the java.util.Date value of the Calendar object, thus creating a timer that will go off after 30 days. The TimerService interface provides an enterprise bean with access to the EJB container's Timer Service so that new timers can be created and existing timers can be listed. The TimerService interface is a part of the javax.ejb package in EJB 2.1 and has the following definition: package javax.ejb; import java.util.Date; import java.io.Serializable; public interface TimerService { // Create a single-action timer that expires on a specified date. public Timer createTimer(Date expiration, Serializable info) throws IllegalArgumentException,IllegalStateException,EJBException; // Create a single-action timer that expires after a specified duration. public Timer createTimer(long duration, Serializable info) throws IllegalArgumentException,IllegalStateException,EJBException; // Create an interval timer that starts on a specified date.. public Timer createTimer(Date initialExpiration, long intervalDuration, Serializable info) throws IllegalArgumentException,IllegalStateException,EJBException; // Create an interval timer that starts after a specified durration. public Timer createTimer(long initialDuration, long intervalDuration, Serializable info) throws IllegalArgumentException,IllegalStateException,EJBException; // Get all the active timers associated with this bean public java.util.Collection getTimers() throws IllegalStateException,EJBException; } Each of the TimerService.createTimer() methods establishes a timer with a different temporal configuration. There are essentially two types: single-action timers and interval timers. A single-action timer expires once, while an interval timer expires many times, at specified intervals. The term "expires" means the timer "goes off" or is activated. When a timer expires, the Timer Service calls the bean's ejbTimeout() method. When a timer is created the Timer Service automatically makes it persistent in some type of secondary storage, so it will survive system failures. If the server goes down, the timers will still be active when it comes back up again. While the specification isn't clear, it's assumed that any timers that expire while the system is down will go off when it comes back up again. If an interval timer expires many times while the server is down, it may go off multiple times when the system comes up again. Consult your vendors' documentation to learn how they handle expired timers following a system failure. The TimerService.getTimers() method returns all the timers that have been set for a particular enterprise bean. This method returns a java.util.Collection, an unordered collection of zero or more javax.ejb.Timer objects. The Timer Interface A timer object is an instance of a class that implements the javax.ejb.Timer interface, and represents a timed event that has been scheduled for an enterprise bean using the Timer Service. Timer objects are returned by the TimerService.createTimer() and TimerService.getTimers() methods, and a Timer is the only parameter of the TimedObject.ejbTimeout() method. The Timer interface is defined as follows: package javax.ejb; public interface Timer { // Cause the timer and all its associated expiration notifications to be canceled public void cancel() throws IllegalStateException,NoSuchObjectLocalException,EJBException; // Get the information associated with the timer at the time of creation. public java.io.Serializable getInfo() throws IllegalStateException,NoSuchObjectLocalException,EJBException; // Get the point in time at which the next timer expiration is scheduled to occur. public java.util.Date getNextTimeout() throws IllegalStateException,NoSuchObjectLocalException,EJBException; // Get the number of milliseconds that will elapse before the next scheduled timer expiration public long getTimeRemaining() throws IllegalStateException,NoSuchObjectLocalException,EJBException; //Get a serializable handle to the timer. public TimerHandle getHandle() throws IllegalStateException,NoSuchObjectLocalException,EJBException; } A Timer instance represents exactly one timed event and can be used to cancel the timer, obtain a serializable handle, obtain the application data associated with the timer, and find out when the timer's next scheduled expiration will occur. The Timer.getInfo() method returns a serializable object; an instance of a class that implements the java.io.Serializable interface. This serializable object is called the info object, and is associated with a timer when the Timer object is created. You can use pretty much anything for the info object. It should contain application information which helps the bean identify the purpose of the timer as well as data that will help in processing. The Timer.getHandle() method returns a TimerHandle object. The TimerHandle is similar in purpose to the javax.ejb.Handle and javax.ejb.HomeHandle. It's a reference that can be saved to a file or some other resource, then used to regain access to the Timer at a later date. The TimerHandle interface is simple: package javax.ejb; public interface TimerHandle extends java.io.Serializable { public Timer getTimer() throws NoSuchObjectLocalException, EJBException; } Transactions and Timers When a bean calls TimerService.createTimer(), the operation is done in the scope of the current transaction. If the transaction rolls back, the timer is undone; it's not created. In most beans, the ejbTimeout() method should have a transaction attribute of RequiresNew, to ensure that the work performed by the ejbTimeout() method is in the scope of container-initiated transactions. An Example Timer Bean In a discount stockbroker system, buy-at-limit orders can be created for a specific number of shares, but only at a specified price or lower. Such orders typically have a time limit. If the stock price falls below the specified price before the time limit, the order is carried out. If the stock price does not fall below the specified price before the time limit, the timer goes off and the order is canceled. In a J2EE application the buy-at-limit order can be represented as an entity bean called the BuyAtLimit EJB. When the bean is created it sets a timer for the expiration date handed into the create method. If the buy-at-limit order is executed, the timer is canceled. If it's not, when the timer goes off the BuyAtLimit EJB will mark itself as canceled. (We don't delete the order, because we need the record.) The class definition for the BuyAtLimit EJB would look something like the following. package com.xyzbrokers.order; import javax.ejb.*; import java.util.Collection; import java.util.Iterator; import java.util.Date; public class BuyAtLimitBean implements javax.ejb.EntityBean, javax.ejb.TimedObject { public EntityContext ejbCntxt; public void setEntityContext(EntityContext cntxt) {ejbCntxt = cntxt;} public void ejbCreate(CustomerLocal cust, String stockSymbol, int numberOfShares, double priceCap, Date expiration){ setNumberOfShares(numberOfShares); setStockSymbol(stockSymbol); setPriceCap(priceCap); setExpiration(expiration); } public void ejbPostCreate(CustomerLocal cust, String stockSymbol, int numberOfShares, double priceCap, Date expiration){ setCustomer(cust); TimerService timerService = ejbCntxt.getTimerService(); timerService.createTimer( expiration, null ); } public void ejbTimeout(Timer timer){ cancelOrder(); } public void cancelOrder(){ setCanceled(true); setDateCanceled( new Date()); cancelTimer(); } public void executeOrder(){ setExecuted(true); setDateExecuted( new Date()); cancelTimer(); } private void cancelTimer(){ TimerService timerService = ejbCntxt.getTimerService(); Iterator timers = timerService.getTimers().iterator(); if(timers.hasNext() ){ Timer timer = (Timer) timers.next(); timer.cancel(); } } // EJB callback methods, persistent fields, and relationships fields not shown } When the ejbTimeout() method is called by the container, it simply calls the bean's own cancelOrder() method, which in turn sets the persistent fields to indicate that the order is canceled and then calls a private cancelTimer() method. The cancelTimer() method obtains a reference to the bean's current timer and cancels it. This method is called when the buy order is executed or canceled by the timer, or when a client calls the cancelOrder() method directly. Entity Bean Timers Entity beans set timers on a specific type of entity bean (e.g., Ship, Customer, Reservation, etc.) with a specific primary key. When a timer goes off, the first thing that the container does is use the primary key associated with the timer to load the entity bean with proper data. Once the entity bean is in the ready state -- its data is loaded and it's ready to service requests -- the ejbTimeout() method is invoked. The container associates the primary key with the timer implicitly. Using timers with entity beans can be very useful, because it allows entity beans to manage their own timed events. This makes sense, especially when the timed events are critical to the entity's definition. For example, paying claims on time is intrinsic to the definition of a claim. This is also true of the time limit on a buy-at-limit order, a payment-past-due alert on mortgages, etc. Stateless Session Bean Timers Stateless session bean timers can be used for auditing or batch processing. As an auditing agent, a stateless session timer can monitor the state of the system to ensure that tasks are being completed and that data is consistent. This type of audit work spans entities and possibly data sources. Such EJBs can also perform batch processing work such as database clean up, transfer of records, etc. Stateless session bean timers can also be deployed as agents that perform some type of intelligent work on behalf of the organization they serve. An agent can be thought of as an extension of an audit: it monitors the system but it also fixes problems automatically. While entity timers are associated with a specific type of entity bean and primary key, stateless session bean timers are associated only with a specific type of session bean. When a timer for a stateless session bean goes off, the container automatically selects an arbitrary instance of that stateless bean type from the instance pool and calls its ejbTimeout() method. Message-Driven Bean Timers Message-driven bean timers are similar to stateless session bean timers in several ways: Timers are associated only with the type of bean. When a timer expires, a message-driven bean instance is selected from a pool to execute the ejbTimeout() method. In addition, message-driven beans can be used for performing audits or other types of batch jobs. The primary difference between a message-driven bean timer and a stateless session bean timer is the way in which they're initiated: timers are created in response to an incoming message or, if the container supports it, from a configuration file. If you've read this column before than you may remember that in the first installment I criticized the specification for not including message-driven beans as one of bean types that can implement the timer interface. That changed a few weeks after that article was published so that now message-driven beans can be timers. There are still problems with message-driven bean timers, though; specifically, they can not be configured at deployment time. This problem is addressed in the next section. Problems with the Timer Service The Timer Service is an excellent addition to the EJB platform, but it's too limited. A lot can be learned from cron, the Unix scheduling utility that's been around for years. A very little bit about cron Cron is a Unix program that allows you to schedule scripts (similar to batch files in DOS) , commands, and other programs to run at specified dates and times. Unlike the EJB Timer Service, cron allows for very flexible calendar-based scheduling. Cron jobs (anything cron runs is called a job) can be scheduled to run at intervals of a specific minute of the hour, hour of the day, day of the week, day of the month, and month of the year. For example, you can schedule a cron job to run every Friday at 12:15 p.m., or every hour, or the first day of every month. While this level of refinement may sound complicated, its actually very simple to specify. Cron uses a simple text format of five fields of integer values, separated by spaces or tabs, to describe the intervals at which scripts should be run. Figure 1 shows the field positions and their meanings. Figure 1 Cron Date and Time Format The order of the fields is significant. since each specifies a different calendar designator: minute, hour, day, month, and day of the week. The following examples show how to schedule cron jobs: 20 * * * * ---> 20 minutes after every hour. (00:20, 01:20, etc.) 5 22 * * * ---> Every day at 10:05 p.m. 0 8 1 * * ---> First day of every month at 8:00 a.m. 0 8 4 7 * ---> The fourth of July at 8:00 a.m. 15 12 * * 5 ---> Every Friday at 12:15 p.m. An asterisk indicates that all values are valid. For example, if you use an asterisk for the minute field, you're scheduling cron to execute the job every minute of the hour. The following examples illustrate the impact of the asterisk on schedules: * 10 * * * ---> Every day at 10:00, 10:01, ...10:59 0 10 * 7 * ---> Every day in July at 10:00 a.m. * * * * * ---> Every minute of every hour of every day. You can define more complex intervals by specifying multiple values, separated by commas, for a single field. In addition you can specify ranges of time using the hyphen: 0 8 * * 1,3,5 ---> Every Monday, Wednesday, and Friday at 8:00 a.m. 0 8 1,15 * * ---> The first and 15th of every month at 8:00 a.m. 0 8-17 * * 1-5 ---> Every hour from 8 a.m. through 5 p.m., Monday through Friday Cron jobs are scheduled using crontab files, which are simply text files in which you configure the date/time fields and a command, usually a command to run a script. Improving The Timer Service The cron date/time format offers a lot more flexibility than is currently offered by the EJB Timer Service. The Timer Service requires you to designate intervals in exact milliseconds, which is a bit awkward to work with (you have to convert days, hours, and minutes to milliseconds), but more important is that it's not flexible enough for many real-world scheduling needs. For example, there is no way to schedule a timer to expire on the first and 15th of every month, or every hour between 8 a.m. and 5 p.m., Monday through Friday. You can derive some of the more complex intervals, but only at the cost of adding logic to your bean code to calculate them, and in more complicated scenarios you'll need multiple timers for the same task. Cron is not perfect either. Scheduling jobs is like setting a timer on a VCR: everything is scheduled according to the clock and calendar. You can specify that cron run a job at specific times of the day on specific days of the year, but you can't have it run a job at relative intervals from an arbitrary starting point. For example, cron's date/time format doesn't let you schedule a job to run every 10 minutes, starting now. You have to schedule it to run at specific minutes of the hour (e.g., 0,10,20,30,40,50). Cron's is limited to scheduling recurring jobs; you can't set up a single-action timer, and you can't set a start date. A problem with both cron and the EJB Timer Service is that you can't program a stop date -- a date that the timer will automatically cancel itself. You also may have noticed that cron granularity is to the minute rather than the millisecond. At first glance this looks like a weakness, but in practice it's perfectly acceptable. For calendar-driven scheduling, more precision simply isn't very useful. A solution is to change the Timer Service interface so that it can handle a cron-like date/time format, with a start date and end date. Rather than discard the current createTimer() calls (which are useful, especially for single-action timers and arbitrary millisecond intervals), it would be preferable simply to add a new method with the desired cron-like semantics. In addition, instead of using 0 - 6 to designate the day of the week, it would be better to follow the lead set by the Linux version of cron, which uses the values Sun, Mon, Tue, Wed, Thu, Fri, and Sat. For example, code to schedule a timer that would run every weekday at 11:00 p.m. starting October 1, 2003, and ending May 31, 2004, would look like this: TimerService timerService = ejbContext.getTimerService(); Calendar start = Calendar.getInstance().set(2003, Calendar.OCTOBER, 1); Calendar end = Calendar.getInstance().set(2004, Calendar.MAY, 31); String dateTimeString = "23 * * * Mon-Fri"; timerService.createTimer(dateTimeString, start, end, null); This proposed change to the Timer Service explicitly retains the other millisecond-based createTimer() methods, because they are very useful. While cron-like configuration is powerful, it's not a silver bullet. If you need to schedule a timer to go off every 30 seconds starting now (or any arbitrary point in time), you need to use one of the existing createTimer() methods. As I pointed out earlier, the cron-like configuration is bound to specific calendar times; it's like programming your VCR. The other millisecond-based createTimer() methods, by contrast, are more like an egg timer with millisecond precision. At any time you can set them to go off after a fixed number of milliseconds. It should be noted, however, that true millisecond accuracy is difficult because (a) normal processing and thread contention tend to delay response time, and (b) a server clock must be properly synchronized with the correct time (i.e. UTC1 ), to the millisecond, and most are not. In the long run, any changes to the Timer Service will need to be hammered out by the EJB expert group based on vendor and developer feedback, but the solution proposed in this article illustrates some of the possibilities of using a cron-like date/time format. Scheduling systems can have much richer semantics than proposed above. For example, Sims Computing's Flux® supports capabilities that just can't be duplicated with a cron-like date/time format or the present EJB Timer Service. For example, scheduling jobs that skip public holidays, or that quit running after N intervals, or that run for one week and then pause for two weeks, or that run every business hour except during lunch. A Timer Service this flexible is worth shooting for, but may never be realized in a vendor-agnostic standard like Enterprise JavaBeans. Message-Driven Bean Timers: Standard configuration properties There is enormous potential for using message-driven beans as cron-like jobs which are configured at deployment and run automatically. Unfortunately, there is no standard way to configure a message-driven bean timer at deployment time. Some vendors may support this while others do not. Pre-configured message-driven bean timers are going to be in high demand by developers who want to schedule message-driven beans to perform work at specific dates and times. Without support for deployment-time configuration, the only reliable way to program an enterprise bean timer is to have a client call a method or send a JMS message. This is unacceptable. Developers will need deployment-time configuration and it should be added to the next version of the specification. Building on the proposed cron-like semantics described in the previous subsection, it would be easy to devise standard activation configuration properties for configuring message-driven bean timers at deployment time. For example the following configures a message-driven bean, the Audit EJB, to run at 11 p.m., Monday through Friday, starting October 1, 2003, and ending May 31, 2004 (start and end dates are not required). <activation-config> <description>Run Monday through Friday at 11:00 p.m. Starting on Oct 1st,2003 until May 31st, 2004</description> <activation-config-property> <activation-config-property-name>dateTimeFields</activation-config-property-name> <activation-config-property-value> 23 * * * Mon-Fri</activation-config-property-value> </activation-config-property> <activation-config-property> <activation-config-property-name>startDate</activation-config-property-name> <activation-config-property-value>October 1, 2003</activation-config-property-value> </activation-config-property> <activation-config-property> <activation-config-property-name>endDate</activation-config-property-name> <activation-config-property-value>May 31, 2004</activation-config-property-value> </activation-config-property> </activation-config> This configuration would be fairly easy for providers to implement if they supported enhanced cron-like semantics, as outlined in the previous subsection. In addition, you could configure message-driven beans to use the millisecond-based timers EJB 2.1 already supports. Other Problems with Timer API The semantics of the Timer object convey very little information about the timer object itself. There is no way to determine whether a timer is a single-action timer or an interval timer. If it's an interval timer, there is no way to determine the configured interval, or whether the timer has executed its first expiration or not. To solve these problems, additional methods should be added to the Timer interface that provide this information. As a stopgap, it's a good idea to place this information in the info object, so that it can be accessed by applications that need it. Wrapping Up The primary purpose of this article was to explain how the EJB 2.1 Timer Service works and to propose some changes that I think are needed in order to make the Timer Service more useful in the real world. Whether or not the changes outlined in this article are adopted is a matter for the EJB expert group, which should be responsive to the EJB developer community. It's likely that others will find ways to improve these proposed changes. Regardless of the outcome, the current limited semantics of the Timer Service, and the complete lack of support for configurable message-driven bean timers is a problem. As you develop timers, you will quickly discover the need for a much richer way of describing expirations, and the desire for some way to configure timers at deployment time, rather than having to use a client application to initiate a scheduled event. Next month's article will cover changes to EJB QL, which include, among others, the new ORDER BY clause. Notes - Coordinated Universal Time (UTC) is the international standard reference time. Servers can be coordinated with UTC using the Network Time Protocol (NTP) and public time servers. Coordinated Universal Time is abbreviated UTC as a compromise among standardizing nations. A full explanation is provided by the National Institute of Standards and Technology's FAQ on UTC.
http://www.theserverside.com/news/1365551/EJB-21-The-Timer-Service
CC-MAIN-2016-26
en
refinedweb
Introducing SPARQL: Querying the Semantic WebIntroducing SPARQL: Querying the Semantic Web This tutorial,. Happily the SPARQL specifications don't exist in isolation. There are several tools and APIs that already provide SPARQL functionality, and most of them are up to date with the latest specifications. A brief list includes: My SPARQL query tool Twinkle offers a simple GUI interface to the ARQ library, and supports multiple output formats and simple facilities for loading, editing, and saving queries. Handy if you want to play with SPARQL on the desktop. But for a minimum of installation fuss you can't beat an online SPARQL query tool, which we'll use throughout the rest of the tutorials. As it happens, the service is also a self-contained example of the SPARQL protocol in action. Tutorial writers can burn a lot of time crafting a good set of examples. A balance needs to be struck between making the data clear versus making it too trivial. What you really want is for the examples to reflect the power of the technology being introduced. For this series, I'm going to dispense with the art of data design and instead pick up some data already published wild on the Web. That is, we're doing real RDF processing of real-world data. Not only will this help illustrate SPARQL's utility, we may even learn a few interesting facts along the way. Bob DuCharme has done an excellent job of curating public collections of RDF on his site rdfdata.org. I've picked out this RDF representation of the periodic table for our purposes. It's data that most people will have at least a passing familiarity with, so won't take a great deal of review in order for you to get started. Here's a handy periodic table to use as a reference if your chemistry is a little rusty. The RDF data provides some essential facts about each element including its name, symbol, atomic weight and number, plus a good deal more. We'll focus on these simple properties for now. A slightly edited extract of the data, showing a description of sodium, is included here: <Element rdf: <name>sodium</name> <symbol>Na</symbol> <atomicNumber>11</atomicNumber> <atomicWeight>22.989770</atomicWeight> <group rdf: <period rdf: <block rdf: <standardState rdf: <color>silvery white</color> <classification rdf: <casRegistryID>7440-23-5</casRegistryID> </Element> Note that the namespace for this data is -- that'll be important when we start formulating our SPARQL queries. The RDF includes a mixture of properties; some are simple literals such as name and atomicWeight, while others such as group and standardState have resources as values. RDF is built on the triple, a 3-tuple consisting of subject, predicate, and object. Likewise SPARQL is built on the triple pattern, which also consists of a subject, predicate and object. In fact an RDF triple is also a SPARQL triple pattern. A triple from our data expressed using the SPARQL triple pattern syntax looks like this: <> table:name "sodium". A triple pattern is written as subject, predicate, and object and is terminated with a full stop. URIs, e.g. for identifying resources, are written inside angle brackets. Literal strings are denoted with either double or single quotes. While properties, like name, can be identified by their URI, it's more usual to use a qname-style syntax to improve readability. Later in the tutorial I'll show you how to associate a prefix with a URI using a mechanism very similar to XML namespaces. SPARQL specifies a number of handy abbreviations for writing complex triple patterns. Both the basic syntax and abbreviations borrow heavily from Turtle, a very terse RDF serialization alternative to RDF/XML. As a text rather than XML format, Turtle can be used to express RDF very succinctly. Rather than exhaustively list all of the SPARQL syntax shortcuts here, we'll introduce them throughout the examples contained in this and later tutorials. The triple pattern above is fine for demonstrating syntax but isn't very useful as a query. If we know all the data, there's no need to run a query. However, unlike a triple, a triple pattern can include variables. Any or all of the subject, predicate, and object values in a triple pattern may be replaced by a variable. Variables are used to indicate data items of interest that will be returned by a query. The next example shows a pattern that uses variables in place of both the subject and the object: ?element table:name ?name. Since a variable (which has in SPARQL an alternative spelling using the $ character, like $element) matches any value, this pattern will match any RDF resource that has a name property. Each triple that matches the pattern will bind an actual value from the RDF dataset to each of the variables. For example, there is a binding of this pattern to our dataset where the element variable is bound to < and the name variable is "chlorine." In SPARQL all possible bindings are considered, so if a resource has multiple instances of a given property, then multiple bindings will be found. Which is a good thing to remember if you end up with more data than expected in your query results. At this point you may be wondering if it's legal for a triple pattern to include only variables. Well, it is: ?subject ?predicate ?object. This pattern matches all triples in an RDF graph. Triple patterns can also be combined to describe more complex patterns, known as graph patterns. These will be clearer when seen within the context of some sample queries. So let's look at the basic structure of our first SPARQL query. This SPARQL query selects the names of all the elements in the periodic table: PREFIX table: <> SELECT ?name FROM <> WHERE { ?element table:name ?name. } Let's break down the query into its parts to better understand the syntax. Starting from the top we encounter the PREFIX keyword. PREFIX is essentially the SPARQL equivalent of declaring an XML namespace: it associates a short label with a specific URI. And, just like a namespace declaration, the label applied carries no particular meaning. It's just a label. A query can include any number of PREFIX statements. The label assigned to a URI can be used anywhere in a query in place of the URI itself; for example, within a triple pattern. In the single triple pattern included in this query we can see the table prefix in use as a shorthand for, the full URI of the name property. The start of the query proper is the SELECT keyword. Like its twin in a SQL query, the SELECT clause is used to define the data items that will be returned by a query. In Example 6 we're returning a single item, the name of the element. As you might expect, the FROM keyword identifies the data against which the query will be run. In this instance, the query references the URI of the periodic table in RDF. A query may actually include multiple FROM keywords, as a means to assemble larger RDF graphs for querying. We'll have more to say about that (and SPARQL datasets in general) in the next tutorial. For now, think of all the lovely mashups . . . Finally, we have the WHERE clause. A graph pattern is a collection of triple patterns that identify the shape of the graph that we want to match against. In this instance you'll recognize the pattern for this query as the triple pattern we used earlier. The WHERE keyword is actually optional and can legally be omitted to make queries slightly terser: BASE <> PREFIX table: <PeriodicTable#> SELECT ?name FROM <PeriodicTable.owl> { ?element table:name ?name. } URIs are often long and unwieldly, and you can never have too much syntactic sugar to help avoid typing them out repeatedly. BASE is another form of URI abbreviation, defining the base URI against which all relative URIs in the query will be resolved, including those defined with PREFIX. As you can see, the common prefixes of the two URIs in the previous example have been factored out into a BASE URI declaration. Now that we've written a complete query, let's run it and get some results. Here's a table that lists the first few results (you can view the complete results using the online query tool): The result of a SPARQL SELECT query is a sequence of results that, conceptually, form a table or result set. Each row in the table corresponds to one query solution. And each column corresponds to a variable declared in the SELECT clause. If you've done any kind of database development, this kind of table-oriented result set should be immediately familiar. In later sections we'll look at how that sequence can be modified, e.g. to apply a sort order, limit the number of returned results, etc. We'll also take a quick look at the XML results format. But for now, let's make the query to do something more interesting. Taking what we've learned about the simplest kind of triple patterns and the structure of a SPARQL query, we can now explore how to do more complex and useful queries. The next example shows a query that selects the name, symbol, and atomic number of all elements in the periodic table: PREFIX table: <> SELECT ?name ?symbol ?number FROM <> WHERE { ?element table:name ?name. ?element table:symbol ?symbol. ?element table:atomicNumber ?number. } What's new here is that the query pattern consists of multiple triple patterns. A collection of triple patterns is a graph pattern. In this instance the graph pattern consists of three triple patterns, one to match each of the desired properties: name, symbol, and atomicNumber. Understanding how this query operates involves a bit more background on the pattern matching process. The most important point is that within a graph pattern a variable must have the same value no matter where it is used. So in the previous example the variable element will always be bound to the same resource. In other words, this query will match any resource that has all three of the desired properties. A resource that does not contain all of these properties will not be included in the results because it won't satisfy all of the triple patterns. We'll cover optional matching in a later section. The other notable item here is that there is one triple pattern for each of the variables required to be present in the result set. In SPARQL one cannot SELECT a variable if it is not listed in the graph pattern. This may seem slightly odd if you're only used to SQL; in that language it is quite common to return variables that are not listed in a WHERE clause. But remember a SPARQL query processor has no data dictionary that lists all columns (i.e. properties) of a resource. Variables must be bound to an RDF term via a triple pattern in order for the processor to be able to extract that term from the graph. SPARQL includes a number of syntax shortcuts that simplify the writing of patterns. Let's rewrite our query more succinctly: PREFIX table: <> SELECT * FROM <> WHERE { ?element table:name ?name; table:symbol ?symbol; table:atomicNumber ?number. } We've used two shortcuts here. The first should be familar to SQL users: *. This shortcut means "return all variables listed in the graph pattern." It saves having to itemize every variable at the cost of relying on the processor to order the columns in the result set. The second shortcut is, formally, the use of a predicate-object list. This shortcut allows a query author to list the subject of a series of triple patterns only once. When we're using this form, each triple pattern is terminated with a semicolon rather than a full stop. This shortcut can be used when several patterns share the same subject. SPARQL offers a similar shortcut, an object list, which simplifies patterns that differ only in their subject. OPTIONALPatterns RDF graphs are often semi-structured; some data may be unavailable or unknown. How do we allow for this when querying for data? Let's work through an example to illustrate the problem. Imagine that we wanted to adapt the previous query to also return the color of the element. Our first attempt may look like this: PREFIX table: <> SELECT ?name ?symbol ?number ?color FROM <> WHERE { ?element table:name ?name. ?element table:symbol ?symbol. ?element table:atomicNumber ?number. ?element table:color ?color. } We've extended our SELECT statement to include the new variable, color, and have also added a match for the relevant property ( table:color) to the graph pattern. So far, so good. If you run this query though, you'll notice that some elements are missing. Ununtrium, for example. (No, I'd never heard of it either). If we look closely at the RDF data, we find that this ununtrium, and several other of the heavier elements, do not have the relevant table:color property. So these elements are not returned in the results. We need to alter the query to allow for the fact that we have some missing or incomplete data. We achieve this by indicating that the relevant triple pattern is optional: PREFIX table: <> SELECT ?name ?symbol ?number ?color FROM <> WHERE { ?element table:name ?name. ?element table:symbol ?symbol. ?element table:atomicNumber ?number. OPTIONAL { ?element table:color ?color. } } If you run this version of the query you'll find that all of the elements are now correctly included. The OPTIONAL keyword must be followed by a sub-pattern containing the optional aspects of the query. Within the result set, if an element doesn't have a color property, then the color variable is said to be unbound for that particular solution (row). UNION Now that we've seen how to explore optional data, let's see how we can select from alternatives. If we were interested in the chemistry of the halogens and the noble gases, we might simply construct and run separate queries in order to find out their atomic weights and CAS registry numbers. But using the SPARQL UNION keyword we can write a single query that matches all of the elements. That query looks like this: PREFIX table: <> SELECT ?symbol ?number FROM <> WHERE { { ?element table:symbol ?symbol; table:atomicNumber ?number; table:group table:group_17. } UNION { ?element table:symbol ?symbol; table:atomicNumber ?number; table:group table:group_18. } } There are a few things to notice. First, the query pattern consists of two nested patterns joined by the UNION keyword. If an element resource matches either of these patterns, then it will be included in the query solution. For clarity the patterns use the predicate-object list shortcut. The query also includes another demonstration of URI shortening, this time within the object of a triple pattern. The value (range) of the table:group property is a resource. Each of the groups in the table is modeled as a resource with its own URI. The full URI for group 17 is. As we've already declared a URI PREFIX for we can truncate this to table:group_17. Any number of UNIONs can be included in a query, providing a great deal of flexibility in assembling data from alternatives. With all of the examples we've seen so far, we've been content to let the results be returned in whatever order the query engine chooses. This is rarely desirable in practice, as we commonly need to impose some sensible and relevant ordering to the data. SPARQL offers the ORDER BY clause to let us do precisely that. The next example demonstrates the new syntax: PREFIX table: <> SELECT ?name ?number FROM <> WHERE { ?element table:name ?name; table:atomicNumber ?number; table:group table:group_18. } ORDER BY ?number This example selects the name and atomicNumber of all of the elements in group 18 of the periodic table, the noble gases. The ORDER BY clause indicates that the elements should be ordered by their atomic number property, in ascending order. Formally, ORDER BY is a solution sequence modifier -- it manipulates the result set prior to it being returned by the query processor. As such, it is not part of the graph pattern and so is listed after the WHERE clause in the query syntax. An ORDER BY clause can list one or more variable names, indicating the variables that should be used to order the result set. The query processor will sort by each variable in turn, in order of their declaration. By default all sorting is done in ascending order, but this can be explicitly changed using the DESC (descending) and ASC (ascending) functions. The next example sorts all of the elements in the periodic table in descending order of atomic weight: PREFIX table: <> SELECT ?name FROM <> WHERE { ?element table:name ?name; table:atomicWeight ?weight. } ORDER BY DESC(?weight) SPARQL also allows us to limit the total number of results in a result set using the LIMIT keyword, which indicates the maximum number of rows that should be returned. A value of zero will return no results; if the value is greater than the size of the result set, then all rows will be returned. Used in combination with ORDER BY we can modify our query to create a new query that returns the ten heaviest elements in the periodic table: PREFIX table: <> SELECT ?name FROM <> WHERE { ?element table:name ?name; table:atomicWeight ?weight. } ORDER BY DESC(?weight) LIMIT 10 When building user interfaces to navigate through a database or set of results, it's common to break the results into pages, e.g. displaying 10 search results at a time. SPARQL supports such paging by allowing a query to specify an OFFSET into the result set. This indicates that the processor should skip a fixed number of rows before constructing the result set. This usage is naturally combined with ORDER BY in order to ensure a consistent and meaningful order. By way of example, let's assume that we've already listed the ten heaviest elements in the periodic table and now want to fetch the next ten heaviest. In this query we use OFFSET to skip the data we've already seen: PREFIX table: <> SELECT ?name FROM <> WHERE { ?element table:name ?name; table:atomicWeight ?weight. } ORDER BY DESC(?weight) LIMIT 10 OFFSET 10 For readability the examples we've viewed so far have been rendered as HTML tables. Most SPARQL processors will include a custom API to allow the direct manipulation of a result set, allowing a programmer to manipulate results in whatever way suits an application. But if we want to serialize a SPARQL result set in a standard way, perhaps to return data via a web service, we can use the SPARQL Query Results XML Format. By way of an example, here's an extract of the results from the first example above. To view the complete set of results, refer to the online service: <sparql xmlns=""> <head> <variable name="name"/> </head> <results ordered="false" distinct="false"> <result> <binding name="name"><literal datatype="">sodium</literal></binding> </result> <result> <binding name="name"><literal datatype="">neon</literal></binding> </result> <result> <binding name="name"><literal datatype="">iron</literal></binding> </result> <!-- more results --> </results> </sparql> As you can see, the format is fairly simple and regular: sparql, which contains a headand a resultselement that together describe the result set headsection declares all variables that will be returned in the result set. It's equivalent to the column headings in an HTML table resultssection lists each query result, i.e. one resultelement for each row in the result set resultelement contains one bindingfor each variable. A binding is one of literalor uri. These elements contain the actual values returned. If a variable is not bound in a query (see the above section on OPTIONALPatterns), then it is marked as unbound. Given its obvious simplicity and regular structure, manipulating this format with XSLT or XQuery is fairly trivial. The SPARQL Query Results XML Format specification includes several relevant examples. This brings us to the end of our first look at SPARQL. We've seen how SPARQL allows us to match patterns in an RDF graph using triple patterns, which are like triples except they may contain variables in place of concrete values. The variables are used as "wildcards" to match RDF terms in the dataset. We introduced the SELECT query which can be used to extract data from an RDF graph, returning it as a tabular result set. We built up more complex graph patterns from simple triple patterns and illustrated how to deal with both required and OPTIONAL data. UNION queries were also introduced as a way of dealing with selecting alternatives from our dataset. Finally, we demonstrated how to apply ordering to our results, LIMIT the amount of data returned, and jump forward through results using OFFSET. Along the way we took a brief look at the SPARQL XML Query Results Format, and a number of the syntax shortcuts that make writing queries much simpler. These are especially useful with repetitive graph patterns and long URIs. Armed with this information, and the growing range of SPARQL implementations, you can start to investigate the language yourself and put it to good use. As you begin working with the language you'll no doubt find Dave Beckett's query language reference a handy resource. In our next tutorial in this series we'll look more closely at how SPARQL deals with data typing, applying constraints to our data, and the facilities for querying data from multiple sources. Finally, I'd like to thank Katie Portwin and Priya Parvatikar for early feedback on this article. XML.com Copyright © 1998-2006 O'Reilly Media, Inc.
http://www.xml.com/lpt/a/1628
CC-MAIN-2016-26
en
refinedweb
Each module has its own configuration section named after the module name. The configuration section will be " [::module_filename_prefix ]" where module_filename_prefix is the filename of the module with the .p4m extension removed. For example, the init_exp.p4m module has a configuration section of [::init_exp]. Each module is loaded into its own namespace, also named after the module in the same manner. Thus in the above example the Initialise Experiment Files module uses the namespace ::init_exp. This means that all local variables within that module will be within that name space and will not clash with identical named variables in other modules. When pregap4 reads the configuration file any configuration section starting in double colon is taken to be a name space and the following configuration is executed in that namespace. So the following example enables the Initialise Experiment Files module, but disables the Estimate Base Accuracies module. [::init_exp] set enabled 1 [::eba] set enabled 0 In the following sections the variables, inputs and outputs of each module are listed. Every module has an enabled local variable. This may be either 0 for disabled or 1 for enabled. Disabled modules are still listed in the configuration panel, although they will not be executed. The tables in each section below list the module filename, the local variables and a very brief description of their valid values, the files used or produced by this module, the possible sequence specific errors that can be produced (which will be written to the failure file as the reason for failure), and the format of any SEQ lines in the module report. Other information may also be reported, but the SEQ lines are easily recognisable to facilitate easy parsing of results.
http://staden.sourceforge.net/manual/pregap4_unix_61.html
CC-MAIN-2016-26
en
refinedweb
IRC log of xmlsec on 2007-09-25 Timestamps are in UTC. 15:57:53 [RRSAgent] RRSAgent has joined #xmlsec 15:57:53 [RRSAgent] logging to 15:57:59 [tlr] Meeting: XML Security Workshop 15:58:07 [ht] Scribe: Henry S. Thompson 15:58:11 [ht] ScribeNick: ht 15:58:20 [tlr] bridge is setup 15:58:36 [Zakim] -Ed_Simon 15:59:08 [Zakim] +Ed_Simon 15:59:17 [ht] Agenda: 15:59:59 [smullan] smullan has joined #xmlsec 16:00:10 [esimon2] esimon2 has joined #xmlsec 16:00:36 [esimon2] Hi, I've phoned in and can hear voices; not sure if you can hear me. 16:00:43 [tlr] we haven't heard you so far 16:00:51 [esimon2] I'm talking now. 16:00:56 [tlr] we don't hear you 16:01:22 [esimon2] Don't worry too much about not hearing me, I'll type in anything I need to say. 16:01:28 [esimon2] I can hear you. 16:02:20 [esimon2] I can hear you. 16:02:23 [ht] We can't hear you :-( 16:02:37 [esimon2] Don't worry too much about not hearing me, I'll type in anything I need to say. 16:02:54 [PHB] Wait a mo, Chris will turn on the voice from God feature 16:02:56 [PHB] Speak now 16:03:05 [ht] you can stop now 16:03:07 [esimon2] I'm countin 16:04:12 [esimon2] Was there a web link to see slides; I think Phill had mentioned something about a Sharepoint server. Again, if it is not set up, that is fine. I'm OK with just listening in. 16:04:54 [esimon2] Yes, I can hear Frederick. 16:05:12 [Zakim] -Ed_Simon 16:06:42 [hiroki] hiroki has joined #xmlsec 16:06:47 [Zakim] +Ed_Simon 16:07:40 [tlr] what? 16:08:06 [ht] FH: The purpose is not to do the work here, but to decide if and how to take work forward -- how much interest is there in participating in a follow-on to the XMLSec Maint WG -- what would the charter look like, what issues we would addresss 16:08:20 [ht] ... We will use IRC to log and to provide background info 16:08:36 [sdw] sdw has joined #xmlsec 16:08:48 [ht] Chair: Frederick Hirsch 16:09:17 [ht] s/The purpose/The purpose of this workshop/ 16:10:43 [MikeMc] MikeMc has joined #xmlsec 16:10:48 [ht] ht has changed the topic to: Agenda: 16:11:26 [ht] FH: Please consider joining the XMLSecMaint WG 16:11:39 [ht] ... Weekly call, interop wkshp on Thursday 16:11:54 [ht] ... Thanks to the members of the WG who reviewed papers for this workshop 16:12:48 [ht] ThomasRoessler: Existing WG has a limited charter, maintenance work only 16:12:58 [esimon2] I can hear very well, thanks. 16:13:11 [ht] ... ALso chartered to propose a charter for a followon WG 16:13:37 [ht] ... We won't draft a charter at this workshop, but we hope to produce a report which indicates support and directions 16:13:44 [gpilz] gpilz has joined #xmlsec 16:13:49 [ht] ... That in turn will turn into a charter, if the outcome is positive 16:14:12 [ht] ... Which then goes to the Advisory Committee for a decision 16:14:23 [ht] ... The timescale is next year and beyond 16:15:17 [ht] s/what?/Topic: Introduction/ 16:15:42 [ht] TR: [walks throught the agenda] 16:17:04 [ht] Attendance: approx. 25 people in the room 16:19:47 [ht] TR: Slides which are on the web, drop URI here; otherwise send in email to ht@w3.org and tlr@w3.org 16:22:25 [FrederickHirsch] FrederickHirsch has joined #xmlsec 16:25:05 [FrederickHirsch] zakim, who is here? 16:25:06 [Zakim] On the phone I see [Workshop], Ed_Simon 16:25:06 [Zakim] On IRC I see FrederickHirsch, gpilz, MikeMc, sdw, hiroki, esimon2, RRSAgent, ht, rdmiller, tlr, trackbot-ng, Zakim 16:25:25 [esimon2] I spoke, did you hear me? 16:25:38 [tlr] ed, you were 100% clear 16:27:02 [cgi-irc] cgi-irc has joined #xmlsec 16:27:12 [FrederickHirsch] 16:27:45 [ht_] ht_ has joined #xmlsec 16:27:52 [ht_] [Talks will not be scribed, scribing will resume for discussion] 16:28:14 [tlr] 16:29:17 [cgi-irc] zakim, cgi-irc is brich 16:29:17 [Zakim] sorry, cgi-irc, I do not recognize a party named 'cgi-irc' 16:29:29 [tlr] bruce, don't worry. ;) 16:30:00 [klanz2] klanz2 has joined #xmlsec 16:32:33 [klanz2] test ... 16:33:35 [FrederickHirsch] can use xslt and xpath2.0 to create signature malware 16:38:43 [esimon2] Audio has ended for me. 16:38:58 [MikeMc] we can tellsomething changed - phil is looking into it 16:39:08 [esimon2] audio is back! 16:39:16 [MikeMc] switched mics 16:39:47 [FrederickHirsch] ed do you have audio now? 16:41:42 [ht_] [Following is approximate, apologies for mistakes] 16:41:45 [ht_] Attendance: Frederick Hirsch, Konrad Lanz, Juan Carlos Cruelas, Bruce Rich, Mike McIntosh, Hugo Krawczyk, Gilbert Pilz, Hiroki Ito, Brad Hill, Rob Miller, Jeannine Schmidt, Henry Thompson, Sean Mullan, Gary Chung, Michael Leventhal, Steven Williams, Pratik Data, ? Chen, Phillip Hallam-Baker, Thomas Roessler, Ed Simon (via phone) 16:45:52 [ht_] TR: Questions of clarification 16:46:12 [ht_] FH: Restricting to only a few transformations? 16:46:32 [ht_] BH: Yes, restrict to just small well-known set 16:47:10 [ht_] SC: Mostly to implementations, rather than specs 16:47:18 [smullan] smullan has joined #xmlsec 16:47:29 [ht_] ... Need to reduce the attack surface of implementations 16:47:39 [ht_] ... so we need an implementors guide, right? 16:48:12 [ht_] BH: Right 16:48:39 [ht_] KL: XSLT should default to _dis_able features, not _en_able 16:49:26 [ht_] s/Sean Mullan/Scott Cantor, Sean Mullan/ 16:50:01 [ht_] s/Pratik Data/Pratik Datta/ 16:50:46 [tlr] 16:55:04 [tlr] zakim, code? 16:55:04 [Zakim] the conference code is 965732 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), tlr 16:59:02 [esimon2] Ed's comment: If the structure of a document is important to the meaning of the document (as shown in the examples), then signing by ID (which is movable) is insufficient. 16:59:22 [ht_] BH: How would you compare doing a hashed retrieval compared to ??? 17:00:02 [esimon2] Presentation highlights the need to rethink the XPointer functionality. 17:00:05 [ht_] [scribe didn't get the question] 17:00:25 [ht_] MM: Apps certainly need to interact better with signature processing 17:01:07 [ht_] ... Need for overlapping signatures implies a need for a signature object model, so you can iterate over all the signatures and treat them independently 17:02:18 [esimon2] Ed says: I don't know that apps need to interact with signature processing better; rather, apps need to ensure the signatures they use sign all the critical information -- content as well as structure. 17:02:32 [ht_] TR: Open up discussion of security vulnerabilities, other than crypto 17:03:17 [ht_] MM: It's a pain that I have to encrypt the signature block 17:04:42 [tlr] MM: DigestValue should be optional 17:04:55 [ht_] ScribeNick: tlr 17:05:07 [klanz2] q+ 17:05:52 [tlr] ack klanz 17:06:34 [tlr] MM: presence of DigestValue means that plaintext guessing attack is possible if plaintext encrypted 17:06:51 [tlr] ... therefore, would have to encrypt the signature as well ... 17:06:55 [tlr] FH: why is tha tpainful? 17:06:58 [tlr] MM: tried xml ecn? 17:07:01 [tlr] s/ecn/enc/ 17:07:37 [FrederickHirsch] Konrad: having digest necessary for manifest procesing 17:07:51 [FrederickHirsch] Scott: should be optional to have digests 17:08:18 [FrederickHirsch] Konrad: also verification on constant parts that are archived separately etc 17:08:29 [tlr] klanz: Know of manifest use in electronic billing context 17:10:08 [ht] s/Datta/Datta, Corinna Witt, Jeff Hodges, Jimmy Zhang/ 17:11:59 [klanz2] 17:12:17 [klanz2] Signing XML Documents and the Concept of “What You See Is What You Sign” 17:12:47 [tlr] scott: need profiles *and* implementation guidelines 17:13:25 [klanz2] q+ 17:14:45 [FrederickHirsch] frederick: asks about clarifying what implementation guide is versus profiling 17:15:12 [FrederickHirsch] Scott: need to have hooks in code that enable best practices to be followed ,implementation guide 17:15:55 [FrederickHirsch] ... for example, saying signature is valid isn't enough if you are not sure what has been signed, hooks may be needed for this 17:16:59 [klanz2] ack klanz2 17:17:05 [klanz2] q- 17:18:49 [esimon2] Ed: I am OK with just listening in and typing comments on IRC. No need to complicate things for others on my account. 17:19:46 [klanz2] q+ 17:19:52 [FrederickHirsch] Symon: policy can be used to limit what is done with xml security, anohter approach to avoid problems 17:20:29 [FrederickHirsch] ack klanz 17:20:59 [FrederickHirsch] discussion as to whether xmlsig spec is broken 17:21:14 [esimon2] The XSLT 2.0 specification mentions a number of security considerations dealing with issues raised earlier. 17:21:31 [esimon2] Agree with Konrad. 17:22:24 [tlr] Hal Lockhart presenting; slides later 17:23:48 [esimon2] yes 17:24:24 [tlr] q? 17:25:01 [esimon2] Thanks very much. 17:27:12 [jcc] jcc has joined #xmlsec 17:37:55 [FrederickHirsch] hal: notes various issues have been document in ws-security, ws-i basic security profile and other places 17:38:08 [FrederickHirsch] frederick: also liberty alliance work 17:41:53 [klanz2] q+ 17:42:09 [FrederickHirsch] ack klanz2 17:42:36 [FrederickHirsch] klanz2: false negatives will be perceived very badly 17:43:21 [FrederickHirsch] ... need to focus on what you see is what you sign, then false negatives main issue 17:43:32 [FrederickHirsch] hal: agrees 17:44:34 [FrederickHirsch] hal: challenge is interface between applicatoin and security processing to get proper security for applciation 17:45:23 [FrederickHirsch] henry thomson: liaison issues - schema, processing model wg, 17:45:48 [FrederickHirsch] ... say to validate you must decrypt, perhaps 17:45:57 [klanz2] q+ 17:47:08 [esimon2] I agree, I think, with Henry re his comments about XPointer to help resolve the ID issue. 17:47:15 [FrederickHirsch] ... re id issue , maybe new xpointer version? 17:47:50 [ht] s/xpointer version/xpointer scheme/ 17:48:12 [FrederickHirsch] scott: +1 to klanz, concern about false positivies, issues for adoption 17:48:26 [FrederickHirsch] ack klanz2 17:48:30 [FrederickHirsch] ack klanz 17:48:40 [FrederickHirsch] ack klanz 17:49:28 [FrederickHirsch] scott: most xml processing is not schema aware, xsi:type is not visible to processing 17:50:24 [FrederickHirsch] ht: would issue be solved if sig re-worked to be signing of Infosets 17:52:03 [FrederickHirsch] klanz: tradeoff performance & infoset signing 17:53:47 [PHB] PHB has joined #xmlsec 17:54:19 [PHB] q+ 18:02:22 [FrederickHirsch] ack PHB 18:02:59 [FrederickHirsch] PHB: could use some examples of difference between infoset and current signing approach. What is really different. 18:05:43 [Symon] Symon has joined #xmlsec 18:06:08 [tlr] q- 18:30:58 [tlr] ScribeNick: tlr 18:31:03 [tlr] Topic: cryptographic aspects 18:31:27 [tlr] 18:31:35 [tlr] Hugo Krawczyk presenting 18:31:46 [MikeMc] the slides for this session are at 18:32:47 [tlr] hugo: post-wang trauma, how do we deal with it... 18:33:01 [MikeMc] actually - the papers are there - not the slides - sorry 18:33:22 [tlr] slides are at the link I gave above 18:33:42 [sean] sean has joined #xmlsec 18:35:55 [sean] sean has joined #xmlsec 18:36:06 [FH] FH has joined #xmlsec 18:36:36 [tlr] zakim, who is on the phone? 18:36:36 [Zakim] On the phone I see [Workshop], Ed_Simon 18:36:58 [FH] zakim, who is here? 18:36:58 [Zakim] On the phone I see [Workshop], Ed_Simon 18:36:59 [Zakim] On IRC I see FH, sean, Symon, PHB, jcc, smullan, ht, cgi-irc, gpilz, MikeMc, sdw, hiroki, esimon2, RRSAgent, rdmiller, tlr, trackbot-ng, Zakim 18:48:13 [FrederickH] FrederickH has joined #xmlsec 18:48:21 [klanz2] klanz2 has joined #xmlsec 18:49:14 [tlr] hal: any attacks for which we need to check whether random strings are diferent? 18:49:25 [tlr] hugo: critical for the signer to check that these strings are different 18:49:35 [tlr] hal: if random value same for every signature, then can do offline attacks 18:49:51 [tlr] mike: every time you create new signature, you create new value 18:49:58 [tlr] hal: how important is it to the verifier that this is the case? 18:50:05 [tlr] ... suppose there's no real signer, just a blackhat sending messages ... 18:50:15 [tlr] ... do you have to keep track of fact that he sends same random number? ... 18:50:53 [tlr] hugo: if you don't find 2nd preimage on one-way function, then attacker can't 18:51:03 [tlr] hal: thinking about guessing attack or so 18:51:17 [tlr] ... there are attacks against CBC if IV isn't always different ... 18:51:30 [tlr] hugo: uniqueness of randomness per signature is not requirement 18:51:41 [tlr] ... requirement is that the attacker must not know randomness that legitimate signer is going to use ... 18:52:09 [tlr] ... question is a valid concern, though ... 18:52:14 [tlr] ... in this case, there's no more to it .. 18:52:36 [tlr] phb: fuzzy about what security advantage is ... 18:52:55 [tlr] ... we're nervous about hash functions for which malicious signer can create signature collisions ... 18:53:01 [tlr] ... that's attack we're concerned with ... 18:53:26 [tlr] ... randomness proposal makes this the same difficulty as the legitimate signer signing document, and attacker tries to do duplicate ... 18:53:44 [tlr] ... how does this make anything more secure against malicious signers? ... 18:54:04 [tlr] hugo: technique does not prevent legitimate signer from finding two messages that have same hash value ... 18:54:17 [tlr] ... legitimate (not honest) signer could in principle find two messages that map to same hash value ... 18:54:26 [tlr] ... can't be case if hash function is collision resistant .. 18:54:33 [tlr] ... if it isn't, problem could in principle occur ... 18:54:48 [tlr] ... if you receive message with signature, then signer is committed to that signature ... 18:55:01 [tlr] ... (example) ... 18:56:12 [tlr] ... point is: every message that has legitimate signature commits signer ... 18:56:41 [tlr] ... note that hash function might be collision-resistant, but signature algo might not be ... 18:57:15 [tlr] hal: attack is to get somebody to sign a document, and have that signature make something else 18:57:19 [tlr] phb: ok, now i get it 18:57:28 [tlr] ... more relevant to XML than certificate world ... 18:58:02 [tlr] "not *any* randomness" backup slide 18:58:40 [tlr] phb: what I can see as attractive here is -- once SHA3 discussions -- .... 18:59:01 [tlr] ... instead of having standard compressor, have compressor, MAC, randomized digest all at once ... 18:59:04 [tlr] ... with parameters ... 18:59:49 [tlr] frederick: time! 19:00:13 [tlr] hugo: re nist doc, it applies to any hash function 19:00:32 [tlr] ... exactly like CBC and block cyphers ... 19:01:12 [FrederickH] q? 19:01:48 [tlr] mcIntosh on implementing it 19:02:04 [tlr] ... implemented preprocessing as Transform 19:02:29 [tlr] (occurs after c14n on slide) 19:07:50 [tlr] 19:08:54 [tlr] hugo: rsa-pss doesn't solve same problem as previous randomization scheme ... 19:08:57 [tlr] ... orthogonal problem ... 19:09:01 [tlr] konrad: ack 19:14:16 [FrederickH] second hash function in diagram for RSA-PSS 19:21:35 [FrederickH] tlr: asks about unique urls for two different randomizations, yet could they be combined? 19:22:03 [FrederickH] e.g. RSA-PSS vs Randomized hashing as described by Hugo 19:22:07 [tlr] tlr: these are two different randomization schemes, they're orthogonal to each other, yet both affect the same URI space to be addressed 19:22:15 [tlr] ... so the proposed integrations can't be integrated ... 19:22:25 [tlr] konrad: ?? 19:22:30 [tlr] hugo: streaming issue (?) 19:22:56 [tlr] s/??/maybe can share randomness between two approaches/ 19:23:20 [tlr] s/streaming issue (?)/want randomness in different places from ops perspective; streaming issue/ 19:27:20 [FrederickH] sean: why did tls not adopt RSA-PSS 19:27:58 [FrederickH] hugo: inertia, people also are staying with SHA-! versus SHA-256 19:29:27 [FrederickH] phill: tls different in terms requirements it is meeting. Documents different than handshake reuqiremnets 19:30:10 [tlr] konrad: moving defaults... 19:30:12 [tlr] ... time for that 19:31:15 [tlr] zakim, who is on the phone? 19:31:16 [Zakim] On the phone I see [Workshop], Ed_Simon 19:31:40 [tlr] 19:31:47 [tlr] jeanine schmidt presenting. 19:32:41 [tlr] jeanine: Crypto Suite B algorithms ... 19:32:45 [tlr] ... regrets from Sandi ... 19:34:57 [FrederickH] use of 1024 through 2010 by NIST, indicates potential key size growth issue 19:35:46 [FrederickH] ecc offers benefits for key size and processing 19:36:33 [tlr] looking for convergence of standards in suite B 19:37:02 [FrederickH] NSA would like to see Suite B incorporated in XML Security 19:37:28 [FrederickH] DoD requirements aligned with this 19:38:27 [tlr] details could be worked out in collaboration 19:39:10 [tlr] hugo: specifically saying key agreement is ECDH? 19:39:15 [tlr] jeanine: yes, preliminarily 19:39:22 [tlr] hugo: IP issue behind not talking about ECMQV? 19:39:31 [tlr] jeanine: yeah, that's an issue ... 19:39:36 [tlr] ... but ECDH might be more appropriate algorithm for XML ... 19:39:44 [tlr] ... whether one or both is a question for future work ... 19:39:50 [tlr] hugo: Can you make this analysis available? 19:40:08 [tlr] jeanine: this is something that should be worked out betw w3c and nsa 19:40:12 [tlr] ... preliminary recommendation ... 19:40:35 [tlr] tlr: w3c would need to mean "community as a whole" 19:40:50 [tlr] frederick: I hear "nsa could participate in WG"? 19:40:52 [tlr] jeanine: yes 19:41:03 [tlr] phb: ECC included with recent versions of Windows ... 19:41:10 [tlr] ... doesn't believe they've licensed that from Certicom ... 19:41:16 [tlr] ... given MS's caution in areas to do with IP ... 19:41:23 [tlr] ... maybe ask them how they navigate this particular minefield ... 19:41:36 [tlr] ... if there is a least encumbered version ... 19:41:45 [tlr] ... then will follow the unencumbered path ... 19:41:56 [tlr] hal: what is involved here in terms of spec? 19:42:02 [tlr] jeanine: primarily identifiers 19:42:48 [tlr] frederick: some unifying effort for identifiers might be needed 19:43:04 [tlr] konrad: spirit of specs is to reuse identifiers 19:43:18 [tlr] frederick: also recommended vs required 19:43:43 [FrederickH] rfc 4050 has identifiers 19:43:46 [tlr] sean: in RFC 4054, there's already identifiers for ECDSA with SHA-1 19:44:13 [tlr] phb: keyprov would like to track down as many of algo ids as possible 19:44:26 [tlr] ... if you have uncovered any (OIDs, URIs), please send a link 19:44:34 [tlr] hal: start with gutmann's list 19:44:42 [tlr] frederick: please share with xmlsec WG 19:44:50 [sean] has URIs for ECDSA-SHA1 19:45:01 [tlr] frederick: what is next step for NSA at this point -- see what happens here? 19:45:05 [tlr] jeanine: yes 19:45:38 [tlr] lunch break; reconvene at 1:30 19:46:12 [esimon2] yes 19:46:28 [tlr] reconvene: 1:45 20:06:39 [Zakim] -Ed_Simon 20:18:17 [smullan] smullan has joined #xmlsec 20:45:12 [Zakim] +Ed_Simon 20:46:42 [esimon2] I am on the call. 20:47:54 [sdw] scribe: sdw 20:48:13 [esimon2] are slides available online? 20:49:14 [sdw] Must Implement 20:49:30 [sdw] A-la-cart - Combinatorial explosion 20:50:08 [sdw] Hidden Constraints: Often not any to any for implementations 20:51:33 [tlr] 20:51:57 [esimon2] Thanks 20:51:58 [sdw] Result: many variations to test, many configurations for analysis, deviation from specification 20:52:28 [sdw] Proposal: Quantum Profiles 20:53:02 [sdw] Unique URI for profile that fully specifies choices at each level. 20:55:46 [sdw] Discrete options combinations, modes are more complicated. 20:59:09 [sdw] Negotiation of specific combinations. 20:59:29 [sdw] URIs that are intentionally opaque, not sub-parsed. 21:02:11 [FrederickH] sdw: a possible analogy is font strings in X11 21:02:57 [FrederickH] konrad: would be useful to have uri's that indicate strength (eg weakest key length) 21:02:59 [sdw] Partial ordering of profiles may make sense, but might not be good. 21:03:58 [smullan] smullan has joined #xmlsec 21:04:15 [sdw] Meeting certain requirements, such as for a country, may be more of a private profile, possibly including country name, for instance. 21:04:42 [sdw] Hugo: Are you making similar proposals in other groups such as IETF? 21:05:43 [sd. 21:06:21 [sdw] Where picking profiles, CFIG and other groups would likely participate to define OIDs, etc. for certain coherent suites. 21:06:53 [sdw] How is that approach applicable to signature? We are already in A-la-carte situation. 21:10:23 [sdw] Presentation: The Importance of incorporating XAdES extensions into ongoing XML-Sig work 21:13:24 [FrederickHirsch] FrederickHirsch has joined #xmlsec 21:13:48 [tlr] 21:13:48 [esimon2] slides? 21:13:57 [esimon2] thanks 21:14:15 [ht] Agenda now has more slide pointers. . . 21:14:47 [FH] FH has joined #xmlsec 21:14:49 [gpilz] gpilz has joined #xmlsec 21:16:06 [sdw] Defines XAdES forms that incorporates specific combinations of properties. 21:16:28 [FH] Agenda: 21:17:11 [sdw] Use of these profiles allows much later use and auditing of signed data. 21:18:07 [FH] jcc slides 21:21:02 [sdw] Supports signer, verifier, and storage service. 21:23:46 [sdw] Signature policy identifier references specific rules that are followed when generating and verifying signature. 21:24:00 [sdw] Includes digest of policy document. 21:26:50 [sdw] SignatureTimeStamp verifies that signature was performed earlier. 21:27:17 [sdw] CompleteCertificateRefs has references to certificates in certpath that must be checked. 21:31:38 [sdw] Has change been made to change countersignatures to include whole message rather than just original signature? 21:31:47 [sdw] Don't believe that has been done yet. 21:34:24 [sdw] Report in ETSI summarizes state of current cryptographic algorithms and makes certain recommendations. 21:35:40 [sdw] Only minor changes to the standards are in process. 21:39:12 [sdw] Can individuals use these signatures with the force of laws? 21:41:47 [sdw] Depends on legal system: Rathole. 21:42:36 [esimon2] Thanks. 21:47:16 [sdw] Presentation: XML Signature Performance and One-Pass Processing issues 21:47:30 [klanz2] klanz2 has joined #xmlsec 21:48:14 [sdw] DOM provided good implementation but has performance issues 21:50:20 [sdw] Event processing requires one or more passes. 21:51:12 [sdw] Two passes, 1+, cache all elements with ID, or use profile-specific knowledge 21:56:04 [sdw] Signature information needed before data vs. signature data etc. needed after data. 21:56:12 [sdw] Can't do with current XML Signature standards. 21:57:38 [sdw] XML DSig Streaming Impl.: STaX, JSR 105 API, exclusive C14N, forward references, enveloping signatures, Bas64 transform 21:58:28 [FH] sean: recommend best practices for streaming implementations 21:58:47 [sdw] Apache project... 21:59:07 [FH] hal: integrity protecting data stream? 21:59:15 [FH] ... example is movie 22:00:09 [FH] ht: w3c xml pipelining language wg 22:01:00 [FH] q+ 22:01:08 [sdw] q+ 22:02:22 [ht] ack FH 22:03:27 [ht] ack sdw 22:03:39 [klanz2] q? 22:04:21 [FH] steve: xml fragments are similar to streaming, but can sign/integrity protect fragments 22:04:34 [FH] s/similar to/can be used in/ 22:06:58 [ht] ScribeNick: ht 22:07:44 [ht] FH: The combination of streaming and signature is odd -- you can't release the beginning of the document until you've verified the signature at the end 22:07:54 [FH] pratik: streaming is for performance, rationale for doing it 22:08:14 [FH] one point I was making is that sometime you do not need integrity protection for streaming, e.g. in cases where it is ok to drop data 22:08:46 [ht] HT: Following on, it's precisely for that reason that not doing signature generation is at least odd, since in that case you surely can ship the beginning of the doc while still working on the signature 22:09:18 [FH] brad: +1 to pratik, value of streaming is performance 22:09:40 [ht] various: Dispute the relevance of signature to streaming XML and/or dispute the value of streaming at all 22:10:30 [hal] hal has joined #xmlsec 22:11:00 [ht] HT: Requirements on XML Pipeline to support streaming of simple XML operations, interesting to understand how to integrate some kind of integrity confirmation _while_ streaming XML 22:11:51 [FH] s/FH: The combination/?? The combination/ 22:12:08 [jcc] jcc has joined #xmlsec 22:12:08 [esimon2] Audio seems to be dead. 22:12:30 [esimon2] yes i can, thanks 22:12:42 [sdw] Streaming is important in memory constrained or bandwidth / processing constrained applications. 22:13:11 [esimon2] Yes, thanks. 22:14:17 [ht] RRSAgent, pointer 22:14:17 [RRSAgent] See 22:15:24 [FH] scott: notes adoption in scripiting languages an issue, using c library not good enough 22:15:39 [FH] jeff: example is use of XMLSig is barrier to saml adoption in OpenID 22:18:16 [FH] peter gutman, "why xml security is broken" 22:18:50 [FH] s/gutman/Gutmann/ 22:18:54 [FH] s/peter/Peter/ 22:19:51 [FH] Scott: Liberty Alliance worked at producing xml signature case that addresses many of the threats discussed. 22:20:02 [FH] s/case/usage/ 22:20:59 [FH] scott: need simpler way of conveying bare public keys 22:21:10 [FH] ... eg pem block 22:21:43 [tlr] 22:21:43 [bhill] bhill has joined #xmlsec 22:22:29 [FH] scott: Retrieval method point to KeyInfo or child, issue with spec 22:26:03 [FH] simplesign - sign whole piece of xml as a blob 22:27:30 [esimon2] Ed: I agree with the above. If the XML is not going to be transformed by intermediate processes, one can just sign the XML as one does text. And use a detached signature. 22:28:17 [bhill] have seen this approach successfully in use with XML in DRM and payment systems as well 22:28:30 [esimon2] What is needed is perhaps a packaging convention like ODF and OOXML use. 22:28:52 [MikeMc] how is this different from PKCS7 detached? is it the embedding of the signature in the signed data? 22:30:19 [esimon2] I would have to review PKCS7 detached but I would say the idea is quite similar. 22:30:59 [FH] konrad: need XML Core to allow nesting of XML, e.g. no prolog etc 22:32:24 [FH] jeff: using for protocols is different use case than docs, sign before sending to receiver 22:33:20 [tlr] q? 22:34:10 [tlr] jimmy: how aboutnamespaces? 22:34:21 [tlr] jeff: well, we don't care. 22:34:52 [tlr] jimmy: has to be processed in context of original XML 22:35:09 [tlr] mike: Why not PKCS#7 detached? 22:35:09 [bhill] re: PKCS#7 - average Web-era developer doesn't like ASN.1 22:35:25 [bhill] XML is successful and text wrangling is simple in any scripting language 22:35:37 [tlr] cantor&hodges: this is for an as simple as possible use case 22:35:59 [tlr] ... point is, people tend to back off from XML Signature in certain use cases ... 22:36:05 [tlr] ... perhaps find a common way for the very simple cases ... 22:36:48 [tlr] mike: well, there's a simple library, and then there's been 90% of the way to an XML Signature gone 22:37:05 [tlr] sdw: want to emphasize that there are a number of different situations where you just simply want to encrypt a blob 22:37:07 [tlr] ... or sign it ... 22:37:18 [tlr] ... and be able to validate later without necessarily having complexity ... 22:37:26 [tlr] ... not only protocol-like situations (WS being a good example) ... 22:37:42 [tlr] ... but also in cases where you have sth that resembles more a traditional signed document ... 22:38:06 [tlr] ... store in a database, that way, archival ... 22:39:30 [tlr] scott: what may be needed to solve my problem is basically a lot more ID attributes than schema (?) 22:39:37 [FH] scott: more id atttributes in xml sig schema might be helpful 22:39:43 [tlr] ... s/than schema (?)/than in the current schema/ 22:39:59 [tlr] ... there is room for improvement here for the ID attributes ... 22:40:08 [tlr] ... with more of these, a lot of referencing is likely to become possible ... 22:40:11 [tlr] konrad: xml:id? 22:40:18 [tlr] scott: might be a rationalization here 22:40:30 [tlr] ... if I want to say "htis key is the same as that key", ... 22:40:40 [tlr] ... looks like you need to reference keyInfo and then find the child with XPath ... 22:40:45 [tlr] ... which seems to be a heck of a lot of work ... 22:41:34 [tlr] konrad: historic context -- at the time, wary of using mechanisms, hence "reference + transform" element 22:41:52 [bhill] Dinner: anyone coming back by SJC airport? 22:43:20 [MikeMc] Dinner @ 22:43:52 [gpilz] gpilz has joined #xmlsec 22:45:36 [tlr] rragent, please make log public 22:45:38 [ht] RRSAgent, make logs world-visible 22:48:23 [tlr] (unminuted discussion about xpath vs id attributes) 22:49:19 [tlr] scott: standard minimal version of xpath? 22:49:25 [tlr] ... preferably not implement the whole pile of work ... 22:49:38 [tlr] ... all of this is begging the question: ... 22:49:45 [tlr] ... ought to be standardized profiles for different problem domains ... 22:49:48 [esimon2] Ed: ID is simple, but flawed for apps. XPath can be complicated but applications, including XML Signature, can profile its use for specific uses. 22:50:19 [bhill] +1 for minimal XPath 22:50:41 [tlr] ... without standardized profiles for specific problem domains, a bit too much ... 22:50:55 [esimon2] What time do we resume? 22:51:15 [sdw] We called our implementation of "Simplified XPath" Spath. 22:51:43 [esimon2] OK 22:51:44 [tlr] sdw, is that publicly visible anywhere? 22:55:48 [brich] brich has joined #xmlsec 23:27:24 [sdw] Not currently. 23:31:04 [esimon2] OK 23:33:49 [esimon2] I am interested. 23:34:48 [FH] topic: profiles summary 23:35:00 [FH] basic robust profile 23:35:05 [klanz2] klanz2 has joined #xmlsec 23:35:05 [FH] bulk signing - blob signing 23:35:11 [FH] use specific? 23:36:14 [FH] metadata driven implementation 23:36:20 [FH] brad - like policy 23:39:18 [bhill] konrad: can this be done with / expressed as a schema? 23:40:29 [bhill] FH: policy implies a general language vs. a hard/closed specification for a profile 23:41:05 [bhill] tlr: difference between runtime and non-runtime profiles 23:41:07 [esimon2] I believe the next version of XML Signature and XML Encryption should have an attribute designating the profile. I have also pondered whether this should not even be in XML Core. 23:41:59 [bhill] tlr: implementation time avoids unwanted complexity - teach how to do this with use case examples 23:42:54 [bhill] scott: implementers want to build a general library and constrain behavior, rather than many implementations 23:44:01 [bhill] phb: profile reuse: catalog, wiki 23:45:07 [Pratik_] Pratik_ has joined #xmlsec 23:45:54 [PratikDatta] PratikDatta has joined #xmlsec 23:45:57 [Pratik_] Pratik_ has left #xmlsec 23:46:24 [bhill] michael leaventhal: robust is misleading, ease more important than flexibility, more performance and interop fundamentals over flexibility 23:47:53 [bhill] jz (?): keep spec understandable and as short as possible 23:47:57 [FrederickH] FrederickH has joined #xmlsec 23:51:02 [bhill] brad: be able to limit total resource consumption even in languages like Java and .Net where platform services to limit low-level resource usage do not exist 23:51:20 [bhill] konrad: some of these issues belong to Core, not just XML Security 23:51:54 [bhill] fh: need to support scripting languages like Python 23:53:48 [bhill] bhill: implementation guidelines to partition attack surface, order of operations 23:53:54 [bhill] tlr: wrapping countermeasures 23:54:46 [bhill] eric: possible to make it easier to verify than to sign? 23:56:20 [bhill] konrad and jimmy: what is the scope/charter? 23:56:36 [bhill] tlr: exploring interest, from profiling to deep refactoring 23:57:22 [hiroki] hiroki has joined #xmlsec 23:58:28 [bhill] fh: how to really do see what you sign? 23:59:37 [bhill] topic: referencing model
http://www.w3.org/2007/09/25-xmlsec-irc
CC-MAIN-2016-26
en
refinedweb
This document is also available in these non-normative formats: ODD/XML document , self-contained zipped archive , XHTML Diff markup to ITS 1.0 Recommendation 3 April 2007 , and XHTML Diff markup to publication from 26 June 2012 , . Copyright © 2012 W3C Â.. The ITS 2.0 specification both identifies concepts (such as “Translateâ€) that are important for internationalization and localization, and defines implementations of these concepts (termed “ITS data categoriesâ€) “localization†and “internationalizationâ€, see [l10n i18n] ./xml.local.with-ns.attribute.translate"/> <xs:attribute </xs:attributeGroup> <xs:element ="path"/> :element <xs:complexType <xs:attributeGroup </xs:complexType> </xs:element> </xs:schema> 5. In HTML5, ITS local selection is realized via dedicated, data category specific attributes . For the so-called “ global approach †in HTML5, this specification defines a link type for referring to files with global rules. These rules are then processed as described in Section 5.2.2: Global selection within HTML5 . The link element points to the rules file EX-translateRule-html5-1.xml The rel attribute identifies the ITS specific link relation <!DOCTYPE html> [Source file: examples/html5/EX-translate-html5-global-1.html ] The rules file linked in Example 8 . <its:rules xmlns: <its:translateRule </its:rules> [Source file: examples/html5/EX-translateRule-html5-1.xml ]. Note: “ XML localization properties †is a generic term to name the mechanisms and data formats that allow localization tools to be configured in order to process a specific XML format. Examples of XML localization properties are the Trados “DTD Settings†file, and the SDLX “Analysis†file. Abstraction via data categories : ITS defines data categories as an abstract notion for information] , of HTML nodes (primarily element and attribute nodes). In a sense, ITS markup “selects†the relevant node(s). Selection may be explicit or implicit. ITS distinguishes two approaches to selection: (1) local, and (2) using.[Ed. note: The following two examples sketch the distinction between the local and global approaches, using the translate as one example of ITS markup.. <dbk:articlexmlns: <dbk:info> <dbk:title>An example article</dbk:title> <dbk:authorits: examples/xml/EX-basic-concepts-1.xml ] For this example, e.g., in a “headâ€").: *[self::myElement]/@* | myElement//*/@* “faux pas†as non-translatable. <text xmlns: <head> <revision>Sep-10-2006 v5</revision> <author>Ealasaidh McIan</author> <contact>ealasaidh@hogw.ac.uk</contact> <title its:The Origins of Modern Novel</title> <its:rules version=: examples/xml/EX-basic-concepts-3.xml ] For some data categories, special attributes add or point to information about the selected nodes. For example, the Localization Note data category can add information to selected nodes (using a locNote element), or point , Locale Filter and Elements Within Text . The functionalities of adding information and pointing to existing information are mutually exclusive . That is to say, attributes for pointing and adding must not appear at the same rule element. This section is normative. The keywords “MUSTâ€, “MUST NOTâ€, “REQUIREDâ€, “SHALLâ€, “SHALL NOTâ€, “SHOULDâ€, “SHOULD NOTâ€, “RECOMMENDEDâ€, “MAYâ€, and “OPTIONAL†in this document are to be interpreted as described in [RFC 2119] . The namespace URI that MUST be used by implementations of this specification is: The namespace prefix used in this specification for this URI is “itsâ€. It is recommended that implementations of this specification use this prefix. In addition, the following namespaces are used in this document: for the XML Schema namespace, here used with the prefix “xs†for the RELAX NG namespace, here used with the prefix “rng†for the XLink namespace, here used with the prefix “xlink†[.] Example 15: XPath expressions with namespaces The term element from the TEI is in a namespace <its:rules xmlns: <its:termRule </its:rules> [Source file: examples/xml/EX-selection-global-1.xml ] Example 16: XPath expressions without namespaces. One way to associate a document with a set of external ITS rules is to use the optional XLink [XLink 1.1]== <its:withinTextRule <its:withinTextRule withinText="no" selector="//prolog/title|//prolog. For example, the [DITA 1.0] format can use its translate attribute to apply to “transcludedâ€= its:locNote attribute. <Res xmlns: . version=: examples/xml/EX-locNote-element-1.xml ] The locNotePointer attribute is a relative XPath expression pointing to a node that holds the note. pointing to a node that holds the note. <Res xmlns: <prolog> <its:rules version= <its:locNoteRule </its:rules> </head> <body> <msg id="NotFound">Cannot find {0} on {1}.</msg> </body> </my=: examples/xml/EX-locNote-selector-2.xml ] <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"/><title>LocNote test: Default</title> </head> <body> <p>This is a <span its-motherboard</span>.</p> </body> </html> 30042].]">פעילות ×”×‘×™× ×ו×, W3C</quote> means <quote>Internationalization Activity, W3C</quote>.</par> </body> </text> [Source file::نشاط التدويل، W3C</quote> means <quote>Internationalization Activity, W3C</quote>.</p> </body> </html> . Note: Where legacy formats do not contain ruby markup, it is still possible to associate ruby text with a specified range of document content using the rubyRule element.".: category will be defined in an updated version of this document. For details , , disambigIdentRef in HTML., , disambigIdentRef in HTML. <!DOCTYPE html> See Example 54 for the companion document with the mapping data. <!DOCTYPE html> Companion document, having the mapping data for Example 53. <!DOCTYPE html> The Locale Filter data category specifies that a node is only applicable to certain locales, or that it is not applicable to certain locales. This data category can be used for several purposes, including, but not limited to: Include a legal notice only in locales for certain regions. Drop editorial notes from all localized output. The Locale Filter data category associates with each selected node a filter type and a list of language ranges conforming to [BCP47] . The list of language ranges is a comma-separated list of basic language ranges. Whitespace surrounding language ranges is ignored. "all": The node is included in all locales. "none": The node is included in no locales. "include": The node is only included in locales that match at least one language range in the from multiple rules or local attributes. GLOBAL: The localeFilterRule element contains the following: A required selector attribute. It contains an XPath expression which selects the nodes to which this rule applies. A required localeFilterType attribute with the value "all", "none", "include", or "exclude". An optional localeFilterList attribute with a comma-separated list of language ranges. The localeFilterRule element specifies that certain legal notice elements should only be shown in the specified locales. ]. The information applies to the textual content of the element, .
http://www.w3.org/TR/its20/diffs/diff-wd20120829-wd20120731.html
CC-MAIN-2016-26
en
refinedweb
On 23 Oct 2001, at 11:39, Robert Marcano wrote: Fo2pdf serializer based on Fop builds whole document in memory. That is why it uses so much memory. The only way to optimizy it is to optimize Fop. > I have doubts about using Cocoon+FOP to serve medium to large sized PDF > reports. I was doing more tests and noted that my servers needs 40 to > 60Mb of RAM per concurrent request to generated a 40 pages PDF report. I > think that the fo2pdf serializer is the responsible for this memory > usage, but i may be wrong. > > Someone has a clue of where to look to optimize this. I have tested > JInfonet JReport Enterprise server and it doesn't use this amounts of > memory (It is expensive and I will serve only a few reports). > > > > Robert Marcano wrote: > > > I haven't used Cocoon2 for about three months, and for the first time > > I need to generate a PDF file with data retrieved using SQL. My > > question is related to memory usage when using large xml structures. > > > > I generated a xml file using XSP and the ESQL stylesheets (a table > > with 1336 records) in order to try to isolate the problem source, this > > static file was saved and I copied it multiple times with diferent > > names to my web application in order to transform them with a XSL > > styleshhet to the XSL:FO namespace (a simple table with 3 columns), > > and serialized it with "fopdf" > > > > This is the sitemap fragment used: > > > > <map:match > > <map:generate > > <map:transform > > <map:serialize > > </map:match> > > > > I replaced the pipelines in cocoon.xconf with the NonCaching > > alternatives. When i access the first pdf file, it is generated and > > the heap grow to about 60Mb of RAM, when I access another one it grows > > near 40Mb, and it continues to grow with each new pdf request. The > > free JVM memory always remains low (around 10 or 15 Mb) so the memory > > is not reclaimed with garbage collection. > > > > I don't know what may be causing my problem, but if I'm not using > > caching pipelines, this Cocoon2 behavior is not normal. > > > > Note: I'm using Cocoon2rc1 > > > > Thanks in advance > > > > > > > > > > > -- > Robert Marcano (office: robmv@promca.com, personal: robert@marcanoonline.com) > System Architect IBM OS/2, VisualAge C++, Java, Smalltalk certified > > aol/netscape screen id: robmv > jabber id: robmv@jabber.org > msn messenger id: robert@marcanoonline.com > icq id: 101913663 > > > > > --------------------------------------------------------------------- > To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org > For additional commands, email: cocoon-dev-help@xml.apache.org > Maciek Kaminski maciejka@tiger.com.pl --------------------------------------------------------------------- To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org For additional commands, email: cocoon-dev-help@xml.apache.org
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200110.mbox/%3C3BD5B06F.309.49E18D1@localhost%3E
CC-MAIN-2016-26
en
refinedweb
[ ] Owen O'Malley commented on HADOOP-3149: --------------------------------------- Looking over your and Runping's patches, I'd suggest defining a subclass that looks like: {code} package org.apache.hadoop.mapred.lib; public class KeyValue<K,V> { private K key; private V value; public KeyValue(); public KeyValue(K key, V value); public K getKey() ; public V getValue(); public void setKey(K k); public void setValue(V v); } public class MultipleOutputStreams extends MultipleOutputFormat { // modifiy job conf to control how format a given stream // should be called once for each stream kind public static void addOutputStream(JobConf conf, String kind, Class<? extends OutputFormat> outFormat, Class<?> keyClass, Class<?> valueClass); } {code} So client code would look like: {code} In launcher: MultipleOutputStreams.addOutputStream(job, "foo", SequenceFileOutputFormat.class, Text.class, IntegerWritable.class); MultipleOutputStreams.addOutputStream(job, "bar", TextOutputFormat.class, Text.class, Text.class); In reducer: out.collect("foo", new KeyValue(new Text("hi"), new IntegerWritable(12)); out.collect("bar", new KeyValue(k2, v2)); {code} > supporting multiple outputs for M/R jobs > ---------------------------------------- > > Key: HADOOP-3149 > URL: > Project: Hadoop Core > Issue Type: New Feature > Components: mapred > Environment: all > Reporter: Alejandro Abdelnur > Assignee: Alejandro Abdelnur > Fix For: 0.17.0 > > Attachments:.
http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/200804.mbox/%3C2110911722.1207084344418.JavaMail.jira@brutus%3E
CC-MAIN-2016-26
en
refinedweb
2006/7/26, Alex Blewitt <alex.blewitt@gmail.com>: > On 26/07/06, Denis Kishenko <dkishenko@gmail.com> wrote: > > Frequently hash code is a sum of several other hashes. Such implementation > > kill hash idea, it's terrible. > > I agree that simple addition is not a good thing. However, exclusive > or can work just as well, as can a simple multiplication of numbers by > primes. > > > and many others... also I have found such "algorithms" =)> > > private long* *token; > > > > public int hashCode() { > > return (int) token; > > } > > This isn't an incorrect hash function, and if there's only one token, > it doesn't really matter that much. Even if the token itself isn't > widely distributed, chances are the bottom few bits are going to be > distributed fairly evenly, and often it's the bottom part of a > hashcode that's considered rather than the top part. So, it's not > optimal, but shifting some bits around doesn't make that much > difference. > > > The most of hashCode() functions are implemented in different ways. It's not > > problem, but it looks like scrappy, there is no single style. And finally we > > have class *org.appache.harmony.misc.HashCode *for this aim! > > Interesting, but I wonder what the impact is of using an additional > class to calculate the hash from each time is going to be. Hopefully a > decent JIT will remove much of the problem, but bear in mind that the > hashCode may get called repeatedly during a single operation with a > collection (and in the case that it's already in the collection, may > be called by every other operation on that collection too. You might > find that even a simple (say) sort of a list would easily overwhelm > the nursery generation of a generational GC. > > Of course, it's difficult to say without measuring it and knowing for sure :-) > > > But only several classes are using it. I suggest integrate HashCode in > > all hashCode() implementations (about 200 files), I can do this. Anybody > > else can improve HashCode work. > > > > Any comments? > > There are other approaches that could be used instead of this. For > example, the value of the hashCode could be cached in a private > variable and then invalidated when any setValue() methods are called. I agree with your concern and suggested solution. > One could even imagine a superclass performing this caching work: > > public abstract class HashCode { > private boolean valid = false; > private int hash; > public final int hashCode() { > if (!valid) { > hash = recalculateHashCode(); > valid = true; > } > return hash; > } > protected abstract int recalculateHash(); > protected void invalidate() { valid = false; } > } > > Of course, you could use the HashCode object to calculate the hash value :-) > > Such an approach won't work when the class hierarchy cannot be > changed, of course. That's exactly our case in public API... But we can use it in our private API (org.apache.harmony.*) > Lastly, an easier approach is to use a tool (such as Eclipse) to > generate the implementation of hashCode() automatically based on the > non-static, non-final variables of a class. This one sounds like (a) > the easiest, and (b) all-round better performant than any of the > above. Good idea! SY, Alexey > > Alex. > > --------------------------------------------------------------------- >
http://mail-archives.apache.org/mod_mbox/harmony-dev/200607.mbox/%3Cc3755b3a0607270132t7d427f1ao38b6d66666486a80@mail.gmail.com%3E
CC-MAIN-2016-26
en
refinedweb
. Learn at your own speed using our engaging, interactive online courses. The course covers a broad range of Adobe products and related design and development topics. Adobe's global network of training providers is ready to deliver superior classroom training on Adobe products. Learn from an expert in a public class or schedule an onsite class at your company. Look here for Enterprise training on Adobe Acrobat Connect Professional, Acrobat Connect Presenter, ColdFusion, FlashLite, Flash Media Server, Flex.
http://www.adobe.com/ap/training/
crawl-002
en
refinedweb