# Archived BC Mathematics Seminars
## Department of Mathematics
### 2012-2013 Seminars and Colloquia
#### BC-MIT Number Theory Seminar
October 16, 2012, at BC, 9 Lake Street, Room 100
- Jeff Hoffstein (Brown University), "Multiple Dirichlet series and shifted convolutions, with applications to number theory"
- Sujatha Ramdorai (University of British Columbia), "Congruences and Noncommutative Iwasawa theory"

November 13, 2012, at MIT, Room 4-163
- Jim Cogdell (Ohio State University), "The local Langlands correspondence for GL(n) and the symmetric and exterior square epsilon-factors"
- Yuri Tschinkel (New York University), "Igusa integrals"

December 4, 2012, at BC, McGuinn 521
- 3:00-4:00 p.m.: Richard Taylor (Institute for Advanced Study), "Galois representations for regular algebraic cuspidal automorphic forms"
- 4:30-5:30 p.m.: Max Lieblich (University of Washington), "Recent results on supersingular K3 surfaces"

February 5, 2013, at MIT, Room 10-250
- Abhinav Kumar (MIT), "Real multiplication abelian surfaces with everywhere good reduction"
- Felipe Voloch (University of Texas), "Local-global principles in the moduli space of abelian varieties and Galois representations"

March 19, 2013, at BC, Fulton 220
- Andrew Granville (Université de Montréal), "A different way to use Perron's formula"
- Michael Zieve (University of Michigan), "Polynomial mappings of number fields"

April 9, 2013, at MIT, Room 32-144
- Frank Calegari (Northwestern), "The cohomology of congruence subgroups of SL_N(Z) for large N and algebraic K-theory"
- Rachel Pries (Colorado State University), "The geometry of the p-rank stratification of the moduli space of curves"
#### BC Distinguished Lecturer in Mathematics series
Distinguished Lecturer: Dr. Bernd Sturmfels,
Professor of Mathematics, Statistics and Computer Science
University of California, Berkeley
##### Lecture 1: April 23, 2013
7:00 p.m. in Merkert 127
Title: Tropical Mathematics
Abstract: In tropical arithmetic, the sum of two numbers is their maximum and the product of two numbers is their usual sum. Many results familiar from algebra and geometry, including the Quadratic Formula and the Fundamental Theorem of Algebra, continue to hold in the tropical world. In this lecture we learn how to draw tropical curves and why evolutionary biologists might care about this.
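For readers who want to experiment, here is a minimal Python sketch of the max-plus arithmetic described in the abstract. The function names and the sample polynomial are our own illustration, not part of the lecture.

```python
# A minimal sketch of tropical (max-plus) arithmetic as defined in the
# abstract: the tropical sum is the maximum, the tropical product is the
# ordinary sum. Illustrative only; names are ours, not from the lecture.

def trop_add(a: float, b: float) -> float:
    """Tropical addition: a (+) b = max(a, b)."""
    return max(a, b)

def trop_mul(a: float, b: float) -> float:
    """Tropical multiplication: a (*) b = a + b."""
    return a + b

def trop_eval(coeffs, x):
    """Evaluate a tropical polynomial: max over i of (c_i + i*x)."""
    return max(c + i * x for i, c in enumerate(coeffs))

print(trop_add(3, 5))           # 5
print(trop_mul(3, 5))           # 8
print(trop_eval([1, 0, 0], 2))  # max(1, 2, 4) = 4
```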
##### Lecture 2: April 24, 2013
4:30 p.m. in Fulton 115
Title: The Convex Hull of a Space Curve
Abstract: The boundary of the convex hull of a compact algebraic curve in real 3-space defines a real algebraic surface. For general curves, that boundary surface is reducible, consisting of tritangent planes and a scroll of stationary bisecants. We express the degree of this surface in terms of the degree, genus and singularities of the curve. We present methods for computing their defining polynomials, and we exhibit a wide range of examples. Most of these are innocent-looking trigonometric curves such as (cos(t),sin(2t),cos(3t)). This is joint work with Kristian Ranestad.
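The curve mentioned at the end of the abstract can be explored numerically. Below is a small sketch, our own illustration rather than the lecture's algebraic method, that samples (cos(t), sin(2t), cos(3t)) and computes the convex hull of the sample points with SciPy.

```python
# Sample the trigonometric space curve (cos t, sin 2t, cos 3t) from the
# abstract and compute the convex hull of the sampled points. This is a
# numerical approximation, not the algebraic boundary surface of the talk.
import numpy as np
from scipy.spatial import ConvexHull

t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
pts = np.column_stack((np.cos(t), np.sin(2 * t), np.cos(3 * t)))

hull = ConvexHull(pts)
print(f"hull has {len(hull.vertices)} vertices and {len(hull.simplices)} facets")
print(f"enclosed volume is approximately {hull.volume:.4f}")
```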
##### Lecture 3: April 25, 2013
4:30 p.m. in Fulton 145
Title: Non-negative Polynomials versus Sums of Squares
Abstract: We discuss the geometry underlying the difference between non-negative polynomials and sums of squares. The hypersurfaces that discriminate these two cones for ternary sextics and quaternary quartics are shown to be Noether-Lefschetz loci of K3 surfaces. The projective duals of these hypersurfaces are defined by rank constraints on Hankel matrices. We compute their degrees using numerical algebraic geometry, thereby verifying results due to Maulik and Pandharipande. The non-SOS extreme rays of the two cones of non-negative forms are parametrized respectively by the Severi variety of plane rational sextics and by the variety of quartic symmetroids. This lecture is based on work of Greg Blekherman, and a joint paper with Jonathan Hauenstein, John Christian Ottem and Kristian Ranestad.
#### BC Math Society/Mathematics Department Undergraduate Lectures
BCMS Careers in Mathematics Series
"Careers in Mathematics: A Panel Discussion
Tuesday, November 13, 5:30 p.m.
Location: 9 Lake Street, Room 100
January 30, 2013
Information Session on Summer REUs
Carney 309, 4:00 p.m.
March 14, 2013
Pi Day
Special Events, 12:00 noon, Carney 309
BCMS Careers in Mathematics Series
April 3, 2013, 5:00-6:00 p.m., Stokes S117
Mike Brown (US Navy)
"Careers in Mathematics: Cybersecurity"
#### BC Geometry/Topology Seminar
Schedule for the BC Geometry/Topology Seminar
Organizers: Ian Biringer, Eli Grigsby, Joshua Greene
#### BC Number Theory/Algebraic Geometry Seminar
Schedule for the BC Number Theory/Algebraic Geometry Seminar
Organizers: Avner Ash, Dawei Chen, Maksym Fedorchuk, Sol Friedberg, Ben Howard, Dubi Kelmer
#### BC Colloquium Series
To be determined.
### 2012-2013 Seminar Schedule
Seminars will meet roughly monthly on Thursdays from 2:00 p.m. to 3:30 p.m.
##### Thursday, October 4, 2012, 2:00 p.m.
Place: Campion 139
Speaker: Prof. Hung-Hsi Wu (University of California, Berkeley)
Title: The School Mathematics Curriculum: 1975-2012
Abstract: This talk will discuss the ups and downs of the school math curriculum roughly between 1975 (the beginning of Back to Basics) and 2010 (the release of the Common Core Standards), and what lies ahead beginning with 2010. Although the period 1975-2010 includes the Math Wars, it is not generally recognized that there is a common thread that runs through the curriculum of this period, namely, inattention to mathematical integrity. The talk will look at key examples of this curriculum of 1975-2010 and explain why it is basically not learnable. But can the Common Core live up to its promise?
##### Thursday, November 29, 2012, 2:00 p.m.
Place: Campion 139
Speaker: Al Cuoco (Director, Center for Mathematics Education, EDC)
Title: Mathematics for Teaching: Suggestions for the Mathematical Preparation and Professional Development of Secondary Teachers
Abstract: Based on work with secondary teachers, on my own high school teaching experience, and on the new CBMS report "The Mathematical Preparation of Teachers," I'll give some examples from undergraduate mathematics that have useful applications to middle and high school teaching. Some of these applications help connect topics in the pre-college curriculum with major themes in mathematics, while others are useful tools for teachers as they plan lessons, design problems, or develop ideas. Part of the talk will describe my joint work with Joseph Rotman (University of Illinois at Urbana-Champaign) to develop an abstract algebra course that addresses some needs of prospective high school teachers.
##### Thursday, March 14, 2013, 2:00 p.m.
Place: Campion 139
Speaker: Prof. Patricio Herbst (University of Michigan)
Title: Conceptualizing and Measuring Teachers' Recognition of the Diagrammatic Register
Abstract: The presentation of proof problems in American high school geometry is semiotically different from what it was in the 1870s, when proof problems started to appear in geometry textbooks, and also different from the problems that might be assigned in geometry-for-teachers classes at the university. In earlier work we've described the presentation of those problems as relying on a diagrammatic register and proposed that it is a norm of the instructional situation of "doing proofs" for the teacher to present those problems using the diagrammatic register. Important consequences of the existence of such a norm include (1) that a range of geometric properties (collinearity, concurrence, separation) are alienated from the proof problems that students do, and (2) that students' interactions with diagrams remain distal (hence they are unlikely to incorporate into a proof objects that were not provided with the problem). One might think that just coming up with a more diverse set of proof problems would help improve students' mathematical experience, but if the proposition that the diagrammatic register is normative were true, there might be resistance to other proof problems; perhaps the norm is in place to prevent instructional problems that might arise otherwise. The problem space described above is one that the GRIP research group has been involved in, in the context of a larger project where we investigate how to study empirically the norms of mathematics instruction using multimedia and the internet. What does the proposition that the diagrammatic register is normative mean, and how can it be studied empirically? What is the likelihood that practitioners would appraise positively a departure from the norm, and how might they justify it? In the talk I describe efforts to develop measures of teachers' recognition of this instructional norm, using both traditional survey-like instruments and multimedia questionnaires. I show how this instrument development process helped improve our conceptualization of the notion of a "diagrammatic register." The presentation illustrates how representations of instructional practice can be involved in the design of research instruments that preserve attention to the mathematics of classroom interaction and the complexities of teaching practice.
##### Thursday, March 21, 2013, 2:00 p.m.
Place: Campion 139
Speaker: Prof. Jacqueline Leonard (University of Denver)
Title: Learning to Enact Social Justice Pedagogy in Early Childhood and Elementary Mathematics Classrooms
Abstract: Some mathematics educators (e.g., Bartell (2012); Frankenstein (2012); Gonzalez (2009); Gutstein (2006); Stinson (2004)) assert that P-12 students respond better to mathematics when it is taught for cultural relevance and social justice. Providing teachers with examples of how to use culturally relevant pedagogy (CRP) and social justice pedagogy (SJP) is critical to enacting these strategies in mathematics classrooms. The results of this teacher-research study reveal that teacher candidates (TCs) had a greater understanding of how to teach for social justice after taking a mathematics education course that used literature circles to learn and understand SJP. We also found that mathematics lesson plans aligned well with principles of teaching for social justice and that target TCs' beliefs about teaching for social justice were malleable. However, additional studies are warranted to determine if activities like the ones described in this study actually lead to changes in classroom practice.
##### Friday, April 12, 2013, 2:00 p.m.
Place: Campion 139
Speaker: Prof. Roger Howe (Yale)
Title: Problematics of Functions
Abstract: Calls for an emphasis on functions as a basic theme in K-12 mathematics have come from many quarters, and curricula, even elementary curricula, have been developed that give functions a prominent role. They also feature in the Common Core State Standards for Mathematics. However, the concept of function is not a simple one, and many, perhaps most, of the treatments of functions at the K-12 level have significant flaws. This talk will discuss some observed errors in dealing with functions and some of the questions that arise in dealing with them, and will make some tentative proposals about their role in K-12 mathematics.
### 2011-2012 Seminar Schedule
##### Mathematics Education Seminar
October 27, 2011
Location: Campion 139
Title: Mostly Geometry with Some Algebra
Abstract: Geometry is the part of school mathematics which is most in need of help. We will start with rectangles and triangles in late elementary school and show how some of the ideas used there can be used again in high school. The algebra part will involve setting up an algebraic version of a geometry problem, solving it and getting a surprising result, and then doing the same with another problem and also getting a surprising result. In both cases, factoring turns out to be very useful.
December 7, 2011
Dan Chazan (University of Maryland)
Location: Campion 139
Special time: 12:00 p.m. to 1:30 p.m.
Title: New Technologies and Challenges in Depicting and Discussing Teaching
Abstract: Teaching can be conceptualized as the ephemeral, time-bound activity that happens in classrooms among teachers, students, and the subject matter that is taught. A key challenge in discussing teaching is how to transcend the particular in order to have practitioners talk across differences of context (e.g., school, students, curricula, …). In this presentation, I’ll use analogies to representations of mathematics used in the high school algebra and geometry curriculum to consider how non-fictional videotapes of actual classroom interaction and fictional animations depicting scenes from classrooms can support talk about teaching. Examples will come from both research, the NSF-funded Thought Experiments in Mathematics Teaching (ThEMaT) project, and from professional development initiatives, the National Council of Teachers of Mathematics (NCTM)’s nascent Digital Library of Practice.
March 22, 2012
Natasha Speer (University of Maine)
Location: Campion 139
Title: Definitions of mathematical knowledge for teaching: Using these constructs in research on secondary and college mathematics teachers.
Abstract: The construct “mathematical knowledge for teaching” (MKT) has received considerable attention in the mathematics education community over the past decade. Much effort has been put towards the delineation and definition of particular types of knowledge used and needed by mathematics teachers, including Common Content Knowledge (CCK) and Specialized Content Knowledge (SCK). The various lines of research have yielded important and useful findings.
These efforts have been pursued almost exclusively in the context of elementary mathematics teaching. But what happens when researchers look instead at secondary or post-secondary teachers? Do these descriptions of various types of knowledge fit as well with data from non-elementary contexts given differences in background and content knowledge typically possessed by these populations of teachers?
I will present some theoretical questions that arose when using definitions of CCK and SCK in investigations into the nature of MKT at secondary and undergraduate levels. These questions will be illustrated with data from two mathematics instructional settings.
April 19, 2012
William McCallum (University of Arizona)
Location: Campion 139
Title: Illustrative Mathematics: Building a discerning community of mathematics educators
Abstract: Illustrative Mathematics (illustrativemathematics.org) provides guidance to states, assessment consortia, testing companies, and curriculum developers by illustrating the range and types of mathematical work that students experience in a faithful implementation of the Common Core State Standards, and by publishing other tools that support implementation of the standards. Equally important, it is building a discerning community that can discuss, critique, and revise tasks. I will discuss the project and engage the audience in examples of the work the community is carrying out.
##### BC-MIT Number Theory Seminar
Organizers: Sol Friedberg and Ben Howard at BC, and Ben Brubaker and Bjorn Poonen at MIT.
September 20, 2011, at BC, 9 Lake Street, Room 035
- 3:00-4:00 p.m.: Marie-France Vigneras (Jussieu), "From $p$-adic Galois representations to $G$-equivariant sheaves on the flag variety $G/P$"
- 4:30-5:30 p.m.: Kristin Lauter (Microsoft Research), "Arithmetic Intersection Theory on the Siegel Moduli Space"

October 18, 2011, at MIT, Room 2-132
- 3:00-4:00 p.m.: Fernando Rodriguez Villegas (University of Texas at Austin), "Hypergeometric motives: the case of Artin L-functions"
- 4:00-4:30 p.m.: Xinyi Yuan (Princeton University), "On the height of the Gross-Schoen cycle"

November 15, 2011, at BC, McGuinn 521
- Brian Conrey (AIM), "A reciprocity formula for a cotangent sum"
- Steven D. Miller (Rutgers), "Fourier Coefficients of Automorphic Forms on Exceptional Groups"

February 14, 2012, at MIT, Room 3-333
- Dihua Jiang (Minnesota), "Constructions of Cuspidal Automorphic Forms for Classical Groups"
- Wenzhi Luo (Ohio State), "Asymptotic Variance for the Linnik Distribution"

March 20, 2012, at BC, McGuinn 521
- 3:00-4:00 p.m.: Kannan Soundararajan (Stanford), "Moments and the distribution of values of L-functions"
- 4:30-5:30 p.m.: Samit Dasgupta (UC Santa Cruz), "On the p-adic L-functions of totally real fields"

April 3, 2012, at MIT, Room 3-333
- Wen-Ching Winnie Li (Penn State), "Recent progress on noncongruence modular forms"
- Alex Kontorovich (Yale), "On Zaremba's Conjecture"
##### BC Distinguished Lecturer in Mathematics series
April 18, 2012, 3:00-4:00 p.m., McGuinn 121
Robert Ghrist (University of Pennsylvania)
Talk 1 (general audience)
Title: The Mathematics of Holes
Abstract: Mathematics merely begins with the study of numbers; it advances to motions and machines; computations and colorings; the strings and arrows of life. A singular expression of the beauty and power of mathematics is revealed in its ability to quantify and qualify that which is not there: the holes. This talk introduces 'topology', the mathematical study of holes, and uses a century's worth of its innovations to explain why your cell phone drops calls, how to survive without GPS, and why you can't find good, cheap, healthy fast food.

April 19, 2012, 4:00-5:00 p.m., Cushing 209
Talk 2 (colloquium)
Title: Euler Calculus
Abstract: This colloquium surveys a surprisingly beautiful integral calculus based on the Euler characteristic, with a focus on computations and applications to problems of network data aggregation.

April 20, 2012, 4:00-5:00 p.m., Fulton 145
Talk 3 (specialized)
Title: Sheaves and Data
Abstract: Algebraic topology invented sheaves as a tool for integrating local data into a global structure. This talk will give an introduction to constructible sheaves; their (co)homology; and, most importantly, their recent applications to problems in optimization, networks, and sensing.
##### BC Math Society/Mathematics Department Undergraduate Lectures
September 26, 4:00-5:00 p.m., Carney 102
Dr. Paul Garvey (Chief Scientist, MITRE Corporation)
Title: "Evaluating Risky Prospects, with a little Calculus"

October 12, noon, Carney 309
Math for America Info Session

November 17, 4:30-5:30 p.m., Carney 102
Frank Sullivan (Boston College alumnus)
Title: "On the role of mathematics in my career"

February 16, 4:30 p.m., Devlin 227
Speaker: Puneet Batra
Title: "The Science Behind Big Data"
Abstract: Thousands of organizations now have the means to collect extreme amounts of data per year, from petabytes to exabytes. This explosion of 'Big Data' has led to a number of new insights, from Facebook's 'People You May Know' to algorithms that optimize drug choice for new cancer patients. The Big Data revolution has been driven by analyses of consumer behavior, web traffic, machine logs, cell-phone traffic, and healthcare data. In this talk, I'll discuss some of the mathematical and statistical tools that data scientists use to derive meaningful results from 'Big Data', using some concrete examples from services that we've all used, including Netflix and Facebook.

March 29, 5:45 p.m., McGuinn 521
Speaker: Steven J. Miller
Title: "Pythagoras at the Bat: An Introduction to Statistics and Modeling"
Abstract: Let RS (resp., RA) denote the average number of runs scored (resp., allowed) in a baseball game by a team. It was numerically observed years ago that a good predictor of a team's won-loss percentage is RS^2 / (RS^2 + RA^2), though no one knew WHY the formula worked. We review elementary concepts of probability and statistics, and discuss how one can build and solve a model for this problem. We'll discuss how to attack problems like this in general (what are the features of a good model, how to solve it, and so on). The only prerequisite is simple calculus (no baseball knowledge is required, though Red Sox knowledge is always a plus).
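As a quick worked instance of the formula in the Miller abstract, here is a short Python sketch; the run totals below are invented for illustration and are not from the talk.

```python
# The "Pythagorean" win-percentage estimator quoted in the abstract:
# predicted winning fraction = RS^2 / (RS^2 + RA^2).

def pythagorean_win_pct(runs_scored: float, runs_allowed: float,
                        exponent: float = 2.0) -> float:
    """Estimate a team's winning fraction from average runs scored/allowed."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

# Hypothetical team scoring 5.0 and allowing 4.2 runs per game:
pct = pythagorean_win_pct(5.0, 4.2)
print(f"predicted win fraction: {pct:.3f}")           # about 0.586
print(f"over 162 games: about {162 * pct:.0f} wins")  # about 95 wins
```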
##### BC Geometry/Topology Seminar
Schedule for the BC Geometry/Topology Seminar
##### BC Number Theory/Algebraic Geometry Seminar
Schedule for the BC Number Theory/Algebraic Geometry Seminar
Organizers: Avner Ash, Dubi Kelmer, Rob Gross
##### BC Colloquium Series
October 13, 2:30-3:30 p.m., Carney 309
Speaker: Izzet Coskun (University of Illinois, Chicago)
Title: Pictures and homogeneous spaces
Abstract: Many important problems in representation theory have analogues in geometry. For example, decomposing tensor products of representations of GL(n) into irreducible representations is very closely tied to the geometry of the Grassmannian. Similarly, studying the restriction of a representation of GL(n) to subgroups such as SO(n) or SP(n) has geometric analogues in terms of the geometry of flag varieties. In this talk, I will show you how drawing a few pictures can make studying such lofty problems a lot of fun. I will specifically concentrate on Littlewood-Richardson rules and geometric branching rules. I intend to make the talk accessible to anyone who is willing to be seduced by pictures.

October 27, 4:00-5:00 p.m., Fulton 230
Speaker: Richard Askey (University of Wisconsin)
Title: The binomial theorem, beta and gamma functions, and some extensions of each
Abstract: It is well known that the number of permutations of the set 1, 2, ..., n is n!. An extension of this where one counts inversions was posed as a problem by M. Stern in 1839. These will be the starting place to build up the binomial theorem, the extension of n! which we now write as the gamma function, the beta integral of John Wallis, Euler's representation of the gamma function as an integral, and the connection between these three things. This connection will be looked at in two different settings: the classical one, which most of you know reasonably well, and what will be called q-extensions of these classical results, a world which has finally started to come into its own.

November 9, 4:00-5:00 p.m., Carney 309
Speaker: András Stipsicz (Rényi Institute of Mathematics)
Title: 3-dimensional contact topology
Abstract: After reviewing results about the existence of tight contact structures on closed 3-manifolds, we show how to use Heegaard Floer theory (in particular, the contact Ozsvath-Szabo invariant) to verify tightness of certain contact structures on 3-manifolds given by surgery along specific knots in S^3.

November 17, 2:30-3:30 p.m., Carney 309
Speaker: Joseph Harris (Harvard University)
Title: The Interpolation Problem

December 7, 4:15-5:15 p.m., Carney 309
Speaker: Ian Agol (University of California, Berkeley)
Title: Virtual properties of 3-manifolds
Abstract: In his article "3-Dimensional manifolds, Kleinian groups, and hyperbolic geometry," William Thurston posed 24 problems related to the topology and geometry of Kleinian groups and hyperbolic 3-manifolds. We'll discuss four of the remaining open problems from this list, 15-18, having to do principally with finite-sheeted covers of hyperbolic 3-manifolds. We'll discuss how recent work of Kahn-Markovic and Wise implies that these problems are essentially equivalent, and the prospects for answering these questions by combining their results.

March 14, 2012, 3:00-4:00 p.m., Carney 309
Speaker: Martin Moeller (Frankfurt)
Title: Fuchsian differential equations and derivatives of theta functions
Abstract: Usually the power series expansion of solutions to a Fuchsian differential equation with integral coefficients has huge denominators. One instance when the solution is actually integral was discovered by Apery and gave a proof of the irrationality of Zeta(3). We give another instance of such a special Fuchsian differential equation with integral expansion. It is related to derivatives of Hilbert theta functions. The proofs connect (Hilbert) modular forms to the geometry of billiard tables.

March 28, 2012, 3:00-4:00 p.m., Carney 309
Speaker: Chen-Yu Chi (Harvard)
Title: On the L^1-norm on the space of quadratic differentials
Abstract: The spaces of quadratic differentials play essential roles in the studies of Riemann surfaces and their moduli spaces. Each of these spaces is equipped with a canonical norm. In 1971, Royden showed that two closed Riemann surfaces of genus greater than 1 are isomorphic if and only if their spaces of quadratic differentials are isometric with respect to their canonical norms. We will first review Royden's proof and then outline a program, initiated in joint work with Yau, which can be regarded as a further development in higher dimensions along the same direction. If time permits, we will also talk about a recent observation due to Stergios Antonakoudis.

April 24, 2012, 4:00-5:00 p.m., Carney 309
Speaker: Peter Kronheimer (Harvard)
Title: Knots, webs and unitary representations
Abstract: A basic invariant of a knot K in 3-space is the fundamental group, Π, of the knot complement; and a basic way to study any group such as Π is to look at its representations in a standard group G, such as a permutation group or a linear group. In the case of knot groups, representations of Π in dihedral groups contain information encoded in the Alexander polynomial of the knot. Already when we look at the case G = U(2), something emerges from the deep: if K is non-trivial then there always exists at least one non-abelian representation of Π in U(2). The proof of this result involves taking a distinguished set of representations of Π in U(2) and assembling them to form an invariant of the knot K, its instanton Floer homology group, with surprising connections to both the Alexander polynomial and the Khovanov homology of the knot. In this talk, we will introduce some of these concepts, and look a little past the group U(2) to the case of U(N). In this case, it is natural to extend our objects of study from knots to "webs", and to seek connections with the knot homology groups defined by Khovanov and Rozansky.

May 3, 2012, 3:00-4:00 p.m., Carney 309
Speaker: John Etnyre (Georgia Tech)
Title: Curvature and (contact) topology
Abstract: Contact geometry is a beautiful subject that has important interactions with topology in dimension three. In this talk I will give a brief introduction to contact geometry and discuss its interactions with Riemannian geometry. In particular I will discuss a contact geometry analog of the famous sphere theorem and, more generally, indicate how the curvature of a Riemannian metric can influence properties of a contact structure adapted to it. This is joint work with Rafal Komendarczyk and Patrick Massot.
### 2010-2011 Seminar Schedule
Seminars will meet roughly monthly on Thursdays from 2:00-3:00 p.m.
• November 4, Cushing 332
Prof. Katherine Merseth (Harvard) and Erica Litke (Harvard)
"Mathematical Tasks in the Secondary Classroom: The Development of an Analytic Tool"
• December 2, Fulton 513
Prof. Man Goo Park (Seoul National University of Education)
"Teaching and Learning Mathematics: Focused on Korean Case"
• February 3, Campion 139 Cancelled
Prof. William McCallum (Arizona)
"Preparing for the Common Core"
Abstract: When there were 50 different sets of state standards, there was an incentive for universities to keep teacher preparation programs generic in order to prepare their students for a wide variety of curricula. Now, with over 40 states adopting the Common Core State Standards in Mathematics, universities have an opportunity as never before to develop focused teacher preparation programs based on consensus about what students should learn and when. I will present some thoughts on key focus areas and engage the audience in presenting their own thoughts.
• February 17, Campion 139
Prof. William Schmidt (Michigan State University)
“Inequality for all: Why America needs Common Core Math Standards”
Abstract: Over 40 states have now officially adopted the Common Core Mathematics Standards. They must now be implemented into classrooms where the cultural and structural context may not be particularly supportive. This presentation focuses on what that context looks like and why, if not addressed, it could become the Achilles heel of what I believe is the best opportunity for improving mathematics learning for all students.
• March 24, Fulton 513 Cancelled
Dr. Liping Ma (Palo Alto)
• April 28, Campion 139
Prof. Karen King (NYU)
"The Impact on Student Achievement of Teachers' Use of Standards Based Instructional Materials"
Abstract: This effectiveness study explores the relationship between the use and adaptation of the Connected Mathematics Project instructional materials by middle grades teachers in an urban school district and their students’ achievement. All middle grades mathematics teachers in Newark, NJ Public Schools were surveyed using the Surveys of Enacted Curriculum and the CMP Implementation Survey. The 6th, 7th, and 8th grade students in these teachers’ first period classes completed the New Jersey Assessment of Knowledge and Skills for their grade. Using hierarchical linear modeling with two levels, we found that both increased use and adaptation of the instructional materials were related to increased student achievement. Implications for further research on instructional materials implementation and the design and implementation of materials are discussed.
##### BC-MIT Number Theory Seminar
The organizers are Sol Friedberg and Ben Howard at BC, and Ben Brubaker and Bjorn Poonen at MIT.
Tuesday, September 21, at MIT, Room 56-114
- 3:00 p.m.: Kai-Wen Lan (Princeton/IAS), "Vanishing theorems for torsion automorphic sheaves"
- 4:30 p.m.: Michael Rapoport (Bonn), "The Langlands-Kottwitz method for the zeta function, beyond the parahoric case"

Tuesday, October 19, at BC, 9 Lake Street, Room 035 (north of Commonwealth Avenue near the "B" line)
- 3:00 p.m.: Eyal Goren (McGill), "Canonical subgroups over Hilbert modular varieties"
- 4:30 p.m.: Pierre Colmez (Jussieu), "On the p-adic local Langlands correspondence for GL2"

Tuesday, November 16, at MIT, Room 56-114
- 3:00 p.m.: Brian Smithling (Toronto), "On some local models for Shimura varieties"
- 4:30 p.m.: Matt Baker (Georgia Tech), "Tropical and Berkovich analytic curves"

Tuesday, February 8, at BC, Campion 009 (note room change)
- 3:00 p.m.: Amanda Folsom (Yale), "ell-adic properties of the partition function"
- 4:30 p.m.: Jordan Ellenberg (Wisconsin), "Expander graphs, gonality, and Galois representations"

Tuesday, March 1, at MIT, Room 4-159
- 3:00 p.m.: Michael Harris (Jussieu), "The Taylor-Wiles Method for Coherent Cohomology"
- 4:30 p.m.: Laurent Clozel (Orsay), "Presentation of the Iwasawa algebra of Gamma_1 of SL(2, Z_p)"

Tuesday, April 12, at BC, Campion 009 (note room change)
- 3:00 p.m.: Freydoon Shahidi (Purdue), "Arthur Packets and the Ramanujan Conjecture"
- 4:30 p.m.: William Duke (UCLA), "The interpretation and distribution of cycle integrals of modular functions"
##### BC Distinguished Lecturer in Mathematics series
The distinguished number theorist Peter Sarnak, Eugene Higgins Professor of Mathematics at Princeton University and permanent member of the Institute for Advanced Study's School of Mathematics, is the fourth annual Boston College Distinguished Lecturer in Mathematics. Prof. Sarnak was awarded the Polya Prize of the Society for Industrial and Applied Mathematics in 1998, the Ostrowski Prize in 2001, the Levi L. Conant Prize in 2003, and the Frank Nelson Cole Prize in Number Theory in 2005. He was elected a member of the National Academy of Sciences (USA) and a Fellow of the Royal Society (UK) in 2002. Prof. Sarnak gave three lectures April 4-6, 2011, and met with Boston College students and faculty during his visit.
Monday, April 4, 5:00-6:00 p.m., Devlin 008: "Randomness in Number Theory"
Tuesday, April 5, 4:00-5:00 p.m., Cushing 209: "Thin groups and the affine sieve"
Wednesday, April 6, 4:15-5:15 p.m., Fulton 115: "Zeros of modular forms and ovals of random real projective curves"
##### BC Math Society/Mathematics Department Undergraduate Lecture
Thursday, April 14 5:00-6:00 p.m. Carney 309 A recent BC graduate from the NSA's Women in Mathematics Society will speak on "The Secret Lives of Mathematicians: Defending the Nation In A Pair of Chuck Taylors."
##### BC Geometry and Topology Seminar
Thursday, September 16, 2:00 p.m., Carney 309
Professor Martin Bridgeman (Boston College) will speak on "The orthospectra of finite volume hyperbolic manifolds with totally geodesic boundary and associated volume identities."
Abstract: Given a finite volume hyperbolic n-manifold $M$ with totally geodesic boundary, an orthogeodesic of $M$ is a geodesic arc which is perpendicular to the boundary. For each dimension $n$, we show there is a real valued function $F_n$ such that the volume of any $M$ is the sum of values of $F_n$ on the orthospectrum (the lengths of the orthogeodesics). For $n=2$ the function $F_2$ is the Rogers L-function and the summation identities give dilogarithm identities on the moduli space of surfaces.

Thursday, September 23, 2:00 p.m., Carney 309
Professor Daniel Mathews (Boston College) will speak on "Sutured topological quantum field theory and contact elements in sutured Floer homology."
Abstract: We consider a type of topological quantum field theory, a "sutured TQFT", inspired by the work of Honda-Kazez-Matic on sutured Floer homology: contact elements in the sutured Floer homology of product manifolds form a sutured TQFT. This theory has curious connections to structures seen in physics and representation theory. As an application, we obtain a "contact geometry free" proof that the contact element in sutured Floer homology of a contact structure with Giroux torsion is zero.

Thursday, September 30, 2:00 p.m., Carney 309
Professor Genevieve Walsh (Tufts) will speak on "Knot commensurability and the Berge Conjecture."
Abstract: We discuss the problem of understanding commensurability classes of hyperbolic knots in S^3. We show that generically, there are at most three knots in a commensurability class. If there is more than one knot in such a commensurability class, the knots are fibered. We also discuss how this relates to understanding lens space surgeries along knots in lens spaces. This is joint work with M. Boileau, S. Boyer, and R. Cebanu.

Thursday, October 7, 2:00 p.m., Carney 309
Professor Gabriel Katz (MIT) will speak on "Topological Invariants of Gradient Flows on Manifolds with Boundary."
Abstract: Let f: X -> R be a Morse function on a manifold X and v its gradient-like vector field. Classically, the topology of a closed X can be described in terms of the spaces of v-trajectories that link the singular points of f. On manifolds with boundary, the situation is somewhat different: there, a massive set of nonsingular functions is available. For such Morse data (f, v), the interactions of the gradient flow with the boundary dX take center stage. We will introduce and measure the convexity and concavity of a v-flow relative to dX. "Some manifolds are intrinsically more concave than others with respect to any gradient flow" is the main slogan of the talk. Stated differently, the intrinsic concavity of X is a reflection of its complexity. We will explain how this approach leads to new topological invariants, both of the flow v and of the manifold X. In 3D, we have a good grasp of these invariants and their connection to the classification of 3-folds.

Thursday, October 14, 2:00 p.m., Carney 309
Professor Refik Baykur (Brandeis) will speak on "Round handles and smooth four-manifolds."
Abstract: In this talk, we will unfold the strong affiliation of round handles with smooth four-manifolds. Several essential topics that appear in the study of smooth four-manifolds, such as logarithmic transforms along tori, exotic smooth structures, cobordisms, handlebodies, and broken Lefschetz fibrations, one and all, will come into play as we discuss the relevant interactions between them.

Thursday, October 21, 2:00 p.m., Carney 309
Professor Tao Li (Boston College) will speak on "Rank and genus of amalgamated 3-manifolds."
Abstract: The rank conjecture says that, for a hyperbolic 3-manifold, the rank of its fundamental group equals its Heegaard genus. We will discuss constructions of counterexamples involving hyperbolic JSJ pieces and candidate hyperbolic counterexamples to this conjecture.

Thursday, October 28, 2:00 p.m., Carney 309
Professor Sucharit Sarkar (Columbia) will speak on "Grid diagrams and the Ozsvath-Szabo tau-invariant."
Abstract: The Ozsvath-Szabo knot invariant $\tau$ satisfies the inequality $|\tau(K_1)-\tau(K_2)|\leq g$ whenever there is a genus $g$ knot cobordism joining $K_1$ to $K_2$. We will give a new proof of this fact using grid diagrams. This will lead to a new and entirely grid diagram-based proof of Milnor's conjecture that the unknotting number of the torus knot $T(p,q)$ is $\frac{(p-1)(q-1)}{2}$. (A small numerical illustration of this formula appears after the schedule below.)

Thursday, November 4, 2:00 p.m., Carney 309
Professor Adam Levine (Brandeis) will speak on "A Combinatorial Spanning Tree Model for Knot Floer Homology."
Abstract: We provide an explicit description of a complex, based on spanning trees of the black graph of a diagram of a knot K in S^3, that computes the knot Floer homology of K. The strategy is to iterate Manolescu's unoriented skein exact sequence for knot Floer homology, using twisted coefficients in a Novikov ring, to form a cube of resolutions in which the only nonzero groups correspond to the connected resolutions. This construction has intriguing similarities with Ozsvath and Szabo's spectral sequence from the reduced Khovanov homology of K to the Heegaard Floer homology of the double branched cover of K. This is joint work with John Baldwin.

Thursday, November 11, 2:00 p.m., Carney 309
Professor Vera Vertesi (MIT) will speak on "Invariants for Legendrian knots in Heegaard Floer Homology."
Abstract: This talk will concentrate on invariants for contact 3-manifolds in Heegaard Floer homology. They can be defined both for closed 3-manifolds, in which case they live in Heegaard Floer homology, and for 3-manifolds with boundary, in which case the invariant is in sutured Floer homology. There are two natural generalizations of these invariants for a Legendrian knot K in a contact manifold M. One can directly generalize the definition of the contact invariant to obtain an invariant L(K), or one can take the complement of the knot and compute the invariant for that: EH(M-K). At the end of the talk I will describe a map that sends EH(M-K) to L(K). This is joint work with Andras Stipsicz.

Thursday, November 18, 2:00 p.m., Carney 309
Professor Joshua Greene (Columbia) will speak on "The lens space realization problem."
Abstract: I will discuss the classification of the lens spaces which arise by integral Dehn surgery along a knot in the three-sphere. A related result is that if surgery along a knot produces a connected sum of lens spaces, then the knot is either a torus knot or a cable thereof, confirming the cabling conjecture in this case. The proofs rely on Floer homology and lattice theory.

Tuesday, February 8, 4:00 p.m., Carney 309
Prof. Andy Cotton-Clay (Harvard) will speak on "Sharp fixed point bounds for surface symplectomorphisms in each mapping class."
Abstract: Let S be a closed, oriented surface, possibly with boundary, and consider a connected component H of the space of symplectomorphisms of S with no fixed points on the boundary. We give sharp bounds on the number of fixed points for symplectomorphisms f: S to S with f in mapping class H, both with and without nondegeneracy assumptions on the fixed points of f. These bounds often exceed those for non-area-preserving maps coming from Nielsen theory. This generalizes the Poincaré-Birkhoff fixed point theorem, which states that area-preserving twist maps of the cylinder have at least two fixed points, to arbitrary surfaces and mapping classes. For the nondegenerate case, our techniques involve Floer homology computations with certain twisted coefficients, plus a method for obtaining fixed point bounds on entire symplectic mapping classes from such computations. In the possibly degenerate case, we additionally use quantum-cup-length-type arguments for certain cohomology operations we define on Nielsen summands of the Floer homology.

Tuesday, March 1, 4:00 p.m., Carney 309
Prof. Stephan Wehrli (Syracuse) will speak on "On Quiver Algebras and Floer homology."
Abstract: In this talk, I will discuss a connection between certain Khovanov- and Heegaard Floer-type homology theories for knots, braids, and 3-manifolds. Specifically, I plan to explain how the bordered Floer homology bimodule associated to the branched double cover of a braid is related to a similar bimodule defined by Khovanov and Seidel. This is joint work with D. Auroux and E. Grigsby.

Tuesday, March 15, 4:00 p.m., Carney 309
Professor Tejas Kalelkar (Washington University, St. Louis) will speak on "Normal surfaces and incompressible surfaces in 3-manifolds."
Abstract: Let S be a surface embedded in a triangulated 3-manifold M. S is said to be normal if it intersects each tetrahedron of the triangulation 'nicely'. S is said to be incompressible if it is \pi_1 injective. Haken showed that if S is incompressible then, with respect to each triangulation of M, the minimal PL-area surface isotopic to S is a normal surface. In this talk the converse will be proved: if with respect to each triangulation of M a minimal PL-area surface isotopic to S is normal, then in fact S is incompressible.

Tuesday, April 12, 4:00 p.m., Carney 309
Professor Candice Price (University of Iowa) will speak on "A Knot Theory Application to Biology: An overview of DNA Topology."
Abstract: There exist proteins, such as topoisomerases and recombinases, that change the topology of DNA. These changes can inhibit or aid in biological processes that involve the structure of DNA. Because the mechanism of many proteins involves interaction with double stranded DNA, applications of knot theory to problems involving these proteins have been extensively studied. In the 1980's, DeWitt Sumners and Claus Ernst developed the tangle model of protein-DNA complexes, using the mathematics of tangles to model DNA-protein binding. An n-string tangle is a pair (B,t) where B is a 3-dimensional ball and t is a collection of n non-intersecting curves properly embedded in B. The protein is seen as the 3-ball and the DNA bound by the protein as properly embedded curves in the 3-ball. In this talk, I will give definitions and a description of the tangle model with a biological example.

Tuesday, April 26, 4:00 p.m., Carney 309
Professor Peter Ozsvath (MIT) will speak on "Bordered Floer homology."
Abstract: Heegaard Floer homology is an invariant, defined in joint work with Zoltan Szabo, which associates to a four-manifold a number; to a three-manifold, a vector space; and to a four-dimensional cobordism, a morphism of vector spaces. I will describe aspects of a lower-dimensional invariant, Bordered Floer homology, defined in joint work with Robert Lipshitz and Dylan Thurston, which associates to a two-manifold a differential graded algebra, and to a three-manifold with boundary a module over that algebra. I will also sketch how this invariant can be used to compute parts of the higher-dimensional theory.
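As promised above, here is a tiny arithmetic companion to the Milnor formula quoted in the Sarkar abstract; it is our own illustration, not material from the talks.

```python
# Milnor's conjecture (now a theorem): the unknotting number of the torus
# knot T(p,q) is (p-1)(q-1)/2 for coprime p, q > 1.
from math import gcd

def torus_knot_unknotting_number(p: int, q: int) -> int:
    """Unknotting number of the torus knot T(p,q)."""
    if p <= 1 or q <= 1 or gcd(p, q) != 1:
        raise ValueError("T(p,q) is a knot only for coprime p, q > 1")
    return (p - 1) * (q - 1) // 2

print(torus_knot_unknotting_number(2, 3))  # trefoil T(2,3): 1
print(torus_knot_unknotting_number(3, 4))  # T(3,4): 3
print(torus_knot_unknotting_number(5, 7))  # T(5,7): 12
```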
##### All talks are in Carney 309 at 4:00 p.m.
Thursday, October 7
Professor Solomon Friedberg (Boston College) will speak on "Eisenstein series and crystal graphs."
Abstract: The study of the Whittaker coefficients of Eisenstein series on reductive groups led Langlands to formulate his Conjectures. But the study of Whittaker coefficients on covers of such groups has not been carried out. In this talk I present a theorem for the simplest Eisenstein series on such a cover, showing that these series may be computed in a surprising way that involves the theory of crystal graphs.

Thursday, October 14
David Hansen (Boston College) will speak on "Ranks of elliptic curves over (nearly) abelian extensions."
Abstract: Given a modular elliptic curve E over a number field K, the theory of L-functions provides a powerful tool for studying the rank of E over K and over varying families of extensions of K. Most results of this flavor analyze the rank of E over "vertical" towers of abelian extensions of K. I will review these results, and then explain some recent progress on the corresponding question for some interesting "horizontal" families of abelian extensions.

Thursday, October 21
Professor George McNinch (Tufts/MIT) will speak on "The special fiber of a parahoric group scheme."
Abstract: Let G be a connected and reductive algebraic group over the field of fractions K of a complete discrete valuation ring A with residue field k. Bruhat and Tits have associated with G certain smooth A-group schemes P -- called parahoric group schemes -- which have generic fiber P/K = G. The special fiber P/k of such a group scheme is a linear algebraic group over k, and in general it is not reductive. In some recent work, it was proved that P/k has a Levi factor in case G splits over an unramified extension of K. Even more recently, this result was (partially) extended to cover the case where G splits over a tamely ramified extension. The talk will discuss these results and some applications. In particular, it will mention possible applications to the description of the scheme-theoretic centralizer of suitable nilpotent sections in Lie(P)(A).

Thursday, October 28
Professor Avner Ash (Boston College) will speak on "Reducible Galois representations and Hecke eigenclasses."
Abstract: Serre's conjecture (now a theorem) was stated for irreducible Galois representations, but it could have been stated as well for reducible ones. When Warren Sinnott and I generalized the niveau 1 case to GL(n), we stated a conjecture for irreducible and reducible Galois representations. I have proved this conjecture for direct sums of one-dimensional characters with pairwise relatively prime conductors. This talk will describe the background and proof of this theorem.

Thursday, November 11
Professor Andrew Ledoan (Boston College) will speak on "Zeros of partial sums of the Riemann zeta function."
Abstract: The Riemann zeta-function zeta(s) of the complex variable s is defined in the half plane Re(s) > 1 by the absolutely convergent Dirichlet series 1 + 1/2^s + 1/3^s + ..., which can be continued analytically to a meromorphic function in the complex plane whose only singularity is a simple pole at s = 1 with residue 1. The critical strip 0 < Re(s) < 1 is the most important and mysterious region for zeta(s), and much attention has been given to the right half of the strip. Although a great deal is known and conjectured about the distribution of zeros of zeta(s), little is known about the zeros of its partial sums F_X(s) = 1 + 1/2^s + ... + 1/X^s, where X > 1. By the absolute convergence of the Dirichlet series one sees that, even for X not very large, F_X(s) gives (at least away from the pole) a rather good approximation to zeta(s), with a remainder which is o(1) as X goes to infinity. To be more precise, zeta(s) is well approximated unconditionally by arbitrarily short truncations of its Dirichlet series in the region sigma > 1, |s-1| > 1/10. This is also true in the right half of the critical strip, if one assumes the Lindelof Hypothesis. In this talk, I will present recent results obtained in collaboration with S. M. Gonek on the distribution of zeros of F_X(s), in which we estimate the number of zeros up to height T, the number of zeros to the right of a given vertical line, and other aspects of their horizontal distribution. (A short numerical sketch of these partial sums follows this schedule.)

Thursday, November 18
Professor Sawyer Tabony (Boston College) will speak on "Finding Representation Theory in a Statistical Mechanical Model."

Tuesday, May 3
Professor Tasho Kaletha (IAS) will speak on "Simple wild L-packets."
Abstract: In a recent paper, Gross and Reeder have described an interesting class of smooth representations of reductive p-adic groups, which they call simple supercuspidal representations. Guided by the conjectural framework of the Langlands correspondence, they analyse the structure of the expected Langlands parameters for these representations. These so-called simple wild parameters are wildly ramified, but in a minimal way. In this talk we will report on a construction which explicitly associates to each simple wild parameter a finite set of simple supercuspidal representations, and furthermore provides a description of this set in terms of the Langlands dual group.
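As promised in the Ledoan entry above, here is a small numerical sketch of the partial sums F_X(s), our own illustration rather than the speaker's; it checks the approximation against zeta(s) at a sample point with Re(s) > 1, where the Dirichlet series converges.

```python
# Partial sums F_X(s) = 1 + 1/2^s + ... + 1/X^s of the Dirichlet series
# for zeta(s), compared with zeta(s) itself; the error shrinks as X grows.
import mpmath as mp

def F(X: int, s):
    """Partial sum of the Dirichlet series for zeta(s), truncated at n = X."""
    return sum(mp.power(n, -s) for n in range(1, X + 1))

s = mp.mpc(2, 1)  # sample point with Re(s) = 2 > 1
for X in (10, 100, 1000):
    err = abs(F(X, s) - mp.zeta(s))
    print(f"X = {X:5d}   |F_X(s) - zeta(s)| = {mp.nstr(err, 5)}")
```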
##### BC Colloquium Series
Tuesday, February 15, 4:00 p.m., Carney 309
Prof. Benedict Gross (Harvard) will speak on "Stable orbits and the arithmetic of curves."
Abstract: Manjul Bhargava has recently made a great advance in the arithmetic of elliptic curves, giving the first bounds on the average rank of the group of rational points. He shows that the average order of the 2-Selmer group is equal to 3, by studying the stable orbits of the group PGL(2,Z) acting on the lattice of binary quartic forms. In this talk, I will begin by reviewing some basic material on elliptic curves, defining the 2-Selmer group, and describing the stable orbits in this representation, whose invariants were determined by Hermite. If time permits, I will discuss a possible generalization of Bhargava's result to hyperelliptic curves with a rational Weierstrass point.

Tuesday, February 22, 4:00 p.m., Carney 309
Prof. Danny Calegari (Caltech) will speak on "Stable commutator length in free groups."
Abstract: Stable commutator length (scl) answers the question: "what is the simplest surface in a given space with prescribed boundary?" where "simplest" is interpreted in topological terms. This topological definition is complemented by several equivalent definitions: in group theory, as a measure of non-commutativity of a group; and in linear programming, as the solution of a certain linear optimization problem. On the topological side, scl is concerned with questions such as computing the genus of a knot, or finding the simplest 4-manifold that bounds a given 3-manifold. On the linear programming side, scl is measured in terms of certain functions called quasimorphisms, which arise from hyperbolic geometry (negative curvature) and symplectic geometry (causal structures). I will discuss how scl in free groups is connected to such diverse phenomena as the existence of closed surface subgroups in graphs of groups, rigidity and discreteness of symplectic representations, phase locking for nonlinear oscillators, and the theory of multi-dimensional continued fractions and Klein polyhedra.
##### BC-MIT Joint Number Theory Seminar
The organizers are Sol Friedberg and Ben Howard at BC, and Ben Brubaker and Bjorn Poonen at MIT.
### 2009-2010
September 22, at MIT
- 3:00 p.m.: Yiannis Sakellaridis (University of Toronto), "A 'relative' Langlands program and periods of automorphic forms"
- 4:30 p.m.: Matthew Emerton (Northwestern University), "p-adically completed cohomology and the p-adic Langlands program"

October 20, at BC
- 3:00 p.m.: Ze'ev Rudnick (Tel-Aviv University and IAS), "Statistics of the zeros of zeta functions over a function field"
- 4:30 p.m.: Haruzo Hida (UCLA), "Characterization of abelian components of the 'big' Hecke algebra"

November 17, at MIT
- 3:00 p.m.: Akshay Venkatesh (Stanford University), "Torsion in the homology of arithmetic groups"
- 4:30 p.m.: Ken Ono (University of Wisconsin), "p-adic coupling of harmonic Maass forms"

February 9, at BC
- 3:00 p.m.: Gautam Chinta (CUNY), "Orthogonal periods of Eisenstein series"
- 4:30 p.m.: Mihran Papikian (Pennsylvania State University), "On the arithmetic of modular varieties of D-elliptic sheaves"

March 9, at MIT
- 3:00 p.m.: Elena Mantovan (Caltech), "l-adic etale cohomology of PEL Shimura varieties with non-trivial coefficients"
- 4:30 p.m.: Karl Rubin (UC Irvine), "Selmer ranks of elliptic curves in families of quadratic twists"

April 13, at BC
- 3:00 p.m.: Shou-Wu Zhang (Columbia University), "Calabi-Yau theorem and algebraic dynamics"
- 4:30 p.m.: Ching-Li Chai (University of Pennsylvania), "CM lifting of abelian varieties"
### 2009-2010
Professor Benson Farb (University of Chicago) will be speaking this spring as the department's third annual Boston College Distinguished Lecturer in Mathematics. Professor Farb is an internationally renowned mathematician who specializes in the interaction between geometry, topology and group theory.
March 10, 4:00 p.m., McGuinn 121
"Geometry and the Imagination (with applications)"
Abstract: Geometry and geometric reasoning underlie all of science. In this talk I will explore a few fundamental geometric notions, including symmetry, dimension (including dimensions bigger than 3), and orientation (i.e., left-handed vs. right-handed). I will give some examples illustrating important applications in chemistry, biology and physics, from the weak nuclear force to understanding the Thalidomide tragedy. Some questions to ponder before the talk: How can you turn a left sneaker into a right sneaker without ripping or bending the sneaker at all? Why do mirrors reflect left/right but not up/down? This talk is intended for all who are interested in mathematics.

March 11, 4:00 p.m., Cushing 212
"Topology, dynamics and geometry of surfaces (and their remarkable relationships)"
Abstract: Surfaces can be considered from many different angles: their shape (i.e., topological structure), their geometry (e.g., curvature), and the behavior of fluid flows on them. In this talk I will describe three beautiful theorems, one for each of these aspects of surfaces. I will also try to explain the remarkable fact that these seemingly completely different viewpoints are intimately related. This talk will be geared towards those with some familiarity with calculus.

March 12, 3:00 p.m., Higgins 265
"Representation theory and homological stability"
Abstract: Homological stability is a remarkable phenomenon in the study of groups and spaces. For certain sequences G_n of groups, for example G_n = GL(n,Z), it states that the homology group H_i(G_n) does not depend on n for big enough n. There are many natural sequences G_n, from pure braid groups to congruence groups to Torelli groups, for which homological stability fails horribly. In these cases the rank of H_i(G_n) blows up to infinity, and in many (e.g. the latter two) cases almost nothing is known about H_i(G_n); indeed there may be no nice "closed form" for the answers. While doing some homology computations for the Torelli group, Tom Church and I found what looked to us like the shadow of a broad pattern. In order to explain it and formulate a specific conjecture, we came up with a notion of "stability of a sequence of representations of groups G_n". We began to realize that this notion can be used to make other predictions: from group representations to Malcev Lie algebras to the homology of congruence groups. Some of these predictions are known results, while others are not known. In this talk I will explain our broad conjectural picture via some of its many instances. No knowledge of either representation theory or group homology will be assumed. This talk is intended for a mathematically sophisticated audience.
### 2009-2010
October 15, 7:30 p.m.
Dr. Paul Garvey (MITRE)
"MITRE and Systems Engineering"
Dr. Garvey is Chief Scientist, and a Director, for the Center for Acquisition and Systems Analysis, a division at The MITRE Corporation. He is internationally recognized and widely published in cost analysis, cost uncertainty analysis, and in the application of advanced decision analytic methods to problems in engineering systems risk analysis and management. He is an alumnus of the BC Mathematics department.

April 7, 5:00 p.m., Carney
Dr. Amir Aczel (visiting Boston College)
"Mathematics, Physics, and the LHC: the Largest Machine Ever Built"
Abstract: In late February this year, the Large Hadron Collider at CERN, the international physics laboratory in Switzerland, began colliding protons at energy levels not seen since the Big Bang, and will increase these levels over the next few years. The reason for this unprecedented $10 billion effort is the search for new particles, including the mysterious Higgs boson, the so-called "God particle," believed to give all particles in the universe their mass. If the Higgs is found, along with other possible particles, this will be a major triumph not only for physics but also for mathematics: mathematical theories, including Lie groups, underlie much of the foundation that allows physicists to predict the existence of new particles. We will survey this fascinating topic.
#### BC Geometry and Topology Seminar
Martin Bridgeman, Eli Grigsby, Tao Li and Rob Meyerhoff conduct this seminar on the BC Campus.
### 2009-2010
#### BC Number Theory/Representation Theory Seminar
Avner Ash and Jay Pottharst conduct this seminar on the BC Campus.
### 2009-2010
April 8, 4:00 p.m., Carney 309. Andre Reznikov - IAS/Bar-Ilan University "Gelfand pairs and identities for automorphic periods" Abstract: I will discuss how the notion of Gelfand pairs from representation theory leads to various identities for automorphic periods. These include the classical Rankin-Selberg integral, its anisotropic analog, and many other identities. Time permitting, I will discuss some applications towards bounds for L-functions.

April 15, 3:00 p.m., Carney 309. Jens Funke - University of Durham "Spectacle cycles and modular forms of half-integral weight" Abstract: The classical Shintani lift is the adjoint of the Shimura correspondence. It realizes periods of even weight cusp forms as Fourier coefficients of a half-integral weight modular form. In this talk we revisit the Shintani lift from a (co)homological perspective. In particular, we extend the lift to Eisenstein series and give a geometric interpretation of this extension. This is joint work with John Millson.
#### BC Colloquium Series
Martin Bridgeman, Rob Gross, Tao Li and Jay Pottharst conduct this seminar on the BC Campus.
### 2009-2010
October 1, 4:00 p.m., McGuinn 521 Lounge. Rob Kirby - University of California, Berkeley "Broken fibrations for 4-manifolds" Abstract: I will discuss the existence and uniqueness theorems for broken fibrations of arbitrary orientable, smooth 4-manifolds over either S^2, B^2, or S^1 x I. Existence always holds, and there is a nice set of moves relating different broken fibrations for a given 4-manifold.

November 3, 3:00 p.m., Carney 309. Sonal Jain - New York University "The minimum canonical height on an elliptic surface" Abstract: TBA

April 27, 4:00 p.m., Carney 309. Cameron Gordon - University of Texas at Austin "The unknotting number of a knot" Abstract: The unknotting number u(K) of a knot K is the minimal number of times you must allow K to pass through itself in order to unknot it. Although this is one of the oldest and most natural knot invariants, it remains mysterious. We will survey known results on u(K), including relations with 4-dimensional smooth topology, and describe some joint work with John Luecke on algebraic knots with u(K)=1. We will also discuss several open questions.
#### Mathematics Education Seminar Series
This monthly seminar series in Mathematics Education is supported by Teachers for a New Era (TNE), and is organized by Profs. Solomon Friedberg (Mathematics) and Lillie Albert (Teacher Education).
### 2009/2010
October 8, McGuinn 334. Dr. Andrew Chen, President, EduTron Corporation: "Cross Cultural Lore! A session on mathematical achievement in the U.S. and abroad"
October 29, McGuinn 334. Prof. Deborah Hughes Hallett, University of Arizona: "Literacy: Teaching the Role of Numbers and Numeracy"
December 3, McGuinn 334. Prof. Paul Sally, University of Chicago: "Algebra Initiative in the Chicago Public Schools"
February 4, Canceled. Dr. Liping Ma, Author, Knowing and Teaching Elementary Mathematics: "The learning of fractions: How can it be built on the learning of whole numbers?"
February 25, McGuinn 334. Prof. Alan Schoenfeld, University of California at Berkeley: "How We Think"
April 15, McGuinn 521. Prof. Yeap Ban Har, National Institute of Education, Nanyang Technological University, Singapore: "Mathematics Teaching and Learning in Singapore Schools"
April 27, McGuinn 334. Prof. Sybilla Beckmann, University of Georgia: "What Is Worth Focusing on in Math Courses for Elementary Teachers, and Why?"
#### BC-MIT Number Theory Seminar
The organizers are Sol Friedberg and Ben Howard at BC, and Ben Brubaker and Kiran Kedlaya at MIT. Further details
### 2008/2009
September 23 MIT 3:00 p.m. Wee Teck Gan - UC San Diego 4:30 p.m. Daniel Bump - Stanford October 28 BC 3:00 p.m. Steve Kudla - Toronto 4:30 p.m. Chris Skinner - Princeton November 18 BC 3:00 p.m. Henri Darmon - McGill 4:30 p.m. Peter Sarnak - Princeton February 17 MIT 3:00 p.m. Brooke Feigon - Toronto 4:30 p.m. Kartik Prasanna - Maryland March 17 BC 3:00 p.m. Dorian Goldfeld - Columbia 4:30 p.m. Brian Conrad - Stanford April 28 MIT Matt Papanikolas Texas A&M Dinakar Ramakrishnan Caltech
### 2008-2009
Ravi Vakil (Stanford University)
Professor Ravi Vakil will be speaking this spring as the department's second annual Boston College Distinguished Lecturer in Mathematics. Prof. Vakil is a renowned algebraic geometer who has received the Presidential Early Career Award for Scientists and Engineers, the Andre-Aisenstadt Prize from the CRM in Montreal, an American Mathematical Society Centennial Fellowship, a Frederick E. Terman fellowship, and an Alfred P. Sloan Research Fellowship. He will be the Mathematical Association of America's 2009 Hedrick Lecturer. He also received Stanford's 2004-05 Dean's Award for Distinguished Teaching and the Brown Faculty Fellowship.
March 31 "Hidden polynomials in geometry" Abstract: A number of recent developments in geometry have hinged on unexpected polynomials appearing in geometric phenomena. Interpreted appropriately, this behavior is in retrospect visible in the doodling I did as a child. I'll use this doodle as a jumping-off point. It will lead inevitably to ideas from topology, geometry, a Hilbert problem, and the work of several Fields Medalists. Gasson Hall, Room 202 at 3:00 p.m. This talk is intended for all who are interested in mathematics. March 31 "Murphy's Law in algebraic geometry: Badly-behaved moduli spaces" Abstract: We consider the question: "How bad can the deformation space of an object be?" (Alternatively: "What singularities can appear on a moduli space?") The answer seems to be: "Unless there is some a priori reason otherwise, the deformation space can be arbitrarily bad." I show this for a number of important moduli spaces, parametrizing objects such as smooth curves in projective space, smooth projective surfaces, and plane curves with nodes and cusps. This justifies Mumford's philosophy that even moduli spaces of well-behaved objects should be arbitrarily bad unless there is an a priori reason otherwise. This is good news, not bad: we now have a complete constructive understanding of the singularities of these fundamental spaces. I will begin by telling you what "moduli spaces" and "deformation spaces" are, and then explain our question and its answer. Gasson Hall, Room 202 at 4:30 p.m. This talk is intended for a broad but mathematically sophisticated audience. April 1 "A geometric Littlewood-Richardson rule" Abstract: I will describe an explicit geometric Littlewood-Richardson rule, interpreted as deforming the intersection of two Schubert varieties so that they break into Schubert varieties. There are no restrictions on the base field, and all multiplicities arising are 1; this is important for applications. This rule should be seen as a generalization of Pieri's rule to arbitrary Schubert classes, by way of explicit homotopies. It has a straightforward bijection to other Littlewood-Richardson rules, such as tableaux and Knutson and Tao's puzzles. This gives the first geometric proof and interpretation of the Littlewood-Richardson rule. It has a host of geometric consequences, which I may describe, time permitting. The rule also has an interpretation in K-theory, suggested by Buch, which gives an extension of puzzles to K-theory, and in fact a Littlewood-Richardson rule in equivariant K-theory (ongoing work with Knutson). The rule suggests a natural approach to the open question of finding a Littlewood-Richardson rule for the flag variety, leading to a conjecture, shown to be true up to dimension 5. Finally, the rule suggests approaches to similar open problems, such as Littlewood-Richardson rules for the symplectic Grassmannian and two-step flag varieties. McElroy Conference Room at 3:00 p.m. This talk is intended for a mathematically sophisticated audience.
#### BC Math Society/Mathematics Department Undergraduate Lecture
2008-2009 Thomas Banchoff (Brown University) February 25, 2009 "The Four-Dimensional Geometry and Theology of Salvador Dali" Co-sponsored by the Department of Mathematics, the Boston College Mathematics Society, the Department of Fine Arts, the Department of Theology, and the Jesuit Institute Abstract: Throughout his career, Salvador Dali was fascinated by mathematics and science, and he incorporated many geometric ideas and symbols into his paintings, especially his religious paintings. Where did he get his ideas and how did he carry them out? This presentation will feature images and stories from ten years of conversations with Dali, about the Fourth Dimension, impossible perspectives, catastrophe theory, art history and medieval philosophy. The talk will be illustrated by computer-generated images and animations, and is intended for a broad audience.
#### BC Geometry and Topology Seminar
Martin Bridgeman, and Rob Meyerhoff conduct this seminar on the BC Campus.
September 23. Yi Ni (AIM and MIT) will speak in 251 Carney Hall at 2:00 p.m. "Dehn surgeries that reduce the Thurston norm of a fibered manifold" Abstract: Suppose K is a knot on the fiber of a surface bundle over the circle. If we do surgery on K with slope specified by the fiber, then the Thurston norm of the homology class of the fiber will decrease in the new manifold. We will show that the converse is also true. Namely, if a Dehn surgery on a winding number 0 knot in a fibered manifold reduces the Thurston norm of the homology class of the fiber, then the knot must lie on the fiber and the slope is the natural one.

September 25. Scott Taylor (Colby College) will speak in 251 Carney Hall at 2:00 p.m. "Adding a 2-handle to a sutured manifold" Abstract: Sutured manifold theory has long been used to study Dehn surgery on knots in 3-manifolds. It has not often been used to study 2-handle addition, a natural generalization of Dehn surgery. If a component F of a simple 3-manifold N has genus two, sutured manifold theory is particularly effective for studying degenerating separating curves on F. (A curve is degenerating if attaching a 2-handle to it creates a non-simple 3-manifold N[a].) For example, suppose that the boundary of N consists of tori and the genus two surface F containing essential separating curves a and b. Then if N[a] is reducible and N[b] is non-simple, a and b are isotopic on F. Similar sutured manifold theory techniques are useful for studying knots and links obtained by "boring" a split link or unknot. Such a perspective allows a theorem to be proved which is a generalization of two seemingly unrelated theorems. The first theorem generalized is the superadditivity of genus under band connect sum (Gabai, Scharlemann) and the second is the fact that a tunnel for a tunnel number one knot or link can be slid and isotoped to be disjoint from a minimal genus Seifert surface (Scharlemann, Thompson). As time permits, I will discuss other applications of sutured manifold theory to questions about bored split links and unknots.

December 8. Yoav Moriah (Technion and Yale University) will speak in 251 Carney Hall at 3:00 p.m. "Horizontal Dehn surgery and distance of Heegaard splittings" Abstract: Given a 3-manifold M with a Heegaard surface S of genus g at least 2 and an essential simple closed curve c in S, we can obtain a new Heegaard splitting by changing the gluing of the two handlebodies/compression bodies by a Dehn twist to some power m along c. If c is "sufficiently complicated", measured a priori by a parameter n, then there is at most a single value of m so that the obtained Heegaard splitting is of smaller distance than n-1. Furthermore, the curves c with this property are "generic" in the set of essential simple closed curves in S. (Joint with M. Lustig)

March 17. Bill Menasco (SUNY at Buffalo) will speak in 251 Carney Hall at 1:00 p.m. "Legendrian and Lorenz knots"

April 15. Elmas Irmak (Bowling Green State University) will speak in 251 Carney Hall at 1:00 p.m. "Mapping Class Groups and Complexes of Arcs on Surfaces" Abstract: I will talk about a joint work with J.D. McCarthy: Each injective simplicial map of the arc complex of a compact, connected, orientable surface with nonempty boundary is induced by a homeomorphism of the surface, and the group of automorphisms of the arc complex is naturally isomorphic to the quotient of the extended mapping class group of the surface by its center. I will also talk about my similar results on nonorientable surfaces.
#### BC Number Theory/Representation Theory Seminar
Jay Pottharst and Mark Reeder conduct this seminar on the BC Campus.
### 2008-2009
September 18 Mark Reeder (Boston College) will speak in 309 Carney Hall at 3:15 p.m. October 23 Benjamin Howard (Boston College) will speak in 309 Carney Hall at 3:15 p.m. November 6 Avner Ash (Boston College) will speak in 309 Carney Hall at 3:15 p.m. November 13 Jay Pottharst (Boston College) will speak in 309 Carney Hall at 3:15 p.m. April 23 Riad Masri (University of Wisconsin) will speak in 309 Carney Hall at 4:00 p.m. "Equidistribution of Heegner points and integer partitions" Abstract: A classical problem in number theory concerns the asymptotic growth of the function p(n) which counts the number of partitions of a positive integer n. This problem led Hardy and Ramanujan to invent what is now known as the "circle method". In this talk, I will explain how the equidistribution of Galois orbits of Heegner points on X_0(6) can be used to obtain a new asymptotic formula for p(n). The resulting error term sharpens those obtained by Rademacher and D.H. Lehmer in the 1930s. This is joint with Amanda Folsom.
#### BC Colloquium Series
Martin Bridgeman, Rob Gross, Ben Howard and Jay Pottharst conduct this seminar on the BC Campus.
### 2008-2009
October 2. Dan Margalit (Tufts University) will speak in 309 Carney Hall. Refreshments at 4:00 p.m., followed by a talk at 4:15. "Homologies of mapping class groups" Abstract: The mapping class group is the group of topological symmetries of a surface. By understanding the homology and cohomology of the mapping class group and its subgroups, we gain insight into its finiteness properties (finite generation, finite presentability, etc.) and we can also classify topological invariants of surface bundles. In this talk, we will introduce basic notions about the mapping class group and explain how to compute its low dimensional homology groups. Then, we will explain some recent work with Mladen Bestvina and Kai-Uwe Bux concerning the homology of the Torelli subgroup of the mapping class group, the group of elements acting trivially on the homology of the surface. In particular, we answer a question of Mess by proving that the cohomological dimension of the Torelli group for a genus g surface is 3g-5.

November 4. Eriko Hironaka (Florida State University) will speak in 309 Carney Hall at 1:00 p.m. "Families of mapping classes with small dilatation" Abstract: R. Penner showed that the logarithm of the least dilatation of mapping classes on an oriented genus g surface is asymptotic to 1/g. In joint work with E. Kin, we construct a sequence of mapping classes with small dilatations improving on explicit bounds found previously by Penner and Bauer. Our examples arise as mapping classes associated to labeled graphs. For such mapping classes, we discuss the relation between dilatation and the spectral radius of the graph, and show how dilatation is affected by edge subdivision.

December 4. Richard Kenyon (Brown University) will speak in 309 Carney Hall. Refreshments at 4:00 p.m., followed by a talk at 4:15. "Dimers and Harnack curves" Abstract: A polynomial P(z,w) with real coefficients is said to be Harnack if the real components of P(z,w)=0 satisfy a certain simple geometric property. These polynomials are somewhat analogous to one-variable polynomials with only real, negative roots. We describe a surprising parameterization of the space of all Harnack polynomials, coming from the dimer model of statistical mechanics.

March 25. Bill Goldbloom Bloch (Wheaton College) will speak in 309 Carney Hall. Talk at 4:30. "Navigating the Mathematical and Literary Labyrinths in Jorge Luis Borges' story "The Library of Babel"" Abstract: Jorge Luis Borges, the poet, essayist, librarian, and master crafter of short stories, was arguably the most influential writer in Spanish in the 20th century. An autodidact, he read and reread works by (among others) Bertrand Russell on the foundations and philosophy of mathematics, and these kinds of considerations explicitly directed the arcs of many of his short stories. "The Library of Babel" is perhaps his most famous story, and in its scant seven pages, he deploys simple combinatorial ideas to help create a miasmic atmosphere in the service of raising issues about the meaningfulness of our existence. The story also evokes ideas from three-dimensional manifold theory, real analysis, and graph theory; and, moreover, it is open to an interpretation from the theory of computation. This talk will touch on a number of these themes and along the way illustrate how a mathematician can become (to everyone's surprise) a literary theorist.

April 7. Teruyoshi Yoshida (Harvard University and Cambridge University) will speak in 309 Carney Hall. Talk at 3:00, followed by refreshments at 4:00.
"Arithmetic Geometry related to Local Langlands Correspondence" Abstract: To be announced
### 2013-2014 Seminar Schedule
Tuesday, March 11, 2014: Diane J. Briars, Ph.D.
Campion Hall, Room 139, 3:30-4:30 p.m.
Diane J. Briars, Ph.D., a mathematics education consultant, is president-elect of the National Council of Teachers of Mathematics and will serve two years (2014 and 2015) as president beginning in April 2014.
Title: Effective Teaching Practices to Ensure All Students Are “Common-Core Ready”
Abstract: What are the most effective teaching practices to ensure that all students build the conceptual understanding, procedural fluency, and proficiency in the Standards for Mathematical Practice called for in the Common Core State Standards for Mathematics? This talk describes eight research-based Mathematical Teaching Practices, along with the conditions, structures and policies needed to support them to turn the opportunity afforded by CCSSM into reality in every classroom, school and district.
Tuesday, April 29, 2014: Professor Marta Civil
Higgins Hall Auditorium, Room 300, 5:00 p.m.
Marta Civil is a Distinguished Professor of Mathematics Education at The University of North Carolina at Chapel Hill.
Title: Language, Culture and Mathematics: English Language Learners in the Mathematics Classroom
October 10, 2013: Prof. Jim Lewis (Nebraska). McGuinn 521
Title: Teaching Teachers Mathematics
Abstract: What mathematics should teachers know and how should they come to know that mathematics? The Mathematical Education of Teachers II argues that the mathematical knowledge needed for teaching differs from that of other professions and that teachers need mathematics courses that develop a solid understanding of the mathematics they will teach. The publication also urges greater involvement of mathematicians in teacher education. We will discuss the MET2 recommendations and report on efforts at the University of Nebraska-Lincoln to create mathematics courses for teachers and to work in partnership with mathematics educators to educate mathematics teachers able to educate K-12 students who graduate college and career ready.
Biography: W. James “Jim” Lewis is Aaron Douglas professor of mathematics and Director of the Center for Science, Mathematics, and Computer Education at the University of Nebraska-Lincoln. He was the Carnegie Foundation’s 2010 Nebraska Professor of the Year, and received the UNL Chancellor’s Commission on the Status of Women Award for his support of opportunities for women in the mathematical sciences.
December 5, 2013 in Campion 139
Dr. Jason Zimba (Student Achievement Partners).
Title: The Common Core State Standards for Mathematics
Description: I will speak about the design of the standards, their implications for mathematics education, and the state of implementation efforts—aiming to leave abundant time for questions.
# Projection method (fluid dynamics)
The projection method is an effective means of numerically solving time-dependent incompressible fluid-flow problems. It was originally introduced by Alexandre Chorin in 1967 [1] [2] as an efficient means of solving the incompressible Navier-Stokes equations. The key advantage of the projection method is that the computations of the velocity and the pressure fields are decoupled.
## The algorithm
The algorithm of the projection method is based on the Helmholtz decomposition (sometimes called the Helmholtz-Hodge decomposition) of any vector field into a solenoidal part and an irrotational part. Typically, the algorithm consists of two stages. In the first stage, an intermediate velocity that does not satisfy the incompressibility constraint is computed at each time step. In the second, the pressure is used to project the intermediate velocity onto the space of divergence-free velocity fields, giving the next update of the velocity and pressure.
## Helmholtz–Hodge decomposition
The theoretical background of projection-type methods is the decomposition theorem of Ladyzhenskaya, sometimes referred to as the Helmholtz–Hodge decomposition or simply as the Hodge decomposition. It states that a vector field $\mathbf{u}$ defined on a simply connected domain can be uniquely decomposed into a divergence-free (solenoidal) part $\mathbf{u}_{\text{sol}}$ and an irrotational part $\mathbf{u}_{\text{irrot}}$.[3]
Thus,
$\mathbf{u} = \mathbf{u}_{\text{sol}} + \mathbf{u}_{\text{irrot}} = \mathbf{u}_{\text{sol}} + \nabla \phi$
where $\,\phi$ is some scalar function; the identity $\nabla \times \nabla \phi = 0$ guarantees that $\nabla \phi$ is indeed irrotational. Taking the divergence of this equation yields
$\nabla\cdot \mathbf{u} = \nabla^2 \phi \qquad ( \text{since,} \; \nabla\cdot \mathbf{u}_{\text{sol}} = 0 )$
This is a Poisson equation for the scalar function $\,\phi$. If the vector field $\mathbf{u}$ is known, the above equation can be solved for the scalar function $\,\phi$ and the divergence-free part of $\mathbf{u}$ can be extracted using the relation
$\mathbf{u}_{\text{sol}} = \mathbf{u} - \nabla \phi$
This is the essence of solenoidal projection method for solving incompressible Navier–Stokes equations.
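As an illustration, here is a minimal numerical sketch of this projection on a two-dimensional periodic grid, using an FFT-based Poisson solve. The grid setup, function names, and the use of NumPy are assumptions made for the example, not part of the method's definition.

    import numpy as np

    def project_divergence_free(u, v, dx):
        """Return the divergence-free (solenoidal) part of the field (u, v)."""
        n = u.shape[0]
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)          # angular wavenumbers
        kx, ky = np.meshgrid(k, k, indexing="ij")
        k2 = kx**2 + ky**2
        k2[0, 0] = 1.0                                     # avoid 0/0 for the mean mode

        u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
        div_hat = 1j * kx * u_hat + 1j * ky * v_hat        # divergence of u in Fourier space
        phi_hat = -div_hat / k2                            # solve  laplacian(phi) = div(u)
        phi_hat[0, 0] = 0.0

        # subtract grad(phi) to extract the solenoidal part
        u_sol = np.real(np.fft.ifft2(u_hat - 1j * kx * phi_hat))
        v_sol = np.real(np.fft.ifft2(v_hat - 1j * ky * phi_hat))
        return u_sol, v_sol

On a periodic domain the Fourier transform diagonalizes the Laplacian, so the Poisson solve reduces to a pointwise division; on bounded domains one would instead use an iterative or direct Poisson solver with the boundary conditions discussed below.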
## Chorin's projection method
The incompressible Navier-Stokes equation (differential form of momentum equation) may be written as
$\frac {\partial \mathbf{u}} {\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = - \frac {1}{\rho} \nabla p + \nu \nabla^2 \mathbf{u}$
In Chorin's original version of the projection method, one first computes an intermediate velocity, $\mathbf{u}^*$, explicitly using the momentum equation by ignoring the pressure gradient term:
$\quad (1) \qquad \frac {\mathbf{u}^* - \mathbf{u}^n} {\Delta t} = -(\mathbf{u}^n \cdot\nabla) \mathbf{u}^n + \nu \nabla^2 \mathbf{u}^n$
where $\mathbf{u}^n$ is the velocity at $\,n$th time step. In the second half of the algorithm, the projection step, we correct the intermediate velocity to obtain the final solution of the time step $\mathbf{u}^{n+1}$:
$\quad (2) \qquad \mathbf{u}^{n+1} = \mathbf{u}^* - \frac {\Delta t}{\rho} \, \nabla p ^{n+1}$
One can rewrite this equation in the form of a time step as
$\frac {\mathbf{u}^{n+1} - \mathbf{u}^*} {\Delta t} = - \frac {1}{\rho} \, \nabla p ^{n+1}$
to make clear that the algorithm is really just an operator splitting approach in which one considers the viscous forces (in the first half step) and the pressure forces (in the second half step) separately.
Computing the right-hand side of the second half step requires knowledge of the pressure, $\,p$, at the $\,(n+1)$-th time level. This is obtained by taking the divergence and requiring that $\nabla\cdot \mathbf{u}^{n+1} = 0$, which is the divergence (continuity) condition, thereby deriving the following Poisson equation for $\,p^{n+1}$,
$\nabla ^2 p^{n+1} = \frac {\rho} {\Delta t} \, \nabla\cdot \mathbf{u}^*$
It is instructive to note that the equation written as
$\mathbf{u}^* = \mathbf{u}^{n+1} + \frac {\Delta t}{\rho} \, \nabla p ^{n+1}$
is the standard Hodge decomposition if the boundary condition for $\,p$ on the domain boundary $\partial \Omega$ is $\nabla p^{n+1}\cdot \mathbf{n} = 0$. In practice, this condition is responsible for the errors this method shows close to the boundary of the domain, since the real pressure (i.e., the pressure in the exact solution of the Navier-Stokes equations) does not satisfy such boundary conditions.
For the explicit method, the boundary condition for $\mathbf{u}^*$ in equation (1) is natural. If $\mathbf{u}\cdot \mathbf{n} = 0$ is prescribed on $\partial \Omega$, then the space of divergence-free vector fields will be orthogonal to the space of irrotational vector fields, and from equation (2) one has
$\frac {\partial p^{n+1}} {\partial n} = 0 \qquad \text{on} \quad \partial \Omega$
The explicit treatment of the boundary condition may be circumvented by using a staggered grid and requiring that $\nabla\cdot \mathbf{u}^{n+1}$ vanish at the pressure nodes that are adjacent to the boundaries.
A distinguishing feature of Chorin's projection method is that the velocity field is forced to satisfy a discrete continuity constraint at the end of each time step.
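Putting the two stages together, a one-step sketch of Chorin's method on the same periodic grid might look as follows. It reuses project_divergence_free from the earlier sketch; the centred-difference stencils and parameter names are illustrative assumptions.

    import numpy as np

    def chorin_step(u, v, dt, dx, nu):
        """Advance (u, v) by one time step of Chorin's projection method."""
        def ddx(f):   # centred x-derivative on a periodic grid
            return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx)
        def ddy(f):   # centred y-derivative on a periodic grid
            return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dx)
        def lap(f):   # 5-point Laplacian on a periodic grid
            return (np.roll(f, -1, 0) + np.roll(f, 1, 0) +
                    np.roll(f, -1, 1) + np.roll(f, 1, 1) - 4 * f) / dx**2

        # stage 1: intermediate velocity u*, ignoring the pressure gradient (eq. 1)
        u_star = u + dt * (-(u * ddx(u) + v * ddy(u)) + nu * lap(u))
        v_star = v + dt * (-(u * ddx(v) + v * ddy(v)) + nu * lap(v))

        # stage 2: projection step (eq. 2); the Poisson solve for the pressure
        # is hidden inside the Helmholtz projection
        return project_divergence_free(u_star, v_star, dx)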
## General method
Typically the projection method operates as a two-stage fractional step scheme, a method which uses multiple calculation steps for each numerical time-step. In many projection algorithms, the steps are split as follows:
1. First the system is progressed in time to a mid-time-step position, solving the above transport equations for mass and momentum using a suitable advection method. This is denoted the predictor step.
2. At this point an initial projection may be implemented such that the mid-time-step velocity field is enforced as divergence free.
3. The corrector part of the algorithm is then progressed. It uses the time-centred estimates of the velocity, density, etc. to form the final time-step state.
4. A final projection is then applied to enforce the divergence restraint on the velocity field. The system has now been fully updated to the new time.
## References
1. ^ Chorin, A. J. (1967), "The numerical solution of the Navier-Stokes equations for an incompressible fluid" (PDF), Bull. Am. Math. Soc. 73: 928–931
2. ^ Chorin, A. J. (1968), "Numerical Solution of the Navier-Stokes Equations", Math. Comp. 22: 745–762, doi:10.1090/s0025-5718-1968-0242392-2
3. ^ Chorin, A. J.; J. E. Marsden (1993). A Mathematical Introduction to Fluid Mechanics (3rd ed.). Springer-Verlag. ISBN 0-387-97918-2.
# Math Help - Product Rule Question
1. ## Product Rule Question
Utilize the product rule and sum rule to answer the following question...
How many subsets of a set with 100 elements have more than one element?
2. Originally Posted by fusion1455
Utilize the product rule and sum rule to answer the following question...
How many subsets of a set with 100 elements have more than one element?
a finite set of cardinality $n$ has $2^n$ subsets. $n$ of these subsets contain only one element, and 1 of these subsets is $\emptyset$ (the empty set). Subtracting these gives $2^n - n - 1$ subsets with more than one element; for $n = 100$ that is $2^{100} - 101$.
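For a quick sanity check of this counting argument on a small set (the script and its variable names are just an illustration):

    from itertools import combinations

    n = 5
    # count subsets of {0, ..., n-1} with more than one element by brute force
    count = sum(1 for k in range(2, n + 1) for _ in combinations(range(n), k))
    assert count == 2**n - n - 1     # matches the formula from the answer above
    print(count)                     # 26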
# When using VBA to open a CSV file directly from Internet Explorer, I can't then interact with the file
Christian T (published 2018-02-14 12:56:09Z):
So I have written some code to download analytic data from Twitter by simulating a button click. It's not pretty code but it's all I can find to work for now. I am successfully managing to click the file download, after which a 'Frame Notification Bar' appears with the Open/Save options. I am successfully clicking Open twice; however, this is where I run into problems. The problem is that I then want to interact with the data in the CSV file which I have just chosen to open, but the CSV file doesn't come into existence until after the code finishes running. I know there must be a simple solution to this but I just don't know what to search for. I have tried to play with Wait and DoEvents to see if that helps, but no luck so far. Here is my code:

    Private Sub CommandButton1_Click()
        UserForm1.Hide
        Dim appIE As Object
        Set appIE = CreateObject("internetexplorer.application")
        With appIE
            .Navigate "https://analytics.twitter.com/user/QinetiQ/tweets"
            .Visible = True
        End With
        Do While appIE.Busy
            DoEvents
        Loop
        Application.Wait (Now + TimeValue("0:00:04"))
        Set HTMLDoc = appIE.document
        Set btn = HTMLDoc.getElementsByClassName("btn btn-default ladda-button")(0)
        btn.Click
        Application.Wait (Now + TimeValue("0:00:07"))
        Application.SendKeys "%{S}"
        Dim o As IUIAutomation
        Dim e As IUIAutomationElement
        Set o = New CUIAutomation
        Dim h As Long
        h = appIE.Hwnd
        h = FindWindowEx(h, 0, "Frame Notification Bar", vbNullString)
        If h = 0 Then Exit Sub
        Set e = o.ElementFromHandle(ByVal h)
        Dim iCnd As IUIAutomationCondition
        Set iCnd = o.CreatePropertyCondition(UIA_NamePropertyId, "Open")
        Dim Button As IUIAutomationElement
        Application.Wait (Now + TimeValue("0:00:03"))
        SendKeys "%(o)"
        Dim wb As Workbook
        DoEvents
        Application.Wait (Now + TimeValue("0:00:15"))
        Set wb = GetWB
        Dim ws As Worksheet
        Set ws = wb.ActiveSheet
    End Sub

    Function GetWB() As Workbook
        Dim wb As Workbook
        wbName = "tweet"
        For Each wb In Application.Workbooks
            If wb.Name Like wbName & "*" Then
                Set GetWB = wb
                MsgBox ("Found it")
                Exit Function
            End If
        Next wb
        MsgBox ("failed to find worksheet")
    End Function

I know I have used some really bad techniques and apologies for that. Please can anyone help, thank you.
Chronocidal (replied 2018-02-14 13:36:28Z):
You can split the macro in two by using Application.OnTime: this will let you finish the macro (so that the CSV file can open) and then have a new macro scheduled to start a couple of seconds later:

        'This is the end of CommandButton1_Click
        Application.Wait (Now + TimeValue("0:00:03"))
        SendKeys "%(o)"
        Dim wb As Workbook
        DoEvents
        Application.OnTime Now() + TimeSerial(0, 0, 15), "ContinueDownloadMacro"
        'In 15 seconds the "ContinueDownloadMacro" Sub will start
    End Sub

    Public Sub ContinueDownloadMacro()
        Dim wb As Workbook, ws As Worksheet
        Set wb = GetWB
        Set ws = wb.ActiveSheet
    End Sub
# Quantization error and quantization noise

Quantization error arises because an ADC cannot resolve an analogue signal more finely than the nearest digital step. The smallest resolvable voltage range is called the quantum $Q$ and is equivalent to the Least Significant Bit (LSB); the relation $V_{ref} = 2^N Q$ comes from the fact that the full-scale range $V_{ref}$ is divided among $2^N$ steps, each with quantum $Q$.

Quantizing a sequence of numbers produces a sequence of quantization errors, which is sometimes modeled as an additive random signal called quantization noise because of its stochastic behavior. The more levels a quantizer uses, the lower its quantization noise power. In an ideal analog-to-digital converter, where the quantization error is uniformly distributed between -1/2 LSB and +1/2 LSB and the signal has a uniform distribution covering all quantization levels, the signal-to-quantization-noise ratio (SQNR) improves by approximately 6 dB for each extra bit used in quantization; this is a well-known figure of merit. The extra 1.761 dB in the familiar formula for a full-scale sine wave occurs only because the signal is a sine instead of a triangle or sawtooth. For other source pdfs and other quantizer designs, the SQNR may be somewhat different from that predicted by 6 dB/bit, depending on the type of pdf, the type of source, and the type of quantizer.

The noise is non-linear and signal-dependent. For a sine wave, quantization error appears as extra harmonics in the signal; the error can at times be precisely zero, which happens when the ADC representation is at the precise level of the signal. These distortions are created after the anti-aliasing filter, and if they lie above half the sample rate they will alias back into the band of interest.

The analysis of quantization involves studying the amount of data (typically measured in digits or bits or bit rate) that is used to represent the output of the quantizer, and studying the distortion it introduces. All the inputs $x$ that fall in a given interval range $I_k$ are associated with the same quantization index $k$, and each interval is represented by a reconstruction value $y_k$ which implements the mapping $x \in I_k \Rightarrow y = y_k$. For example, when $M = 256$ levels, the fixed-length-code bit rate $R$ is 8 bits/symbol. In more elaborate quantization designs, both the forward and inverse quantization stages may be substantially more complex: in some cases a mid-tread uniform quantizer is appropriate while a mid-riser one is not, the dead-zone may be given a different width than the other steps, and the design of a quantizer commonly involves balancing granular distortion against overload distortion. If distortion is measured by mean squared error, it is given by $D = E[(x - Q(x))^2]$.
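A small numerical experiment makes the 6 dB/bit rule concrete. The test setup below (sample count, test frequency, mid-tread rounding that ignores the one-code overflow at the positive full-scale peak) is an illustrative assumption:

    import numpy as np

    def sqnr_db(n_bits, n_samples=100_000):
        """Measured SQNR of a full-scale sine after uniform quantization."""
        t = np.arange(n_samples)
        x = np.sin(2 * np.pi * 0.013 * t)     # full-scale sine in [-1, 1]
        q = 2.0 / 2**n_bits                   # quantum (LSB size)
        xq = np.round(x / q) * q              # round to the nearest level
        noise = x - xq
        return 10 * np.log10(np.mean(x**2) / np.mean(noise**2))

    for n in (8, 12, 16):
        print(n, "bits:", round(sqnr_db(n), 2), "dB;",
              "rule of thumb:", round(6.02 * n + 1.76, 2), "dB")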
A note on the accuracy of symmetric eigenreduction algorithms
K. Veselić
Abstract
We present some experimental results illustrating the fact that on highly ill-conditioned Hermitian matrices the relative accuracy of small eigenvalues computed by QR eigenreduction may drastically depend on the initial permutation of the rows and columns. Mostly there was an "accurate" permutation, but there does not seem to be an easy method to get at it. For banded matrices, like those from structural mechanics, the accurate pre-permutation, if it existed, was mostly non-banded. This is particularly true of tridiagonal matrices, which shows that the tridiagonalization is not the only factor responsible for the inaccuracy of the eigenvalues.
Full Text (PDF) [137 KB]
Key words
LAPACK, QR method, Jacobi method, Hermitian matrices, eigenvalue computation.
Mathematics Subject Classification: 65F15.
# Colored mainfont with fontspec in a textblock yields a PDF error with LuaLaTeX [closed]
When I (1) change the color of the main font with the fontspec package, (2) put some text in a textblock environment and (3) compile the document with LuaLaTeX, the resulting PDF, when opened with Adobe Reader 9, displays the following error message:
The document opens correctly and looks all good. However, this error message is annoying and does not look very professional. When I remove the colour specification in \setmainfont options, the error disappears.
Here is a MWE. Also, notice how the hyphens are kept black, unlike the main body of the text which is correctly painted in red:
\documentclass[]{article}
\usepackage[overlay, absolute]{textpos}
\usepackage{xcolor}
\usepackage[]{fontspec}
\setmainfont[Color = red]{Latin Modern Roman}
\begin{document}
\begin{textblock}{5}(5, 5)
LaTeX is a high-quality typesetting system; it includes features designed for the production of technical and scientific documentation. LaTeX is the de facto standard for the communication and publication of scientific documents.
\end{textblock}
\end{document}
Which results in:
I'm working in TeXstudio 2.6.6 on Ubuntu 14.04 LTS. The compilation is done with the following command: "lualatex -synctex=1 -interaction=nonstopmode %.tex". The compilation ends without any error or warning.
Am I the only one experiencing this issue or am I doing something wrong?
## closed as off-topic by Philipp Gesang, user13907, user31729, Andrew, Werner Apr 20 '15 at 21:21
• This question does not fall within the scope of TeX, LaTeX or related typesetting systems as defined in the help center.
If this question can be reworded to fit the rules in the help center, please edit the question.
• I can not reproduce the error with "LuaTeX, Version beta-0.79.1 (TeX Live 2014) (rev 4971)" and AR 9.5.5. Which version of LuaTeX do you use? But the hyphens are black. – Martin Schröder Apr 14 '15 at 15:04
• This type of color support is a bit "inofficial", it was only added to copy the xetex feature (github.com/lualatex/luaotfload/issues/41). I don't know if it can be fixed, but you should add an issue to the luaotfload issue tracker. – Ulrike Fischer Apr 15 '15 at 9:00
• The issue will be addressed in the upcoming release of Luaotfload. Also, as @Ulrike-Fischer rightfully pointed out, the font-based colorization is little more than a hack for compatibility. Please consider using a proper colorization package instead. – Philipp Gesang Apr 20 '15 at 20:38
• I'm voting to close this question as off-topic because it’s something in between a bug report and a feature request for Luaotfload concerning borderline supported functionality ;) The next release will come with a fix. – Philipp Gesang Apr 20 '15 at 20:41
• @Jean-Sébastien Gosselin leave the question for reference, it’s valid just IMO not appropriate for this site. Regarding colorization perhaps start with the luacolor package. It’s specifically geared toward the Luatex engine so it should be one of the best options. I’d recommend searching around for other suggestions especially by the crowd that actually do use Latex ;) – Philipp Gesang Apr 21 '15 at 4:59
# In the circuit used for measuring resistance (shown below), readings of ammeter and voltmeter are 2 A and 200 V. The ammeter and voltmeter resistances are 0.1 Ω and 2000 Ω respectively. The error in the calculated value of resistance (voltmeter reading / ammeter reading) is-
This question was previously asked in
DSSSB JE EE 2019 Official Paper (Held on 25 Oct 2019)
1. 5%
2. 9%
3. 10%
4. 15%
Option 1 : 5%
## Detailed Solution
Concept:
Ammeter Voltmeter Method:
Connecting the voltmeter across the load is suitable for low resistance measurements.
$$I+I_V=\frac{E(R+R_V)}{R \cdot R_V}$$
The calculated resistance is:
$$\frac{E}{I+I_V}=\frac{R.R_V}{R+R_V}$$
Explanation:
Measured reading of ammeter = 2 A
Measured reading of voltmeter = 200 V
Measured value of resistance = 200/2 = 100 Ω
The current diverted through voltmeter because of voltmeter resistance is
$${I_v} = \frac{{200}}{{2000}} = 0.1\;{\rm{A}}$$
True value of voltage = 200 V
True value of current = 2 - 0.1 = 1.9 A
True value of resistance = 200/1.9 =105.263 Ω
The percentage error in the measurement of resistance is
$$= \frac{{100 - 105.263}}{{105.263}} \times 100 = - 5\%$$
Error calculated = 5 %
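The arithmetic is quick to verify in a few lines (the variable names are mine, not from the original solution):

    V, I = 200.0, 2.0        # voltmeter and ammeter readings
    R_V = 2000.0             # voltmeter resistance, ohms

    R_measured = V / I                    # 100 ohm
    I_R = I - V / R_V                     # current actually in R: 2 - 0.1 = 1.9 A
    R_true = V / I_R                      # ~105.263 ohm
    error_pct = (R_measured - R_true) / R_true * 100
    print(R_measured, round(R_true, 3), round(error_pct, 1))   # 100.0 105.263 -5.0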
# Single Slit Diffraction and Monochromatic light
jones268
Monochromatic light with a wavelength of 419 nm passes through a single slit and falls on a screen 88 cm away. If the distance of the first-order dark band is 0.29 cm from the center of the pattern, what is the width of the slit?
My knowns are as follows:
L= 88 cm (distance from slit to screen)
λ= 419 nm
WCM = 0.58 cm (width of the central max; I assumed it was twice the distance from the center of the pattern to the first-order dark band)
w=? unknown width of the slit
I thought I should use the following equation:
WCM = (2λL)/√(w^2 − λ^2)
I plugged the numbers into the equation and solved for w, but came up with the wrong answer, I'm not quite sure what I'm doing wrong...
dacruick
you should be able to solve for an angle using the distance of the first order band. After that, there will be a formula that relates all of your other components to theta and slit width.
Stonebridge
Shouldn't you be using the formula for the direction to the first minimum
sin θ = nλ/a where a is the slit width?
jones268
Thanks stonebridge for the help, I never thought of using that equation for L. But I'm still coming up short on the right answer. Instead of 0.0127145 cm as my answer, I'm coming up with 0.0165528450 cm. :/ This is so frustrating.
dacruick
Use the formula that Stonebridge just gave you. I got the right answer. Make sure that you calculate your theta using arctan(0.29/88).
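For anyone checking the numbers, a few lines reproduce the expected answer (the names and the small-angle-free form are my own choices):

    import math

    lam = 419e-9      # wavelength, m
    L = 0.88          # slit-to-screen distance, m
    y = 0.29e-2       # first dark band from centre, m

    theta = math.atan(y / L)              # angle to the first minimum
    a = lam / math.sin(theta)             # from  a*sin(theta) = (1)*lambda
    print(round(a * 100, 7), "cm")        # 0.0127145 cm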
# V. V. Sharko
Search this author in Google Scholar
Articles: 5
### Topological equivalence to a projection
Methods Funct. Anal. Topology 21 (2015), no. 1, 3-5
We present a necessary and sufficient condition for a continuous function on a plane to be topologically equivalent to a projection onto one of the coordinates.
### About Kronrod-Reeb graph of a function on a manifold
V. V. Sharko
Methods Funct. Anal. Topology 12 (2006), no. 4, 389-396
We study Kronrod-Reeb graphs of functions with isolated critical points on smooth manifolds. We prove that any finite graph, which satisfies the condition $\Im$ is a Kronrod-Reeb graph for some such function on some manifold. In this connection, monotone functions on graphs are investigated.
### The $L^2$-invariants and flows on manifolds
V. V. Sharko
Methods Funct. Anal. Topology 11 (2005), no. 2, 195-204
### On classification of flows on manifolds. I
Methods Funct. Anal. Topology 2 (1996), no. 2, 51-60
### Bott functions and vector fields
V. V. Sharko
Methods Funct. Anal. Topology 1 (1995), no. 1, 86-92
#### Vol. 12, No. 7, 2019
Sign pattern matrices that allow inertia $\mathbb{S}_{n}$
### Adam H. Berliner, Derek DeBlieck and Deepak Shah
Vol. 12 (2019), No. 7, 1229–1240
##### Abstract
Sign pattern matrices of order $n$ that allow inertias in the set ${\mathbb{S}}_{n}$ are considered. All sign patterns of order 3 (up to equivalence) that allow ${\mathbb{S}}_{3}$ are classified and organized according to their associated directed graphs. Furthermore, a minimal set of such matrices is found. Then, given a pattern of order $n$ that allows ${\mathbb{S}}_{n}$, a construction is given that generates families of irreducible sign patterns of order $n+1$ that allow ${\mathbb{S}}_{n+1}$.
##### Keywords
sign pattern, zero-nonzero pattern, inertia, digraph, Jacobian
##### Mathematical Subject Classification 2010
Primary: 15B35, 15A18, 05C50
Secondary: 05C20
# Homework Help: A question about limit of a continuous function
1. May 21, 2012
### mike1988
I am trying to solve a question and I need to justify a line in which |lim(x-->0)(f(x))|≤lim(x-->0)|f(x)| where f is a continuous function.
Any help?
2. May 21, 2012
### micromass
The absolute value is a continuous function. That is:
$$|\ |:\mathbb{R}\rightarrow \mathbb{R}:x\rightarrow |x|$$
is continuous. Does that help?? What do you know about continuity and limits?
3. May 21, 2012
### Ray Vickson
Show your work. Where are you stuck?
RGV
4. May 21, 2012
### mike1988
Actually, I figured this out. Since |·| is continuous, |lim(x-->0) f(x)| = lim(x-->0) |f(x)|, which is immediate from one of the theorems in my book.
Thanks though!
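Spelling out the step in the thread's notation, the equality being used is

$$\Big|\lim_{x\rightarrow 0} f(x)\Big| = \lim_{x\rightarrow 0} |f(x)|,$$

which holds because $g(x)=|x|$ is continuous and, for a continuous $g$, $\lim_{x\rightarrow 0} g(f(x)) = g\big(\lim_{x\rightarrow 0} f(x)\big)$ whenever $\lim_{x\rightarrow 0} f(x)$ exists. Equality in particular gives the inequality $\leq$ asked about in the original post.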
## Abstract and Applied Analysis
### Discussion on Generalized ($\alpha\psi$, $\beta\phi$)-Contractive Mappings via Generalized Altering Distance Function and Related Fixed Point Theorems
#### Abstract
We extend the notion of an ($\alpha\psi$, $\beta\phi$)-contractive mapping, a very recent concept of Berzig and Karapinar. This allows us to consider contractive conditions that generalize a wide range of nonexpansive mappings in the setting of metric spaces provided with binary relations that are not necessarily partial orders or preorders. Thus, using this kind of contractive mapping, we show some related fixed point theorems that improve some well-known recent results and can be applied in a variety of contexts.
#### Article information
Source
Abstr. Appl. Anal., Volume 2014 (2014), Article ID 259768, 12 pages.
Dates
First available in Project Euclid: 2 October 2014
https://projecteuclid.org/euclid.aaa/1412273187
Digital Object Identifier
doi:10.1155/2014/259768
Mathematical Reviews number (MathSciNet)
MR3176729
Zentralblatt MATH identifier
07022026
#### Citation
Berzig, Maher; Karapınar, Erdal; Roldán-López-de-Hierro, Antonio-Francisco. Discussion on Generalized ($\alpha\psi$, $\beta\phi$)-Contractive Mappings via Generalized Altering Distance Function and Related Fixed Point Theorems. Abstr. Appl. Anal. 2014 (2014), Article ID 259768, 12 pages. doi:10.1155/2014/259768. https://projecteuclid.org/euclid.aaa/1412273187
#### References
• A. C. M. Ran and M. C. B. Reurings, “A fixed point theorem in partially ordered sets and some applications to matrix equations,” Proceedings of the American Mathematical Society, vol. 132, no. 5, pp. 1435–1443, 2004.
• J. J. Nieto and R. Rodríguez-López, “Contractive mapping theorems in partially ordered sets and applications to ordinary differential equations,” Order, vol. 22, no. 3, pp. 223–239, 2005.
• T. G. Bhaskar and V. Lakshmikantham, “Fixed point theorems in partially ordered metric spaces and applications,” Nonlinear Analysis: Theory, Methods & Applications, vol. 65, no. 7, pp. 1379–1393, 2006.
• V. Berinde and M. Borcut, “Tripled fixed point theorems for contractive type mappings in partially ordered metric spaces,” Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 15, pp. 4889–4897, 2011.
• M. Borcut and V. Berinde, “Tripled coincidence theorems for contractive type mappings in partially ordered metric spaces,” Applied Mathematics and Computation, vol. 218, no. 10, pp. 5929–5936, 2012.
• E. Karapınar, “Quadruple fixed point theorems for weak $\phi$-contractions,” ISRN Mathematical Analysis, vol. 2011, Article ID 989423, 15 pages, 2011.
• E. Karapinar and V. Berinde, “Quadruple fixed point theorems for nonlinear contractions in partially ordered metric spaces,” Banach Journal of Mathematical Analysis, vol. 6, no. 1, pp. 74–89, 2012.
• M. Berzig and B. Samet, “An extension of coupled fixed point's concept in higher dimension and applications,” Computers & Mathematics with Applications, vol. 63, no. 8, pp. 1319–1334, 2012.
• E. Karapınar, A. Roldán, J. Martínez-Moreno, and C. Roldán, “Meir-Keeler type multidimensional fixed point theorems in partially ordered metric spaces,” Abstract and Applied Analysis, vol. 2013, Article ID 406026, 9 pages, 2013.
• A. Roldán and E. Karapınar, “Some multidimensional fixed point theorems on partially preordered $G^*$-metric spaces under ($\psi$, $\varphi$)-contractivity conditions,” Fixed Point Theory and Applications, vol. 2013, article 158, 2013.
• A. Roldán, J. Martínez-Moreno, and C. Roldán, “Multidimensional fixed point theorems in partially ordered complete metric spaces,” Journal of Mathematical Analysis and Applications, vol. 396, no. 2, pp. 536–545, 2012.
• M. S. Khan, M. Swaleh, and S. Sessa, “Fixed point theorems by altering distances between the points,” Bulletin of the Australian Mathematical Society, vol. 30, no. 1, pp. 1–9, 1984.
• M. A. Alghamdi and E. Karapınar, “$G$-$\beta$-$\psi$-contractive-type mappings and related fixed point theorems,” Journal of Inequalities and Applications, vol. 2013, article 70, 2013.
• M. Berzig and E. Karapınar, “Fixed point results for ($\alpha\psi$, $\beta\varphi$)-contractive mappings for a generalized altering distance,” Fixed Point Theory and Applications, vol. 2013, no. 1, article 205, 2013.
• S. Moradi and A. Farajzadeh, “On the fixed point of $(\psi \text{-}\varphi )$-weak and generalized $(\psi \text{-}\varphi )$-weak contraction mappings,” Applied Mathematics Letters, vol. 25, no. 10, pp. 1257–1262, 2012.
• O. Popescu, “Fixed points for $(\psi ,\phi )$-weak contractions,” Applied Mathematics Letters, vol. 24, no. 1, pp. 1–4, 2011.
• B. Samet and M. Turinici, “Fixed point theorems on a metric space endowed with an arbitrary binary relation and applications,” Communications in Mathematical Analysis, vol. 13, no. 2, pp. 82–97, 2012.
• M. Berzig, “Coincidence and common fixed point results on metric spaces endowed with an arbitrary binary relation and applications,” Journal of Fixed Point Theory and Applications, vol. 12, no. 1-2, pp. 221–238, 2012.
• W. A. Kirk, P. S. Srinivasan, and P. Veeramani, “Fixed points for mappings satisfying cyclical contractive conditions,” Fixed Point Theory, vol. 4, no. 1, pp. 79–89, 2003.
LoggingInterface.hh

    /*===========================================================================*\
     *                              OpenFlipper                                   *
     *     Copyright (C) 2001-2011 by Computer Graphics Group, RWTH Aachen        *
     *                           www.openflipper.org                              *
     *---------------------------------------------------------------------------*
     * This file is part of OpenFlipper.                                          *
     *                                                                            *
     * OpenFlipper is free software: you can redistribute it and/or modify        *
     * it under the terms of the GNU Lesser General Public License as             *
     * published by the Free Software Foundation, either version 3 of             *
     * the License, or (at your option) any later version with the                *
     * following exceptions:                                                      *
     *                                                                            *
     * If other files instantiate templates or use macros or inline functions     *
     * from this file, or you compile this file and link it with other files      *
     * to produce an executable, this file does not by itself cause the           *
     * resulting executable to be covered by the GNU Lesser General Public        *
     * License. This exception does not however invalidate any other reasons      *
     * why the executable file might be covered by the GNU Lesser General         *
     * Public License.                                                            *
     *                                                                            *
     * OpenFlipper is distributed in the hope that it will be useful,             *
     * but WITHOUT ANY WARRANTY; without even the implied warranty of             *
     * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the               *
     * GNU Lesser General Public License for more details.                        *
     *                                                                            *
     * You should have received a copy of the GNU Lesser General Public           *
     * License along with OpenFlipper. If not, see <http://www.gnu.org/licenses/>.*
    \*===========================================================================*/

    #ifndef LOGGINGINTERFACE_HH
    #define LOGGINGINTERFACE_HH

    /** \file LoggingInterface.hh
     *
     * Interface for sending log messages to the log widget.
     */

    /** \page loggingInterfacePage Logging Interface
     *
     * The Logging interface can be used by plugins to print messages in
     * OpenFlipper's log widget below the GL viewer. The log widget can apply
     * filters to visualize only the messages of a certain importance. The
     * widget has three different modes, which can be toggled by pressing
     * Ctrl + L:
     *  - Docked automatic mode: the widget is at the bottom and hides itself
     *    when not used.
     *  - Docked mode: the widget is always visible and docked at the bottom.
     *  - Undocked: the widget is undocked into a separate window.
     *
     * The message importance level can be specified by the Logtype enum. The
     * four levels describe the importance and are represented by different
     * colors in the log widget:
     *  - Out:    simple output messages, in black
     *  - Info:   information messages, in green
     *  - Warn:   warnings, in yellow
     *  - Errors: error messages, in red
     *
     * To use the LoggingInterface:
     *  - include LoggingInterface.hh in your plugin's header file
     *  - derive your plugin from the class LoggingInterface
     *  - add Q_INTERFACES(LoggingInterface) to your plugin class
     *  - add the signals or slots you want to use to your plugin class
     *    (you don't need to implement all of them)
     *
     * If you use the interface, all messages printed from that plugin will be
     * prepended with the name of the plugin to identify the origin of the
     * message. The following code sends a log message:
     * \code
     *   emit log(LOGERR, tr("Error message"));
     * \endcode
     */

    /** \brief Log types for the message window
     *
     * Use this enum to specify the importance of log messages.
     */
    enum Logtype {
      LOGOUT,  /*!< Standard log messages; printed in black in the log widget */
      LOGINFO, /*!< Info log messages; printed in green in the log widget     */
      LOGWARN, /*!< Warning messages; printed in yellow in the log widget     */
      LOGERR   /*!< Error messages; printed in red in the log widget          */
    };

    /** \brief Interface for all plugins which log to the logging window of the framework
     *
     * By emitting the given signals you can log information to the main logger
     * window of the core. To simplify debugging, the core will prepend the
     * plugin name to every log message; you don't have to do this yourself.
     * The log message will either be black or colored depending on the Logtype
     * you specified.
     */
    class LoggingInterface {

      signals:
        /** Send a log message to the mainwindow of the widget
         *
         * @param _type    Message type (LOGINFO, LOGOUT, LOGWARN, LOGERR)
         * @param _message Message to be displayed
         */
        virtual void log(Logtype _type, QString _message) = 0;

        /** Send a log message to the mainwindow of the widget.
         * Defaults to the LOGOUT message type.
         *
         * @param _message Message to be displayed
         */
        virtual void log(QString _message) = 0;

      private slots:
        /** Through this slot you can receive all logging information emitted
         * by OpenFlipper or one of its plugins.
         *
         * @param _type    Message type
         * @param _message Message
         */
        virtual void logOutput(Logtype _type, QString _message) {};

      public:
        /// Destructor
        virtual ~LoggingInterface() {};
    };

    Q_DECLARE_INTERFACE(LoggingInterface, "OpenFlipper.LoggingInterface/1.0")

    #endif // LOGGINGINTERFACE_HH
# The line represented by which of the following equations does not intersect with the line represented by y = 3x^2 + 5x + 1?
Intern
Joined: 11 Apr 2012
Posts: 36
The line represented by which of the following equation does
Updated on: 02 Jun 2013, 05:52
The line represented by which of the following equations does not intersect with the line represented by y = 3x^2+5x+1?
A. y = 2x^2+5x+1
B. y = x^2+5x+2
C. y = 3x^2+5x+2
D. y = 3x^2+7x+2
E. y = x^2+7x+1
@Bunuel: I couldn't find this problem addressed in the forums (apologies if I have overlooked any).
Could someone clarify whether curves whose ax^2 and bx terms are equal are "parallel" and thus will NOT intersect, as the quick logic for solving this problem?
Originally posted by GMATBaumgartner on 25 Aug 2012, 04:20.
Last edited by Bunuel on 02 Jun 2013, 05:52, edited 2 times in total.
Edited the question.
Intern
Joined: 28 Sep 2012
Posts: 7
GMAT Date: 01-25-2013
GPA: 3.38
Re: The line represented by which of the following equation does
06 Nov 2012, 10:34
We can also solve this problem as follows
the equation given in the question is
y= 3x^2 + 5x+1
=> y = x(3x + 5) + 1 (Taking x as common)
from the above equation we can say that m(slope) = 3x + 5
Therefore whichever equation in the answer choices has same slope as above, is our answer.
Because two lines having the same slope are parallel to each other and do not intersect.
C. y= 3x^2 + 5x+2
=> y= x(3x + 5) + 2
m= 3x +5
Cheers,
Suman.
##### General Discussion
Director
Joined: 22 Mar 2011
Posts: 584
WE: Science (Education)
Re: The line represented by which of the following equation does
25 Aug 2012, 04:39
vinay911 wrote:
The line represented by which of the following equation does not intersect with the line represented by y = 3x2+ 5x+1
a)y = 2x2+ 5x+1
b) y = x2+ 5x+2
c)y = 3x2+ 5x+2
d)y = 3x2+ 7x+2
e)y = x2 + 7x+1
@Bunuel: i couldn't find this problem addressed in the forums(apologies if i have overlooked any).
Could some one clarify if the lines with ax^2+b equal would be parallel and thus WILL NOT intersect as the logic behind solving this problem quickly.
Because $$y=3x^2+5x+2=(3x^2+5x+1)+1$$, the graph of C (which is a parabola) is that of the given equation, just shifted one unit up. Obviously, the two graphs don't intersect.
How to pick the right answer?
First of all, you can eliminate A and E, because for $$x=0,$$ they both give the same value $$y=1,$$ the same for the given expression in the stem.
Then, try to look for the expressions that have most terms in common with the given one. All the graphs of the given expressions are upward parabolas, so try to think when they cannot intersect. One case is the translation (moving the parabola vertically up or down).
_________________
PhD in Applied Mathematics
Love GMAT Quant questions and running.
Intern
Joined: 11 Apr 2012
Posts: 36
Re: The line represented by which of the following equation does
25 Aug 2012, 09:54
EvaJager wrote:
vinay911 wrote:
The line represented by which of the following equation does not intersect with the line represented by y = 3x2+ 5x+1
a)y = 2x2+ 5x+1
b) y = x2+ 5x+2
c)y = 3x2+ 5x+2
d)y = 3x2+ 7x+2
e)y = x2 + 7x+1
@Bunuel: i couldn't find this problem addressed in the forums(apologies if i have overlooked any).
Could some one clarify if the lines with ax^2+b equal would be parallel and thus WILL NOT intersect as the logic behind solving this problem quickly.
Because $$y=3x^2+5x+2=(3x^2+5x+1)+1$$ meaning the graph of C (which is a parabola) is that of the given equation, just shifted one unit up. Obviously, the two graphs don't intersect.
How to pick the right answer?
First of all, you can eliminate A and E, because for $$x=0,$$ they both give the same value $$y=1,$$ the same for the given expression in the stem.
Then, try to look for the expressions that have most terms in common with the given one. All the graphs of the given expressions are upward parabolas, so try to think when they cannot intersect. One case is the translation (moving the parabola vertically up or down).
@EvaJager/Bunuel: How did we conclude that the two parabolas (one shifted vertically up with respect to the other) do NOT intersect each other?
I guess i am missing something basic here.
Thanks!
Director
Joined: 22 Mar 2011
Posts: 584
WE: Science (Education)
Re: The line represented by which of the following equation does
25 Aug 2012, 10:06
vinay911 wrote:
EvaJager wrote:
vinay911 wrote:
The line represented by which of the following equation does not intersect with the line represented by y = 3x2+ 5x+1
a)y = 2x2+ 5x+1
b) y = x2+ 5x+2
c)y = 3x2+ 5x+2
d)y = 3x2+ 7x+2
e)y = x2 + 7x+1
@Bunuel: i couldn't find this problem addressed in the forums(apologies if i have overlooked any).
Could some one clarify if the lines with ax^2+b equal would be parallel and thus WILL NOT intersect as the logic behind solving this problem quickly.
Because $$y=3x^2+5x+2=(3x^2+5x+1)+1$$ meaning the graph of C (which is a parabola) is that of the given equation, just shifted one unit up. Obviously, the two graphs don't intersect.
How to pick the right answer?
First of all, you can eliminate A and E, because for $$x=0,$$ they both give the same value $$y=1,$$ the same for the given expression in the stem.
Then, try to look for the expressions that have most terms in common with the given one. All the graphs of the given expressions are upward parabolas, so try to think when they cannot intersect. One case is the translation (moving the parabola vertically up or down).
@EvaJager/Bunuel: How did we conclude that the 2 parabolas (one that is shifted up vertically w.r.t the other) does NOT intersect each other ?
I guess i am missing something basic here.
Thanks!
For the same value of x, we get some y for one expression and y + 1 for the other expression. y cannot be equal to y + 1.
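Equivalently, any intersection point would have to satisfy both equations at once:

$$3x^2+5x+2 = 3x^2+5x+1 \;\Rightarrow\; 2 = 1,$$

a contradiction, so the two graphs share no point.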
_________________
PhD in Applied Mathematics
Love GMAT Quant questions and running.
Senior Manager
Joined: 24 Aug 2009
Posts: 441
Schools: Harvard, Columbia, Stern, Booth, LSB,
Re: The line represented by which of the following equation does
10 Sep 2012, 15:23
Another way to solve this question is to compute a table of values at x = -2, -1, 0, 1, 2 for the given equation.
Now put these values into each option to see which option's curve never meets the original.
This approach is for those who are not aware of parabolas and vertical shifts.
Hope it helps
Manager
Joined: 04 Apr 2013
Posts: 111
Re: The line represented by which of the following equation does
01 Jun 2013, 07:35
manjusu wrote:
We can also solve this problem as follows
the equation given in the question is
y= 3x^2 + 5x+1
=> y = x(3x + 5) + 1 (Taking x as common)
from the above equation we can say that m(slope) = 3x + 5
Therefore whichever equation in the answer choices has same slope as above, is our answer.
Because two lines having same slope are parallel to each other and does not intersect.
C. y= 3x^2 + 5x+2
=> y= x(3x + 5) + 2
m= 3x +5
Cheers,
Suman.
Manju,
the concept of slope for lines and parabolas is different. Bunuel, please correct me if I am wrong. Also, please help to solve this problem if it is a GMAT-type question.
Manager
Joined: 27 Feb 2012
Posts: 111
Re: The line represented by which of the following equation does
01 Jun 2013, 08:17
manjusu wrote:
We can also solve this problem as follows
the equation given in the question is
y= 3x^2 + 5x+1
=> y = x(3x + 5) + 1 (Taking x as common)
from the above equation we can say that m(slope) = 3x + 5
Therefore whichever equation in the answer choices has same slope as above, is our answer.
Because two lines having same slope are parallel to each other and does not intersect.
C. y= 3x^2 + 5x+2
=> y= x(3x + 5) + 2
m= 3x +5
Cheers,
Suman.
Manju,
concept of slope for lines & parabolas are different. Bunuel, please correct if I am wrong. Also please help to solve this problem if its a GMAT type question.
The general form of a parabola's equation is y^2 = 4ax, where the axis is the x-axis, or x^2 = 4ay, where the axis is the y-axis.
We have a form similar to x^2 = 4ay.
Here the vertex is the origin.
So if we keep the same x and y terms but change the constant term, we get parallel parabolas.
This is the same as for straight lines, which are parallel for different values of the constant term c:
ax + by + c1 = 0 and ax + by + c2 = 0
Math Expert
Joined: 02 Sep 2009
Posts: 59712
Re: The line represented by which of the following equation does
02 Jun 2013, 05:58
BangOn wrote:
manjusu wrote:
We can also solve this problem as follows
the equation given in the question is
y= 3x^2 + 5x+1
=> y = x(3x + 5) + 1 (Taking x as common)
from the above equation we can say that m(slope) = 3x + 5
Therefore whichever equation in the answer choices has same slope as above, is our answer.
Because two lines having same slope are parallel to each other and does not intersect.
C. y= 3x^2 + 5x+2
=> y= x(3x + 5) + 2
m= 3x +5
Cheers,
Suman.
Manju,
concept of slope for lines & parabolas are different. Bunuel, please correct if I am wrong. Also please help to solve this problem if its a GMAT type question.
The general form of parabolic equ. is y^2= 4ax which implies the axis is x or x^2 = 4ay where axis is y.
We have a similar form as x^2 = 4ay.
here the vertex is origin.
So if we have same values of x and y but constant term changes then we will have parallel parabolas.
This is same as for straight line which are parallel for different values of constant term c
ax + by +c1 = 0 and ax +by+ c2 =0
We have quadratic equations. These equations, when drawn, give parabolas, not lines. The question is: which of the following parabolas does not intersect the parabola represented by y=3x^2+5x+1?
This CANNOT be transformed into the question "which of the following parabolas is parallel to the parabola represented by y=3x^2+5x+1". In the vast majority of cases the word "parallel" is used for lines. Well, we can say that concentric circles are parallel, BUT the GMAT, as far as I know, uses this word ONLY for lines. Next, the word "parallel", when used for curves, means that the curves remain a constant distance apart. So, strictly speaking, for two parabolas to be parallel they must not only never intersect but also remain a constant distance apart. In this case, I must say that this cannot happen: a curve that stays a constant distance from a parabola is not itself quadratic, so a curve parallel to a parabola is not a parabola.
_________________
SVP
Joined: 06 Sep 2013
Posts: 1545
Concentration: Finance
Re: The line represented by which of the following equation does
31 Mar 2014, 13:43
Hi all,
Now we see from the statement that y = 3x^2+5x+1 is a parabola.
The constant term is the y-intercept, so if +1 is replaced by +2, as in answer choice C, the whole parabola simply moves upward by one unit, which means it will never intersect the original curve.
Hope this helps
Cheers
J
Director
Status: Everyone is a leader. Just stop listening to others.
Joined: 22 Mar 2013
Posts: 699
Location: India
GPA: 3.51
WE: Information Technology (Computer Software)
Re: The line represented by which of the following equation does
19 Jun 2014, 09:41
The line represented by which of the following equation does not intersect with the line represented by y = 3x^2+5x+1
Calculate the discriminant $$D = b^2-4ac$$ for each equation (the values shown below are $$\sqrt{D}$$):
y = 3x^2+5x+1 ==> $$\sqrt{13}$$ -- cuts the y-axis at 1 (to find the intercept, put x = 0)
A. y = 2x^2+5x+1 ==> $$\sqrt{17}$$ -- D > 13 means the curve dips below the original while cutting the y-axis at the same point, 1 -- they intersect.
B. y = x^2+5x+2 ==> $$\sqrt{17}$$ -- D > 13 means the curve dips below the original while its y-intercept, 2, lies above -- a cut is unavoidable.
C. y = 3x^2+5x+2 ==> $$\sqrt{1}$$ -- D < 13 means this curve stays closest to the x-axis; it cuts the y-axis at 2, right above the original's intercept at 1, and the whole curve passes above the original.
D. y = 3x^2+7x+2 ==> $$\sqrt{25}$$ -- D > 13 means the curve dips below the original while its y-intercept, 2, lies above -- a cut is unavoidable (not plotted on the attached graph).
E. y = x^2+7x+1 ==> $$\sqrt{45}$$ -- D > 13 means the curve dips below the original while cutting the y-axis at the same point, 1 -- they intersect.
Refer to the following graph to relate the shape of each equation to its value of D.
Attachment: 2014-06-19_1101.jpg
GMAT Tutor
Joined: 24 Jun 2008
Posts: 1829
Re: The line represented by which of the following equation does
03 Jul 2015, 04:32
GMATBaumgartner wrote:
The line represented by which of the following equation does not intersect with the line represented by y = 3x^2+5x+1
A. y = 2x^2+5x+1
...
I'd emphasize that none of the equations in this question represent lines, despite what the question appears to say. The equations represent parabolas, and you don't need to know about parabolas for the GMAT.
There is one concept in this question that is occasionally tested - the concept of translation. If you have any equation at all in coordinate geometry, say:
y = x^2
that will be some curve in the coordinate plane (technically it will be a 'parabola', or U-shape). If you then modify the equation by adding a constant on the right side, say by adding 5:
y = x^2 + 5
then the graph of this new equation will look exactly the same as the graph of the first equation, except that it will be exactly 5 units higher. So when we add a constant on the right side of an equation, we're simply moving the picture of the equation up or down.
_________________
GMAT Tutor in Toronto
If you are looking for online GMAT math tutoring, or if you are interested in buying my advanced Quant books and problem sets, please contact me at ianstewartgmat at gmail.com
CEO
Joined: 20 Mar 2014
Posts: 2560
Concentration: Finance, Strategy
Schools: Kellogg '18 (M)
GMAT 1: 750 Q49 V44
GPA: 3.7
WE: Engineering (Aerospace and Defense)
The line represented by which of the following equation does
03 Jul 2015, 06:01
IanStewart wrote:
GMATBaumgartner wrote:
The line represented by which of the following equation does not intersect with the line represented by y = 3x^2+5x+1
A. y = 2x^2+5x+1
...
I'd emphasize that none of the equations in this question represent lines, despite what the question appears to say. The equations represent parabolas, and you don't need to know about parabolas for the GMAT.
There is one concept in this question that is occasionally tested - the concept of translation. If you have any equation at all in coordinate geometry, say:
y = x^2
that will be some curve in the coordinate plane (technically it will be a 'parabola', or U-shape). If you then modify the equation by adding a constant on the right side, say by adding 5:
y = x^2 + 5
then the graph of this new equation will look exactly the same as the graph of the first equation, except that it will be exactly 5 units higher. So when we add a constant on the right side of an equation, we're simply moving the picture of the equation up or down.
I think Bunuel and Ian have provided sufficient information to solve this problem.
Lines are always represented by LINEAR equations (equations in which the variables have maximum degree 1). A quadratic equation (maximum degree 2) can NEVER represent a line.
I would like to add one thing: people who are not familiar with 'conics' usually do not remember that $$y^2=4ax$$ is the standard equation of a parabola. One way to avoid such rote learning is to look at an equation, plot 2-3 points, see what shape of curve you get, and then proceed from there. The GMAT does not require you to remember fancy names.
Consider $$y^2=4ax$$ and $$y^2=4ax+Z$$, where Z is any value (4, 5, 7.8, 0.4, etc.). These curves belong to the 'same family' of parabolas, the only difference being that their vertices are offset by Z.
Finally, for the curious minds out there, the attached picture shows all the possible combinations of 'simple' parabolas.
Attachment: Parabolas.png
Burger Competition
Q.33: A, B and C participated in a burger-eating competition. A beat C by 18 burgers. A also beat B, by eating 50% more burgers than B. Also, B ate 5 percentage points more of the burgers than C. Find the overall number of burgers that were eaten.
1. 90 burgers
2. 81 burgers
3. 72 burgers
4. 100 burgers
Choice A. 90 burgers
Detailed Solution
Let the share of burgers eaten by C be x%.
=> B's share = (x + 5)%.
Since A ate 50% more than B, A's share = (x + 5)% + 50% of (x + 5)% = 1.5(x + 5)%.
=> x + (x + 5) + 1.5(x + 5) = 100
=> 3.5x + 12.5 = 100, so x = $\frac{87.5}{3.5} = 25$%, giving C = 25%, B = 30%, A = 45%.
Therefore, A beat C by 20 percentage points, which corresponds to 18 burgers => total burgers = $18 \times \frac{100}{20} = 90$.
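As a quick check: 45% of 90 is 40.5, 30% of 90 is 27, and 25% of 90 is 22.5. Indeed 40.5 − 22.5 = 18 burgers, 40.5 = 1.5 × 27, and 30% − 25% = 5 percentage points, matching all three conditions.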
## Significance of hybridization
$sp, sp^{2}, sp^{3}, dsp^{3}, d^{2}sp^{3}$
Emily Glaser 1F
Posts: 156
Joined: Thu Jul 27, 2017 3:01 am
### Significance of hybridization
I understand how to find the hybridization of an atom in a compound, but I don't understand the significance of this concept. Why can carbon be both sp2 and sp3?
Tim Foster 2A
Posts: 73
Joined: Fri Sep 29, 2017 7:07 am
### Re: Significance of hybridization
The necessity of the hybridization of orbitals can be traced back to valence bond theory, the theory that calculates covalent bond angles and lengths. When we try to apply this theory to a compound like methane (CH4), it begins to fall apart. This is due to something called electron promotion, in which an electron from a lower-energy orbital is "promoted" to an empty higher-energy orbital.

For example, the shorthand electron configuration of carbon is [He]2s2,2px1,2py1. In a compound such as methane, carbon's configuration is more accurately depicted as [He]2s1,2px1,2py1,2pz1, because this is a lower-energy and more stable configuration (atoms love that). The latter configuration is lower in energy because the 2s electron that got moved up to an empty p orbital experiences less repulsion from the other electrons. As a result, only a very small amount of energy is required to "promote" this electron.

The resulting s orbital and p orbitals are combined into hybrid orbitals (four "sp3" orbitals in methane) because s and p orbitals act as waves of electron density that interfere with each other and form brand-new patterns. In carbon, we have one 2s orbital and three 2p orbitals (px, py and pz). This means you can mix the s orbital with one, two, or all three of the p orbitals, which results in sp, sp2, and sp3 hybrid orbitals respectively.
Hellen Truong 2J
Posts: 50
Joined: Sat Jul 22, 2017 3:00 am
Been upvoted: 1 time
### Re: Significance of hybridization
There is also conservation of orbitals. Even when carbon orbitals are hybridized into sp or sp2, there are 2 and 1 p orbitals left over, respectively. The total number of orbitals following hybridization stays the same.
# Let's check divisibility
Number Theory Level 4
Find the number of 7-digit integers of the form $$\overline{30a0b03}$$ (where $$a,b$$ are digits) which are divisible by 13.
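One approach (a sketch of the arithmetic): write $$\overline{30a0b03} = 3000003 + 10000a + 100b$$. Modulo 13 we have $$3000003 \equiv 6$$, $$10000 \equiv 3$$, and $$100 \equiv 9$$, so divisibility by 13 requires $$3a + 9b + 6 \equiv 0 \pmod{13}$$. Multiplying by 9, the inverse of 3 modulo 13, gives $$a + 3b \equiv 11 \pmod{13}$$. Checking $$b = 0, 1, \dots, 9$$, a digit value of $$a$$ exists exactly for $$b \in \{1, 2, 3, 5, 6, 7, 8\}$$, so there are 7 such integers.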
# Let X be a discrete random variable that takes on the values 1, 2, 3, 4, and 5
Let X be a discrete random variable that takes on the values 1, 2, 3, 4, and 5. Assume that f is the probability mass function of X. Given that f(1)=0.1, f(2)=0.2, f(3)=0.3, and f(4)=0.1, what is the value of f(5)?
Document Preview:
Q 1. Let X be a discrete random variable that takes on the values 1, 2, 3, 4, and 5. Assume that f is the probability mass function of X. Given that f(1)=0.1, f(2)=0.2, f(3)=0.3, and f(4)=0.1, what is the value of f(5)?
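Since a probability mass function must sum to 1 over all possible values of X, the missing value follows directly:

$$f(5) = 1 - \big(f(1)+f(2)+f(3)+f(4)\big) = 1 - (0.1+0.2+0.3+0.1) = 0.3$$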
# ToricVectorBundle ** ToricVectorBundle -- the tensor product of two toric vector bundles
## Description
If $E_1$ and $E_2$ are defined over the same fan and given in the same description, then tensor computes the tensor product of the two vector bundles in this description.
```
i1 : E1 = toricVectorBundle(2, hirzebruchFan 3)

o1 = {dimension of the variety => 2}
      number of affine charts => 4
      number of rays => 4
      rank of the vector bundle => 2

o1 : ToricVectorBundleKlyachko

i2 : E2 = tangentBundle hirzebruchFan 3

o2 = {dimension of the variety => 2}
      number of affine charts => 4
      number of rays => 4
      rank of the vector bundle => 2

o2 : ToricVectorBundleKlyachko

i3 : E = E1 ** E2

o3 = {dimension of the variety => 2}
      number of affine charts => 4
      number of rays => 4
      rank of the vector bundle => 4

o3 : ToricVectorBundleKlyachko

i4 : details E   -- prints, for each of the four rays, the rank-4 base matrix and filtration matrix

o4 : HashTable
```

```
i5 : E1 = toricVectorBundle(2, hirzebruchFan 3, "Type" => "Kaneyama")

o5 = {dimension of the variety => 2}
      number of affine charts => 4
      rank of the vector bundle => 2

o5 : ToricVectorBundleKaneyama

i6 : E2 = tangentBundle(hirzebruchFan 3, "Type" => "Kaneyama")

o6 = {dimension of the variety => 2}
      number of affine charts => 4
      rank of the vector bundle => 2

o6 : ToricVectorBundleKaneyama

i7 : E = E1 ** E2

o7 = {dimension of the variety => 2}
      number of affine charts => 4
      rank of the vector bundle => 4

o7 : ToricVectorBundleKaneyama

i8 : details E   -- prints the degree matrices per maximal cone and the transition matrices per cone pair

o8 : Sequence
```
# BBO Discussion Forums: The neutrinos from the future...
## The neutrinos from the future... Faster than c?
### #1akhare
• Posts: 1,261
• Joined: 2005-September-04
• Gender:Male
Posted 2011-September-22, 17:52
foobar on BBO
### #2whereagles
• Posts: 14,900
• Joined: 2004-May-11
• Gender:Male
• Location:Portugal
• Interests:Everything!
Posted 2011-September-22, 19:19
interesting... if it gets confirmed, it's major news
### #3semeai
• Group: Full Members
• Posts: 582
• Joined: 2010-June-10
• Gender:Male
• Location:USA
• Interests:Having eleven-syllable interests
Counting modulo five
Posted 2011-September-22, 20:06
Crazy. Of course, extraordinary claims require extraordinary evidence, so none of us should really believe this yet. It would be very exciting, though.
According to their measurements, it sounds like the neutrinos were going roughly 299800 km/s compared to roughly 299792 km/s for light.
### #4JLOGIC
• 2011 Poster of The Year winner
• Posts: 6,002
• Joined: 2010-July-08
• Gender:Male
Posted 2011-September-22, 21:55
Dammit I came here to post this lol
### #5JLOGIC
• 2011 Poster of The Year winner
• Posts: 6,002
• Joined: 2010-July-08
• Gender:Male
Posted 2011-September-22, 21:55
I can't wait to travel back in time, it's only a matter of time... or is it?
### #6mike777
• Posts: 16,575
• Joined: 2003-October-07
• Gender:Male
Posted 2011-September-22, 23:31
faster than light
effect before cause
2 photons occupy the same space and time
the smaller you go the more space there is.
aint the universe full of wonder.
### #7hotShot
• Axxx Axx Axx Axx
• Posts: 2,975
• Joined: 2003-August-31
• Gender:Male
Posted 2011-September-23, 04:30
### #8JLOGIC
• 2011 Poster of The Year winner
• Posts: 6,002
• Joined: 2010-July-08
• Gender:Male
Posted 2011-September-23, 11:48
I knew John Titor was not a hoax!!!
### #9hrothgar
• Posts: 12,826
• Joined: 2003-February-13
• Gender:Male
• Location:Natick, MA
• Interests:Travel
Cooking
Brewing
Hiking
Posted 2011-September-23, 11:53
Particle man, particle man
Doing the things a particle can...
Alderaan delenda est
### #10y66
• Posts: 3,215
• Joined: 2006-February-24
Posted 2011-September-24, 07:46
Guess what?
If you lose all hope, you can always find it again. Richard Ford in The Sportswriter
### #11Winstonm
• Posts: 11,268
• Joined: 2005-January-08
• Gender:Male
• Location:Tulsa, Oklahoma
• Interests:Art, music
Posted 2011-September-24, 08:03
The neutrino did not speed up; the observer slowed down.
If something cannot go on forever, it will stop. - Herb Stein
### #12gwnn
• Csaba the Hutt
• Posts: 12,661
• Joined: 2006-June-16
• Gender:Male
• Location:Enschede, the Netherlands
• Interests:matching LaTeX delimiters :(
Posted 2011-September-24, 11:53
Lorentz transforms are just too cool to be false.
... and I can prove it with my usual, flawless logic.
George Carlin
### #13BunnyGo
• Lamentable Bunny
• Posts: 1,503
• Joined: 2008-March-01
• Gender:Male
• Location:Portland, ME
Posted 2011-September-24, 11:58
gwnn, on 2011-September-24, 11:53, said:
Lorentz transforms are just too cool to be false.
But they're just the 4-dimensional projections of what is really going on. Maybe neutrinos jump through another dimension.
(I'll still take the bet that this is a measurement failure)
Bridge Personality: 44 44 43 34
Never tell the same lie twice. - Elim Garek on the real moral of "The boy who cried wolf"
### #14barmar
• Posts: 15,751
• Joined: 2004-August-21
• Gender:Male
Posted 2011-September-24, 22:04
When I first heard the story, my first thought was that there might be some quantum tunneling going on. But if this is a possibility, I can't imagine the scientists wouldn't have thought of it, it's way too obvious. Quantum dynamics causes a number of behaviors that appear to violate relativity, such as black holes giving off Hawking radiation (due to spontaneous creation of a particle and its antiparticle just outside the event horizon -- one of them falls in, the other zips away).
### #15BunnyGo
• Lamentable Bunny
• Posts: 1,503
• Joined: 2008-March-01
• Gender:Male
• Location:Portland, ME
Posted 2011-September-25, 01:56
barmar, on 2011-September-24, 22:04, said:
When I first heard the story, my first thought was that there might be some quantum tunneling going on. But if this is a possibility, I can't imagine the scientists wouldn't have thought of it, it's way too obvious. Quantum dynamics causes a number of behaviors that appear to violate relativity, such as black holes giving off Hawking radiation (due to spontaneous creation of a particle and its antiparticle just outside the event horizon -- one of them falls in, the other zips away).
Unless you're using the same term to refer to two different things, quantum tunneling isn't a way of traveling faster; it's a way of escaping energy wells (i.e. if a particle is "trapped" in a place requiring more energy to escape than the particle has, it can "tunnel" its way under the energy hump).
Bridge Personality: 44 44 43 34
Never tell the same lie twice. - Elim Garek on the real moral of "The boy who cried wolf"
### #16helene_t
• The Abbess
• Posts: 15,141
• Joined: 2004-April-22
• Gender:Female
• Location:Hamilton, New Zealand
Posted 2011-September-25, 05:29
BunnyGo, on 2011-September-25, 01:56, said:
Unless you're using the same term to refer to two different things, quantum tunneling isn't a way of traveling faster; it's a way of escaping energy wells (i.e. if a particle is "trapped" in a place requiring more energy to escape than the particle has, it can "tunnel" its way under the energy hump).
I think barmar had wormholes rather than tunnelling in mind? At least that was my first thought. But it isn't plausible.
I think the notion of captaincy (and all other intentional language used to describe bidding, for that matter) is ultimately dispensable. --- nullve
### #17blackshoe
• Posts: 15,552
• Joined: 2006-April-17
• Location:Rochester, NY
Posted 2011-September-25, 09:31
Whatever they're doing, if they aren't carrying information (for example "Ron Paul Wins!" ) they're useless.
--------------------
As for tv, screw it. You aren't missing anything. -- Ken Berg
I have come to realise it is futile to expect or hope a regular club game will be run in accordance with the laws. -- Jillybean
### #18Gerben42
• Posts: 5,564
• Joined: 2005-March-01
• Gender:Male
• Location:Erlangen, Germany
• Interests:Astronomy, Mathematics
Nuclear power
Posted 2011-September-25, 10:08
Well, if it's quantum tunneling, the average particle would still arrive just in time. Even if you would for some reason only see the "too early" group, you would see far fewer particles than the total number of particles that were sent.
Two wrongs don't make a right, but three lefts do!
My Bridge Systems Page
BC Kultcamp Rieneck
### #19babalu1997
• Duchess of Malaprop
• Group: Full Members
• Posts: 720
• Joined: 2006-March-09
• Gender:Not Telling
• Interests:i am not interested
Posted 2011-September-25, 11:47
damn you all smart!
Free, on 2011-May-10, 03:57, said:
Babalu just wanted a shoulder to cry on, is that too much to ask for?
### #20barmar
• Posts: 15,751
• Joined: 2004-August-21
• Gender:Male
Posted 2011-September-25, 20:18
I'm not an expert on quantum theory, but I thought that tunneling could essentially allow a particle to disappear here and reappear instantaneously somewhere else. We call it tunneling because the interesting case is when somewhere else is on the other side of some barrier, but it could just as easily be just some distance away. Quantum processes allow things like this because it's about probabilities and statistics -- individual particles can do almost anything, but if you have lots of them they average out to the macroscopic expectations.
But it sounds like the scientists were measuring the speed of a stream of particles. Tunneling, if it works like I thought, would perhaps allow a few individual particles to jump ahead of the pack, but the stream as a whole should obey relativity. Unless they can tag individual neutrinos and detect their arrivals, I don't think they'd be able to detect this.
I don't even know how they measure the neutrino speed in the first place. I guess they transmit a very short burst of neutrinos, and then detect when they arrive. But neutrinos are extremely hard to detect, so the burst must have lots of particles in it, and they'll only detect a small number of them arriving. Maybe some of the tests detect the particles that tunnel ahead. But I'd expect them to repeat the test many times and average the results, to filter out quantum effects.
# The vertices of a tetrahedron correspond to four alternating corners of a cube. By using analytical geometry, demonstrate that the angle made by connecting two of the vertices to a point at the center of the cube is 109.5°, the characteristic angle for tetrahedral molecules.
The vertices of a tetrahedron correspond to four alternating corners of a cube. By using analytical geometry, demonstrate that the angle made by connecting two of the vertices to a point at the center of the cube is ${109.5}^{\circ }$, the characteristic angle for tetrahedral molecules.
Fatema Sutton
The smallest unit which on repetition generates the entire crystal lattice is called a unit cell. The net number of atoms contained in a unit cell determines the type of packing for the lattice, and the lattice parameters can be evaluated on the same basis.
Let the side of the cube be $a$, so that the coordinates of the center $O$ are $\left(\frac{a}{2},\frac{a}{2},\frac{a}{2}\right)$. The tetrahedron occupies alternating corners of the cube; take two of its vertices at $A=(0,0,0)$ and $C=(0,a,a)$. Then
$\stackrel{\to }{AO}=\left(\frac{a}{2},\frac{a}{2},\frac{a}{2}\right)$
$\stackrel{\to }{CO}=\left(\frac{a}{2},-\frac{a}{2},-\frac{a}{2}\right)$
$\stackrel{\to }{AO}\cdot \stackrel{\to }{CO}={\left(\frac{a}{2}\right)}^{2}+\left(\frac{a}{2}\right)\left(-\frac{a}{2}\right)+\left(\frac{a}{2}\right)\left(-\frac{a}{2}\right)$
$=\left(\frac{{a}^{2}}{4}\right)-\left(\frac{{a}^{2}}{4}\right)-\left(\frac{{a}^{2}}{4}\right)$
$=-\frac{{a}^{2}}{4}$
The magnitude of AO and CO is:
$|\stackrel{\to }{AO}|=|\stackrel{\to }{CO}|=\sqrt{\frac{3{a}^{2}}{4}}$
The angle between AO and CO is calculated as follows:
$\mathrm{cos}\theta =\frac{\stackrel{\to }{AO}\cdot \stackrel{\to }{CO}}{|\stackrel{\to }{AO}||\stackrel{\to }{CO}|}$
$=\left(\frac{-\frac{{a}^{2}}{4}}{\frac{3{a}^{2}}{4}}\right)$
$=-\frac{1}{3}$
$\theta ={\mathrm{cos}}^{-1}\left(-\frac{1}{3}\right)$
$={109.5}^{\circ }$
Hence, the angle made by connecting two of the vertices to a point at the center of the cube is ${109.5}^{\circ }.$
# GRE Subject Test: Math : Vectors & Spaces
## Example Questions
### Example Question #51 : Vectors & Spaces
What is the vector form of ?
Possible Answers:
Correct answer:
Explanation:
In order to derive the vector form, we must map the -coordinates to their corresponding , and coefficients.
That is, given , the vector form is .
So for , we can derive the vector form .
### Example Question #52 : Vectors & Spaces
What is the vector form of ?
Possible Answers:
Correct answer:
Explanation:
In order to derive the vector form, we must map the -coordinates to their corresponding , and coefficients.
That is, given , the vector form is .
So for , we can derive the vector form .
### Example Question #53 : Vectors & Spaces
Write the following parametric equation in vector form.
Possible Answers:
Correct answer:
Explanation:
When converting parametric equations to vector valued functions, remember that the order of vectors goes as follows.
Given the question
the vector would be given as,
.
### Example Question #54 : Vectors & Spaces
What is the vector form of ?
Possible Answers:
Correct answer:
Explanation:
In order to derive the vector form, we must map the $x$-, $y$-, and $z$-coordinates to their corresponding $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$ coefficients.
That is, given $(x, y, z)$, the vector form is $x\mathbf{i} + y\mathbf{j} + z\mathbf{k}$.
So for , we can derive the vector form
### Example Question #55 : Vectors & Spaces
What is the vector form of ?
Possible Answers:
Correct answer:
Explanation:
In order to derive the vector form, we must map the -coordinates to their corresponding , and coefficients.
That is, given , the vector form is .
So for , we can derive the vector form
### Example Question #56 : Vectors & Spaces
What is the vector form of ?
Possible Answers:
Correct answer:
Explanation:
In order to derive the vector form, we must map the -coordinates to their corresponding , and coefficients.
That is, given , the vector form is .
So for , we can derive the vector form
### Example Question #57 : Vectors & Spaces
Find the vector in standard form if the initial point is located at and the terminal point is located at .
Possible Answers:
Correct answer:
Explanation:
We must first find the vector in component form.
If the initial point is $(x_1, y_1)$ and the terminal point is $(x_2, y_2)$, then the component form of the vector is $\langle x_2 - x_1,\, y_2 - y_1 \rangle$.
As such, the component form of the vector in the problem is
Next, any vector with component form $\langle a, b \rangle$ can be written in standard form as $a\mathbf{i} + b\mathbf{j}$.
Hence, the vector in standard form is
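For instance, with hypothetical points (not those of the original problem): an initial point $(1, 2)$ and a terminal point $(4, 6)$ give the component form $\langle 4-1,\, 6-2 \rangle = \langle 3, 4 \rangle$, which is written in standard form as $3\mathbf{i} + 4\mathbf{j}$.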
### Example Question #58 : Vectors & Spaces
What is the vector form of ?
Possible Answers:
Correct answer:
Explanation:
In order to derive the vector form, we must map the $x$-, $y$-, and $z$-coordinates to their corresponding $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$ coefficients.
That is, given $(x, y, z)$, the vector form is $x\mathbf{i} + y\mathbf{j} + z\mathbf{k}$.
So for , we can derive the vector form .
### Example Question #59 : Vectors & Spaces
What is the vector form of ?
Possible Answers:
Correct answer:
Explanation:
In order to derive the vector form, we must map the -coordinates to their corresponding , and coefficients.
That is, given , the vector form is .
So for , we can derive the vector form .
### Example Question #60 : Vectors & Spaces
Given points and , what is the vector form of the distance between the points?
Possible Answers:
Correct answer:
Explanation:
In order to derive the vector form of the distance between two points, we must find the differences between the corresponding elements of the points.
That is, for any points $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$, the distance is the vector $\langle x_2 - x_1,\, y_2 - y_1,\, z_2 - z_1 \rangle$.
Substituting in our original points, we get:
Appendix Chapters typeset as Sections in ToC
I am very new to LaTeX and still more in a trial and error phase than on a real understanding level.
My goal is to have all appendices (which are chapters, and I would like to keep them that way) appear in the ToC as sections, under a chapter-level 'Appendices' entry.
Here is what I tried, using the tocloft package:
\newpage\phantomsection
\appendix
\setlength{\cftchapindent}{\cftsecindent}
\renewcommand{\cftchapfont}{\cftsecfont}
\chapter{chapter title}
So my idea was to just tell LaTeX to write chapters to the TOC as though they were sections. My code has no effect whatsoever. Any ideas/solutions are very much appreciated
You had the right idea, but you need to add your changes to the TOC itself rather than in the main document (and because of this you need to \protect some of the commands.) Note that this solution won't look very nice if your appendices have sections.
\documentclass{report}
\usepackage{tocloft}
\begin{document}
\tableofcontents
\chapter{A chapter}
\section{A section}
\newpage\phantomsection
\appendix
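Presumably the changes are then written into the ToC itself, along these lines (a sketch based on the description above, not the exact lines of the original answer; \protect keeps the fragile commands from being expanded when they are written to the .toc file):

\addtocontents{toc}{\protect\setlength{\cftchapindent}{\cftsecindent}}
\addtocontents{toc}{\protect\renewcommand{\protect\cftchapfont}{\cftsecfont}}
\chapter{First appendix}
\end{document}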
# 2012 TC4
2012 TC4 is a tumbling micro-asteroid classified as a bright near-Earth object of the Apollo group, approximately 10 meters (30 feet) in diameter.[6][7][8] It was first observed by Pan-STARRS at Haleakala Observatory on the Hawaiian island of Maui, in the United States. As of 1 October 2017, it had a small Earth minimum orbital intersection distance of 0.000149 AU (22,300 km).[2] On 12 October 2017, it passed Earth at 0.00033524 AU (50,151 km).[2]
Radar movie of 2012 TC4[1]

- Discovered by: Pan-STARRS 1
- Discovery site: Haleakala Obs.
- Discovery date: 4 October 2012 [2][3]
- MPC designation: 2012 TC4
- Minor planet category: NEO · Apollo [2][4]
- Epoch: 1 October 2017 (JD 2458027.5); uncertainty parameter 1
- Observation arc: 5.19 yr (1,897 d)
- Aphelion: 1.8786 AU
- Perihelion: 0.9335 AU
- Semi-major axis: 1.4061 AU
- Eccentricity: 0.3361
- Orbital period: 1.67 yr (609 d)
- Mean anomaly: 332.79°
- Mean motion: 0° 35m 27.96s / day
- Inclination: 0.8572°
- Longitude of ascending node: 198.23°
- Argument of perihelion: 222.58°
- Earth MOID: 0.000149 AU (0.0580 LD) · 0.03 AU[4]
- Dimensions: 15 m × 8 m[5][6]
- Mean diameter: 7–13 m[7][8] · 15 m[8]
- Rotation period: 0.2038 h[9][a] · 0.142 h[8] · NPAR (tumbling)[8][9]
- Geometric albedo: 0.35±0.1[7] · 0.30±0.01 (radiometric)
- Spectral type: E/Xe[7][6] · S (assumed)[9] · V–R = 0.41±0.02[7]
- Apparent magnitude: 12.9–31[10]
- Absolute magnitude (H): 26.7[2] · 26.8[4][10]
## Approaches to large bodies
| Date (UT) | Object | Distance (km, center–center) | 3-σ uncertainty (km) | Relative speed (km/s) |
| --- | --- | --- | --- | --- |
| 13 October 1996 | Earth | 753 000 | ± 7000 | 6.445 |
| 13 October 1996 | Moon | 530 000 | ± 6300 | 7.144 |
| 3 March 1997 | Earth | 46 928 000 | ± 5400 | 4.190 |
| 2 August 2009 | Earth | 68 969 000 | ± 510 | 7.039 |
| 12 February 2010 | Earth | 40 183 000 | ± 1200 | 14.640 |
| 24 January 2011 | Mars | 12 875 000 | ± 210 | 5.957 |
| 12 October 2012 | Earth | 94 965 | ± 0.32 | 7.123 |
| 12 October 2012 | Moon | 113 886 | ± 0.64 | 6.773 |
| 12 October 2017 – 05:42 | Earth | 50 151 | ± 0.14 | 7.647 |
| 12 October 2017 – 19:19 | Moon | 277 697 | ± 0.34 | 6.101 |
| 28 August 2019 | Earth | 68 910 000 | ± 810 | 2.807 |
| 29 December 2019 | Earth | 34 467 000 | ± 1100 | 13.817 |
| 30 September 2034 | Mars | 7 481 000 | ± 3500 | 9.980 |
| 5 September 2048 | Earth | 51 740 000 | ± 15000 | 6.166 |
| 29 July 2050 | Mars | 9 227 000 | ± 11000 | 9.980 |
| 19 October 2050 | Moon | 1 402 000 | ± 7900 | 7.146 |
| 19 October 2050 | Earth | 1 792 000 | ± 7500 | 6.164 |
| 2 January 2053 | Earth | 47 270 000 | ± 44000 | 16.447 |
| 13 February 2076 | Mars | 6 260 000 | ± 140000 | 9.723 |
| 7 September 2077 | Earth | 47 690 000 | ± 730000 | 16.458 |
| 21 November 2079 | Earth | 2 478 000 | ± 300000 | 6.031 |
Data from JPL 60 solution date 3 November 2017[11]
### 2012 Earth encounter
2012 TC4 appears as a dot of this composite of 37 individual 50-second exposures.[12]
2012 TC4 was discovered on 4 October 2012 at apparent i-band magnitude 20.1 while the asteroid was 0.03 AU (4,500,000 km; 2,800,000 mi) from Earth.[3] It came within 0.000634 AU (0.247 LD, 94,800 km, 58,900 mi) from Earth on 12 October 2012.[11]
During the 2012 close approach, the asteroid only had an observation arc of 7 days, between 4 and 11 October 2012, so the exact distance of the 2017 closest approach was poorly constrained. With the 7 day observation arc, the asteroid had a 3-sigma chance of passing between 0.00008818 and 0.002896 AU (0.034 to 1.127 LD, 13,200–433,200 km, 8,200–269,200 mi) from Earth on 12 October 2017.[13] Astronomers were certain that it would not pass closer than 6,800 km from the surface of Earth, ruling out any possibility that it could hit the Earth in 2017.[14]
### 2017 Earth encounter
Orbit in yellow among inner solar system planets before 2017 flyby
Orbit of 2012 TC4 deflected by Earth's gravity. Earth is blue. For reference, a geosynchronous satellite and the Moon are included in this animation. The green path of 2012 TC4 turns darker as it dips below the ecliptic plane.
Orbits from pre-2012 to post-2017 encounters. The orbit of 2012 TC4 has changed twice since 2012.
On 12 October 2017 at 5:42 UT, the asteroid passed 0.00033524 AU (50,151 km; 31,163 mi) from Earth.[11] Observations between July and October reduced the uncertainty region from several hundred thousand kilometers[13] to about ±140 meters.[15][2] The asteroid was removed from the Sentry Risk Table on 16 October 2017 using JPL solution #56.[16] Prior to the encounter, it was rated −4.11 on the Palermo scale, with a 1 in 1,000 chance of impact over the next hundred years.[17]
Paul Chodas of NASA's Center for near-Earth Object Studies, and Vishnu Reddy of the University of Arizona's Lunar and Planetary Laboratory, viewed the 2017 flyby (inside of the orbit of the Moon) as a way to test and refine the global asteroid detection and tracking network designed to give warning of objects heading toward Earth.[14] Reddy coordinated the effort, involving over a dozen institutions worldwide.[14][18][19] In addition to the observation campaign, NASA used this exercise to test communications between the many observers and also to test internal U.S. government messaging and communications up through the executive branch and across government agencies, as it would during an actual predicted impact emergency. Results of the campaign were published on 3 November 2017.[5]
The asteroid remained too faint to be recovered with automated astronomical surveys until early September,[20] but a more targeted observation with the Very Large Telescope recovered it on 27 July 2017 at apparent magnitude 26.8, while the asteroid was 0.4 AU (60,000,000 km; 37,000,000 mi) from Earth, making it one of the dimmest asteroid recoveries ever. As such, 2012 TC4 has become the first known asteroid ever to be observed passing less than 1 Lunar distance from Earth twice in a row.[21] At the time of recovery the asteroid was about 100 million times fainter than what can be seen with the naked eye[22] and 500 times fainter than when it was discovered in 2012. As a result of the 2017 recovery observations, it was known that on 12 October 2017 at 5:42 UT, the asteroid would pass 0.0003352 AU (50,150 km; 31,160 mi) from Earth.[11] Then at 19:19 UT, the asteroid would pass 0.001856 AU (277,700 km; 172,500 mi) from the Moon.[11] 2012 TC4 peaked at about apparent magnitude 12.9,[23] and was too faint to be seen without a telescope. The Earth approach of 2017 increased the asteroid's orbital period from 1.67 years to 2.06 years.[24]
2012 TC4 reached a maximum apparent magnitude of 12.9 just prior to its closest approach, soon after which it came too close to the Sun to be seen with telescopes.[23] It was last seen from Mauna Kea Observatories on 21 October 2017 at an apparent magnitude of 24,[4] when the asteroid was 57° from the Sun.[10]
## Physical properties
size reference of 2012 TC4, based on radar observations.
### Fast rotator and tumbler
Studies of the asteroid's light curve in October 2012, found it to have a rotation period of 0.2038 hours (or 12 minutes and 14 seconds) with a brightness variation of 0.93 magnitude (U=3-), which is indicative for a non-spherical shape.[9][a] 2012 TC4 is a fast rotator, which is rather typical for its small size. The fastest rotator currently known is 2014 RC, a similarly-sized NEO, with a period of only 16 seconds. Lightcurves obtained during the 2017 encounter confirmed that 2012 TC4 is in a non-principal axis rotation, commonly known as tumbling.[7][5] The spin axis varies on timescales of minutes, with a second period of 0.142 hours (or 8.5 minutes).[8] The lightcurve amplitude suggests a ratio of largest to smallest axis of at least 2.3.[7]
Radar images were taken from Goldstone Observatory and Green Bank Telescope on 12 October 2017. The delay-doppler images had a range resolution of 1.9 meters/pixel, the highest resolution ever obtained using Goldstone transmissions.[6][1] The images showed that 2012 TC4 was a very elongated object about 50 feet (15 meters) long and roughly 25 feet (8 meters) wide.[5] The high circular polarization ratio found for 2012 TC4 is consistent with results seen from E- and V-type NEAs previously.[6] Observations from Arecibo Observatory were planned, but had to be cancelled due to damage to the observatory as a result of Hurricane Maria.[25]
### Composition
The spectrum of 2012 TC4 is that of an E- or Xe-type asteroid.[7][6] E-type asteroids tend to have a high albedo (>0.30). This agrees with the albedo of 0.35 found for 2012 TC4.[1] This type of asteroids is commonly found in the inner Main Belt.[26]
2012 TC4 is composed of igneous material.[5] The short rotation period of 2012 TC4 implies that it is not a rubble pile but rather a monolithic object of non-negligible strength, which is typical for very small asteroids.[7]
## Orbit change
As a result of 2012 TC4's frequent approaches to Earth, its orbit changes significantly over short periods of only decades. Its two observed close approaches and their effects are shown below:
| Date | Event | Semimajor axis (AU) | Perihelion (AU) | Aphelion (AU) | Eccentricity | Inclination (°) | Argument of perihelion (°) | Ascending node (°) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2012-10-01 | pre-2012 approach | 1.2744 | 0.9015 | 1.6472 | 0.2926 | 1.4097 | 234.7282 | 198.5560 |
| 2012-10-12 | 2012 approach | 1.3837 | 0.9115 | 1.8559 | 0.3413 | 1.2320 | 228.5354 | 198.4622 |
| 2012-10-30 | post-2012 approach | 1.3893 | 0.9305 | 1.8480 | 0.3302 | 0.8582 | 223.1271 | 198.1033 |
| 2017-10-01 | pre-2017 approach | 1.4155 | 0.9410 | 1.8901 | 0.3353 | 0.8566 | 221.8553 | 198.0054 |
| 2017-10-12 | 2017 approach | 1.7076 | 0.9522 | 2.4630 | 0.4424 | 0.1693 | 218.4570 | 193.6520 |
| 2017-10-30 | post-2017 approach | 1.6492 | 0.9711 | 2.3273 | 0.4112 | 0.5327 | 248.6359 | 208.5051 |
| 2050-01-01 | pre-2050 approach | 1.6226 | 0.9688 | 2.2765 | 0.4030 | 0.5266 | 266.6192 | 197.8009 |
Between 2012 and 2017, 2012 TC4's average distance from the sun increased by almost 0.4 AU, with the time it takes to orbit the sun increasing by 250 days. Its closest approach to the Sun also increased significantly, from 90% of the Earth's distance to the Sun to 97%, and its inclination lowered slightly, going from 1.4 degrees to less than 1/2 a degree relative to Earth's orbit.
As a result of non-gravitational forces such as the Yarkovsky effect on small bodies, it is difficult to constrain its orbit more than a few decades into the past or future.
## Notes
1. ^ a b Lightcurve plot of 2012 TC4 at the Palmer Divide Observatory by Brian D. Warner (2012). Rotation period 0.2038±0.0002 hours with a brightness amplitude of 0.93±0.05 mag. Summary figures for 2012 TC4 at the LCDB
## References
1. ^ a b c "The 2012 TC4 Observing Campaign -- Radar animations". University of Maryland. Retrieved 2017-11-02.
2. "JPL Small-Body Database Browser: (2012 TC4)" (2017-10-13 last obs.). Jet Propulsion Laboratory. Retrieved 2017-10-17.
3. ^ a b "MPEC 2012-T18 : 2012 TC4". IAU Minor Planet Center. 2012-10-07. Retrieved 2017-03-14. (K12T04C)
4. ^ a b c d "2012 TC4". Minor Planet Center. Retrieved 2017-11-04.
5. "Astronomers Complete First International Asteroid Tracking Exercise". JPL. 3 November 2017. Retrieved 2017-11-03.
6. "The 2012 TC4 Observing Campaign – Radar observations UPDATE October 12, 2017". University of Maryland. Retrieved 2017-11-02.
7. Ryan, William H.; Ryan, Eileen V. (2017). "Physical Characterization of NEA 2012 TC4" (PDF) (PDF). University of Maryland. Retrieved 8 March 2018.
8. "NEA 2012 TC4 -- Physical Properties". University of Maryland. Retrieved 2017-11-02.
9. ^ a b c d "LCDB Data for 2012 TC4". Asteroid Lightcurve Database (LCDB). Retrieved 2017-09-28.
10. ^ a b c "NEODyS-2 2012TC4". Department of Mathematics, University of Pisa, Italy. Retrieved 2017-11-04.
11. "Close Approach table for 2012 TC4". JPL. NASA. Retrieved 2017-11-03.
12. ^ "A Very Close Encounter". www.eso.org. Retrieved 2017-08-11.
13. ^ a b "Close Approach table for 2012 TC4 (using 7 day obs arc)". JPL. NASA. Archived from the original on 3 April 2013. Retrieved 2014-10-22.
14. ^ a b c "Asteroid Flyby Will Benefit NASA Detection and Tracking Network". JPL. NASA. 28 July 2017. Retrieved 8 March 2018.
15. ^ "TC4: HOW NASA PLANS TO TEST ITS PLANETARY DEFENSE SYSTEMS ON CLOSE-APPROACH ASTEROID". Retrieved 2017-10-07.
16. ^ "Date/Time Removed". NASA/JPL Near-Earth Object Program Office. Archived from the original on 17 October 2017. Retrieved 17 October 2017.
17. ^ "Earth Impact Risk Summary: 2012 TC4". NASA/JPL Near-Earth Object Program Office. Archived from the original on 2012-11-24. Retrieved 2017-03-13.
18. ^ Coleman, Nancy (1 August 2017). "NASA's planetary defense system will be put to the test in October". CNN. Retrieved 2 August 2017.
19. ^ "The 2012 TC4 Observing Campaign". University of Maryland. Retrieved 2017-11-02.
20. ^ "NEODyS-2 Possible recovery list". Department of Mathematics, University of Pisa, Italy. Archived from the original on 2017-03-14. Retrieved 2017-03-14.
21. ^ "MPEC 2017-P26 : 2012 TC4". IAU Minor Planet Center. 2017-08-06. Retrieved 2017-08-06. (K12T04C)
22. ^ Math: ${\displaystyle ({\sqrt[{5}]{100}})^{26.8-6.5}\approx 131800000}$
23. ^ a b "2012TC4 Ephemerides for 12 October 2017". NEODyS (Near Earth Objects – Dynamic Site). Retrieved 2017-08-07.
24. ^ "Orbit of NEA 2012 TC4". University of Maryland. Retrieved 2017-10-11.
25. ^ "Goldstone Radar Observations Planning: 2012 TC4". JPL. Retrieved 2017-11-02.
26. ^ Fornasier, S.; Clark, B. E.; Dotto, E. (July 2011). "Spectroscopic survey of X-type asteroids" (PDF). Icarus. 214 (1): 131–146. arXiv:1105.3380. Bibcode:2011Icar..214..131F. doi:10.1016/j.icarus.2011.04.022. Retrieved 13 January 2017.
# Range Chmin Chmax Add Range Sum
## Problem Statement
Given a size-$N$ integer sequence $a_0, a_1, \dots, a _ {N - 1}$, process the following $Q$ queries in order:
• 0 $l$ $r$ $b$: For each $i = l, \dots, {r-1}$, $a_i \gets \min(a_i, b)$
• 1 $l$ $r$ $b$: For each $i = l, \dots, {r-1}$, $a_i \gets \max(a_i, b)$
• 2 $l$ $r$ $b$: For each $i = l, \dots, {r-1}$, $a_i \gets a_i + b$
• 3 $l$ $r$: Print $\sum _ {i = l} ^ {r-1} a_i$
## Constraints
• $1 \leq N, Q \leq 200{,}000$
• $\vert a_i \vert \leq 10^{12}$ always holds while processing the queries.
• $0 \leq l < r \leq N$
## Input
$N$ $Q$
$a_0$ $a_1$ ... $a_{N - 1}$
$\textrm{Query}_0$
$\textrm{Query}_1$
:
$\textrm{Query}_{Q - 1}$
### # 1

Input:
```
5 7
1 2 3 4 5
3 0 5
2 2 4 100
3 0 3
0 1 3 10
3 2 5
1 2 5 20
3 0 5
```

Output:
```
15
106
119
147
```
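Within these limits, the standard approach is a lazy segment tree augmented with max/second-max and min/second-min information per node (the "Segment Tree Beats" technique), which handles all four operations in amortized polylogarithmic time. As a correctness baseline, here is a brute-force reference: a sketch that is correct but O(NQ), far too slow for the full constraints, and useful mainly for testing a fast solution against it.

```cpp
#include <cstdio>
#include <vector>
#include <algorithm>
using namespace std;

int main() {
    int n, q;
    if (scanf("%d %d", &n, &q) != 2) return 1;
    vector<long long> a(n);
    for (auto &x : a) scanf("%lld", &x);
    while (q--) {
        int type, l, r;
        scanf("%d %d %d", &type, &l, &r);
        if (type == 3) {                                  // range sum query
            long long s = 0;
            for (int i = l; i < r; ++i) s += a[i];
            printf("%lld\n", s);
        } else {
            long long b;
            scanf("%lld", &b);
            for (int i = l; i < r; ++i) {
                if (type == 0)      a[i] = min(a[i], b);  // chmin
                else if (type == 1) a[i] = max(a[i], b);  // chmax
                else                a[i] += b;            // add
            }
        }
    }
    return 0;
}
```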
Timelimit: 10 secs
# Re: SOFT HYPHEN
From: Markus Kuhn (Markus.Kuhn@cl.cam.ac.uk)
Date: Tue Nov 16 1999 - 08:42:40 EST
Klaus Weide wrote on 1999-11-10 12:27 UTC:
> I asume you are familiar with the dissenting treatise at
>
> http://www.hut.fi/~jkorpela/shy.html
I wasn't familiar with it, but a quick look at it tells me that I wish I
had written it myself. I like it very much and fully agree with Jukka
I really do believe that
- HTML documents should never contain soft hyphens. HTML formats
are unformatted and therefore should not contain characters such as
SHY that can only have been inserted as the result of a paragraph
formatting process.
- If people feel a real need to have control characters inside the text
that control hyphenation, then they could introduce a new ZERO WIDTH
HYPHENATION POINT, which would have a similar semantic as \-
under TeX (marking an explicit hyphenation opportunity in this word,
preferably also suppressing at the same time any implicit hyphenation
points that the hyphenation algorithm would otherwise provide).
Maybe there could even be both ZERO WIDTH HYPHENATION POINT and ZERO
WIDTH EXCLUSIVE HYPHENATION POINT, depending on whether its presence
is disabling the normal hyphenation algorithm for the remaining word
or not. (See also the \- in TeX versus the "- in the German.TeX
macro package, the latter of which is non-exclusive.)
- Inserting hyphenation points directly into a document in the
running text is usually a very bad idea, because it does not aid in
allowing to reformat the text later, it leads to inconsistent hyphenation
across a document, and it complicates search/replace algorithms.
The right solution is to allow the user to add to the document an
extension or exception list of the hyphenation dictionary for all
those words for which the default hyphenation algorithm leads to
unsatisfactory results. Similar to TeX's \hyphenation{Do-nau-dampf-
schiff-fahrt} command, which makes sure in the header that this
remarkably long word will be hyphenated correctly everywhere (!)
in the document, no matter how often it appears.
So I somewhat don't like the idea of adding a ZERO WIDTH {EXCLUSIVE}
HYPHENATION POINT to Unicode, because implementing it would probably be
abused as an excuse for not adding the only proper solution (hyphenation
exception lists). But even more I dislike the idea of simply abusing SHY
as an ill-defined ZERO WIDTH (EXCLUSIVE?) HYPHENATION POINT. See HTML. Yuck.
Markus
--
Markus G. Kuhn, Computer Laboratory, University of Cambridge, UK
Email: mkuhn at acm.org, WWW: <http://www.cl.cam.ac.uk/~mgk25/>
### Author Topic: 9.3 problem 18
#### Brian Bi
• Moderator
• Full Member
• Posts: 31
• Karma: 13
##### 9.3 problem 18
« on: March 25, 2013, 12:32:43 AM »
I'm having some trouble getting this problem to work out. There are four critical points: (0,0), (2, 1), (-2, 1), and (-2, -4). At the critical point (-2, -4), the Jacobian is \begin{pmatrix} 10 & -5 \\ 6 & 0 \end{pmatrix} with eigenvalues $5 \pm i\sqrt{5}$. Therefore it looks like it should be an unstable spiral point. However, when I plotted it, it looked like a node. Has anyone else done this problem?
http://www.math.psu.edu/melvin/phase/newphase.html
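As a quick check of those eigenvalues: the Jacobian has trace $10$ and determinant $0 - (-5)(6) = 30$, so
$$\lambda = \frac{10 \pm \sqrt{10^2 - 4\cdot 30}}{2} = 5 \pm i\sqrt{5},$$
complex with positive real part, which is why the linearization predicts an unstable spiral.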
« Last Edit: March 25, 2013, 12:46:22 AM by Brian Bi »
#### Victor Ivrii
• Elder Member
• Posts: 2572
• Karma: 0
##### Re: 9.3 problem 18
« Reply #1 on: March 25, 2013, 06:57:38 AM »
Quote from: Brian Bi on March 25, 2013, 12:32:43 AM
Explanation:
http://weyl.math.toronto.edu/MAT244-2011S-forum/index.php?topic=48.msg159#msg159
#### Brian Bi
• Moderator
• Full Member
• Posts: 31
• Karma: 13
##### Re: 9.3 problem 18
« Reply #2 on: March 25, 2013, 01:28:57 PM »
So it is a spiral point but I didn't zoom in closely enough?
#### Victor Ivrii
No, a standard spiral remains the same under any zoom. However, your spiral rotates rather slowly in comparison with how fast it moves away: as it makes one rotation ($\theta$ increases by $2\pi$), the exponent increases by $5 \times 2\pi/\sqrt{5}\approx 14$ and the radius increases $e^{14}\approx 1.2 \cdot 10^6$ times. If the initial distance was 1 mm, then after one rotation it becomes 1.2 km.
Try plotting $x'=a x- y$, $y'=x+ ay$ for $a=.001, .1, .5, 1, 1.5, 2$ to observe that for some $a$ you just cannot see the rotation.
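To make that experiment easy to reproduce, here is a small plotting sketch (an addition, assuming numpy and matplotlib are available); it draws the exact solution $x(t)=e^{at}\cos t$, $y(t)=e^{at}\sin t$ of the suggested system over one full rotation:
```
# Exact solution of x' = a x - y, y' = x + a y starting at (1, 0):
# x(t) = e^{a t} cos t,  y(t) = e^{a t} sin t.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 1000)        # theta increases by 2*pi
fig, axes = plt.subplots(2, 3, figsize=(12, 8))
for ax, a in zip(axes.flat, (0.001, 0.1, 0.5, 1, 1.5, 2)):
    r = np.exp(a * t)                      # radius after angle t
    ax.plot(r * np.cos(t), r * np.sin(t))
    ax.set_title(f"a = {a}")
    ax.set_aspect("equal")
plt.tight_layout()
plt.show()
```
For the larger values of $a$ the curve is indistinguishable from a ray from the origin, which is exactly the effect discussed above.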
# Homework Help: Temperature and Heat
1. May 9, 2006
### superdave
A 0.1 kg piece of aluminum at 90 degrees C is submerged in 1 kg of water at 10 degrees C. What is the temperature when they reach equilibrium?
Now I know, aluminum has a c of 920 J/(kg * degree C) and water has a c of 4186 J/(kg * degree C).
I'm not sure how to approach this.
Best I can think of is setting m*c*dT for the water equal to m*c*dT for the aluminum. But I have two unknown variables there (the dT's), so that can't be right. Help is appreciated.
2. May 9, 2006
### FredGarvin
What is the definition of your delta T? It is $$\Delta T = T_{end} - T_{start}$$
Does that point you in the right direction? Also, think about the physical situation when the two have reached equilibrium...
3. May 9, 2006
### Andrew Mason
This means you will need to work out two equations to determine the values of these two different delta T's.
AM
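Putting the two hints together: at equilibrium both objects share a single final temperature T, and the heat lost by the aluminum equals the heat gained by the water. A quick numeric check of that setup (a sketch, assuming no heat is lost to the surroundings):
```
# m_al*c_al*(90 - T) = m_w*c_w*(T - 10), solved for the common T.
m_al, c_al, T_al = 0.1, 920, 90    # kg, J/(kg*C), deg C
m_w,  c_w,  T_w  = 1.0, 4186, 10

T = (m_al * c_al * T_al + m_w * c_w * T_w) / (m_al * c_al + m_w * c_w)
print(round(T, 2))                 # about 11.72 deg C
```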
How to center text blocks in memoir class under twoside/default option?
I'm using memoir class to write a book; the output is fine for the purpose of printing -- the text on odd pages being placed slightly to the left, and the text of the even pages being placed slightly to the right.
However, now I want the text block to be centred on every page, i.e., the text blocks on odd and even pages to be placed identically. Could one achieve this with memoir?
I understand that letter or article instead of memoir do the trick, but then the paragraphs change, etc.
• Welcome to tex.sx!
– TivV
Feb 27, 2020 at 15:53
• Note that 'centered text' usually refers to a form of symmetrical text alignment with uneven margins on both sides, not a single-sided page layout.
– TivV
Feb 27, 2020 at 16:14
This is discussed in section 2.8 Side margins of the memoir documentation:
Some documents are designed to have, say, a very wide righthand margin in which to put illustrations; this leads to needing the spine margin on verso pages to be much larger than the spine margin on recto pages. This can be done with the oneside option. However, different headers and footers are required for the recto and verso pages, which can only be done with the twoside option. The way to get the desired effects is like this (twoside is the default class option):
\documentclass{memoir}
%%% set up the recto page layout
\checkandfixthelayout% or perhaps \checkandfixthelayout[lines]
\setlength{\evensidemargin}{\oddsidemargin}% after \checkandfix...
...
A complete, compilable example based on that snippet:
\documentclass{memoir}
%%% set up the recto page layout
\checkandfixthelayout% or perhaps \checkandfixthelayout[lines]
\setlength{\evensidemargin}{\oddsidemargin}% after \checkandfix...
\usepackage{blindtext}
\begin{document}
\blinddocument
\end{document}
# Carry out the following divisions -36 y^3 / 9 y^2
q1) Carry out the following divisions
ii)
$-36 y^3 \div 9 y^2$
We have,
$-36y^{3} = -1 \times 2 \times 2 \times 3 \times 3 \times y \times y \times y$
$9y^{2} = 3 \times 3 \times y \times y$
Therefore,
$\frac{-36y^{3}}{9y^{2}} = \frac {-1 \times 2 \times 2 \times 3 \times 3 \times y \times y \times y}{3 \times 3 \times y \times y} = -4y$
# Continuous reactor
Continuous reactors (alternatively referred to as flow reactors) carry material as a flowing stream. Reactants are continuously fed into the reactor and emerge as a continuous stream of product. Continuous reactors are used for a wide variety of chemical and biological processes within the food, chemical and pharmaceutical industries. A survey of the continuous reactor market reveals a daunting variety of shapes and types of machine. Beneath this variation, however, lies a relatively small number of key design features which determine the capabilities of the reactor. When classifying continuous reactors, it can be more helpful to look at these design features rather than the whole system.
## Batch versus continuous
Reactors can be divided into two broad categories, batch reactors and continuous reactors. Batch reactors are stirred tanks sufficiently large to handle the full inventory of a complete batch cycle. In some cases, batch reactors may be operated in semi-batch mode, where one chemical is charged to the vessel and a second chemical is added slowly. Continuous reactors are generally smaller than batch reactors and handle the product as a flowing stream. Continuous reactors may be designed as pipes with or without baffles, or as a series of interconnected stages. The advantages of the two options are considered below.
### Benefits of batch reactors
• Batch reactors are very versatile and are used for a variety of different unit operations (batch distillation, storage, crystallisation, liquid-liquid extraction etc.) in addition to chemical reactions.
• There is a large installed base of batch reactors within industry and their method of use is well established.
• Batch reactors are excellent at handling difficult materials like slurries or products with a tendency to foul.
• Batch reactors represent an effective and economic solution for many types of slow reactions.
### Benefits of continuous reactors
• The rate of many chemical reactions is dependent on reactant concentration. Continuous reactors are generally able to cope with much higher reactant concentrations due to their superior heat transfer capacities. Plug flow reactors have the additional advantage of greater separation between reactants and products giving a better concentration profile.
• The small size of continuous reactors makes higher mixing rates possible.
• The output from a continuous reactor can be altered by varying the run time. This increases operating flexibility for manufacturers.
## Heat transfer capacity
The rate of heat transfer within a reactor can be determined from the following relationship:
${\displaystyle q_{x}=UA(T_{p}-T_{j})}$
where:
qx: the heat liberated or absorbed by the process (W)
U: the heat transfer coefficient of the heat exchanger (W/(m2K))
A: the heat transfer area (m2)
Tp: process temperature (K)
Tj: jacket temperature (K)
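As a sketch, the relationship translates directly into code; the numbers below are illustrative placeholders, not values from the article:
```
def heat_transfer_rate(U, A, T_p, T_j):
    """q_x = U * A * (T_p - T_j), in watts."""
    return U * A * (T_p - T_j)

# e.g. U = 500 W/(m2 K), A = 2 m2, process at 350 K, jacket at 300 K:
print(heat_transfer_rate(500, 2, 350, 300))   # 50000 W
```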
From a reactor design perspective, heat transfer capacity is heavily influenced by channel size, since this determines the heat transfer area per unit volume. Channel size can be categorised in various ways; in the broadest terms, the categories are as follows:
Industrial batch reactors : 1 – 10 m2/m3 (depending on reactor capacity)
Laboratory batch reactors : 10 – 100 m2/m3 (depending on reactor capacity)
Continuous reactors (non micro) : 100 - 5,000 m2/m3 (depending on channel size)
Micro reactors : 5,000 - 50,000 m2/m3 (depending on channel size)
Small diameter channels have the advantage of high heat transfer capacity. Against this however they have lower flow capacity, higher pressure drop and an increased tendency to block. In many cases, the physical structure and fabrication techniques for micro reactors make cleaning and unblocking very difficult to achieve.
## Temperature control
Temperature control is one of the key functions of a chemical reactor. Poor temperature control can severely affect both yield and product quality. It can also lead to boiling or freezing within the reactor which may stop the reactor from working altogether. In extreme cases, poor temperature control can lead to severe over pressure which can be destructive on the equipment and potentially dangerous.
### Single stage systems with high heating or cooling flux
In a batch reactor, good temperature control is achieved when the heat added or removed by the heat exchange surface (qx) equals the heat generated or absorbed by the process material (qp). For flowing reactors made up of tubes or plates, satisfying the relationship qx = qp does not deliver good temperature control since the rate of process heat liberation/absorption varies at different points within the reactor. Controlling the outlet temperature does not prevent hot/cold spots within the reactor. Hot or cold spots caused by exothermic or endothermic activity can be eliminated by relocating the temperature sensor (T) to the point where the hot/cold spot exists. This, however, leads to overheating or overcooling downstream of the temperature sensor.
Many different types of plate or tube reactors use simple feed back control of the product temperature. From a user’s perspective, this approach is only suitable for processes where the effects of hot/cold spots do not compromise safety, quality or yield.
### Single stage systems with low heating or cooling flux
Micro reactors can be tubes or plates and have the key feature of small-diameter flow channels (typically less than 1 mm). The significance of micro reactors is that the heat transfer area (A) per unit volume of product is very large. A large heat transfer area means that high values of qx can be achieved with low values of Tp – Tj. The low value of Tp – Tj limits the extent of overcooling that can occur. Thus the product temperature can be controlled by regulating the temperature of the heat transfer fluid (or the product).
The feedback signal for controlling the process temperature can be the product temperature or the heat transfer fluid temperature. It is often more practical to control the temperature of the heat transfer fluid.
Although micro reactors are efficient heat transfer devices, the narrow channels can result in high pressure drops, limited flow capacity and a tendency to block. They are also often fabricated in a manner which makes cleaning and dismantling difficult or impossible.
### Multistage systems with high heating or cooling flux
Conditions within a continuous reactor change as the product passes along the flow channel. In an ideal reactor the design of the flow channel is optimised to cope with this change. In practice, this is achieved by breaking the reactor into a series of stages. Within each stage the ideal heat transfer conditions can be achieved by varying the surface to volume ratio or the cooling/heating flux. Thus stages where process heat output is very high either use extreme heat transfer fluid temperatures or have high surface to volume ratios (or both). Tackling the problem as a series of stages allows extreme cooling/heating conditions to be employed at the hot/cold spots without suffering overheating or overcooling elsewhere. The significance of this is that larger flow channels can be used. Larger flow channels are generally desirable as they permit higher flow rates, lower pressure drops and a reduced tendency to block.
## Mixing
Mixing is another important classifying feature for continuous reactors. Good mixing improves the efficiency of heat and mass transfer.
In terms of trajectory through the reactor, the ideal flow condition for a continuous reactor is plug flow (since this delivers uniform residence time within the reactor). There is however a measure of conflict between good mixing and plug flow since mixing generates axial as well as radial movement of the fluid. In tube type reactors (with or without static mixing), adequate mixing can be achieved without seriously compromising plug flow. For this reason, these types of reactor are sometimes referred to as plug flow reactors.
Continuous reactors can be classified in terms of the mixing mechanism as follows:
### Mixing by diffusion
Diffusion mixing relies on concentration or temperature gradients within the product. This approach is common with micro reactors where the channel thicknesses are very small and heat can be transmitted to and from the heat transfer surface by conduction. In larger channels and for some types of reaction mixture (especially immiscible fluids), mixing by diffusion is not practical.
### Mixing with the product transfer pump
In a continuous reactor, the product is continuously pumped through the reactor. This pump can also be used to promote mixing. If the fluid velocity is sufficiently high, turbulent flow conditions exist (which promotes mixing). The disadvantage with this approach is that it leads to long reactors with high pressure drops and high minimum flow rates. This is particularly true where the reaction is slow or the product has high viscosity. This problem can be reduced with the use of static mixers. Static mixers are baffles in the flow channel which are used to promote mixing. They are able to work with or without turbulent conditions. Static mixers can be effective but still require relatively long flow channels and generate relatively high pressure drops. The oscillatory baffled reactor is a specialised form of static mixer where the direction of process flow is cycled. This permits static mixing with low net flow through the reactor. This has the benefit of allowing the reactor to be kept comparatively short.
### Mixing with a mechanical agitator
Some continuous reactors use mechanical agitation for mixing (rather than the product transfer pump). Whilst this adds complexity to the reactor design, it offers significant advantages in terms of versatility and performance. With independent agitation, efficient mixing can be maintained irrespective of product throughput or viscosity. It also eliminates the need for long flow channels and high pressure drops.
One less desirable feature associated with mechanical agitators is the strong axial mixing they generate. This problem can be managed by breaking up the reactor into a series of mixed stages separated by small plug flow channels.
The most familiar form of continuous reactor of this type is the continuously stirred tank reactor (CSTR). This is essentially a batch reactor used in a continuous flow. The disadvantage with a single-stage CSTR is that it can be relatively wasteful of product during start-up and shutdown. The reactants are also added to a mixture which is rich in product. For some types of process, this can affect quality and yield. These problems are managed by using multi-stage CSTRs. At the large scale, conventional batch reactors can be used for the CSTR stages.
# What is the Sandbox?
This "Sandbox" is a place where Code Golf users can get feedback on prospective challenges they wish to post to the main page. This is useful because writing a clear and fully specified challenge on the first try can be difficult. There is a much better chance of your challenge being well received if you post it in the Sandbox first.
See the Sandbox FAQ for more information on how to use the Sandbox.
## Get the Sandbox Viewer to view the sandbox more easily
To add an inline tag to a proposal use shortcut link syntax with a prefix: [tag:king-of-the-hill]
# Be Rational! Finding Rational Roots of Polynomials
In this challenge you are to find all rational zeroes of a polynomial. The results have to be exact. I would suggest using the Rational Root Theorem.
## Input
Input can be through function argument, command argument, or user input. Input will be a polynomial. The polynomial may have rational coefficients. If a term has a coefficient of zero, that term will not be included in the input. x^1 will be abbreviated as x.
Examples:
-x^7+4x^4/7-21x^2/2+5x+23/19
...//More to be added when posted
## Output
Output will be a list of the rational roots of the input polynomial. Output can be through function return value or stdout. If output is in string format, use improper fractions separated by commas. The output must be simplified as much as possible. Duplicate roots should not be printed more than once.
Examples:
4/5,2/3,-15/2
...//More to be added when posted.
## Example Cases
> x^2-1
1,-1
...//More to be added when posted.
Just like all questions, the answer with the lowest byte count wins.
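For reference, a non-golfed sketch of the intended approach (the names are mine; it takes integer coefficients highest-degree first, so rational coefficients would first be scaled by a common denominator):
```
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):            # x^2 - 1  ->  [1, 0, -1]
    roots = set()
    while coeffs[-1] == 0:             # constant term 0: x = 0 is a root
        roots.add(Fraction(0))
        coeffs = coeffs[:-1]
    # Rational Root Theorem: any root p/q has p | constant, q | leading.
    for p in divisors(coeffs[-1]):
        for q in divisors(coeffs[0]):
            for cand in (Fraction(p, q), Fraction(-p, q)):
                if sum(c * cand ** i
                       for i, c in enumerate(reversed(coeffs))) == 0:
                    roots.add(cand)
    return sorted(roots)

print(rational_roots([1, 0, -1]))      # [Fraction(-1, 1), Fraction(1, 1)]
```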
## Questions:
Is this too much like Peter's earlier question?
Are there any points I haven't covered or are not clear?
Any grammar/spelling mistakes?
Any tips on improved formatting?
• Somewhat related question – Sp3000 Aug 17 '15 at 2:01
• The title is one character shy of the minimum title length and "abbreviate" should be "abbreviated." Can the roots be listed in any order? Do they have to be fully reduced or could we, for example, use 2/4 in place of 1/2? I would also suggest rewording "fractional coefficients" to "rational coefficients." – Alex A. Aug 17 '15 at 2:38
# Rec(ursion)less execution
We have a simple (non-Turing complete) language.
Each line of the program is a sequence of terms separated by single spaces. Some of the terms (those ending with ()) are function calls. Some lines (those whose first term ends with :) are function definitions. The lines that are not function definitions are called expressions.
This is a sample program:
funa: one two three
funb: funa() four oclock rock
here we go funb()
Here we have two function definition lines and one expression line.
And this is the BNF for this language, just for clarity:
literal ::= [any printable char other than ' ', ':', '(', ')']+
function_call ::= literal '()'
term ::= literal | function_call
expression ::= term | expression ' ' term
function_definition ::= literal ':' ' ' expression
program_line ::= function_definition | expression
program ::= [program_line '\n']+
The task is to write a program or function that validates the program P and performs EXECUTE(P) if the program adheres to validation rules.
Validation rules:
1. EXECUTE(P) eventually stops (the language is not Turing complete, so it is enough to check whether one of the called functions would eventually cause itself to be called, either by being recursive itself or by being "mutually recursive" with other functions it calls),
2. while calling EXECUTE(P) -> EXECUTE_LINE(P,L), each function definition lookup succeeds (in other words, the program will never try to call an undefined function).
If the program does not pass validation rule 1 or 2, ERROR: RECURSIVE FUNCTION or ERROR: UNRECOGNIZED FUNCTION should be printed, respectively.
When both rules seem to be violated, assume that the search for an undefined function causes the algorithm to fail (stop) instantly, so recursion that would occur later, had the function been found, is not reported. We only report ERROR: UNRECOGNIZED FUNCTION in this case (see Example 5 below).
In a similar way, if recursion prevents a call to a function that would otherwise not be found, only ERROR: RECURSIVE FUNCTION is reported, even though the function containing the call to the unrecognized function is called (see Example 4 below).
If validation does not report any of those two errors, EXECUTE(p) should be called.
Executing a program is defined like this:
EXECUTE(P)
- for each line L in the program P:
- if L is not a function definition EXECUTE_LINE(P,L)
EXECUTE_LINE(P,L)
- for each term T in L:
- if T is a literal
print T followed by single space
else
FH = T without '()' + ':'
FDL = find in P a line starting with term FH
FD = all terms of FDL after FH
EXECUTE_LINE(P,FD)
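To make the semantics above concrete, here is a minimal, non-golfed reference interpreter (a sketch; the function names are mine). Validation is done as a dry run of EXECUTE(P): errors are raised in the order execution would hit them, and output is only emitted when no error surfaces:
```
def run(lines):
    defs, body = {}, []
    for line in lines:
        terms = line.split(' ')
        if terms[0].endswith(':'):
            defs[terms[0][:-1]] = terms[1:]
        else:
            body.append(terms)

    out = []
    def exec_terms(terms, stack):
        for t in terms:
            if t.endswith('()'):
                name = t[:-2]
                if name not in defs:
                    raise RuntimeError('ERROR: UNRECOGNIZED FUNCTION')
                if name in stack:
                    raise RuntimeError('ERROR: RECURSIVE FUNCTION')
                exec_terms(defs[name], stack | {name})
            else:
                out.append(t)

    try:
        for terms in body:
            exec_terms(terms, set())
    except RuntimeError as e:
        return str(e)
    return ' '.join(out)

prog = ["a: b()", "b: a()", "c: cucumber", "other: nofun()",
        "d: apple banana and c()", "we have d()"]
print(run(prog))    # we have apple banana and cucumber
```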
Example 1:
a: b()
b: a()
c: cucumber
other: nofun()
d: apple banana and c()
we have d()
Output:
we have apple banana and cucumber
Note that, in spite of the existence of the mutually recursive function definitions a and b and of the function definition other calling the undefined function nofun, no error was raised, because execution never reaches any of these functions.
Example 2:
other: nofun()
c: apple banana and cucumber
we have other()
Output:
ERROR: UNRECOGNIZED FUNCTION
Example 3:
a: b()
b: a()
c: apple banana and cucumber
we have b()
Output:
ERROR: RECURSIVE FUNCTION
Example 4:
a: a() nofun()
hello a()
Output:
ERROR: RECURSIVE FUNCTION
We don't output ERROR: UNRECOGNIZED FUNCTION, because the program would never try to execute nofun, recursing infinitely into the first term of the definition of a.
Example 5:
a: nofun() a()
hello a()
Output:
ERROR: UNRECOGNIZED FUNCTION
We don't output ERROR: RECURSIVE FUNCTION here, because the program would fail to find a definition for nofun before even trying to recurse into a.
Accepted solution: a function or a program that takes a program in the above-defined language, validates it and runs it. You can assume that the program has already been split into single lines; however, you can use raw input or accept the program as a single string when convenient.
This is code-golf, so the shortest submission in terms of bytes will win. However, all working submissions in all languages will be appreciated.
• The two rules seem to be inconsistent: the first one prohibits the recursive function f: f() even if f isn't called; and the second prohibits the function which calls an undefined function f: fail(), but only if f is called. It would be less confusing to make them both syntactic (based on the text of P) or both semantic (based on the execution of P). – Peter Taylor Aug 22 '15 at 7:35
• @PeterTaylor I didn't mean to prohibit existence of recursive functions that are not called, this was my description that was not precise enough. I clarified the description and added some more examples. So now, both rules are based on execution of P rather than text of P. – pawel.boczarski Aug 22 '15 at 8:57
(anything in italicised parentheses is a note for the sandbox)
Triple Triad is a card game from the Final Fantasy series. I've never played an FF game that included it, though, so I'm only familiar with the version in the Pokémon fangame Pokémon Insurgence. It may or may not be different from the original version, so apologies in advance if this isn't quite what you're expecting. :)
In Triple Triad, each card has 4 numerical stats that range from 1 to 10*: An "up" value, a "left" value, a "right" value, and a "down" value. Here's an example of a card with an "up" value of 1, a "left" value of 6, a "right" value of 3 and a "down" value of 2:
At the beginning of each game, players construct a "deck" of five cards, chosen from their entire collection. These cards are kept secret from the other player.
Triple Triad is played on a 3x3 grid. Players take turns choosing a card from their deck and placing it on an empty square of the grid. The goal of the game is to control the majority of the cards when the grid is filled. When the game is complete, one card is randomly selected from the loser's deck and given to the winner.
### Control
When a card is placed on the grid, it is controlled by its owner. In order to come out of the game victorious, you must gain control of cards that the opponent played.
To gain control of an opponent's card, you must place a card of your own that "beats" it adjacent (not including diagonals) to the card you want to take control of. Whether or not your card beats the opponent's depends on their stats and which side you place your card on.
Imagine this board as the current game state and the blue card as a card in my deck:
If I want to take control of the opponent's Numel, I have to place my Mareep adjacent to it. This leaves only two options: The top middle square or the middle right square. If I were to place it on the top middle square, my card would be to the left of the opponent's. As a result, Mareep's "right" value of 2 would be contested against Numel's "left" value of 4. 2 is not greater than 4, so my opponent would retain control of Numel. Note that my value must be strictly greater; a tie would be the same as a loss.
If I were to place my card in the middle right square, it would be below the opponent's card. As a result, Mareep's "up" value of 4 would be contested against Numel's "down" value of 3. 4 is greater than 3, so I would gain control of Numel (which would turn blue to indicate that).
This process is applied in all four directions at once. If there was a card below Mareep with an "up" value of 1 or 2, I would gain control of it as well. However, gaining control is not done passively or recursively. Control can only be contested at the exact moment a card is placed, and gaining control of a card does not count as "placing it".
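A sketch of just that capture rule (my own naming; cards are dicts of the four stats, and only Numel's left/down and Mareep's right/up values below come from the example, the rest are made up):
```
SIDES = {(-1, 0): ('up', 'down'),      # neighbour above: my up vs their down
         (1, 0):  ('down', 'up'),
         (0, -1): ('left', 'right'),
         (0, 1):  ('right', 'left')}

def place(board, pos, card, owner):
    board[pos] = (owner, card)
    for (dr, dc), (mine, theirs) in SIDES.items():
        npos = (pos[0] + dr, pos[1] + dc)
        if npos in board and board[npos][0] != owner:
            their_card = board[npos][1]
            if card[mine] > their_card[theirs]:   # strictly greater flips
                board[npos] = (owner, their_card)

board = {}
numel  = {'up': 5, 'left': 4, 'right': 2, 'down': 3}
mareep = {'up': 4, 'left': 6, 'right': 2, 'down': 1}
place(board, (1, 1), numel, 'red')
place(board, (2, 1), mareep, 'blue')   # below Numel: up 4 beats down 3
print(board[(1, 1)][0])                # blue
```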
## Tournament Rules
Each bot is given a budget of (TBD) with which to purchase cards before the tournament (this will be done by the author, not the bot itself, and will be hardcoded into the bot). Here are the cards, along with their costs:
There - 1000
will - 1500
be - 1500
a - 2500
list - 3000
here - 4000
The bots will play in a Round Robin tournament with Bo3 matches (subject to change. not sure if round robin will work well or if i should be using Bo5 or what). The bot that wins the most matches will be declared the winner.
### Match Procedure
1. Each bot chooses 5 cards from their collection to create a deck. (If they have less than five cards, they forfeit the match.)
2. The game is played as described above, until the board is filled. (i'm not sure how to decide who should go first. alternate? winner of the last game? loser?)
3. When the game is complete, a random card from the loser's deck is removed from their collection and inserted into the winner's collection. (At the end of the match, each bot's collection is reset to its original state.) (i'm not totally sure about choosing a random card. it's how the original game works, but it's not necessarily 100% fair. the idea is that over the course of a round robin tournament, any RNG variance will be smoothed out, but i don't know...)
4. Repeat steps 1 through 3 until one bot has won 2 games.
## Input / Output Specifications and Controller Details
(none yet lmao)
* The Insurgence variant has some nonsense regarding Pokémon types at higher difficulty levels, but this challenge will ignore that.
• Your mention of excluding recursive moves makes me wonder what the game would be like including recursion, on a much larger board... (perhaps as a separate KotH) – trichoplax Sep 1 '15 at 14:34
• Choosing a random card for the winner will introduce unfair variation in a single round robin, but I think that's worth it for the increased variety of games that will test strategies more thoroughly. As long as you don't mind running the round robin multiple times until it converges on a fair score, I would keep the random element. – trichoplax Sep 3 '15 at 7:23
# Is a point inside a polygon?
Given a polygon with 2 < N < 11 sides on the 2D plane, find out if a given point is inside the polygon.
The input can be an array of points x, y, each determining a vertex, or a string in the format X Y x1 y1 x2 y2 ... xN yN (you may choose another separator). X and Y are the coordinates of the point to be tested. The list then contains N vertices, and the last point is connected to the first point. All x and y are integers.
Using any built-in functions performing the test is prohibited (like this one)
You should consider that a point is inside a polygon also when it is one of its vertices or it lies on one of its edges.
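For orientation, a non-golfed ray-casting sketch (my own code, not a submission); boundary points count as inside, and integer inputs keep the edge test exact:
```
def inside(px, py, poly):              # poly = [(x1, y1), ..., (xN, yN)]
    hit = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        # On the edge? (zero cross product, within the segment's box.)
        if ((x2 - x1) * (py - y1) == (y2 - y1) * (px - x1)
                and min(x1, x2) <= px <= max(x1, x2)
                and min(y1, y2) <= py <= max(y1, y2)):
            return True
        # Does a ray going right from the point cross this edge?
        if (y1 > py) != (y2 > py):
            if px < x1 + (py - y1) * (x2 - x1) / (y2 - y1):
                hit = not hit
    return hit

print(inside(0, 0, [(-1, -1), (1, -1), (1, 1), (-1, 1)]))   # True
```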
# Notes
1. Tags would be and
2. Should max x and y be determined?
3. Should they be positive (uint) values only?
4. Some questions of this type have already been asked on StackOverflow (example 1, example 2). Is it ok to ask this question here? (I didn't find it)
5. Should the question also allow non-convex polygons? A non-convex (concave) polygon is one which has at least one interior angle larger than 180 deg. And can edges intersect? (I think that can be too complicated, and in my opinion it should be for convex polygons only.)
• The question should definitely allow non-convex polygons, but if it allows self-intersection then you'll need to pick a winding rule and explain how it works. – Peter Taylor Sep 8 '15 at 19:58
• Be aware that some of the answers here generalize to polygons. – xnor Sep 12 '15 at 18:57
# Literal Fourier Transform (or Fouriest Numbers?)
(inspired by this Saturday Morning Breakfast Cereal comic)
It's called a fourier transform when you take a number and convert it to the base system where it will have more fours, thus making it "fourier". If you pick the base with the most fours, the number is said to be "fouriest".
Goal: Given a positive integer n in base 10:
$$4 \le n_{10} \le 2^{31} -1$$
Write a function that displays its equivalent in another base in the format below that maximizes the number of 4s, i.e. the fouriest:
Base x; Fouriest y
Notes:
1. If multiple bases tie on the number of 4s, any base will be accepted. E.g. if the input has the fouriest value already, it's fine to return/display the input.
2. Numbers may not necessarily yield a 4; see the last example below.
Examples:
• (from comic) 624 -> Base 5; Fouriest 4444
• 2316780 -> Base 14; Fouriest 444444
• 4 -> Base 10; Fouriest 4
• 5 -> Base 10; Fouriest 5
Bonus:
1. Have your function accept a second argument m for the input base: 1/4 reduction in submission size.
Winning:
Shortest code in bytes wins.
This is code-golf.
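A straightforward, deliberately unoptimised reference sketch (the function name and the '-' digit separator for bases above 10 are my own conventions):
```
def fouriest(n):
    best = (-1, None, None)            # (#fours, base, digits)
    for base in range(2, n + 2):       # above base n+1 nothing changes
        m, digits = n, []              # fine as a spec, slow for large n
        while m:
            digits.append(m % base)
            m //= base
        fours = digits.count(4)
        if fours > best[0]:
            best = (fours, base, digits[::-1])
    fours, base, digits = best
    return f"Base {base}; Fouriest {'-'.join(map(str, digits))}"

print(fouriest(624))                   # Base 5; Fouriest 4-4-4-4
```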
• If two different bases give the same number of 4s, can either be chosen? – trichoplax Sep 8 '15 at 7:18
• @trichoplax yes, I think that'll be an extension of point 1 under "Notes". :) – h.j.k. Sep 8 '15 at 7:22
• I'm not personally a fan of bonuses (I prefer just one well defined objective), but if you choose to have a bonus it might work better to make it a percentage reduction instead of a number of bytes. Otherwise the bonus has very different effect on different languages. – trichoplax Sep 8 '15 at 7:38
• P.S. With or without a bonus, I think this is a brilliant challenge. – trichoplax Sep 8 '15 at 7:42
• @trichoplax along the lines of everything-to-do-with-four, I was thinking of shaving 1/4 for the bonus component, but then I'm not too sure if that's too much. :p I guess I can still consider that... while awaiting for other suggestions. :D – h.j.k. Sep 8 '15 at 8:01
• Dupe – Peter Taylor Sep 8 '15 at 13:32
• @PeterTaylor thanks for pointing that out! I wouldn't be posting this then. It didn't mention "fourier", which explains why it slipped through my search for posted questions. – h.j.k. Sep 8 '15 at 14:23
## Find all matchings (code-golf, permutations)
Golf this SO question in any language. Fewest bytes wins.
Given two equal-size sets of positive integers,
A={3,1,5}
B={2,4,3}
a matching pairs up elements from each set, like:
{(5, 2), (1, 4), (3, 3)}
There's one matching for each permutation of n elements, where n is the size. Your goal is to print or return all the matchings.
{(3, 2), (1, 4), (5, 3)}
{(3, 2), (1, 3), (5, 4)}
{(1, 2), (3, 4), (5, 3)}
{(1, 2), (5, 4), (3, 3)}
{(5, 2), (3, 3), (1, 4)}
{(3, 4), (1, 3), (5, 2)}
Input: Two nonempty equal-size collections (lists, arrays, sets) of positive integers. Numbers won't repeat within a collection, but may overlap between them. If your collection is ordered, you may assume it to be sorted.
Output: Print or return all possible matchings.
Each matching must appear exactly once, in any order. They must be somehow grouped or separated, so you can tell where each one begins and ends. These rules also apply to the pairs in each matching.
Banned: Built-ins that generate matchings or permutations.
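Since library permutation generators are banned, here is a hand-rolled reference sketch (my own naming) that answers can be checked against:
```
def matchings(A, B):
    if not A:
        return [[]]
    out = []
    a, rest = A[0], A[1:]
    for i, b in enumerate(B):
        for tail in matchings(rest, B[:i] + B[i + 1:]):
            out.append([(a, b)] + tail)
    return out

for m in matchings([3, 1, 5], [2, 4, 3]):
    print(m)                           # 6 matchings, one per permutation
```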
I have an idea for a challenge but I'm not sure if it would be best as a code-golf or a popularity-contest. I'm also not sure what rules I should apply to make it more interesting.
## "Convert an image to LEGO safe-colours"
The task is to convert the existing colours from a JPEG image into LEGO-safe colours.
## What are LEGO-Safe colours?
For the purpose of this challenge, LEGO-Safe colours are defined as the seven oldest solid colours produced by LEGO that are still in production. (The exception is grey, which has changed in recent years; for the purpose of this challenge, the original grey will be used.)
The colours are hexadecimal approximations from this list.
White, #f2f3f2
Grey, #a1a5a2
Black, #000000
Bright Red, #c4281b
Bright Yellow, #f5cd2f
Dark Green, #287f46
Bright Blue, #0d69ab
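The core mapping step is small; a sketch (my own code, with Pillow or similar assumed to supply the pixels) that snaps one RGB pixel to the nearest palette colour by squared distance:
```
PALETTE = {'White': (0xF2, 0xF3, 0xF2), 'Grey': (0xA1, 0xA5, 0xA2),
           'Black': (0x00, 0x00, 0x00), 'Bright Red': (0xC4, 0x28, 0x1B),
           'Bright Yellow': (0xF5, 0xCD, 0x2F),
           'Dark Green': (0x28, 0x7F, 0x46),
           'Bright Blue': (0x0D, 0x69, 0xAB)}

def nearest(pixel):
    return min(PALETTE.values(),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(pixel, c)))

print(nearest((200, 40, 30)))          # (196, 40, 27), i.e. Bright Red
```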
## Images
You may demonstrate your results using images provided by yourself or the ones shown below.
Lego Factory (Colour)
Lego and Duplo Bricks (Greyscale)
• Dupe – Peter Taylor Sep 18 '15 at 14:27
• @PeterTaylor Bummer – Ambo100 Sep 19 '15 at 9:26
# Another cake question - Share it fairly!
I'm having a party, and there were going to be 8 of us. As I like to cut the slices of cake fairly, I normally get a round cake and make the cuts with the help of a protractor (any code golfer would!) But this time I found the bakery were making octagonal cakes, so I bought one of these to help me with my cutting.
The problem is, now there are only 7 of us! Some people are so inconsiderate, dropping out at the last minute! How am I going to cut the cake fairly now?
Well it turns out that at https://puzzling.stackexchange.com/a/18244/4768 they have the answer. Although my protractor is no good, it's still true that if I start my cuts at evenly spaced points on the perimeter of the cake and end at the centre, all the slices will be of equal size and have an equal area of icing. This is very important. This is quite easy to prove for cakes in the shape of any regular polygon, using the fact that the area of a triangle is base*height/2.
I need you to write me a program or function to show me how to cut my cake.
The code will take 2 inputs: the number of edges on the cake (3 to 15) and the number of pieces to cut it into (3 to 40).
It will output a diagram showing the cake (a regular polygon) and the positions where the cuts are to be made (lines radiating out from the centre to equally spaced points on the perimeter.)
Some examples are shown below. Note for example that for the case 3,9 the slices are all equal size, but the angles at the centre of the cake are not.
You can orient the cake any way you like, but one of the cuts has to pass through a vertex for easy comparison of answers.
Scoring: this is code golf. Shortest code in bytes wins.
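The geometry reduces to walking the perimeter; a sketch (my own helper, on a unit-circumradius polygon with one cut at a vertex) that computes the k cut points for an n-sided cake:
```
from math import cos, sin, pi

def cut_points(n, k):
    verts = [(cos(2 * pi * i / n), sin(2 * pi * i / n)) for i in range(n)]
    pts = []
    for j in range(k):
        d = j * n / k                  # distance along perimeter, in edge units
        i, f = int(d) % n, d % 1       # edge index, fraction along that edge
        (x1, y1), (x2, y2) = verts[i], verts[(i + 1) % n]
        pts.append((x1 + f * (x2 - x1), y1 + f * (y2 - y1)))
    return pts

# Each cut runs from a point in cut_points(3, 9) to the centre (0, 0).
print(cut_points(3, 9))
```
This works because all edges of a regular polygon have equal length, so equal spacing measured in "edge units" is equal spacing along the perimeter.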
# Last Minute Shipments
Here's the situation: You're an engineer for Acme Rail Shipping Inc. There's a string of shipments to make for tomorrow but it turns out at the last moment that they're actually expected to arrive today! There isn't enough time to stop at every destination. It doesn't matter if you skip a few stops as long as you get to others as fast as possible. Your task is to figure out which ones to skip.
### Challenge
Given a list of stops and a minimum number of stops to make, write a program or function that outputs the list of stops to make that results in the lowest total time taken. Here's the catch: Your train is very long and very heavy, so it takes a long time to accelerate. Sometimes it may be more efficient to skip a stop rather than to slow down to make it.
• Your train starts with the front at the origin and is at rest.
• Each stop is a point along your 1D route defined by a positive distance from the origin.
• To make a stop, you must slow down to rest with the front of the train on the point. The shipment is delivered immediately so right as you reach rest, you start accelerating again.
• The train accelerates uniformly at 0.3 m/s^2 and brakes uniformly at 1.2 m/s^2 (I'm not sure how realistic these values are. Subject to change. Feedback would be helpful.)
• Assume that there is no upper speed limit to the train. Therefore, you should be accelerating at every point between stops.
• Added: You cannot go backwards.
• Added: Total time will be measured from when you start to when you pass or arrive at the last stop, regardless of whether you decide to make it or not. You can't just leave your train in the middle of the route! This means that, for example, if you skip the last stop, then the total time will include the time taken for the train to accelerate from the second to last stop and reach the last stop at some velocity. If you don't skip the last stop, the measured time will end when you come to rest at the last stop.
### Input
Input will be a number for the minimum number of stops to make, followed by a list of distances from the origin for each stop. The first item in the list will be the distance for stop #1, the next will be distance for stop #2, then distance for stop #3, etc. Distances in the list are strictly increasing and are defined in meters.
You can take input in any reasonable format, such as a delimited list on stdin with the first item as the minimum number of stops, as program arguments with the first item as the minimum number, or as parameters to a function.
### Output
Output will be a list of stops to make. This list will contain the numbers of each stop, as defined in the previous section. For example, if it is determined that stops 2, 3, and 5 out of five stops need to be made, the output would be 2, 3, 5.
Output can be any reasonable format, such as a delimited list on stdout, or an array return value from a function. Trailing whitespace or newlines are acceptable. The list doesn't necessarily have to be sorted.
### Example I/O
Coming soon
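Until the examples are filled in, here is a sketch of just the kinematics (my own helper; choosing which stops to skip on top of this is the actual challenge), giving the time to cover a distance D starting and ending at rest:
```
from math import sqrt

A, B = 0.3, 1.2                        # accelerate / brake, m/s^2

def stop_to_stop_time(D):
    # v^2/(2A) + v^2/(2B) = D  =>  peak speed v, then t = v/A + v/B.
    v = sqrt(2 * D * A * B / (A + B))
    return v / A + v / B

print(round(stop_to_stop_time(1000), 1))   # ~91.3 s for a 1 km hop
```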
Standard rules apply. Shortest code wins, however clever solutions will get my upvotes. Good luck!
• This feels like a minor variant on the several existing single-source shortest path questions. It's borderline enough that I wouldn't use my unilateral close-as-dupe powers, but I wouldn't be surprised to see it closed as a dupe. – Peter Taylor Sep 21 '15 at 17:52
• @PeterTaylor I did search before I posted this but I didn't find anything, maybe I didn't use the right terms. Are there any examples specifically that you are referring to? – DankMemes Sep 21 '15 at 18:41
• codegolf.stackexchange.com/search?q=shortest+path+is%3Aquestion Not all of them are shortest path in a graph, but I think most of them are. – Peter Taylor Sep 21 '15 at 21:42
• @PeterTaylor I intended distance to always be the same, and the optimization to be for shortest time (I've updated the answer to clarify distance). The point here isn't that you must visit all nodes as fast as possible, it's that you must visit n of the nodes and decide which ones to pass. – DankMemes Sep 21 '15 at 22:07
• Yes, I understood that. So the graph has vertices which are pairs (stop, num visited), for each k > j >= i there is an edge (stop_j, i) --> (stop_k, i+1) with weight corresponding to the time to get from stop j to stop k, and you start at (0, 0) and want the shortest path to (any, n). – Peter Taylor Sep 22 '15 at 6:28
# Strata
Strata is a puzzle game in which you lay coloured ribbons across a grid. When two ribbons intersect, the cell under the intersection takes on the colour of the uppermost ribbon. Here's an example puzzle, ready to solve:
After laying the first ribbon, no cells have been assigned a colour yet:
Laying a perpendicular ribbon colours a cell in:
Notice that, if the uppermost ribbon isn't the correct colour, the cell isn't filled in to let you know you've got it wrong. Also, if a cell doesn't have a target colour, it doesn't matter what colour ends up on top of it; the cell remains colourless when the second ribbon is laid across it:
And a completed solution:
# The Challenge
The object of this challenge is to write a program or function that will provide a step-by-step solution for a Strata puzzle. Here is the layout for the example puzzle provided above, rotated 45 degrees clockwise and with letters a-c substituted for the cell colours:
ba
ab
a c
For ease of the following discussion, I've labelled the columns A-C and the rows 1-3.
ABC
+---
1| ba
2|ab
3|a c
The notation for the output commands will be a single character representing the row or column to lay a ribbon upon, and then another character representing the ribbon type. For example, the command Cb represents laying a ribbon of type b on the rightmost column of this layout.
One of a number of valid solutions for this puzzle is 3a, Cc, 1a, 2a, Bb, Aa. Another is Ca, 3c, 2a, 1a, Aa, Bb.
# Input
Input will consist of the layout for a Strata puzzle. The puzzle will always form a square, with side length of 2-9 inclusive. Each character in the input will be one of the following:
• a lower case letter, representing the ribbon type which should be laid on top of the intersection in the completed puzzle
• a space, representing a cell where the type of the uppermost ribbon does not matter
Note that a puzzle can use between 2-26 (inclusive) ribbon types, and that the types will not necessarily be the first n letters of the alphabet. Your program/function won't be provided these separately; they should be acquired from the puzzle layout if required.
Input may be provided in any reasonable form that is convenient for your chosen language. For example, you may accept input as single newline-delimited string, as an array or list of strings, etc. Please provide a description of how your submission will expect its input for testing purposes.
Similarly, input can be provided in any appropriate manner. For example, as command line arguments, function arguments, as a stream via STDIN, etc. You should only specify this if it is not immediately obvious.
# Output
Output should consist of a valid solution for the given puzzle. It should consist of an ordered series of instructions, each consisting of two characters:
• The first character should be a number or upper case letter; a number represents a row, starting with 1 for the uppermost row, a letter represents a column, starting with A for the leftmost column (e.g., in the puzzle above, the instruction 4a would be invalid as there are only 3 rows)
• The second character should be the type of the ribbon to lay on the grid; this should be a lower case character, corresponding to one of the types provided on the input (e.g., in the example puzzle above, the instruction Az would be invalid as z is not one of the types used in the grid)
Your program/function can provide the output pairs in any reasonable form, and on any reasonable medium. For example, as a series of comma, space, or newline separated values on STDOUT, as an array for return from a function, written to a file with specified name, etc.
# Other Rules
• A puzzle is only considered complete when all rows and columns have had a single ribbon laid across them, and no row or column can have more than one ribbon laid on it. This means that your output will consist of 2 * (side length) instructions.
• This is code golf, so the winner is the shortest solution in bytes. In the event of a tie, the earliest submission wins.
# Test Cases
Input:
ba
ab
a c
Possible output:
3a, Cc, 1a, 2a, Bb, Aa
This is my first PPCG question, so I tried to make sure every angle was covered. I think I may have gone overboard, though; do you think I should get rid of any sections?
As this isn't a puzzle of my own invention, would there be any problems with posting in-game screenshots?
This puzzle is actually pretty easy to work out if you employ a backtracking technique - find a row or column consisting of a single colour, ignoring spaces and cells which have been crossed once. Add this instruction pair to the end of the prototype solution, then mark all the cells as having been crossed once (or twice). Repeat this 2 * (side length) times and you'll have a solution, if there is one to be found.
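Sketched in Python (my own code and naming, with the grid given as a list of equal-length strings), the technique looks like this; it decides ribbons from last-laid to first, so the list is reversed at the end:
```
def solve(grid):
    n = len(grid)
    types = sorted({c for row in grid for c in row if c != ' '}) or ['a']
    todo = {(r, c): grid[r][c] for r in range(n) for c in range(n)
            if grid[r][c] != ' '}
    lines = [('row', i) for i in range(n)] + [('col', i) for i in range(n)]
    rev = []
    while lines:
        for line in lines:
            kind, i = line
            cells = [rc for rc in todo if rc[0 if kind == 'row' else 1] == i]
            colours = {todo[rc] for rc in cells}
            if len(colours) <= 1:      # this line can be laid last
                colour = colours.pop() if colours else types[0]
                rev.append((str(i + 1) if kind == 'row' else chr(65 + i))
                           + colour)
                for rc in cells:
                    del todo[rc]
                lines.remove(line)
                break
        else:
            return None                # no line works: unsolvable
    return rev[::-1]

print(solve([' ba', 'ab ', 'a c']))    # e.g. ['Ca', '1a', 'Bb', '3c', '2b', 'Aa']
```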
I want to discourage brute force solutions, so I'm going to come up with a 9x9 test case with more than 10 different types. My stats skills aren't up to much, but I think that, for a puzzle with side length n and number of ribbon types t, the total number of possible ways to lay ribbons on the grid is:
(2n)! * t^(2n)
Could anyone double check that for me? Also, if I were to put in a 9x9, 10-type test case, would that be big enough to rule out a brute force solution? Should I impose some form of computation time limit, and if so, how long on what sort of machine?
• @trichoplax There is indeed a solution which is significantly faster than brute force, which is as described in my comments. For a puzzle of side length n, it requires exactly 2n iterations to find a solution if one exists, and requires a maximum of [the (2n)th triangle number] row/column inspections in the worst case scenario. I can add a discussion of this to the main body of the challenge, but I'm worried that it's already too long! – Sok Sep 22 '15 at 11:32
• I misread that part and thought that was the brute force solution - but I can see now that it is much faster - I'll delete my irrelevent comment... – trichoplax Sep 22 '15 at 11:37
• I confirm your count for a really brute force solution. It's possible to optimise slightly by observing that if there's a sequence of parallel ribbons then the order in which they're placed is irrelevant. – Peter Taylor Sep 23 '15 at 20:26
## Just repeat yourself
Write a program that outputs "Do not repeat yourself!"
Your program code must respect the following constraints :
• its length must be an even number
• each character that is in position 2n (where n is an integer > 0) must be equal to the character in position 2n-1. The second character of the program is equal to the first, the fourth is equal to the third, etc.
Examples:
HHeellllooWWoorrlldd is a valid program
123 or AAABBB or HHeello are incorrect
This is code-golf, so the shortest code wins!
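A quick validity checker for the source constraint (a sketch, not a submission):
```
def valid(src):
    return len(src) % 2 == 0 and all(src[i] == src[i + 1]
                                     for i in range(0, len(src), 2))

print(valid('HHeellllooWWoorrlldd'))   # True
print(valid('AAABBB'))                 # False
```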
• This rules out most languages which require a keyword to output. For example, print, put or output would be excluded. Maybe there is some way of specifying the constraints to allow many languages to compete, while still being highly restrictive? I can't think of a way, but I wonder if it would help to say "meet 2 of 3 constraints" rather than "meet 2 constraints". Hopefully someone else can come up with a better way that I can't think of... – trichoplax Sep 22 '15 at 11:52
• @trichoplax maybe "each character in the source code must have one and only one neighbour at the left, the right, the bottom or the top with the same character value". – Arnaud Sep 22 '15 at 13:29
• @SuperChafouin Based on that, would comments be allowed? – ASCIIThenANSI Sep 22 '15 at 18:45
• @ASCII yes comments should not be allowed, that would be too easy (just double each line and add "\\") – Arnaud Sep 23 '15 at 2:14
• I think I'll stay on my current rules - they sure rule out some languages, but a lot can still compete. These questions also exclude a lot of languages, yet they are popular and interesting imho : codegolf.stackexchange.com/questions/52809/… codegolf.stackexchange.com/questions/39993/… – Arnaud Sep 23 '15 at 2:19
The shortest code for testing a reliable password (for the Vault Password Rank 3 puzzle)
Introduction
I started playing Empire of Code recently, and there was this challenge: the player is supposed to write code in Python or JavaScript to detect whether a passed string is a reliable password, that is, one that contains at least one lowercase Latin letter, one uppercase Latin letter and one digit, and has at least 10 characters.
It was quite easy for me to fit within the 130-character limit for rank 3 using JavaScript; however, I spent a lot of time trying to fit within the 100-character limit for rank 3 using Python. Some guy said that he managed 71 characters in Python. I tried hard but still couldn't reduce the code below 90 characters. Is it possible to use even fewer than 71 characters?
Challenge: Vault Password [the following description is mostly copied from https://empireofcode.com/]
We've installed a new vault to contain our valuable resources and treasures, but before we can put anything into it, we need a suitable password for our new vault. One that should be as safe as possible.
The password will be considered strong enough if its length is greater than or equal to 10 characters and it contains at least one digit, at least one uppercase letter and at least one lowercase letter. The password may only contain ASCII Latin letters or digits, no punctuation symbols.
You are given a password. We need your code to verify if it meets the conditions for a secure password.
In this mission the main goal is to make your code as short as possible. The shorter your code, the more points you earn. Your score for this mission is dynamic and directly related to the length of your code.
Input: A password as a string.
Output: A determination of whether the password is safe, as a boolean, or any data type that can be converted to and processed as a boolean. When the results are processed, you will see the converted results.
Example:
golf('A1213pokl') === false
golf('bAse730onE') === true
golf('asasasasasasasaas') === false
golf('QWERTYqwerty') === false
golf('123456123456') === false
golf('QwErTy911poqqqq') === true
Precondition:
the password matches the regular expression "[a-zA-Z0-9]+"
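An ungolfed Python reference (what the golfed answers compress):
```
import re

def golf(p):
    return (len(p) >= 10
            and bool(re.search('[0-9]', p))
            and bool(re.search('[a-z]', p))
            and bool(re.search('[A-Z]', p)))

print(golf('A1213pokl'), golf('bAse730onE'))   # False True
```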
Scoring:
Scoring in this mission is based on the number of characters used in your code (comment lines are not counted).
Rank1:
Any code length.
Rank2:
Your code should be shorter than 230 characters for Javascript code or shorter than 200 characters for Python code.
Rank3:
Your code should be shorter than 130 characters for Javascript code or shorter than 100 characters for Python code.
How it is used:
If you are worried about the security of your app or service, you can use this handy code to personally check your users' passwords for complexity. You can further use these skills to require that your users' passwords meet or include even more conditions, punctuation or unicode.
# Compute factorials (code-golf)
In the style of the Hello, World! catalog, this question is a collection of the shortest programs that compute a factorial (a common task for new programmers) in any given language.
## Specifications
Your program must take a positive integer as input from STDIN, and output the corresponding factorial to STDOUT (or your language's closest alternatives).
Your program must also accept the special case of 0! = 1 if 0 is entered. No negative numbers will be entered.
Your program must handle numbers up to 40 factorial (8.159152832×10⁴⁷). Sandbox question: Is 40 factorial too large a minimum requirement? I was also considering 50 factorial if 40 is too small.
## Test Cases
Input: 3
Output: 6

Input: 6
Output: 720

Input: 0
Output: 1

Input: 11
Output: 39916800
• This is not about finding the language with the shortest approach for computing factorials; this is about finding the shortest approach in every language. Because of this, no answer will be marked as accepted.
• Submissions are scored in bytes in an appropriate preexisting encoding, usually (but not necessarily) UTF-8. For example, Piet is scored in codels rather than bytes. If you're not sure how your language is scored, you can ask on Meta.
• Nothing can be printed to STDERR.
• Feel free to use a language (or language version) even if it's newer than this challenge. If anyone wants to abuse this by creating a language where the empty program computes factorials, then congrats, you've just created a boring answer.
• Your language must have a valid way to test your program (through an interpreter, compiler, etc.) If there aren't any, you can write one yourself.
• Standard loopholes are disallowed except where specified by these rules.
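For comparison, an ungolfed Python reference (Python integers are arbitrary precision, so 40! needs no special handling):
```
n = int(input())
f = 1
for k in range(2, n + 1):
    f *= k
print(f)                               # 0 and 1 both give 1
```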
• For languages with several integer types/ranges, how high do we need to support? There's a big difference between doing this with int and BigInteger in Java, for instance. – Geobits Sep 24 '15 at 14:15
• It's up to you, but I think it'd be more interesting to include 0! = 1 as valid input as well (i.e. input nonnegative integer rather than positive). Also, if FizzBuzz is happening soon, it might be good to wait a while before doing another catalogue. – Sp3000 Sep 24 '15 at 14:16
• @Sp3000 Thanks for that reminder, I overlooked the special 0! = 1 rule when writing this challenge. As for FizzBuzz, if it gets posted soon I'll make sure to leave this unposted for a little while. – ASCIIThenANSI Sep 24 '15 at 14:25
• @Geobits Thanks for pointing that out, didn't think there would be a problem. Programs must support numbers between 0 and 2^31 -1 inclusive. – ASCIIThenANSI Sep 24 '15 at 14:28
• Hmm. I meant more a limit on the output rather than input, since it grows so quickly. Trying to find the factorial of 2^31-1 would probably break most languages :) – Geobits Sep 24 '15 at 14:58
• @Geobits Yup, I tried 50 factorial and it was really big. I've changed it so programs must support numbers up to 100 factorial, but I'm not sure if this is too big. – ASCIIThenANSI Sep 24 '15 at 15:17
• @ASCIIThenANSI I'd argue that, because the number of observable atoms in the Universe is about 10^80, 50! is almost too big. It might be annoying to check results with slower languages. – MatthewRock Sep 25 '15 at 22:10
• Also, I'd leave out the requirement for valid interpreter - because, depending on language, there might be no such thing - I'd take C++ as an example - I'm almost sure that there can't be valid C++ interpreter, because it wouldn't be compatible with standard (I may be wrong though). – MatthewRock Sep 25 '15 at 22:11
• @MatthewRock Thanks for your suggestions. I've changed the limit to 40 factorial, and changed the interpreter rule to "some valid way to run". – ASCIIThenANSI Sep 26 '15 at 15:37
• I also think that allowing the competition to have a winner could be more appealing, but that's a side note. – MatthewRock Sep 26 '15 at 15:39
• @Geobits WA (1, 2) suggests that you would need about 7.93 gigabytes just to store the number as binary. – LegionMammal978 Sep 26 '15 at 16:24
• There's an old challenge to find factorials with 100 answers. What does this add to that? – xnor Oct 16 '15 at 9:18
# Golf a game of Nim
Similar to my previous Write the shortest game of Alak challenge, this time you have to golf another simple game - Nim.
You may already know how to play, but if you don't, here are the rules:
• In Nim, two players take turns removing objects from heaps (piles).
• Each turn, one player removes at least one object from any heap.
• You can take as many objects as you want, provided they all come from the same heap.
• You can take from any heap you want, but you can't take objects from two different heaps in the same move.
• The player to take the last piece(s) wins.
There are 3 heaps, each starting out with a random number of objects between 2 and 20.
Input
Input is in the form of two numbers - a heap number and the number of objects to take from that heap.
For example, the input 1 2 means "take 2 objects from heap #1".
Output
Every turn, the program must print to STDOUT (or your language's closest alternative) the amount of objects in each heap. (This includes at the start of the game.)
For example, if there were 5 objects in heap #1, 2 objects in heap #2, and 0 objects in heap #3, you would output this:
5 2 0
When one player wins by taking the last piece(s), you have to output P# wins and end the game, where # is the number of the player who won (1 or 2.)
Assumptions
• Input will always be in the form of Heap# Amount. Any invalid input can be handled however you like.
• The input will never ask to take from a heap that doesn't exist, or take more objects than a heap contains.
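A minimal, unscored reference of the game loop (a sketch assuming two human players typing moves on stdin):
```
import random

heaps = [random.randint(2, 20) for _ in range(3)]
player = 1
while True:
    print(*heaps)
    h, n = map(int, input().split())   # e.g. "1 2": take 2 from heap #1
    heaps[h - 1] -= n
    if not any(heaps):
        print(f'P{player} wins')
        break
    player = 3 - player
```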
# Questions for Meta
• Are there any loopholes?
• Should the sizes of each heap be set, rather than random?
• Should there be a random number of heaps?
• Should programs have to handle taking objects from non-existent heaps, or taking more objects than a heap has?
• I'm 99% certain I've covered everything, but have I left out any rules of Nim?
• Maybe I'm too tired, but I don't see any specification for how the initial sizes of the heaps are set. With respect to your questions, personally I think the rules of Nim are trivial; and that it's if not standard then at least typical for interactive code-golf to not require handling bad inputs. – Peter Taylor Aug 29 '15 at 19:29
• @PeterTaylor Thanks, I've added that to the challenge. – ASCIIThenANSI Aug 29 '15 at 22:56
Cookie Clicker: Simple, stupid, and yet strangely addictive. In it you must click a cookie (hence the title). Once you have enough cookies, you can spend them on items that will produce cookies for you. Eventually you will be getting hundreds, then thousands, then millions of cookies per second.
There are a few different items that you can buy for cookies: A clicker (that clicks the cookies for you), a grandma (that bakes the cookies for you), a farm (that grows cookies for you), a factory (that mass produces cookies for you), a mine (that will mine and process veins of dough for you), a shipment (that ships cookies from other planets to you), an alchemy lab (that transforms gold into cookies), and others that we won't worry about.
Let's golf a simplified Cookie Clicker.
# Challenge
Write a full program. Your program should always display the number of cookies as a whole integer. Every second, your program should add the current cookies per second (defaulted to 0) to the cookie count.
• When the spacebar is pressed, it adds the base click amount (defaulted to 1) to your cookie count.
• When the key "1" is pressed, if there are 10 or more cookies in the cookie count, subtracts the cookie count by 10 and adds 0.1 to the current cookies per second.
• When the key "2" is pressed, if there are 100 or more cookies in the cookie count, subtracts the cookie count by 100 and adds 0.5 to the current cookies per second.
• When the key "3" is pressed, if there are 500 or more cookies in the cookie count, subtracts the cookie count by 500 and adds 4 to the current cookies per second.
• When the key "4" is pressed, if there are 3,000 or more cookies in the cookie count, subtracts the cookie count by 3,000 and adds 10 to the current cookies per second.
• When the key "5" is pressed, if there are 10,000 or more cookies in the cookie count, subtracts the cookie count by 10,000 and adds 40 to the current cookies per second.
• When the key "6" is pressed, if there are 40,000 or more cookies in the cookie count, subtracts the cookie count by 40,000 and adds 100 to the current cookies per second.
• When the key "7" is pressed, if there are 200,000 or more cookies in the cookie count, subtracts the cookie count by 200,000 and adds 400 to the current cookies per second.
• When the key "0" is pressed, if there are 50 or more cookies in the cookie count, subtracts the cookie count by 50 and adds 1 to the base click amount.
There is no line-based input, only key presses; the output is updated every second to show the current cookie count. No other key should do anything, and the player must not have to press Enter after every key for it to take effect.
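For illustration, here is a small Python sketch of just the state-update logic (the non-blocking key reader and the once-per-second `cookies += cps` tick are left to answers, since they are platform-specific):

```python
ITEMS = {'1': (10, 0.1), '2': (100, 0.5), '3': (500, 4),
         '4': (3000, 10), '5': (10000, 40), '6': (40000, 100),
         '7': (200000, 400)}           # key -> (cost, cookies per second)
cookies, cps, click = 0.0, 0.0, 1

def press(key):
    """Apply one key press to the game state."""
    global cookies, cps, click
    if key == ' ':                      # spacebar: manual click
        cookies += click
    elif key == '0' and cookies >= 50:  # upgrade the base click amount
        cookies -= 50
        click += 1
    elif key in ITEMS and cookies >= ITEMS[key][0]:
        cost, rate = ITEMS[key]         # buy a producer
        cookies -= cost
        cps += rate

for k in ' ' * 10 + '1':                # click 10 times, then buy item 1
    press(k)
print(int(cookies), cps)                # -> 0 0.1
```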
# Other information
• This is code golf so shortest program in bytes wins.
# Thoughts for sandbox
• There are many, many more features I could add to this challenge if it is too simple. I feel that challenge entries for this will already be long enough.
• I do not see many challenges that ask for constant input. Does this mean that this challenge is a bad idea?
• Have I crossed a line?
• Requiring real-time user input is definitely rare. One big reason is that it's fairly hard to do and (probably) often requires a library. Many if not most esolangs won't be able to do it at all. – El'endia Starman Oct 3 '15 at 3:08
• A lot of people will ask if they can require the player to press enter after typing each number, so you should be explicit that this is not allowed. – feersum Oct 3 '15 at 3:30
• It's a real shame that non-blocking terminal reading isn't easier to work around, although it has been done a few times: 1, 2, 3 it probably would be a bit of a barrier. I do like the sound of this though! – Dom Hastings Oct 8 '15 at 9:35
• The list should show what is being bought here (instead you have to skim back through the introductory paragraph). – Paŭlo Ebermann Oct 11 '15 at 8:19
This is a raw draft about an idea for a popularity contest. Any input would be appreciated.
# My watch, it has two buttons
I have this watch with two buttons and a display that can show six characters split into groups of two by colons like this
12:34:56
Each character is displayed by a 5x7 LCD-Matrix, so arbitrary ASCII-characters can be displayed.
I'd like to call the buttons "select" and "modify".
The problem is that the watch is dead. It needs a new operating system.
Since I'm not very trained at designing operating systems I want you to write an emulator for my watch. The emulator should be programmable using the following commands.
• capital letters A-Z represent that many short presses of "select" (A = 1 press, B = 2, and so on).
• small letters a-z have the same meaning for the modify button.
• < represents keeping "select" pressed for half a second (or something like that).
• > represents the same for the modify button.
• numbers in the code mean to wait for that number of hundredths of a second.
You're free to program any kind of functionality into my watch, but it should at least be usable as a watch showing the time and as a stopwatch showing minutes, seconds and hundredth of seconds.
One thing I know about my watch is that it can be programmed to receive data from my stdin and send data to my stdout. So once the operating system is installed I could send data and a program to the watch and print the results of the execution to my console.
# Spot the differences
Little Timmy is waking you up on this Saturday morning once again to help him solve his puzzles. You love the little bugger, but those Spot the Differences games are starting to undermine your patience. Like always, you plan to delegate this tedious task to Robotic Dad™ so you can better spend your time... planning for your child education? Yeah, I think that was the plan.
Anyway, you tell Timmy not to worry, that you're going to help him soon enough, grab a beer and sit in front of your computer to help your child solve those puzzles, once and for all.
Your task is to write code that will take two similar pictures which differ in a few spots and somehow output the differences between them. The format of the output is free; however, a 5-year-old child should be able to understand it.
Here are examples of input:
Since there is no formally defined output, this is a popularity-contest.
Please also keep in mind that you'd like to spend a little time sipping your beer calmly in front of your computer. In this regard, built-in solutions should be regarded less highly.
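A community-answer example along the lines the author mentions could be as simple as the following Pillow sketch (the filenames are placeholders and the threshold of 16 is an arbitrary choice to suppress compression noise):

```python
from PIL import Image, ImageChops

a = Image.open('a.png').convert('RGB')    # hypothetical filenames
b = Image.open('b.png').convert('RGB')
diff = ImageChops.difference(a, b).convert('L')
mask = diff.point(lambda v: 255 if v > 16 else 0)  # threshold small noise
out = a.copy()
out.paste((255, 0, 0), mask=mask)         # paint every changed area red
out.save('differences.png')
```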
Meta : I plan to post a community answer as an example output, linking to the http://franklinta.com/2014/11/30/image-diffing-using-css/ article which made me think of this challenge and using a snippet to illustrate it. Is it enough?
• I don't understand the last sentence. If I want to sip beer calmly, surely a built-in solution is the best? – Peter Taylor Oct 11 '15 at 20:33
• It immediately occurs to me that the easiest way of doing this is to XOR the two images together. BTW What is the input format? – Level River St Oct 11 '15 at 23:35
• The technical side seems like it'd simply be subtracting or xor'ing the two arrays, then the popularity side is very open-ended (just draw freehand circles around them?) – Nick T Oct 12 '15 at 0:18
• @PeterTaylor the father described in this question seems to enjoy his time in front of his computer more than with his child. He still wants to improve his child's future but will use any excuse to do it in front of his computer. Disclaimer : I do not encourage bad parenting ;) – Aaron Oct 12 '15 at 8:47
• @NickT & steveverrill I do not know the first thing about image processing so my challenge may indeed be too way too easy. Do you know how I could avoid simple XOR answers? – Aaron Oct 12 '15 at 8:51
# Sudoku with handicap
Note: I've completely reworked this, as the comments convinced me that there's not a good way to describe the restrictions I originally was after in a language-independent way without unreasonably restricting languages. Thanks to all the commenters.
I have now reworked the question in a way that also inhibits traditional recursive solving (at least in a straightforward way), and at the same time even allows adding a metric for the "efficiency" of the algorithm. The basic idea is that your program is called not once but many times, each time having only limited information about the field.
Also note that this new version requires me to write a driver program; so the question cannot go live until the driver program is written.
Questions are set in italics inside the text
The goal of this challenge is to solve a given Sudoku. However there's a twist: The program cannot access the full board at any time. Instead it is called repeatedly, and each time it has only limited information about the board. I'll refer to the totality of all calls as the "calling loop". The program can then request different information for the next run, or declare that it is finished (that is, request to not be called again; the call loop is terminated).
The only way to pass information between different runs is through the Sudoku board, and a small amount of scratch space. The Sudoku board is initialized before the first call with the Sudoku to solve (obviously) and is then checked after the call loop terminated. During the call loop, the Sudoku board is not checked, so you may "abuse" it to store additional information, as long as at the end, a valid result is generated.
Since it may not be possible to completely solve all Sudokus using such an algorithm, the only hard requirements are that the call loop is guaranteed to eventually terminate and that the Sudoku field after termination is in a valid state. The rest is covered by scoring.
Standard loopholes are explicitly disallowed.
# The stored data
The data that is stored outside the program consists of 90 nine-bit unsigned numerical values (that is, minimal number 0, maximal number 511), 81 of which represent the Sudoku field, and 9 values of scratch space. The values of the field are interpreted as bit fields, as described below.
In the following I'll use as example the Sudoku field
4.5|.7.|89.
..2|.5.|6..
..7|9..|542
---+---+---
..3|5.6|489
...|3.8|...
684|7.9|1..
---+---+---
238|..5|9..
..6|.9.|3..
.79|.3.|2.1
where dots mark fields that have not been filled.
Initially, the data gets filled as follows:
• Each field pre-filled with number $n$ is represented by the value $2^{n-1}$, that is, the bit corresponding to that number is set, and all other bits are unset.
• The unfilled fields are represented by the value $511$ (that is, all nine bits are set).
• The scratch space is filled with $0$.
After the run loop terminates, each pre-filled field needs to have the same value as initially, and each initially empty field must have at least the bit corresponding to the correct solution set. That is, every zero bit represents a value that your program excluded for that field, and a program that excludes the correct solution is disqualified.
The contents of the field are only evaluated at the end of the call loop. So in between, your program is free to make creative use of the storage space given.
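To make the encoding concrete, here is a small Python sketch (not part of the spec) that builds the 90 initial values for the example field above:

```python
# '.' -> 511 (all nine bits set); pre-filled digit n -> 2**(n-1)
rows = ["4.5.7.89.", "..2.5.6..", "..79..542",
        "..35.6489", "...3.8...", "6847.91..",
        "238..59..", "..6.9.3..", ".79.3.2.1"]
field = [511 if c == '.' else 1 << (int(c) - 1)
         for row in rows for c in row]
scratch = [0] * 9                   # scratch space starts out all zero
print(field[:9])  # first row: [8, 511, 16, 511, 64, 511, 128, 256, 511]
```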
# The input
The program receives its data through standard input of the following form:
The first line contains a description of which data is given to/set by the program in this run. It consists of one to three space-separated words from the following list. On the first run, it is just "S". At later runs, it is exactly what the program requested at its previous run.
The possible values and corresponding interpretation are:
• R1 to R9: The indicated row of the Sudoku, 1 being the uppermost row.
• C1 to C9: The indicated column of the Sudoku, 1 being the leftmost column.
• F1 to F9: The indicated $3\times 3$ subfield of the Sudoku, numbered left to right, up to down. So for example 1 denotes the upper left subfield, 6 denotes the middle right subfield.
• S: The scratch space.
The next one to three lines contain the corresponding data, from left to right and from up to down, as space-separated decimal numbers. So at the first run, the input is simply:
S
0 0 0 0 0 0 0 0 0
At the second run with the example Sudoku field, the input to your program might be:
R2 C3 F4
511 511 2 511 16 511 32 511 511
16 2 64 4 511 8 128 32 256
511 511 4 511 511 511 32 128 8
# Output
The first one to three lines are the new values to replace the ones given in the input. The number of the lines must be the same as the number of fields in the first input line, and each line must contain nine values separated by whitespace (leading/trailing whitespace gets ignored).
If some field appears in more than one data line, the corresponding values are bitwise anded together. For example, if the initial line of your program's input was
R1 C1
and the first two lines of your output read (with question marks replacing values that are irrelevant for this example — of course your code may not actually output question marks here)
3 ? ? ? ? ? ? ? ?
5 ? ? ? ? ? ? ? ?
then the upper left value in the Sudoku storage field will be 3 & 5, that is, 1.
Following those data lines, there will be a single line containing either the single word STOP, in which case the run loop is terminated and the resulting field is created, or a line containing one to three whitespace separated words requesting data to be served in the next run, that is, the words to be presented in the first line of the next run of the program.
# Scoring:
The score for qualifying entries is calculated as follows (lower score is better):
• You get 1 score point for each run of your program.
• You get 5 score points for each set bit in the final representation of your Sudoku field.
• At the end, subtract 45 (because a perfectly solved Sudoku will have nine bits set; if your program leaves fewer bits set, it will be disqualified anyway).
The total score is then calculated as weighted mean of the test cases, where the difficulty is used as weight, rounded up to the next integer. That is, if $d_k$ is the difficulty assigned to test case $k$, and $S_k$ is the score you achieved at test case $k$, your total score is $$S = \left\lceil \frac{\sum_k d_k S_k}{\sum_k d_k}\right\rceil$$
Sandbox question: Should I change the relative weight of program runs versus unsolved fields? And is the difficulty weighting a good idea, or should I simply add up all scores?
# Test cases:
(Hardness as reported by GNOME Sudoku)
Test case 1: Easy (0.17)
4.5|.7.|89.
..2|.5.|6..
..7|9..|542
---+---+---
..3|5.6|489
...|3.8|...
684|7.9|1..
---+---+---
238|..5|9..
..6|.9.|3..
.79|.3.|2.1
Solution:
415|672|893
892|453|617
367|981|542
---+---+---
723|516|489
951|348|726
684|729|135
---+---+---
238|165|974
146|297|358
579|834|261
Test case 2: Hard (0.63)
.6.|52.|..8
7..|...|9.2
.82|71.|56.
---+---+---
.59|...|..6
.76|...|14.
8..|...|72.
---+---+---
.18|.36|25.
6.3|...|..1
5..|.41|.9.
Solution:
961|524|378
745|683|912
382|719|564
---+---+---
159|472|836
276|398|145
834|165|729
---+---+---
418|936|257
693|257|481
527|841|693
Test case 3: Very hard (0.96)
.35|.94|...
..8|.53|..9
4..|8..|...
---+---+---
..1|9..|.85
..9|1.5|3..
54.|..8|9..
---+---+---
...|..7|..1
6..|58.|7..
...|41.|82.
Solution
135|294|678
268|753|149
497|861|532
---+---+---
371|946|285
829|175|364
546|328|917
---+---+---
982|637|451
614|582|793
753|419|826
Sandbox question: Should I add more test cases?
• Pretty sure you meant code golf not gode golf. – Blue Sep 2 '15 at 10:49
• Is using constraint programming libraries/capabilities of a language allowed, since I'm only calling them and not writing them? – Fatalize Sep 2 '15 at 14:06
• Also do you intend to add a time limit constraint to the challenge? I could write an answer that tries every possible grid until one is valid, without recursion or stacks – Fatalize Sep 2 '15 at 14:09
• @muddyfish: Definitely. Thanks, fixed. – celtschk Sep 2 '15 at 20:12
• @Fatalize: I don't know constrained programming libraries; it might be something I also want to ban. Maybe ban every built-in library that could not be written without recursion? Also, good point on the brute force method. I don't really like time limits, because they are too vague (different computers have different speed), maybe limitations on loops would be an alternative. Or limitations on how often the same variable/memory location may be changed. – celtschk Sep 2 '15 at 20:25
• Limitations on how often a variable can change would be useless in python because you can setattr globals. – Blue Sep 2 '15 at 22:05
• @muddyfish: setattr also changes a variable (by adding attributes to it), doesn't it? – celtschk Sep 3 '15 at 7:38
• Yes but if you're saying you can't do that, you're saying you can only have a certain number of variables. – Blue Sep 3 '15 at 7:59
• I really don't know what's banned as a recursive technique. For instance, what counts as a stack data structure? Can I use a list and extract the last element? What if I used dynamic programming instead of recursion? – xnor Sep 4 '15 at 7:46
• Are stack-based languages (CJam, GolfScript, PostScript, FORTH, etc) banned? If so, are languages which use a stack for function/method calls (C, Java, etc.) also banned? Would a better approach be to forget talking about stacks and instead allow only a certain number of memory locations to be used, and limit each of them to 8-bit values? Then C-like language programmers can use a single global array for all their memory, or split it between a few global arrays and some loop variables; stack-based language programmers can work with a limited maximum stack depth; etc. – Peter Taylor Sep 4 '15 at 14:07
• Basically what I want to prevent is algorithms trying to insert some numbers, and when it fails, track back and try something different. – celtschk Sep 5 '15 at 5:43
# The Drunken Knight
Inputs
• a: Starting location of the knight, e.g. A2
• b: Target location of the knight, e.g. B4. The starting and target locations may be the same.
• n: An integer equal to or greater than 0.
Output
The probability that a knight starting at a, moving at random for n turns on an 8x8 board, ends at b.
Notes
• The knight has equal probability to move to any of the squares which it can access.
• There are no other pieces on the board that could block any squares from the knight.
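A naive reference sketch of one straightforward interpretation of the spec, propagating the probability distribution move by move:

```python
def prob(a, b, n):
    start = (ord(a[0]) - 65, int(a[1]) - 1)   # e.g. "A2" -> (0, 1)
    target = (ord(b[0]) - 65, int(b[1]) - 1)
    moves = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    dist = {start: 1.0}
    for _ in range(n):
        nxt = {}
        for (x, y), p in dist.items():
            legal = [(x + dx, y + dy) for dx, dy in moves
                     if 0 <= x + dx < 8 and 0 <= y + dy < 8]
            for sq in legal:                  # equal probability per move
                nxt[sq] = nxt.get(sq, 0) + p / len(legal)
        dist = nxt
    return dist.get(target, 0.0)

print(prob("A2", "B4", 1))  # 1/3: that corner-ish square has 3 legal moves
```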
• Any time constraints? – Sp3000 Oct 7 '15 at 14:21
• @Sp3000 I haven't really thought about that yet. Do you think it would be more interesting if I tried to disallow brute force solutions via a time constraint? – absinthe Oct 7 '15 at 14:24
• From a simple test (assuming my implementation is right), caching seems to be all you need if you want to bypass a time constraint (brute force ~10 moves in a lot of secs, caching > 100 in less than a sec). So I guess it might be better off without a time constraint after all... – Sp3000 Oct 7 '15 at 14:51
• 1. Is the knight moving on an 8x8 board? 2. Please tag markov-chain – Peter Taylor Oct 11 '15 at 20:50
• Ah, another thought which occurs: valid output formats? The obvious three are floating point to a certain precision and accuracy; exact rational; and exact rational reduced to simplest form. – Peter Taylor Oct 13 '15 at 13:31
# Time Series Analysis
Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data (from here).
## The challenge
Write a program or function that takes a time series vector ts and outputs the coefficient of determination of a simple linear regression model estimated by ordinary least squares.
The coefficient of determination should be the squared correlation between the predicted values of the model and the real values of the input, in such a way that any perfectly linear time series has 1 as its coefficient.
### Considerations
• ts is a vector of rational numbers of length $n$, where $2 \leq n \leq 99$.
• To fit the model, assume the entries of ts are successive in time and use time ($1, 2, \ldots, n$) as the regressor.
• You can assume that ts is already loaded.
• You can't use built-in modeling functions such as lm() or similar.
• The input should be in any reasonable format.
### Examples
# input
1,2,3,4,5,6
# output
1
#input
0.244,0.569,1.575,1.965,2.604,3.493,4.084,4.436,5.209,6.110,6.979,7.245,8.229,9.161,10.309
# output
0.971
Let the shortest code win!
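An ungolfed Python reference of my reading of the spec (regressor t = 1..n; for OLS with an intercept, 1 − SS_res/SS_tot equals the squared correlation between fitted and actual values):

```python
def r_squared(ts):
    n = len(ts)
    t = list(range(1, n + 1))
    mt, my = sum(t) / n, sum(ts) / n
    beta = (sum((a - mt) * (b - my) for a, b in zip(t, ts))
            / sum((a - mt) ** 2 for a in t))
    alpha = my - beta * mt
    pred = [alpha + beta * a for a in t]
    ss_res = sum((y - p) ** 2 for y, p in zip(ts, pred))
    ss_tot = sum((y - my) ** 2 for y in ts)
    return 1 - ss_res / ss_tot        # the squared correlation, for OLS

print(round(r_squared([1, 2, 3, 4, 5, 6]), 3))   # 1.0: a perfect line
```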
• This could use either a link to a page that clearly explains the terms and method used, or preferably a better explanation within the post. – Geobits Oct 14 '15 at 18:42
• Good idea, I will link it. – Mutador Oct 14 '15 at 18:43
• The fact that this is time series doesn't change anything since it's just OLS estimation, which is not specific to time series. OLS requires two dimensions, so x is essentially position in the series and y is the value of the series? – Alex A. Oct 14 '15 at 18:43
• I thought about saying time series to add some context, but yeah, it is just that, not sure about the output though. – Mutador Oct 14 '15 at 18:46
• What does lm() do? – flawr Oct 14 '15 at 18:53
• @flawr lm() is a function in R for fitting linear models. – Alex A. Oct 14 '15 at 19:33
# Cops and Robbers: Text Transformations
### Cops' challenge
The cops must write a fully deterministic program that reads input from STDIN and writes output to STDOUT as its only side effects. The mapping from strings to strings performed by this program will be called f.
A cop's post consists of such a program's source code, along with its length and the name of the language the program is written in. The poster must also prepare a possible crack (see below), and release it when their post is safe. A cop's post is safe when it remains uncracked for exactly two weeks.
A safe post where the original code is n bytes long is worth 1/n² points. The author with the highest point total wins. The tiebreaker is popularity (sum of votes of answers in the robbers' thread.)
A single author may not use the same language twice in two different cop answers.
### Robbers' challenge
To crack a cop's post, a robber must figure out which transformation f the program in the post is performing, and write a program P in the language used by the cop, so that both P and f(P) perform the transformation f.
The length, method, and complexity of P are irrelevant; as long as it produces the same output as the cop's original code for any input you pass it, the solution is valid.
Successfully cracking a cop's post is worth one point. The author with the highest point total wins. The tiebreaker is popularity (sum of votes of answers in the robbers' thread.)
This is a bit hard to conceptualize, so here's a very simple example.
If the cop's post is:
## Python 3, 20 bytes
print(input()[::-2])
(i.e., reverse STDIN and remove every other character) The robber's answer might be:
print(input()[::-2])# ) ] 2 - : : [ ) ( t u p n i ( t n i r p
as passing this program as input to itself yields a new program that does the same thing:
print(input()[::-2])#]-:)tpitip
As another example, if a cop writes a C++ program that rotates lines on STDIN by 90 degrees, a valid solution is a C++ program that also rotates lines by 90 degrees, and does the very same thing if you rotate it by 90 degrees.
The difficulty for cops is to come up with transformations that are short to express, but difficult to code around (and, of course, they essentially have to crack their own post -- but at least they know f in advance.)
The difficulty for robbers is to decipher the cops' solutions to find out which transformation f they're performing, and then write any program P such that both P and f(P) perform f.
• "A single author may not use the same language twice in two different cop answers." Not a fan of rules like that. Anyway, is there anything to prevent a cop from using a function like "Return the (n/2)nd character of the string." In that case it's pretty much impossible for f(P) to compute this function. Or should the mapping be surjective? Also do cops and robbers have to use the same language? Finally, how do we prove that a robber's implementation computes the exact same function as the cop's, especially if the cop's code is obfuscated and undocumented? – Martin Ender Oct 19 '15 at 12:07
• If it's impossible for f(P) to compute f, the original post is invalid anyway -- cops must be able to crack their own cop answers and release the solutions if they go uncracked. Also, yeah, they would use the same language. – Lynn Oct 19 '15 at 13:16
• @feersum pointed out a more serious problem in chat: cops can post something like x = readline(); if (md5(x) == 'f0a92b8efc0...') print x, which is nearly impossible for other people to solve. – Lynn Oct 19 '15 at 13:18
# Watermelon Contest
You and your buddies are contesting a lone piece of watermelon left in the middle of the table. You decide to make a program to contest for you.
## The Goal
You want to be the last program standing. Then you get the watermelon.
## The Process
Every iteration 1 program will be eliminated from the watermelon contest. This will be decided by a vote among all the remaining programs. This means that your program will have 1 and only 1 vote to spend on the elimination of another program. Whichever program ends up with the most votes is eliminated.
This continues until there is only 1 program left: the winner. This entire process is considered a "round".
After there is a winner, another round will be started with a new piece of watermelon. All programs will be re-entered. When 10,000 rounds have been completed, the program with the most "wins" will be considered the "grand champion". All the rounds combined is considered the "tournament".
## The Catch
Every program will have an opportunity to send a message to all the other programs. The message must be the same for every program. The message is a string, up to 500 bytes long.
You may have a file in which you may store any data you wish from previous rounds. This will persist over the entire tournament.
## The Program
Write a program or function that accepts the following input in any (convenient) form:
[program-name], [message], "The Slug", "hey! don't vote for me!", "Chucknorium II", "a2TEI5ds#" ...
and outputs the name of the program that you vote for:
Chucknorium II
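By way of illustration, a trivial example bot in Python (it naively assumes the comma-separated format above with quoted fields and no commas inside messages; it votes to eliminate whichever program sent the longest message):

```python
import sys

tokens = [t.strip().strip('"') for t in sys.stdin.read().split(',')]
pairs = list(zip(tokens[::2], tokens[1::2]))   # (name, message) pairs
# vote out the program with the longest (most "suspicious") message
print(max(pairs, key=lambda p: len(p[1]))[0])
```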
## Notes
• In the likely event of a tie, one of the high scoring programs will be randomly eliminated
• Messages can be anything that doesn't mess with stuff it's not supposed to (e.g. don't mess with the controller or other people's programs). This is what makes the challenge interesting.
• You may not hard-code program names into your program! In other words, numbering the programs randomly at the beginning of the game should produce the same output. Names are just more fun.
• For observation purposes, your program will still be run even if it has been eliminated. It will not, however, have a chance to vote that round.
• This seems to reward out-of-band collaboration (or posting multiple answers, which is effectively the same thing). – Peter Taylor Oct 20 '15 at 8:20
• So the winner between the last two is basically always random (since I assume neither will vote themselves out)? – Geobits Oct 20 '15 at 18:34
• @Geobits correct. Over several thousand rounds this should even out. – Stretch Maniac Oct 20 '15 at 19:49
• I see. There's nothing about how many rounds will be played, so I was wondering about that. If you add this, you should make sure to distinguish turns/rounds or rounds/games, etc. – Geobits Oct 20 '15 at 19:53
• Also, do programs continue to be called once they're "out"? If I want to track who did well each round, for instance, I don't think I can (since I think I only get input up to when I get voted out). – Geobits Oct 20 '15 at 19:56
• Good point. Yes. You just won't be able to vote. I'll change that (and other things) when I get to a computer. – Stretch Maniac Oct 20 '15 at 19:59
• I don't need to hard-code program names: I just need to agree a signature algorithm whereby the combination of name and message means that either the program is a collaborator or they're piggybacking on our agreement. – Peter Taylor Oct 20 '15 at 20:37
• I see... How about a statement that prevents engineering to specific programs? Would that prevent pre-determined collaboration? – Stretch Maniac Oct 20 '15 at 20:42
• Can you clarify the format of the input – user193661 Oct 21 '15 at 9:05
• I hate to be a party pooper, but I honestly don't see the point of this KOTH since it seems to be more politics than programming. Then again I didn't get the cake cutting one either and look at how that one turned out... – Sp3000 Oct 23 '15 at 12:36
# "The" Gaidhlig Challenge
The Gaidhlig language has some non-trivial rules when it comes to putting "the" in front of a word.
Your challenge is to create a program that takes two inputs. The first input is a string of text, a real or made-up word that we can pretend is a noun. The second input is either the letter 'f' or the letter 'b' to denote whether the word is masculine or feminine.
The type of delimiter between these two inputs is your choice but must not be the letters a to z, a dash, or an apostrophe.
1. The first input is always assumed to be a noun.
2. The second input denotes whether the noun is masculine (f) or feminine (b).
3. We will always assume all inputs are valid.
Your output will be the first input, modified according to the following rules:
## Masculine Nouns (where 'f' is supplied.)
1. Before vowels: An t-
2. Before b f m p: Am
3. Before all other instances: An
## Feminine Nouns (where 'b' is supplied.)
1. Before sl sr sn so se si su: An t-
2. Before b m p c g : A' [with lenition]
3. Before f: An [with lenition]
4. Before all other instances: An
## Whether the word is masculine or feminine:
Words that start with l n r sg sm sp st always start with: An
## Lenition
When lenition is asked for, you must add the letter h after the first letter of your word in cases where the word starts with b c d f g m p s t. Otherwise the word remains unchanged. Further, you must not add an additional h if there is already an h in place.
Examples
Lenition of Aran: Aran
Lenition of Ghoul: Ghoul
Lenition of Goul: Ghoul
Lenition of House: House
# Examples
Cat f An Cat
Cat b A' Chat
fear f Am fear
fear b An fhear
Obair f An t-Obair
Obair b An Obair
snow f An snow
snow b An t-snow
Shortest code in bytes wins.
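For reference, an ungolfed Python sketch of my reading of the rules (the gender-independent l n r sg sm sp st class is checked first); it reproduces the examples above:

```python
def lenite(w):
    if w[0].lower() in 'bcdfgmpst' and w[1:2].lower() != 'h':
        return w[0] + 'h' + w[1:]
    return w

def the(word, gender):
    lw = word.lower()
    if lw[0] in 'lnr' or lw[:2] in ('sg', 'sm', 'sp', 'st'):
        return 'An ' + word               # always An, regardless of gender
    if gender == 'f':                     # masculine
        if lw[0] in 'aeiou':
            return 'An t-' + word
        if lw[0] in 'bfmp':
            return 'Am ' + word
        return 'An ' + word
    else:                                 # feminine
        if lw[:2] in ('sl', 'sr', 'sn', 'so', 'se', 'si', 'su'):
            return 'An t-' + word
        if lw[0] in 'bmpcg':
            return "A' " + lenite(word)
        if lw[0] == 'f':
            return 'An ' + lenite(word)
        return 'An ' + word

print(the('Cat', 'b'))                    # A' Chat
print(the('Obair', 'f'))                  # An t-Obair
```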
• I think you have a consistent typo: it should be lenition, right? – Peter Taylor Oct 23 '15 at 21:54
I would love to hear your thoughts about the following challenge. Too difficult or contrived? Or should we actually want more complicated and challenging tasks? I'll provide an example implementation in MATLAB by the time I post the challenge.
## Concert Harp: Pedal Meddle code-golf
Sometimes I hear people say, ‘Why would anyone outside the ICT business learn how to program?’ Often, they get replies like ‘well, sometimes there are problems to which there is no software available’, but when asked about what kind of problems these would be, they’re forced to admit that all they really wanted was to make Conway’s Game Of Life for their own entertainment.
However, I recently found a problem that I think the home-and-garden programmer could face in reality. It concerns a harp (a side effect of sharing an apartment with a significant other) and a completely dumbfounded pianist/programmer, who's struggling enough with one pedal as it is. The harp in question has seven.
Now, for some background music/harp theory. You may skip as much as your musical background, or lack thereof, allows.
# Music theory (a very condensed version)
Both in a harp and a piano, the strings/keys are laid out as follows:
… C | D | E F | G | A | B C | D | E F | G A | B …
There are seven root notes, [A-G], with at some locations a | in between to signify that there's a note in between. These |'s are addressed by making a note higher by appending a #, or lower by appending a b. For example, C#==Db, F#==Gb (and also, Fb==E). Using these notes, we can make a scale. The difference between D and D# is called a half note, and between D and E a whole note.
Scales are made as follows:
1. Take the root note.
2. Find the next notes by going up a whole or half number of notes in the following pattern (the last step is in parentheses because it takes you back to the root note):
Major: 1 1 ½ 1 1 1 (½)
Minor: 1 ½ 1 1 1 1 (½)
For example, D major and A# minor:
D major: D E F# G A B C# (D)
A# minor: A# B# C# D# E# F# G# (A#)
Of course, these notations are not unique, since for example E#==F.
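As a sanity check, a small Python sketch of just the scale-construction step (pitch classes mod 12, using the patterns exactly as given above; assigning pedal settings, the actual challenge, is left out):

```python
NOTES = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}
MAJOR = [2, 2, 1, 2, 2, 2]
MINOR = [2, 1, 2, 2, 2, 2]   # the pattern as specified in the text above

def scale(desig):
    root, rest = NOTES[desig[0]], desig[1:]
    if rest.startswith('#'): root += 1; rest = rest[1:]
    if rest.startswith('b'): root -= 1; rest = rest[1:]
    steps = MINOR if rest == 'm' else MAJOR
    pcs = [root % 12]
    for s in steps:
        pcs.append((pcs[-1] + s) % 12)
    return pcs

print(scale('C'))   # [0, 2, 4, 5, 7, 9, 11]
```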
# Problem
A harp has seven pedals, each responsible for one note. This note, they can either raise half a note, or lower half a note. For example, the C pedal can either make all C’s sound like C#(==Db) or like Cb(==B). Let’s designate raising by a pedal setting of +1 and lowering by -1, and leaving it as-is as 0. Given an input scale, write a program or function that outputs how each of the pedals should be set to achieve all of the tones in that scale.
# Input
A scale designation. Scales are designated as follows: R[m][k]
• R: Root. [A-G]
• [m]: Optional: modification. Either flat b or sharp #
• [k]: Optional: minor key, designated as m.
Valid inputs would be for example
• C C major
• Dm D minor
• Fbm F flat minor
# Output
The pedal setting -1, 0 or 1 for each of the pedals, in the following order, reflecting the actual location of the pedals on a harp:
D C B | E F G A
Test cases: (not exhaustive; i.e., there may be more solutions, I only included a double solution to one input)
C , B#, Am -> 0 0 0 | 0 0 0 0
Cm, B#m -> 0 0 0 | -1 0 0 0
F#m, Gbm -> 1 1 0 | 1 1 1 1
or -> -1 -1 -1 | -1 0 -1 -1
• I'm confused: you say "Given an input scale" and then describe the input as "A chord designation". Which is it? – Peter Taylor Oct 25 '15 at 7:38
• @PeterTaylor thanks, I was doubting between two versions of this challenge so that must've slipped through, edited now. What do you think of the challenge itself though? – Sanchises Oct 25 '15 at 10:18
# Stupid leaks
Considering how immensely successful my last two challenges have been, I'll do a different style this time.
Drip, drip, drip, drip...
It's the year 3000. Due to clean air shortages, a system was created that turns water into air. However, that caused a water shortage (don't you love progress?). Therefore, a new (expensive!) system was created to convert the air back into water.
All that to say that water prices have skyrocketed.
And yet, here you are, stuck with a leaky faucet. Plumbers are expensive, but if you do it yourself, you have to order the parts online and wait for them to get here. You need a way to determine what is cheaper: calling a plumber and getting it fixed in a day, or buying the parts online but having water leak until they get here.
## The input
You need to take seven positive numbers as input:
• The price of water per gallon g.
• The number of drops leaking per hour d.
• The number of gallons wasted per drop z. This will always be a floating-point number less than 1.
• The price of calling a plumber to fix it p.
• The price of ordering the parts online o.
• The number of hours it takes for the plumber to fix the leak l.
• The number of hours it takes for the parts to get here s.
Only g, z, p, and o can be floats; all the rest will be counting numbers (integers greater than 0).
The cost of the water wasted per hour from the leak is d*z*g. For the sake of brevity, let's call that rate R. If R*l+p is less than R*s+o, then you should print/return DIY!. If greater than, print/return Call the plumber!. If equal, print/return Whatever....
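A straightforward ungolfed version in Python, following the comparison exactly as stated (the sample values are made up):

```python
# variable names follow the challenge statement
g, d, z, p, o, l, s = 2.5, 100, 0.001, 150.0, 40.0, 1, 72
R = d * z * g                        # dollars of water lost per hour
a, b = R * l + p, R * s + o          # the two totals being compared
print('DIY!' if a < b else 'Call the plumber!' if a > b else 'Whatever...')
```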
## Precision
Floating-point precision through calculations is very...weird. Basically, your program can use whatever your language's default is. If you're using a language where the default floating-point type has infinite precision, that's fine. If your language's default floating-point type can lose precision throughout calculations (like in Python, where .1+.1+.1+.1+.1+.1+.1+.1 != .8), then that works, too.
Short version: floating-point semantics and precision are however your language is by default.
## Scoring
Code-golf. Shortest wins. Standard loopholes banned.
• What is a "counting number"? Does it include 0? You also might want to specify at what precision the equality will be checked (presumably hundredths?), and how rounding should be handled for the "Whatever..." case. – FryAmTheEggman Nov 2 '15 at 18:24
• @FryAmTheEggman Is this better? – kirbyfan64sos Nov 2 '15 at 21:03
• My language doesn't support floating point at all, so can I always just output Whatever...? More seriously, I'm failing to see the point of this question. Do some trivial arithmetic and then output one of three strings which between them take up 90% of the code? – Peter Taylor Nov 2 '15 at 21:15
• I think it is better, but I think you do have to spend a bit addressing Peter's comment about languages that don't support floats. – FryAmTheEggman Nov 2 '15 at 23:55
Oh, no! There's been a fire at Claus HQ, and it's destroyed Santa's flight route! He has called on you to come up with a route that has him arriving at every home between the hours of 9PM Christmas Eve and 7AM Christmas Day. He'd also like to finish his deliveries in as little time as possible.
Input
Your program will take data for as many geographical areas as you or Santa chooses to enter. For every geographical area to be added, your program will accept: the name of the area, the geographical coordinates of the area's center, and the number of "nice" children who live in that geographical area.
Output
In a .csv file your program will place:
1) Each geographical area's name, 1 per line, listed in order, with the area to be visited first placed first, and the area to be visited last placed last.
2) Next to the name, an ETA to the area in local time, and estimated time of departure, assuming Santa takes about 1/6100 seconds per child.
3) At the last line of the file, the total number of miles travelled, as determined by the sum of the great-circle distances - determined using the Vincenty formula, assuming an oblate spheroid Earth - from each stop to the next. Other than as stated above, I don't care what your file looks like.
Rules
• You may not use any external library to perform any task, except the following: converting from one timezone to another.
• You may give your output file whatever name you choose.
• Estimated departure times must be no later than 7:01 AM.
Scoring
For simplicity, scoring will be done using US states as areas. The population inputted will be the number of 14-and-under Christians residing in each state.
You get 1 point for every thousand miles travelled and an additional point for every hour of travel.
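Since the draft names the Vincenty formula but (as noted in the comment below) doesn't fix the geoid parameters, here is a Python sketch of the standard Vincenty inverse solution, assuming WGS-84:

```python
import math

def vincenty_miles(lat1, lon1, lat2, lon2):
    a, f = 6378137.0, 1 / 298.257223563   # WGS-84 (assumed)
    b = a * (1 - f)
    L = math.radians(lon2 - lon1)
    U1 = math.atan((1 - f) * math.tan(math.radians(lat1)))
    U2 = math.atan((1 - f) * math.tan(math.radians(lat2)))
    sU1, cU1 = math.sin(U1), math.cos(U1)
    sU2, cU2 = math.sin(U2), math.cos(U2)
    lam = L
    for _ in range(200):                  # may not converge near antipodes
        sL, cL = math.sin(lam), math.cos(lam)
        sinS = math.hypot(cU2 * sL, cU1 * sU2 - sU1 * cU2 * cL)
        if sinS == 0:
            return 0.0                    # coincident points
        cosS = sU1 * sU2 + cU1 * cU2 * cL
        sigma = math.atan2(sinS, cosS)
        sinA = cU1 * cU2 * sL / sinS
        cos2A = 1 - sinA ** 2
        cos2SM = cosS - 2 * sU1 * sU2 / cos2A if cos2A else 0.0
        C = f / 16 * cos2A * (4 + f * (4 - 3 * cos2A))
        prev, lam = lam, L + (1 - C) * f * sinA * (sigma + C * sinS *
            (cos2SM + C * cosS * (2 * cos2SM ** 2 - 1)))
        if abs(lam - prev) < 1e-12:
            break
    u2 = cos2A * (a * a - b * b) / (b * b)
    A = 1 + u2 / 16384 * (4096 + u2 * (-768 + u2 * (320 - 175 * u2)))
    B = u2 / 1024 * (256 + u2 * (-128 + u2 * (74 - 47 * u2)))
    dS = B * sinS * (cos2SM + B / 4 * (cosS * (2 * cos2SM ** 2 - 1) -
        B / 6 * cos2SM * (4 * sinS ** 2 - 3) * (4 * cos2SM ** 2 - 3)))
    return b * A * (sigma - dS) / 1609.344    # metres -> miles

print(vincenty_miles(40.7128, -74.0060, 34.0522, -118.2437))  # NYC to LA
```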
• 1. "Vincenty formula" should be hyperlinked to a clear explanation, and geode parameters should be supplied. 2. Converting between timezones isn't hard, and doesn't need an external library, but the input does need to include the timezones. It would also be good to state explicitly whether or not the International Date Line needs to be taken into consideration. 3. We have no way of calculating travel times, because you haven't given us a speed. We also have no way of scoring our programs, because you haven't supplied the test data. – Peter Taylor Nov 8 '15 at 8:11
## Convert your Language to Turing Machine Code
You are locked in a room with only a laptop and a Turing machine with a single, bidirectionally-infinite tape and two symbols (it therefore supports only 0s and 1s, and its tape is infinite in both directions). Your perverted captor has set you a task: he will set you free only if you solve all of the problems on the Project Euler page.
However, there is a catch. You are not allowed to solve the problem using your laptop, but you'll need to use the Turing Machine instead.
Since you think it will be incredibly tedious to convert your code to Turing Machine Code by hand, you decide to write a source-to-source compiler on your computer, and since you're incredibly eager to get out, you decide to write your code in the shortest form possible.
## Summary
• Write, in your language of choice, a compiler that converts your language into Turing Machine Code.
• Your language does not need to be translated completely, but at least the basic operations needed for mathematical computations must be implemented. Therefore you will need to implement at least three of the following (being able to translate a looping construct of your language is mandatory), and for each additional operator implemented you will get a 10% bonus:
• Subtraction
• Division
• Multiplication
• Modulus
• Looping (Mandatory)
• Bitwise operators: &(AND) |(OR) ^(XOR) !(NOT) (they count as 3 distinct ones)
In practice your code should be able to translate at least a primality testing algorithm into Turing Machine Code.
• When I refer to Turing Machine Code, I refer to code for TML (Syntax explained later here)
## Technicalities
• The Turing machine does not support decimal numbers, only binary, so you may (or may not, if you have a better method) write numbers in unary, e.g. the tape 11111011 encodes 5 and 2 (unary runs of 1s separated by a 0).
• Since the Turing machine does not have predefined IO, you may consider leaving the return value on the tape and halting as returning a value, e.g. leaving 111110000 on the tape and halting will return 5.
• For the input, you have full access to the starting tape, provided that you do no operation other than initializing the variables; e.g. if you implemented add(a,b) and run add(5,7) you may initialize the tape to 1111101111111 or 11111001111111 or 111110001111111 etc., but you may not initialize the tape to 111111111111.
• TML Language description: TML, the language you are compiling your code to, uses a system of cards of the form 0{0-1}{0-1}{Integer}-1{0-1}{0-1}{Integer}. The leading 0 or 1 determines which half of the card to execute (the machine reads the value under the head and compares it: if it is 0, it executes the code after the 0 up to the dash, otherwise the other half). The second value is what to write on the tape (0 or 1); the third tells the head whether to go left (1) or right (0). The last value tells us which card to go to next, with card 0 reserved for halting.
Note that TML is not 100% complete, so if your code follows the specs, but doesn't actually work just let me know, so I can fix the Language interpreter (if it's actually broken)
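To illustrate my reading of that card syntax, here is a tiny TML interpreter sketch in Python; the one-card program "0000-1101" scans right over a block of 1s and halts on the first 0:

```python
from collections import defaultdict

def run_tml(cards, tape_str):
    # cards: list of strings like "0000-1101"; execution starts at card 1
    tape = defaultdict(int, {i: int(c) for i, c in enumerate(tape_str)})
    pos, card = 0, 1
    while card != 0:                        # card 0 is reserved for halting
        zero_half, one_half = cards[card - 1].split('-')
        half = zero_half if tape[pos] == 0 else one_half
        tape[pos] = int(half[1])            # second value: symbol to write
        pos += -1 if half[2] == '1' else 1  # third value: 1 = left, 0 = right
        card = int(half[3:])                # last value: next card number
    return tape

run_tml(["0000-1101"], "1110")  # scans right over the 1s, halts on the 0
```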
This is Code Golf, so the shortest code wins!
• 1. The mention of interpreters is confusing: what you're asking for is a source-to-source compiler. 2. It would be useful to mention in the first sentence that the TM only supports two symbols. 3. What does "when to execute the card" mean? 4. In what circumstances could n be useful? It just seems to overcomplicate the explanation. 5. Is the tape infinite in both directions or just one? 6. What are "the basic operations needed for mathematical computations"? – Peter Taylor Nov 7 '15 at 12:31
• @PeterTaylor I've modified the question a bit, I think it should be clearer this way, the only thing I haven't changed is the use of n, which I think could be useful if one were to implement something like a "pass" card (just speculating). What do you think? – WizardOfMenlo Nov 7 '15 at 13:04
• That's rather a long list of mathematical operations! Is the intention to handicap higher levels languages because they support more operations and so will have more cases to compile? Write nothing is equivalent to writing what you read; move nothing is "Enter an infinite loop" if you wrote the symbol you read, or "See the other side of this card" otherwise. So all ns can be eliminated trivially except the infinite loop case, which can be eliminated by adding two cards x and y with instructions 000y-100y and 001x-101x respectively. – Peter Taylor Nov 7 '15 at 13:19
• @PeterTaylor. The long list of operations I've introduced as sample operations, that I think a standard programming language should have, however I am no Programming Language expert, and I'm quite ignorant (yet) of the variety that they present, especially regarding the operations. Regarding the "n", I think that your logic is more than valid, and I will also project this change to the language itself. Thank you a lot! P.s If you have some more effective ideas for the operations to implement, please be welcome! – WizardOfMenlo Nov 7 '15 at 23:17
• There are a lot of astandard languages on this site! The obvious language to use to answer this challenge is BF, which doesn't have anything more than increment and decrement. (And even a language as mainstream as Java doesn't have an operator for exponentiation or integer square root). – Peter Taylor Nov 7 '15 at 23:39
• @PeterTaylor I've now added a system of bonuses, what do you think? Do you think the question is ready to be asked? – WizardOfMenlo Nov 8 '15 at 8:51
• In addition to the issue of languages not having all of the operations on that list built-in, I find long lists of bonuses unappealing in general. I suggest that you scrap the list of operations, and instead require that the source language has to be able to translate all of its own functionality. This would create an interesting tradeoff between languages which are powerful, but have too many functions to implement, and ones that have the advantage of being minimal, but are difficult to program in. If you go with this, make sure to require the source language be Turing-complete. – feersum Nov 9 '15 at 8:10
# First 100 Twin Primes
### What Are Twin Primes?
Twin primes are two prime numbers that have a gap of 2 between them, e.g. 3-5, 5-7, 11-13...
### Goal
• Take no input and print the first 100 twin prime pairs to STDOUT.
• Shortest code in bytes wins.
### Rules
• Your submission should be an executable complete program.
• Every prime couple should be on a new line.
• Printed twins should have a space between them.
• All standard rules are applied.
### Restrictions
• No usage of built-in or external methods or functions that returns a prime number.
• No hardcoded prime numbers except 2 (as number, not count).
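An ungolfed Python reference sketch within the restrictions (plain trial division, no prime built-ins):

```python
def is_prime(k):
    return k > 1 and all(k % d for d in range(2, k))

n, found = 3, 0
while found < 100:
    if is_prime(n) and is_prime(n + 2):
        print(n, n + 2)       # one couple per line, space separated
        found += 1
    n += 2
```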
Any suggestions? I looked at similar questions but I couldn't see an identical one.
• Unfortunately, we consider questions for which most solutions can be easily transferred duplicates, so without the restriction on builtins (and probably even with) your question would be closed as a duplicate of codegolf.stackexchange.com/questions/31822/… – lirtosiast Nov 10 '15 at 1:14
# Rounding Fractions
Back in the old days of game programming, before FPUs were the norm, games predominantly used fixed-point math to represent non-integer values. Typically, the lower 8 or 12 bits of a 32-bit word are used as fractional parts, and the rest are treated as the integral part. Sometimes when looking at fixed-point constants in old game code, I get confused trying to figure out what they were actually trying to approximate, particularly if it's not a number between 0 and 1 (0x4C = 0.3, 0x119 = 1.1?, 0x73 = ???).
Since just rounding 1/256ths and 1/4096ths has a limited range of applications, the challenge here is to take any integer ratio a/b, and output the simplest fraction that rounds down to it. More specifically, output the ratio p/q with lowest denominator such that a/b ≤ p/q < (a+1)/b.
This code should support any non-negative a and positive b up to at least 10,000, and should run in a reasonable time for anything in that range (nothing on the order of minutes, at least). Answers should be correct, i.e., no rounding errors due to floating-point. Answers can be in the form of a full program or function, and use any convenient input / output (a string '1/2', an ordered pair (1, 2), a list of two integers {1, 2}, etc).
This challenge is code-golf, lowest score in bytes wins.
Some test cases:
1/3 -> 1/3
4/10 -> 2/5
33/100 -> 1/3
66/100 -> 2/3
67/100 -> 19/28
115/256 -> 9/20 (who knew?)
0/417 -> 0/1
653/654 -> 653/654
1404/702 -> 2/1
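A simple (deliberately ungolfed) Python reference: scan denominators upward until one admits a numerator in the required half-open interval. Since q is minimal, p/q is automatically in lowest terms, and for b ≤ 10,000 this finishes comfortably within the time limit:

```python
from fractions import Fraction
from math import ceil

def simplest(a, b):
    for q in range(1, b + 1):            # q = b always works (take p = a)
        p = ceil(Fraction(a * q, b))     # smallest p with p/q >= a/b
        if Fraction(p, q) < Fraction(a + 1, b):
            return p, q

print(simplest(115, 256))    # (9, 20)
```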
## Sandbox Questions
Hey! I'm a long-time lurker, first-time-question-asker; I'm pretty sure this hasn't been asked before (more general than "Un-round fractions", not quite "Closest fraction"). Not sure what to set for a deadline before accepting an answer, since this is my first time actually participating. Anything else obvious I missed?
• Don't set a deadline before accepting an answer, at least in the sense of putting a date in the question. As a rule of thumb, wait a week, accept the winning answer, and if someone later posts an improved answer then change the accepted answer. – Peter Taylor Nov 14 '15 at 18:19
• Sort of duplicate? The differences seem to be normal rounding vs rounding down and the odd restriction to a single loop in the existing challenge. – Martin Ender Nov 14 '15 at 23:46
• That challenge is only about decimals, whereas this starts with an arbitrary rational number. Not sure if that's enough of a differentiation, but this one's broader, I guess. – Jonathan Aldrich Nov 15 '15 at 0:02
• @MartinBüttner, I agree it's borderline, but some of the answers to the older question couldn't easily be adapted because they rely on converting float to string, and that only works in base 10. – Peter Taylor Nov 15 '15 at 8:47
• @PeterTaylor It is a duplicate. floor(x)=round(x-0.5) – Xwtek Nov 17 '15 at 12:04
• @ChristianIrwan, why is that relevant? – Peter Taylor Nov 17 '15 at 14:32
• @PeterTaylor Oh, sorry, misread. – Xwtek Nov 18 '15 at 9:45
# Nondeterministic Turing Machine
## Introduction
We all know the concept of Turing machines; if not, let's reiterate it. We have the following things that define a Turing machine:
• A tape that is divided into cells and is (potentially) infinite to the right.
• A transition function that defines state changes and the direction in which the head shall move, based on the current state and the input read.
We now have to supply some input and the definition of the transition function (the set of states is implicit, and contains all states defined in the function). Additionally, we assume that the alphabet is [0-9a-zA-Z!?()^+-] and space is the blank symbol. The tape head is then positioned over the leftmost character on the tape, which in our case is the first character of the input. The machine then starts applying the transition function. The computation continues until one of the following happens:
• The machine reaches the HALT state
• There is no transition defined in the transition function for the current state with the given input.
If the first case occurs, we say that the machine "accepts" the word. If the second case occurs we say that the machine "rejects" the word.
We can now extend this definition to obtain a nondeterministic Turing machine. To do this we allow the transition function to define more than one "next" state for each state/input combination. The machine can then choose which "execution path" to take. We then say that the machine accepts the word if it reaches the "HALT" state in at least one execution path, and rejects it if it reaches this state in no execution path.
## Problem definition
You must supply a program or function that accepts a string and returns a truthy value (either true/false, or 0/1, or anything else, as long as the meaning is clear) indicating whether the word is accepted for at least one computation path or not. The transition function is supplied as tuples of the following form:
(<current_state>,<input_read>,<output>,<follow_up_state>,<move_direction>)
All the parts of the tuple are provided as strings where
• <input_read> is a string of length 1, which can contain any character except ","
• <output> is also a string of length 1
• <move_direction> is either "l" (move left) or "r" (move right)
You may assume the following:
• The machine will always halt (i.e. no infinite loops)
• There is only one state on which the machine halts which is HALT
• The alphabet is [0-9a-zA-Z!?()^+-] plus space as the blank symbol
• The leftmost character is always a blank, to indicate the ending of the tape on the left side.
• States are defined implicitly by the transition function. The only states the machine knows are the ones that occur in the definition of the transition function; there is no explicit definition of the states.
• The initial state is always s_i
The input has the following form
<nr_of_tuples_for_definition_of_transition_function>
<tuple_1>
<tuple_2>
...
<tuple_n>
<input_string>
All lines end with a newline character (\n) and the input string is not under double quotes.
Standard loopholes are disallowed! Shortest answer in bytes wins.
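In the meantime, here is a rough Python sketch of an acceptance checker (it assumes the tuples have already been parsed into 5-tuples of strings, and treats moving past the left end as rejection on that path):

```python
def accepts(rules, tape):
    # rules: parsed 5-tuples (state, read, write, next_state, direction)
    trans = {}
    for state, read, write, nxt, direction in rules:
        trans.setdefault((state, read), []).append((write, nxt, direction))

    def explore(state, cells, pos):
        if state == 'HALT':
            return True                    # some execution path accepts
        if pos < 0:
            return False                   # fell off the left end
        if pos >= len(cells):
            cells = cells + [' ']          # blanks extend the tape rightwards
        for write, nxt, direction in trans.get((state, cells[pos]), []):
            new = cells[:]
            new[pos] = write
            if explore(nxt, new, pos + (1 if direction == 'r' else -1)):
                return True
        return False                       # every path rejects

    return explore('s_i', list(tape), 0)
```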
# Notes
A working example is still missing; I'll add one in the following days.
• "The machine then starts the computation at the beginning of the input": I suggest adding "i.e. with the first character under the tape head, and the others to the right". I'm not sure what you mean by "States are defined implicitly by the tranistion function". – Peter Taylor Nov 13 '15 at 13:59
• Followup questions: 1. What characters can appear in the name of a state? 2. What delimiters occur between the tuples in the input? 3. What delimits the end of the initial tape contents and the start of the tuples? – Peter Taylor Nov 13 '15 at 18:20
• Thank you very much for your feedback. I just realized that I didnt think the input part through, I'll need some time to rethink that. – wastl Nov 13 '15 at 18:54
• I realised today that there's another thing which needs specifying in the input: what is the starting state? This could done implicitly by saying that it's the <current_state> of the first tuple. – Peter Taylor Nov 14 '15 at 18:20
• I added input specification. Do you think this is an adequate method to provide the input? – wastl Nov 16 '15 at 11:39
• Yes, that works. – Peter Taylor Nov 16 '15 at 12:16 | {} |
RSQLite::dbGetPreparedQuery() is deprecated in AnnotationForge
MOD • 0
@mod-12330
Last seen 5.2 years ago
Teagasc Dublin
Hi,
I'm trying to create an annotation database for Agaricus bisporus through NCBI in AnnotationForge, but I get a couple of errors:
Error in makeOrgDbFromDataFrames(data, tax_id, genus, species, dbFileName, :
'goTable' GO Ids must be formatted like 'GO:XXXXXXX'
In addition: Warning messages:
1: RSQLite::dbGetPreparedQuery() is deprecated, please switch to DBI::dbGetQuery(params = bind.data).
2: Named parameters not used in query: genes
3: Named parameters not used in query: name, value
How do I work around the deprecated RSQLite::dbGetPreparedQuery() function? The full script is given below along with sessionInfo. Furthermore, when I open the gene2go file the GO IDs seem fine, so I'm not sure why the goTable is not recognizing the IDs. Does anybody have an idea why the GO IDs are not recognized (I have pasted the top rows from the gene2go file that AnnotationForge obtained from NCBI at the bottom of this page)?
My script is:
> library(AnnotationDbi)
> library(GenomeInfoDb)
> library(biomaRt)
> library(survival)
> library(UniProt.ws)
> library(knitr)
> library(DBI)
> library(mclust)
> makeOrgPackageFromNCBI(version = "0.1",
+ author = "my name",
+ maintainer = "email.com",
+ outputDir = ".",
+ tax_id = "936046",
+ genus = "Agaricus",
+ species = "bisporus")
If files are not cached locally this may take a while to assemble a 12 GB cache database in the NCBIFilesDir directory. Subsequent calls to this function should be faster (seconds). The cache will try to rebuild once per day.
preparing data from NCBI ...
getting data for gene2pubmed.gz
rebuilding the cache
extracting data for our organism from : gene2pubmed
getting data for gene2accession.gz
rebuilding the cache
extracting data for our organism from : gene2accession
getting data for gene2refseq.gz
rebuilding the cache
extracting data for our organism from : gene2refseq
getting data for gene_info.gz
rebuilding the cache
extracting data for our organism from : gene_info
getting data for gene2go.gz
rebuilding the cache
extracting data for our organism from : gene2go
processing gene2pubmed
processing gene_info: chromosomes
processing gene_info: description
processing alias data
processing refseq data
processing accession data
processing GO data
Please be patient while we work out which organisms can be annotated with
ensembl IDs.
making the OrgDb package ...
Populating genes table:
genes table filled
Populating pubmed table:
pubmed table filled
Populating chromosomes table:
chromosomes table filled
Populating gene_info table:
gene_info table filled
Populating entrez_genes table:
entrez_genes table filled
Populating alias table:
alias table filled
Populating refseq table:
refseq table filled
Populating accessions table:
accessions table filled
Populating go table:
go table filled
Error in makeOrgDbFromDataFrames(data, tax_id, genus, species, dbFileName, :
'goTable' GO Ids must be formatted like 'GO:XXXXXXX'
In addition: Warning messages:
1: RSQLite::dbGetPreparedQuery() is deprecated, please switch to DBI::dbGetQuery(params = bind.data).
2: Named parameters not used in query: genes
3: Named parameters not used in query: name, value
> sessionInfo()
R version 3.3.2 (2016-10-31)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
locale:
[1] LC_COLLATE=English_Ireland.1252 LC_CTYPE=English_Ireland.1252
[3] LC_MONETARY=English_Ireland.1252 LC_NUMERIC=C
[5] LC_TIME=English_Ireland.1252
attached base packages:
[1] stats4 parallel stats graphics grDevices utils datasets methods
[9] base
other attached packages:
[1] mclust_5.2.2 DBI_0.5-1 knitr_1.15.1
[4] UniProt.ws_2.14.0 RCurl_1.95-4.8 bitops_1.0-6
[7] survival_2.40-1 biomaRt_2.30.0 GenomeInfoDb_1.10.3
[10] AnnotationHub_2.6.4 AnnotationForge_1.16.0 AnnotationDbi_1.36.2
[13] IRanges_2.8.1 S4Vectors_0.12.1 Biobase_2.34.0
[16] BiocGenerics_0.20.0 RSQLite_1.1-2
loaded via a namespace (and not attached):
[1] Rcpp_0.12.9 splines_3.3.2
[3] lattice_0.20-34 xtable_1.8-2
[5] R6_2.2.0 httr_1.2.1
[7] tools_3.3.2 grid_3.3.2
[9] htmltools_0.3.5 yaml_2.1.14
[11] digest_0.6.12 interactiveDisplayBase_1.12.0
[13] Matrix_1.2-8 shiny_1.0.0
[15] memoise_1.0.0 mime_0.5
[17] BiocInstaller_1.24.0 XML_3.98-1.5
[19] httpuv_1.3.3
An example of the gene2go file obtained from NCBI is:
#tax_id GeneID GO_ID Evidence Qualifier GO_term PubMed Category
3702 814629 GO:0005634 ISM - nucleus - Component
3702 814629 GO:0008150 ND - biological_process - Process
3702 814630 GO:0003677 IEA - DNA binding - Function
3702 814630 GO:0003700 ISS - transcription factor activity, sequence-specific DNA binding 11118137 Function
3702 814630 GO:0005634 IEA - nucleus - Component
3702 814630 GO:0005634 ISM - nucleus - Component
3702 814630 GO:0006351 IEA - transcription, DNA-templated - Process
annotation microarray annotate annotationforge
@james-w-macdonald-5106:
Your post title is misleading, as the real problem here is the error, not the warning. The error arises for species that have no GO data at NCBI. As a fail-over we then parse data from Blast2GO, and if that results in no data, then it fails because of a small bug. That's fixed now, and the updated version (1.16.1) should make its way through the build servers in the next day or so.
The warning is a long-standing issue that has to do with changes that were made in the RSQLite package, which AnnotationForge depends on. This doesn't stop anything from working - it's just letting us know that a function we are depending on is probably going to disappear in the future.
The devel version of AnnotationForge is now updated to remove the warnings, so once we have the new release in April, those warnings will go away as well.
@mod-12330 (Teagasc Dublin):
Ok, thanks for the info and your reply James. I'll keep an eye out for the update. I had thought that the warning was part of the issue of not seeing the GO IDs. The gene2go file did appear to have GO IDs, though (see the end of my original question for the first few lines of the gene2go data frame), and I was wondering why the program was not parsing that data into the goTable?
While you did show some rows from gene2go, you should note that the taxonomic ID for those rows (the first column) is 3702, which is Arabidopsis thaliana, not Agaricus bisporus. There are no rows in the gene2go file that have 936046 in the first column, hence no data parsed out for your GO table.
Ok, thanks. I did not see that. Any idea why it obtained Arabidopsis thaliana GO IDs and not Agaricus bisporus? I'll try to see if I can source the GO IDs somewhere else and use makeOrgPackage(). Thanks again for your help.
The gene2go file that is downloaded is a generic file that contains Entrez Gene ID -> GO ID mappings for all the species that NCBI has currently annotated. It just so happens that A. thaliana is at the top of the file. The function makeOrgPackageFromNCBI downloads all these generic files, then extracts data that are specific to whatever species you are interested in, and uses those data to build the orgDb package.
In the case of GO mappings, there are no mappings for your species in gene2go. So the function then queries blast2go, and gets all the mappings they have. It so happens that there are 42 (or 44? I forget) mappings for your species in blast2go, but unfortunately there aren't any Entrez Gene IDs associated with those GO terms, so they get dropped as well. In the end, there aren't any Entrez Gene -> GO mappings that makeOrgPackageFromNCBI can find, so you end up with an orgDb package that has everything but the GO table.
Ok, thanks for the information. I really appreciate it. I have found GO annotation for Agaricus bisporus on the JGI website for that species. I've downloaded it and will attempt to construct a database using that.
### Egor.Lifar's blog
By Egor.Lifar, 13 months ago, translation
Hi!
On Jun/01/2019 17:35 (Moscow time) we will host Codeforces Global Round 3.
It is the third round of a new series of Codeforces Global Rounds supported by XTX Markets. The rounds are open to everybody, and the rating will be updated for everybody.
The prizes for this round:
• 30 best participants get a t-shirt.
• 20 t-shirts are randomly distributed among those with ranks between 31 and 500, inclusive.
The prizes for the 6-round series in 2019:
• In each round top-100 participants get points according to the table.
• The final result for each participant is equal to the sum of the points earned in the four rounds where they placed highest.
• The best 20 participants over all series get sweatshirts and place certificates.
The problems of the round were developed by me (egor.lifar) and UnstoppableSolveMachine. We are glad to say that we have prepared 8 tricky tasks for you. We hope you will enjoy them!
Thanks to KAN and _kun_ for the help with coordinating the round, and cookiedoth, lewin, voidmax, 300iq, Aleks5d, Learner99, Jeel_Vaishnav, arsijo, KAN, Ashishgup, AlexFetisov, V--gLaSsH0ldEr593--V for testing!
Good luck!
UPD. 1:
Here are a few words from the sponsor of Global Rounds, XTX Markets.
Hello, I’m Yuri Bedny from XTX Markets! While studying at university I actively took part in programming contests and later got to apply these skills at XTX Markets. Our office in London (UK) is looking for candidates for two open positions. We hope it will be interesting for some of you to apply your skills and knowledge in problems we are solving. I wish good luck to all the participants and hope you’ll enjoy the problems.
Open positions at XTX:
• XTX Markets is looking to expand its Java team. You’d be expected to be able to design low-level data structures and algorithms to fit particular performance characteristics. We have a direct impact on profits and very little bureaucracy and are open to candidates with no financial experience. Read the details via the link.
• XTX Markets is hiring into its Core Development team in London. This team is responsible for the design and implementation of the platform that provides a broad range of post-trade functionality essential to the firm’s business. This complex, distributed system has been developed in-house using a distributed microservices architecture to provide high throughput (thousands of trades per second) and high availability (24x7 operation) while also allowing very agile but well-controlled development practices (multiple production releases per day). The system is implemented primarily in Go, but prior Go experience is not required for those willing and able to learn quickly. The only necessary skills are exceptional programming ability, a passion for solving real-world problems, and a dedication to rigorous software engineering. Read the details via the link.
If you are interested in these positions, then fill out the application form via the link or during registration for the competition.
UPD. 2:
The scoring distribution will be 500 — 1250 — 1500 — 1750 — 2250 — 3000 — 4000 — 4000. The round will last 2 hours and 15 minutes.
UPD. 3:
Current results of all Global Rounds.
Congratulations to the winners!
UPD. 4:
Announcement of Codeforces Global Round 3
• +121
» 13 months ago, # | +76 Another rated round after a long break! Yeah!
» 13 months ago, # | 0 Hope it will be a balanced contest. Not like the previous one :)
• » » 13 months ago, # ^ | +25 Fortunately the wish came true. Good problemset!
• » » » 13 months ago, # ^ | -19 IMHO bad problemset
» 13 months ago, # | -49
» 13 months ago, # | -29 Why the Announcement link is given two times?
» 13 months ago, # | ← Rev. 4 → -35 deleted :(
» 13 months ago, # | +14 Where is the number of problems and score distribution?
• » » 13 months ago, # ^ | ← Rev. 3 → +12 It will be posted not long before the contest starts, maybe. (It was posted just two minutes before the start.)
» 13 months ago, # | +8 Hope everyone can have a good performance! (and hope I can get red again)
» 13 months ago, # | +5 Hi, new member of the CodeForces community here. Looking forward to participating in the contest and getting a good rating and learning cool stuff. Wishing everyone the same too!
» 13 months ago, # | +29 I'm interested in XTX Markets job but are they interested in me XD ? (I know the answer don't tell me)
» 13 months ago, # | ← Rev. 3 → -52 I like global rounds (I didn't enter any of them)
• » » 13 months ago, # ^ | ← Rev. 2 → +60 Ok, ok I deserved it (-52)
» 13 months ago, # | ← Rev. 2 → -15 Do global rounds support hacking? if yes then, during or after the contest?
• » » 13 months ago, # ^ | +46 During the contest of course, just like in regular rounds.
• » » » 13 months ago, # ^ | -6 Thank you for the clarification.
» 13 months ago, # | -33 Are there remote jobs at XTX Markets?
» 13 months ago, # | -34 Here we go!!
» 13 months ago, # | -41 How many problems will be shown
• » » 13 months ago, # ^ | -25 eights
» 13 months ago, # | 0 1.. 3.. 5.. 7.. 9.. Finally!! the contest season is back!
» 13 months ago, # | -10 hopes this contest will bring happiness in our life
• » » 13 months ago, # ^ | +38 happiness is a state of mind.
» 13 months ago, # | ← Rev. 3 → -40 Will the round be rated for those who have not solved anything?
• » » 13 months ago, # ^ | +52 If you do not make a single attempt, it won't be rated for you. Otherwise it will be.
» 13 months ago, # | -24 Why you don't thanks MikeMirzayanov for amazing systems Codeforces and Polygon! ?
• » » 13 months ago, # ^ | 0 they have bro
• » » 13 months ago, # ^ | -31 LOL what an attempt to get free upvotes
» 13 months ago, # | 0 Why I was told that too many request and couldn't submit my code?
» 13 months ago, # | +43 JOJO's Bizarre Adventure? That's pretty cool!
• » » 13 months ago, # ^ | ← Rev. 5 → +24 This comment has been edited and I sincerely apologize for my lack of knowledge about this great anime
• » » 13 months ago, # ^ | +10 Still cannot believe I lost this round...
» 13 months ago, # | -7 Is this a JoJo reference ?
• » » 13 months ago, # ^ | +16 Aha! I believe that there are only two sorts of people in the world, those who like JOJO, and those who don't know JOJO!
» 13 months ago, # | +58 Problem D was easier than Problem C. Am I the only one who felt that way?
• » » 13 months ago, # ^ | +4 C feels somewhat hit-or-miss, if you're familiar with the trick behind it, it's a cakewalk, otherwise it'd take some brainstorm.
• » » » 13 months ago, # ^ | +27 Finding the way to solve the problem took only a short time, but it took me more than an hour just to correct the error in the code. :( But it took only 14 minutes to solve D, while it took 91 minutes to solve C.
• » » » 13 months ago, # ^ | -16 What is the trick? I can't think of anything as a trick. Is it "if $|i-j|*2>=n$ then $|i-j|>=n/2$"?
• » » » » 13 months ago, # ^ | 0 I think the trick is a temporary swap: in an array of length 8, if you need to exchange indices 6 and 7, first swap(1,6), then swap(1,7), then swap(1,6). The part "you should use no more than 5⋅n operations" was a good hint, but I wasn't able to think of that during the contest.
• » » » » 13 months ago, # ^ | 0 I meant the one about sorting the array using minimum swaps (if not counting the condition $|i - j| \ge n/2$) via the DFS idea. Once you grasp that, the derived solution is trivial.
• » » » » » 13 months ago, # ^ | 0 I don't know how you used that in this problem but I don't see any similarity.
• » » 13 months ago, # ^ | 0 Oppositely I thought E was easier than D.
• » » 13 months ago, # ^ | +28 Weird flex, but D was extremely hard for me, lol
• » » 13 months ago, # ^ | ← Rev. 2 → -10 Did we have to use the fact that every integer from 1 to 2⋅n is mentioned exactly once anywhere in the solution? I think I have not used it anywhere in my soln. But its complexity is nlog(n) using segment tree. UPD: Got it.
• » » » 13 months ago, # ^ | 0 No, see my solution 54947377 Hopefully it would be correct, coded it in the last few minutes
• » » » 13 months ago, # ^ | 0 Yes, we can use the fact that every integer is mentioned exactly once. It makes the comparisons easier. We divide the pairs (a,b) into two parts: Part 1: (a>b), Part 2: (a<b). Pairs from the two parts cannot be mixed (because if a2>a1 and a1>b1, then a2>b1). If part 2 has more elements, then the answer will be those pairs sorted in decreasing order of the second element.
• » » 13 months ago, # ^ | 0 i felt is was easier than B also.
• » » » 13 months ago, # ^ | 0 That, might be also right to some extent. Actually it took 18 minutes for me to solve B, which is longer than the time took for me to solve D.
» 13 months ago, # | ← Rev. 3 → +24 Regarding problem F, is it true that there will always exist at least one $s$ with not more than $2$ bits being set? :O My solution only checked for those (and their compensation, just in case) and got Pretests passed :O UPD: It did fail at test 49. GG. :D UPD2: My source code was actually screwed up a bit. Still AC after fixing some bugs and optimizing a bit. ;) (54974178)
• » » 13 months ago, # ^ | +13 One case I can think of is an array with all elements equal(of size 60 or so) and maski has the ith bit set.
• » » 13 months ago, # ^ | 0 No. Suppose that we have one object for each mask with exactly two bits set, and that each object has value 1. If we use an s with at most 2 bits set then only a small fraction of the objects will have their values negated, so the sum will still have the same sign. And the same thing happens for all s with 60 (or more) bits set (this was what you meant by compensation, right?).But if you're lucky such a case won't be in the system tests. :)
» 13 months ago, # | +8 How to solve F?
• » » 13 months ago, # ^ | 0 Maintain sum and try to randomly switch bits?
• » » » 13 months ago, # ^ | +8 I did this, and my rationale about this solution was that the expected value of the sum is 0 because every number will flip the sign with probability 1/2. But since expected value is only average there might be the case that there a lot of small positive numbers reachable and few large negatives. In this case, random search can fail. I'm not sure if random search is a expected solution for this problem, and I will be happy to hear if it is a valid solution, why it is expected to work on time (with high probability).
• » » 13 months ago, # ^ | ← Rev. 4 → -8 I passed the pretests with the following algorithm. Repeat these operations until you find an answer: 1. Choose a value s randomly between $1$ and $2^{62}-1$. 2. Check the total price. Calculating it usually takes $O(N \log N)$, but if you use __builtin_popcountll (in C++), it takes $O(N \cdot C)$ where $C$ is around $5$. 3. If the sign of the total price has changed, you have successfully found the answer. On average, you should need about $2$ iterations. Is there any doubt that there are cases where the sign of the total price changes for only a tiny percentage of $s$? I think not. I checked $100,000$ random cases in which the value of mask is between $1$ and $63$. In the worst case the sign of the total price changed for only $8/63$ of the values of $s$, assuming $s$ is also between $1$ and $63$. P.S. The test case in which the sign changes for only $8$ out of $63$ values of $s$ is as follows:
10
0 45
-1 11
-1 21
-1 45
0 6
-1 51
0 19
0 40
0 60
0 51
• » » » 13 months ago, # ^ | ← Rev. 2 → +61 Try this test case:
262144
1 131072
1 131073
1 131074
...
1 393215
The only answer is $393216+524288*k$.
• » » » » 13 months ago, # ^ | ← Rev. 3 → +15 Wow, incredible. :)
• » » » » 13 months ago, # ^ | 0 amazing. how did you get this?
• » » » » 13 months ago, # ^ | ← Rev. 2 → 0 Given the fact that only the bits that are set in at least one mask are counted, I guess your test case can still be intercepted: 54974178. Changing $n$ to $262145$ and adding a line: 1 2^62-1 will counter such solutions. The systest of F still looks surprisingly weak as of now, which is strange.
» 13 months ago, # | 0 How to solve E?
• » » 13 months ago, # ^ | +9 Sort points by $s_i$. Then maintain a stack of points having $s_i < t_i$. When you check another point, push it onto the stack if $s_i < t_i$; if $s_i > t_i$, try to match it with points on the stack until the stack becomes empty or you get $s_i = t_i$.
• » » » 13 months ago, # ^ | ← Rev. 2 → 0 How about changing stack to queue?I think I have implemented algo like that, but I got WA.
• » » » » 13 months ago, # ^ | ← Rev. 2 → +3 I thought so during the contest, but probably no...ok, then it still yes, I guess :)
• » » » » » 13 months ago, # ^ | 0 But, I think my submission 54936636 did something like yours.
• » » » » » » 13 months ago, # ^ | 0 Oh! I found a stupid mistake.
• » » 13 months ago, # ^ | +10 Unfortunately, I have drunk too much prostate juice today, so I solved the problem incorrectly (greedly from both ends) and can't give you a hint what the correct solution is like.
» 13 months ago, # | +31 what the hell is pretest 14 on E ?
• » » 13 months ago, # ^ | +62 Something like this maybe? I got WA on 14 until I fixed this case 4 1 4 5 8 2 3 6 7
• » » » 13 months ago, # ^ | 0 yeah. That works. Thanks. Forgot to pair j with the max i, instead put min i, which lead to wrong solution.
» 13 months ago, # | +22 Too greedy contest
» 13 months ago, # | ← Rev. 2 → +21 Problem H proved once again that Za Warudo's true power is, indeed, the power to reign over this world
» 13 months ago, # | 0 Can someone please give hints for question: C. Crazy Diamond.
• » » 13 months ago, # ^ | +11 Consider how to use no more than $5$ swaps to move number $i$ to position $i$.
• » » » 13 months ago, # ^ | 0 Hi! Im sorry I still did not understand it, can you or anyone please explain it. tia!
• » » » » 13 months ago, # ^ | +5 You can find the positions of the numbers in the middle first. For example, if n=6, the order of numbers for which to find a position will be 3->4->2->5->1->6. When treating 3, if its original position is > 3, swap p1 and the position holding 3. Then swap p1 and pn, and swap p3 and pn. If you do it this way for each number, you can move each number to its correct position within three swaps. So the maximum number of operations is 3n.
• » » » » 13 months ago, # ^ | ← Rev. 2 → +26 Maybe there are simpler solutions; I will explain mine.
1. $p_i=i$: we don't need to do anything. $0$ swaps.
2. $abs(at_i-i)\ge\frac{n}{2}$: just swap directly. $1$ swap. (Edited, should be $\ge$ but not $>$)
3. $i\le\frac{n}{2}$ and $at_i\le\frac{n}{2}$: use $p_n$ as a temporary variable. $3$ swaps.
4. $i>\frac{n}{2}$ and $at_i>\frac{n}{2}$: use $p_1$. $3$ swaps.
5. Otherwise, let $x$ be the lefter of $i$ and $at_i$, and $y$ the righter. Swap $p_x$ and $p_n$, then $p_y$ and $p_1$. Then swap $p_1$ and $p_n$. Finally swap $p_x$ and $p_n$, and $p_y$ and $p_1$. $5$ swaps.
• » » » » » 13 months ago, # ^ | 0 That is also fine. (Although I didn't think of that way during the competition.)
• » » » » » 13 months ago, # ^ | +15 2 should >=
• » » » » » 13 months ago, # ^ | ← Rev. 2 → 0 I was not able to deal with the 5th case . :(
• » » 13 months ago, # ^ | ← Rev. 2 → 0 Enumeration sort can be used and you will need at most n operations to sort.
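To make the case analysis in this thread concrete, here is a minimal Python sketch of the 0/1/3/5-swap construction from the comment above. It is an illustration, not anyone's reference solution; it uses 1-indexed positions and assumes n is even, so that every position can reach at least one endpoint.
```
def swap_ops(i, j, n):
    """List of allowed swaps (2*|a-b| >= n) whose net effect exchanges
    p_i and p_j, following the case analysis in the comment above."""
    if i == j:
        return []
    i, j = min(i, j), max(i, j)
    far = lambda a, b: 2 * abs(a - b) >= n
    if far(i, j):                        # far enough: swap directly
        return [(i, j)]
    if far(i, n) and far(j, n):          # both in the left half: bounce off n
        return [(i, n), (j, n), (i, n)]
    if far(1, i) and far(1, j):          # both in the right half: bounce off 1
        return [(1, i), (1, j), (1, i)]
    # i in the left half, j in the right half: use both ends, 5 swaps
    return [(i, n), (1, j), (1, n), (i, n), (1, j)]
```
Each returned pair is a legal operation, and applying them in order exchanges the two target cells while restoring the endpoints.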
» 13 months ago, # | 0 Too Greedy Contest, the setters want to steal our rating :(
» 13 months ago, # | ← Rev. 2 → -10 what is pretest 14 in problem E? why this code gets WA? 54943503
• » » 13 months ago, # ^ | 0 I also had this problem. I think we need to add d=min(d, (s[j] — s[i])/2)
• » » 13 months ago, # ^ | ← Rev. 2 → 0 Your code will fail on tests like
4
1 6 7 10
3 5 8 8
Your code gives the output "NO", but the correct output will be "YES":
YES
3
1 2 1
1 4 1
3 4 1
Here is one of the suitable outputs for the above test case.
» 13 months ago, # | ← Rev. 2 → +34 Please tell me problem F is not about trying different $s(hit)$ randomly...
• » » 13 months ago, # ^ | ← Rev. 2 → +10 (retracted my previous claim; turns out there are counter tests)
• » » » 13 months ago, # ^ | 0 Why tf it's div. 1 F then...
• » » » » 13 months ago, # ^ | 0 Don't forget about Div.2 people :)) It's combined!
• » » 13 months ago, # ^ | ← Rev. 2 → +90 System test haven't started yet. I think they're trying to make that test :)
• » » 13 months ago, # ^ | ← Rev. 3 → +74 Deterministic solution for F:WLOG assume sum positive, otherwise change all signs. Iterate $k$ from $61$ to $0$. For each $k$ and each $i$, sum $val_i$ over all masks such that the lowest on bit of $mask_i$ is the $k$-th. If this sum is positive, then turn on the $k$-th bit in $s$, otherwise do nothing.Proof this works: Consider a certain $0-1$ mask, and consider the expected value of the sum over all masks that have this mask as a prefix. (choosing a mask uniformly at random). All $i$ that have an on bit below the $k$-th contribute $0$ to this expected value, while the ones that are all $0$ below the $k$-th bit contribute $2^k$ times their sum. So our algorithm just forces the contribution for these to be always negative.
• » » » 13 months ago, # ^ | +2 I'm thinking for an hour also reading this but still don't know why this solution works. The fact that you're using the expected value in proof makes it not very convenient. So I'm assuming it's not something have to do with "random"s but couldn't figure out.
• » » » » 13 months ago, # ^ | 0 Yeah, the expected value is just for convenience, the solution isn't random. In a bit more detail:For a mask $m$ with length at most $62$ let $f(m)$ be the expected value of the sum described in the problem if we choose a mask uniformly at random from among the masks of length $62$ that have $m$ as a prefix. (Using sum or average instead of expected value would give the same argument, I just found expected value clearer). By the linearity of expectation we can expand $f(m) = \displaystyle\sum_{i = 1}^n \displaystyle\sum_{s}\mathbb{E}\left[val_i \cdot (-1)^{popcount(s, mask_i)}\right]$Where the right sum goes over all masks of length 62 that have $m$ as a prefix. In particular for masks of length $62$ this is just the sum in the problem statement, which we want to make negative. So basically we build each digit of the mask while decreasing or keeping the expected value the same at each step. Initially the expected value is $0$, so this way we do get an answer at the end. The main claim is that the expected value for a certain $i$ and $m$ is $0$ if $i$ contains some $1$ after $m$, and $2^k \cdot val_i$ otherwise. This is because half of the masks with prefix $m$ have $popcount(s, mask_i)$ odd and half even as long as there's at least one on bit after $m$ (basically just restating that a set has the same number of odd size subsets as even size).
• » » » » » 13 months ago, # ^ | 0 Okay, thanks for answering again. But still, I'm not sure about some parts. Do you prove that it'll be negative in the end? Or do you show it just decreases or stays the same in each iteration?
• » » » » » » 13 months ago, # ^ | 0 I show that it decreases or stays the same. At the start it's $0$ because all masks have at least one on bit, so the end result is negative unless it never decreases. But if it never decreases then the sum of everything must be zero, which is false.
• » » » » 13 months ago, # ^ | ← Rev. 2 → +29 The way I think of it is like this: go through the $(value_i, mask_i)$ and group them depending on which bit is the last 1 bit in the mask. What juckter's solution does is make the sum of values for each of those groups $\leq 0$. He does this by doing the operations in a smart order, starting with mask 10000000 and then doing X1000000, then XX100000, ..., ending with XXXXXXX1. Note how modifying bit k doesn't change the contributions from groups 1 to k-1. This way he can make the contribution from each group $\leq 0$. Also, because the sum of everything in the beginning is $> 0$, at least one of those groups must, after running the algorithm, have sum of values $< 0$. One way to see this is proof by contradiction: assume that after running the algorithm all groups have contribution $= 0$; that implies that they also initially all had contribution $= 0$, which is a contradiction.
• » » » » » 13 months ago, # ^ | 0 Thanks, this is easier for me to understand.I was having a hard time to understand that groups don't block other's processes. Now it's clearer.
• » » » » 13 months ago, # ^ | +5 After reading mnbvmar's solution, here is the intuitive approach I got: Assume the sum of all the elements is positive, otherwise change all signs. Iterate current_bit from 0 to 61. Let the sum of all the values with current_bit as the highest set bit be current_sum. Now, the values with higher set bits in their mask can be manipulated later on, but the ones contributing to current_sum cannot, because they are not affected by toggling higher bits in the answer. So, if current_sum is positive, we must greedily toggle current_bit in our answer, and then negate the values of all the masks having current_bit set to 1. This way, we make sure that at each iteration, all the numbers with highest set bit current_bit sum up to a non-positive number.
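A minimal Python sketch of this bitwise greedy (names are illustrative, not from any reference solution; it assumes the initial sum is nonzero, as the problem guarantees):
```
def find_flip_mask(vals, masks, bits=62):
    """Greedy from the comments above: make each 'highest set bit' group
    sum non-positive, so the (initially positive) total becomes negative."""
    vals = list(vals)
    if sum(vals) < 0:                 # WLOG drive a positive sum negative
        vals = [-v for v in vals]
    s = 0
    for b in range(bits):
        # current signed sum of items whose highest set bit is b
        group = sum(v for v, m in zip(vals, masks) if m.bit_length() - 1 == b)
        if group > 0:
            s |= 1 << b               # toggle bit b of s ...
            # ... which negates every item whose mask has bit b set
            vals = [-v if (m >> b) & 1 else v for v, m in zip(vals, masks)]
    return s
```
Later iterations only touch groups with a higher highest bit, so each processed group stays non-positive, matching the argument above.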
• » » 13 months ago, # ^ | ← Rev. 2 → +10 I have an algorithm (not random) for problem F. I haven't tested it yet, but I think it's correct. Let's split A (the original set of objects) into two subsets: A1 and A2. All masks in A1 have bit 0 turned off, and A2 = A\A1. Suppose we have some way to assign bit 1, bit 2, ... such that the sum of all val in A1 is non-negative. After assigning bit 1, bit 2, ..., we have two options: turn bit 0 off or turn bit 0 on; one of them will make the sum of A2 non-negative. So we have found a way to assign bit 0, bit 1, ... such that the sum of A is positive.
• » » » 13 months ago, # ^ | 0 This argument is so elegant.
• » » 13 months ago, # ^ | 0 I think that tourist's solution is not that kind of solution.
» 13 months ago, # | +15 10 seconds too slow to submit correct F :(Too slow :(
» 13 months ago, # | +121 Did somebody find a counter-test that makes random solution fail on problem F?
• » » 13 months ago, # ^ | +40 There's no such thing!!
• » » » 13 months ago, # ^ | +172 WTF!!
• » » 13 months ago, # ^ | +58 I was in the same room as Um_nik and I made a random solution to problem F. He didn't hack my solution, so there is no counter-test.
• » » » 13 months ago, # ^ | ← Rev. 2 → +8 He didn't lock problem F, so...
• » » » » 13 months ago, # ^ | +6 I know that my F is in big danger. I was just joking. Deep inside I am afraid of losing it.
• » » 13 months ago, # ^ | +195 Yep, _kun_ found some about a week ago :)
• » » » 13 months ago, # ^ | +29 Cool! I hoped there were. How would one construct such a test case?
• » » » » 13 months ago, # ^ | +34 https://codeforces.com/blog/entry/67316?#comment-515034 I constructed it by looking only at test cases with val=1 and trying various possible n and mask sizes, and I found that this makes only 1/16 of masks work:
8
1 4
1 5
1 6
1 7
1 8
1 9
1 10
1 11
I tried extending it to higher n and succeeded.
• » » » 13 months ago, # ^ | +18 It's very brave not to put that test in pretests, given the fact that people complain about weak pretests each time.
• » » » » 13 months ago, # ^ | +20 What should they be scared of?
• » » » » » 13 months ago, # ^ | +8 of people disliking the round
• » » » » 13 months ago, # ^ | +18 Huh, but it's not the case when you don't pass systests due to some silly error. This one here is pretty essential...
• » » » » » » 13 months ago, # ^ | +1 I understand the difference, but is it obvious to you that pretests shouldn't catch solutions with a wrong idea? I don't have a strong opinion about it, FYI.
• » » » » » » 13 months ago, # ^ | ← Rev. 2 → -8 What's the point of pretests if they do catch solutions with a wrong idea?As far as I know, main purpose of pretests is to check if you understood statement correctly and that your solution barely does what it should do in general. Everything else is pretty optional.
• » » » » » » » » 13 months ago, # ^ | +56 xd One of the points is exactly to catch solutions with a wrong idea. Don't forget about hacks. Somebody could submit a wrong submission from one account, lock it, and then read the codes of other people to get a correct solution.
• » » » » » » » » 13 months ago, # ^ | -7 There's surely no point in catching all such solutions, otherwise there would be nothing to hack.
• » » » » » » » » » 13 months ago, # ^ | 0 People don't hack hard problems, especially in combined-division contests. There are usually only 1-2 users with such problem solved in the same room.
• » » » » » » » » » 13 months ago, # ^ | 0 That's true. But here's another point: People usually don't cheat on hard problems this way.And I think that participants who submit shit on hard problems thinking "no one hacks them so pretests should be strong" should be punished.
• » » » » » » » » » 13 months ago, # ^ | +33 I tried to hack F, but I lacked the intelligence to come up with a test case. I think it’s fair for someone who manages to find such testcases to get extra points. If not, then what is the purpose of hacks, indeed?
• » » » » » » » » » 13 months ago, # ^ | 0 As I said, the issue is that it's a hard problem that almost nobody in the same room will solve.
• » » » » 13 months ago, # ^ | +16 If tests are weak, people complain about weak pretests. If tests are strong, more people solve F, so people complain about difficulty distribution. :)
• » » » 13 months ago, # ^ | +21 Could you also add that test case by dorijanlendvaj to upsolving testcases? It seems to break even more solutions. 262144 1 131072 1 131073 1 131074 ... 1 393215
• » » » » 13 months ago, # ^ | +13 Done
• » » » » 13 months ago, # ^ | +38 In case somebody wonders, the test structure is as follows: take some number of bits $k$ (must be odd). Then for every bitmask except zero ($2^k - 1$ in total), add an item with this mask and the weight:
• if the bitmask is $2^k - 1$, the weight is some constant $-B$ (e.g. $B = 10000000$);
• if the bitmask has an even number of bits, the weight is $+A$ (e.g. $A = 20000000$);
• otherwise the weight is $-A$.
One can see that the only way to change the sign of the sum is to use the mask $2^k - 1$, which is a $2^{-k}$ probability if you select the mask randomly. I also fill tests with some amount of random noise, so you shouldn't be able to just use the largest mask in the test data. Since $2^k$ must be at most $n$, the probability is about $\frac{1}{n}$, which gives $n^2$ expected running time. It was also me who suggested not to put this thing in pretests: I thought it would be a good idea to allow hackers to come up with this test as well, and probably the bad impact is not too large. You wouldn't have expected that some random trash solution would get AC on problem F, would you? :)
• » » » » 13 months ago, # ^ | +20 you wouldn't have expected that some random trash solution will get AC on the problem F, don't you? Yeah yeah, it'd be ridiculous if some random trash passed F. G is another story, of course, in G there can be everything, but in F only good proven solution must pass, obviously.
» 13 months ago, # | +8 So many constructive and greedy problem!
» 13 months ago, # | +8 any hints to b plzz??
» 13 months ago, # | +4 How to solve B?
• » » 13 months ago, # ^ | +10 Greedy. Fix the number of canceled flights from the first set (A to B) and then check what the earliest time we can go from B to C with binary-search/lower_bound is.
• » » 13 months ago, # ^ | +3 Suppose you cancel the first x flights from A to B. That means Arkady will take flight x+1 from A to B. Doing a binary search on b, you can get the first available flight from B to C. But since you have only canceled x flights, you can cancel the next k-x flights, forcing Arkady to take later flights. Then you know the earliest that Arkady can get to C. You can do that for every x <= k.
• » » 13 months ago, # ^ | +3 Can solve using two pointers too instead of binary search! Here's my code
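A minimal Python sketch of the greedy-plus-binary-search idea described in the two comments above (hypothetical helper names; it assumes the departure lists a and b are sorted ascending, as in the input, and that we are the adversary choosing which k flights to cancel):
```
import bisect

def latest_arrival(a, b, ta, tb, k):
    """Max arrival time at C we can force by cancelling k flights, or -1."""
    n, m = len(a), len(b)
    if k >= n or k >= m:
        return -1                        # cancel everything on one leg
    best = 0
    for i in range(k + 1):               # cancel the first i flights A->B
        arrive_b = a[i] + ta             # Arkady then takes flight i
        j = bisect.bisect_left(b, arrive_b)  # earliest usable B->C flight
        j += k - i                       # spend remaining cancellations on B
        if j >= m:
            return -1                    # Arkady can be stranded entirely
        best = max(best, b[j] + tb)
    return best
```
Each choice of i fixes the A-leg cancellations; the rest go to the earliest usable B-leg flights, exactly as described above.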
» 13 months ago, # | +9 I would like to ask in problem C, how low can the amount of swaps allowed be, so there would still be a solution for every possible permutation?
» 13 months ago, # | +3 Problem C: You can solve this task in less than 3*n moves. Is this true?
Problem D: You partition the pairs into two sets:
Set A: (pair.left < pair.right)
Set B: (pair.left > pair.right)
You cannot use elements from both sets, because of the following. Suppose you use an element from set A; then you have
.... pair.left(A) < pair.right(A)
Now if you want to append an element from set B you get
.... pair.left(A) < pair.right(A) > pair.left(B) > pair.right(B) --> not possible
or
.... pair.left(A) < pair.right(A) < pair.left(B) > pair.right(B) --> not possible
• » » 13 months ago, # ^ | -15 Problem C: You can solve this task in less than 3*n moves is this true? No.
• » » » 13 months ago, # ^ | +8 Could you please tell me what the minimum number of moves required is then?
• » » » » 13 months ago, # ^ | ← Rev. 2 → -23 Near 5*n in worst case.
• » » » » 13 months ago, # ^ | +3 54945437 solution with at most 4*n (no idea how it works)
• » » » 13 months ago, # ^ | ← Rev. 2 → -21 Yes Enumeration sort can be used and you will need at most n operations to sort.
• » » » » 13 months ago, # ^ | 0 No, because we have condition: take two indices i and j such that 2⋅|i−j|≥n and swap pi and pj
• » » » » » 13 months ago, # ^ | ← Rev. 2 → +5 Okey thanks!
• » » » 13 months ago, # ^ | ← Rev. 2 → +5
• » » 13 months ago, # ^ | ← Rev. 2 → +4 I solved C taking less than 3*n moves. code
• » » » 13 months ago, # ^ | +4 At first your code looked like maggi (Noodles) :D .
• » » » » 13 months ago, # ^ | +1 Busy solving problems, I forgot to use functions XD
• » » 13 months ago, # ^ | 0 So, the answer to the first question is, YES. In my code the operation works within 3n times.
» 13 months ago, # | +55 The worst feeling in life: Waiting main tests with unproven random solution for F.
» 13 months ago, # | ← Rev. 2 → +7 How could this solution get TLE 12? Just because of slow output? Thanks in advance.
• » » 13 months ago, # ^ | +16 too many flushes. try without endl perhaps
• » » 13 months ago, # ^ | +3 Alex's right: 54950010
» 13 months ago, # | ← Rev. 33 → -34 it's my cake day so can i get 7 upvotes please
» 13 months ago, # | +38 Problem C was great, very enjoyable to solve.
• » » 13 months ago, # ^ | +3 What was your approach to solve C ? Please help! as I couldn't get it.
• » » » 13 months ago, # ^ | ← Rev. 3 → +1 Here is how to perform a swap between two indices a and b in at most 5 moves:
1. Just swap a and b if they're far enough.
2. Swap a with its opposite side (1 or n).
3. Swap a with b now if possible, and undo step 2.
4. Swap a with the other side (1 or n, whichever hasn't been used yet).
5. Now you can definitely swap a and b, so swap those, and undo steps 4 and 2.
» 13 months ago, # | ← Rev. 3 → +10 The difficulty gap between problem F and G. It was good for the other colors, but it was bad for reds.
• » » 13 months ago, # ^ | +129 And the new one: Good enough, I think.
» 13 months ago, # | ← Rev. 2 → +16 I felt D was easier than C. Is there anyone who felt the same?
• » » 13 months ago, # ^ | +2 I felt the same, and there's my comment like yours in the middle of comments.
• » » 13 months ago, # ^ | +3 I did
» 13 months ago, # | +18 If mnbvmar had not gotten a hack in the last five minutes, tourist would likely have had a new personal best rating.
• » » 13 months ago, # ^ | +7 Yes, Really interesting in the last minutes.
• » » » 13 months ago, # ^ | +13 Codeforces' best rating *
» 13 months ago, # | 0 Problem G would be pretty cool if the graph was given. That shit with bitsets ruined it.
• » » 13 months ago, # ^ | +37 Author's solution doesn't use bitset.
• » » 13 months ago, # ^ | +10 We think that it is easier to write an incorrect solution in this case, because an efficient test requires $O(N^2)$ edges; therefore, the number of vertices must be $\leq 1000$. So we decided to make this version with $10^5$ vertices.
» 13 months ago, # | +26 Why so many people use random algorithms to pass F?Are they really correct?
• » » 13 months ago, # ^ | 0
• » » » 13 months ago, # ^ | +14 But later it turned out that it caused so many FSTs
» 13 months ago, # | +27 System Testing hasn't started; when will it start?
• » » 13 months ago, # ^ | 0 started
• » » » 13 months ago, # ^ | +3 Oh, I see.
» 13 months ago, # | 0 What's the upper-bound on number of operations in prbE.
• » » 13 months ago, # ^ | ← Rev. 2 → 0 I think n — 1: when the last one has to move left n — 1 steps, each of the rest has to move right 1 step.
» 13 months ago, # | +2 What if in C we had to minimize the number of operations?
» 13 months ago, # | ← Rev. 2 → 0 Any specific reason why submission 54926688 for B fails on pretest 8 ?Is the general solution using two pointers wrong or have I missed an edge case?Thanks in advance.
» 13 months ago, # | +115 Codeforces Global Failed System Test Round .
• » » 13 months ago, # ^ | +14 Never have seen so many FSTs in my life.
• » » » 13 months ago, # ^ | 0 Me either
• » » » » 13 months ago, # ^ | -33 Because you are newfags.When Codeforces just started, every round was with weak pretests, and it was funny, and everyone was happy.There is nothing better than laughing at a friend who failed systests.It is so boring when there is no hacks and no solution fails systests.
• » » » » » 13 months ago, # ^ | +30 We are definitely playing different games
» 13 months ago, # | +43 There are literally 5 times more failed-on-main-tests submissions for F than accepted ones.
» 13 months ago, # | +8 Could somebody take a look at C 54937652? I have an O(n) solution that is getting TLed. Is this intended?
• » » 13 months ago, # ^ | +5 Don't use endl. It flushes the output stream, and this problem has a large amount of output.
• » » » 13 months ago, # ^ | -8 Even if you're right, I saw codes worse than this one (using endl and not the optimization sync_with_stdio) who passed system tests... Looks unlucky to me, maybe you're very very close to the time limit...
• » » » » 13 months ago, # ^ | ← Rev. 2 → -6 Hope there would be a system retest.. Like take a look at test 12 -- it's also n = 300000 and has <3000ms running time. Great disappointment.
» 13 months ago, # | +9 This contest be like:
Pretests: Calculate 2+2.
Main tests: The train goes 80 km/hour and you are writing programs. Calculate the mass of the Sun.
• » » 13 months ago, # ^ | +47 Is this a test that _kun_ came up with?
» 13 months ago, # | +181 The most accurate description of pretests:
• » » 13 months ago, # ^ | 0 Sad for Vn_nV
• » » » 13 months ago, # ^ | 0 first time see -300+ sad for him
• » » » » 13 months ago, # ^ | 0 -389, to be accurate
• » » 13 months ago, # ^ | 0 I remember him, he got +400 on predictor when I checked leaderboard
• » » » 13 months ago, # ^ | 0 What is predictor?
• » » » » 13 months ago, # ^ | +3
» 13 months ago, # | +8 Problem C. Same code, replacing endl with \n. TLE with endl 54949525, Accepted, 452 ms with \n 54949838. This is awful :(
• » » 13 months ago, # ^ | 0 Same thing, absolutely.
• » » 13 months ago, # ^ | +7 Endl flushes the output in addition to outputting new line, so it's slower. At least you won't make the same mistake again :)
» 13 months ago, # | -8 https://codeforces.com/contest/1148/submission/54936686. how it can be ml 38?! help me please
• » » 13 months ago, # ^ | +5 2 3 3 2 4 Try this
• » » » 13 months ago, # ^ | 0 thank you very much)
» 13 months ago, # | +12 Actually problem F can be solved in various ways . Including good-look random and check all 2^n and 2^n-1 then random. here
• » » 13 months ago, # ^ | +10 good-look random (×)randomize random (√)
• » » 13 months ago, # ^ | 0 How does the first one pass?
• » » » 13 months ago, # ^ | 0 now it cannot pass...
• » » » 13 months ago, # ^ | 0 +1 to this questionWhy did it pass test 15?
» 13 months ago, # | +41
As usual, we used the following two files to determine t-shirt winners:
get_tshirts.py
randgen.cpp
And the tshirt winners are:
List place Contest Rank Name
1 1148 1 mnbvmar
2 1148 2 tourist
3 1148 3 Petr
4 1148 4 yutaka1999
5 1148 5 LHiC
6 1148 6 Egor
7 1148 7 ksun48
8 1148 8 sunset
9 1148 9 krijgertje
10 1148 10 kczno1
11 1148 11 RNS_CUS
12 1148 12 betaveros
13 1148 13 ATS
14 1148 14 Reyna
15 1148 15 zeliboba
16 1148 16 khsoo01
17 1148 17 stO
18 1148 18 ainta
19 1148 19 Shanru
20 1148 20 Hazyknight
21 1148 21 zzq_IOI2019_AK
23 1148 23 gisp_zjz
24 1148 24 renjingyi
25 1148 25 Alex_2oo8
26 1148 26 whzzt
27 1148 27 lumibons
28 1148 28 komendart
29 1148 29 scanhex
30 1148 30 receed
53 1148 53 Noam527
55 1148 55 KrK
97 1148 97 lintoto
107 1148 107 RobeZH
110 1148 110 Motarack
140 1148 140 darkkcyan
146 1148 146 GoGooLi
151 1148 151 Gassa
152 1148 152 Pigbrain
162 1148 162 J.J.T.L.
231 1148 231 socketnaut
272 1148 272 RCG
305 1148 305 Cinro_9baka
324 1148 324 Wechselhau
330 1148 330 Daniel_Yeh
353 1148 353 Gullesnuffs
380 1148 380 robertthethirtieth
406 1148 406 joon
460 1148 460 fastle233
497 1148 496 Juney
Congratulations!
• » » 13 months ago, # ^ | +44 WOW! Never expected this to happen! How can I receive the prize?
» 13 months ago, # | +16 Why did you rejudge some submissions even after the rating calculation?
• » » 13 months ago, # ^ | 0 I hope that I rejudged only a couple of upsolving submissions. I added a test from the comment above and wanted to check it. Why not?
• » » » 13 months ago, # ^ | ← Rev. 2 → +8 Okay. I think it's better not to recalculate ratings. And you may do it in the system test phase next time. By the way, MikeMirzayanov, why doesn't CF release an open hack system like uoj?
• » » » » 13 months ago, # ^ | +22 Of course we are not going to recalculate ratings/change standings/anything. egor.lifar only rejudged practice submissions.
• » » » » » 13 months ago, # ^ | ← Rev. 4 → +8 In fact, you rejudged contest submissions and changed the standings without recalculating ratings. Now it looks really confusing xD Do you find anything strange in the image?
» 13 months ago, # | -132 This is the worst round I have ever seen. Terrible pretests. The problem writers made them purposely weak. Making bad pretests doesn't show the skill of the participants. I will never participate in these problemsetters' rounds. I did well in the contest, yet ended up losing a lot because of the stupid pretests. If you feel disappointed too, downvote the contest! (By a red user with a fake account).
• » » 13 months ago, # ^ | +68 You sure care a lot about a round you didn't compete in.
• » » » 13 months ago, # ^ | +33 I can say it not from a fake account. I don't even know why people create fake accounts to show their opinion. Afraid of downvotes? I just got sad because of the incredibly weak SYSTESTS on F and G. How can these solutions even pass? That's so disappointing. It is the first wrong shitty idea that you can come up with. How is it possible not to test this? Is that what was meant in the announcement by "8 tricky tasks for you"? That we must guess the tests? Or maybe sorting by comparator and writing a greedy algorithm on 2 pointers are "tricky"? I didn't compete in the first 2 rounds, but I already heard opinions that they were incredibly bad. I will for sure not participate in the next global rounds, because in my opinion there are enough bad samples already. By the way, why is the round from XTX Markets not prepared by XTX Markets employees if they are looking for people from the CP community? Isn't that illogical? I wonder how many qualified high-rated people they will get after these rounds.
• » » » » 13 months ago, # ^ | 0 Now I'm glad I missed registration by seconds and didn't feel confident enough to try with extra registration.
• » » » » 13 months ago, # ^ | +10 There will be many accepted submissions (without proof) if they include up to 20 pretests.
• » » » » » 13 months ago, # ^ | +61 I don't care about pretests weakness. Systests must be strong enough to cut out shitty solutions.
• » » » » » » 13 months ago, # ^ | +8 Here is another accepted solution that gives wrong results for the test case:
6 3
49 25 25 6 2 3
• » » » » » » » 13 months ago, # ^ | ← Rev. 3 → 0 By the time this solution was accepted, I also thought that the test data for this problem was a little weak:D
• » » » » » » » 13 months ago, # ^ | 0 Added your test for the upsolving, thanks!
• » » » » 13 months ago, # ^ | +43 How can these solutions even pass? That's so disappointing. It is the first wrong shitty idea that you can come up with. Hmm, it was your third attempt and it was after the contest... Is that what was meant in the announcement by "8 tricky tasks for you"? Problem F had an author's solution with induction, which I can call tricky. Also, problems G and H both had deterministic solutions without any randomness. Of course, it's not ok that such solutions pass systests. But in problems like F or G there can be a huge number of strange unintended solutions, and it's very complicated to create tests against all of them. In my opinion, weak pretests enable such tasks to exist. Maybe we should have warned about it in the announcement to make system testing not so disappointing.
» 13 months ago, # | -61 I think this was a really hard and unbalanced contest. I think there should be more easy problems in the contests.
» 13 months ago, # | ← Rev. 7 → -25 Codeforces Global Fail System Test Round 3
» 13 months ago, # | +11 Why is there no one asking for editorial?
» 13 months ago, # | +6 Shouldn't Codeforces at least try full-feedback contests? Many other websites already use full feedback. Why does Codeforces use this system?
• » » 13 months ago, # ^ | +26 Why not?
» 13 months ago, # | -75 Following is my code for the 2nd problem, Born This Way. I don't know about the correctness, as I was getting a runtime error on the 1st TC; however, the other two passed in custom invocation. Please tell me what's the problem here.
int main() {
    ll n, m, ta, tb, k;
    cin >> n >> m >> ta >> tb >> k;
    multiset<ll> s1, s2;
    for (int i = 0; i < n; i++) { ll x; cin >> x; x = x + ta; s1.insert(x); }
    for (int i = 0; i < m; i++) { ll x; cin >> x; s2.insert(x); }
    ll c = 0;
    multiset<ll>::iterator it, temp;
    for (it = s1.begin(); it != s1.end(); it++) {
        ll x = *it;
        auto f = s2.lower_bound(x);
        if (f != s2.end()) {
            temp = it;
            s1.erase(temp);
            s2.erase(f);
            c++;
        }
        if (c >= k) break;
    }
    ll ans = -1;
    for (it = s1.begin(); it != s1.end(); it++) {
        ll x = *it;
        auto f = s2.lower_bound(x);
        if (f != s2.end()) { ans = *f + tb; break; }
    }
    cout << ans;
}
• » » 13 months ago, # ^ | 0 Don't try to erase and iterate through the same container at the same time. Instead use another container with the same contents as of the one which you would like to modify and do further implementation with that dummy container
• » » 13 months ago, # ^ | +5 Please use spoiler tags to share your code, or use a link, it makes it difficult to scroll through the comments if a long code is posted.
» 13 months ago, # | ← Rev. 2 → 0 Can anybody please help point out where I am going wrong in problem B? It failed on the 8th pretest. Using the first flight from A to B, I find the first flight from B to C using binary search. Now I have two options to cancel a flight: a. Cancel the flight from A to B and use the next flight from A to B. b. Cancel the flight I found by binary search and choose the next flight from B to C. I find the time to reach C, and whichever option of the two gives the higher value, I go with that. If in this process the next flight from B to C cannot be chosen, I print -1.
• » » 13 months ago, # ^ | +3 We can cancel at most k flights. Iterate i from 0 to k. Each time, you cancel i flights in A and you have to cancel the k-i flights in B that are the first ones available.
n=10 m=10 t1=1 t2=1 k=3
A = 1 2 3 4 5 6 7 8 9 10
B = 3 4 5 6 7 8 9 10 11 12
If i=0, you have to cancel the k flights in B that are available for a[0]. Every time you have to do the same. I hope you got me.
• » » » 13 months ago, # ^ | 0 I got your approach,It's fine. but what 's wrong with mine?
• » » » » 13 months ago, # ^ | ← Rev. 2 → 0 Your code is hard to read and I think you are missing out on some details. You need to sort A_flights, and when you are checking the i'th flight in the A_flights array you need to make sure that flights 1 to i-1 in the A_flights array are also cancelled (you are probably not doing this). Now you can cancel (k - (i - 1)) flights from B_flights (B_flights is again sorted). This you can do using binary search. You need to make sure of the following: you do not need to cancel flights in B_flights whose starting time is before the starting time in A (the current flight, whichever you are checking), i.e. you do not need to cancel flights with starting_time(B_j) < starting_time(A_i). If you are doing all this, then the fault lies in your implementation.
» 13 months ago, # | 0 First and foremost, read the problem statement carefully and look at the examples. For problem B, I did not read the statement correctly at all and almost ended up solving another variant of problem B, which in my view is somehow not straightforward: the variant in which Arkady minimises the wait time between the two flights from A to B and B to C, and I maximise this time after cancelling at most k flights. Wait time here is the same as the layover time. E.g. A (start from city A) = 1, 3, 4, ta = 1; B (start from city B) = 2, 6, 7, tb = 1. Then arrivals at B are 2, 4, 5. So if k=2, we can cancel the flights from the original A with start times 1 and 4, so that the max wait time is between flight A = 3 and B = 6, and the answer = 2; please note that here there would be no use of tb. I ended up solving this, only to learn at the last moment that this was not what was asked :(
» 13 months ago, # | +45 Editorial?
» 13 months ago, # | ← Rev. 2 → -52 abc
» 13 months ago, # | +18 Editorial??
» 13 months ago, # | ← Rev. 2 → 0 Can anyone please check my following submission for Problem B and figure out what's wrong with it? I got a runtime error on test case 9, and my logic seems to be working fine for all cases I'm dry running it for. Thanks in advance! :) https://codeforces.com/contest/1148/submission/54942140
• » » 13 months ago, # ^ | 0 There is no boundary check for j in here: while(b[j]
• » » » 13 months ago, # ^ | 0 Thank you. Finally submitted it successfully! :)
» 13 months ago, # | +25 How to solve G? Or can anybody give me some hint on how to use gcd(ai,aj)>0 and 2k<=n?
• » » 13 months ago, # ^ | ← Rev. 2 → +8 Find a spanning forest of the complement graph. In the complement graph, if there are more than n/2 components then we have an n/2 clique in the original graph. Otherwise we can choose nodes only from components of size >= 2, taking at least 2 from each component we choose (it can be proved that this is always possible for any k s.t. 6 <= 2k <= n). In this set every vertex has an edge in the complement graph, and hence this set is antifair. For finding the components, bitsets can be used to get a complexity of O(n^2 * MAXP / 64), where MAXP is the max number of prime divisors of a number.
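For reference, the spanning forest of the complement graph never needs the complement built explicitly: keep a pool of unvisited vertices and remove non-neighbours in bulk during BFS. A generic Python sketch of that trick follows (the bitset version mentioned above is the same idea with bit-parallel membership tests); each membership test either removes a vertex or is charged to an edge, so the set operations total O(n + m):
```
from collections import deque

def complement_components(n, adj):
    """Connected components of the complement of the given graph.

    adj[v] is the set of neighbours of v in the ORIGINAL graph."""
    unvisited = set(range(n))
    comps = []
    while unvisited:
        start = unvisited.pop()
        comp, queue = [start], deque([start])
        while queue:
            v = queue.popleft()
            # vertices not adjacent to v in the original graph are
            # complement-neighbours; grab them all at once
            nxt = {u for u in unvisited if u not in adj[v]}
            unvisited -= nxt
            comp.extend(nxt)
            queue.extend(nxt)
        comps.append(comp)
    return comps
```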
» 13 months ago, # | 0 Oh, it's not very good for participants who couldn't submit any code in the first 10 minutes, like me... and I was sad to see that the score I got on each problem was a lot less than the score from 5 minutes before I submitted each problem... (and I just needed 3 points to reach red! I hope I can get a red account in the next round...) All in all, it was not a bad contest, although it needed "a lot of" constructive algorithms and basic tricks in problems A to E. F is an amazing problem with a short statement, but I can't solve it. Can anyone help me with it?
» 13 months ago, # | 0 Can Someone explain how to solve C.
• » » 13 months ago, # ^ | 0 Try ordering the array beginning from the numbers that need to be in the middle, so that we can use positions 1 and N of the array to make swaps. If a number can't go to its right position, we can keep swapping it with position 1 or N until it can be placed in the right position. N = 8, order of verifications: 4, 5, 3, 6, 2, 7, 1, 8. PS: my English is not that good.
» 13 months ago, # | -8 just curious why problem F is named Foo Fighters
• » » 13 months ago, # ^ | +12 Because both Foo and Fighters start with F.
» 13 months ago, # | 0 54973406 My submission is Accepted after the contest, but it's actually wrong. For instance, when it gets the input
6 3
18 75 245 847 1859 26
it outputs
6 5 4
• » » 13 months ago, # ^ | +10 Added your tests into the upsolving, thanks!
» 13 months ago, # | 0 Did anyone's randomized solution for F pass the systests?
» 13 months ago, # | +25 Editorial? Opening Codeforces every 10 minutes just to see if the editorial has been released. :-|
» 13 months ago, # | 0 Solution for G that got accepted: 54972721 How??
• » » 13 months ago, # ^ | 0 I call that phenomenon "Let's create 150 random tests and not write shitty solutions to test their strength"
» 13 months ago, # | +20 Editorial for this contest please.
» 13 months ago, # | ← Rev. 2 → +3 I made a silly bug in F and still got accepted 55273526, so I scanned through the first page of the standings and successfully hacked an accepted deterministic solution. 54935870 egor.lifar Can you add these test cases to the upsolving test cases?
3
0 1
-1 2
-1 3

3
0 1
1 2
1 3

3
0 2
1 1
1 3

3
0 2
-1 1
-1 3
• » » 13 months ago, # ^ | 0 Thank you! Added them.
» 8 days ago, # | 0 Problems similar to Problem C, Crazy Diamond: 432C - Prime Swaps (almost the same statement) and 1365B - Trouble Sort. It is interesting to note that this "temp variable" idea is repeated, and with each repetition the rating of the problem drops.
Mach Number
Written by Jerry Ratzlaff. Posted in Dimensionless Numbers
Mach number, abbreviated as Ma, is a dimensionless number that expresses the ratio of the velocity of flow to the velocity of sound.
$$\large{ Ma = \frac{v}{a} }$$
Where:
$$\large{ Ma }$$ or $$\large{ M }$$ = Mach number (dimensionless)
$$\large{ a }$$ = speed of sound
$$\large{ v }$$ = velocity, speed of the object
Solve for:
$$\large{ v = Ma \, a }$$
$$\large{ a = \frac{v}{Ma} }$$
note about the speed of sound
The speed of sound in this equation is dependent on the density of the medium that the sound is traveling through. For example, the speed of sound through a solid object like a railroad track is much faster than the speed of sound through air at standard conditions.
Mach Number Conversion Table
Multiply Ma = 1 by | To Get
9.646 × 10^7 | feet per day
4.0192 × 10^6 | feet per hour
66,986 | feet per minute, fpm
1,116 | feet per second, fps
1,225 | kilometers per hour
2.94 × 10^7 | meters per day
1.225 × 10^6 | meters per hour
20,417 | meters per minute
340.29 | meters per second
761 | miles per hour, mph
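The table above is just $v = Ma \cdot a$ evaluated in different units. A minimal Python sketch, assuming the standard-conditions value a = 340.29 m/s used in the table:
```
SPEED_OF_SOUND = 340.29  # m/s, standard conditions (assumed, as in the table)

def mach_number(v):
    """Ma = v / a, with v in m/s."""
    return v / SPEED_OF_SOUND

def velocity(ma):
    """v = Ma * a, result in m/s."""
    return ma * SPEED_OF_SOUND

print(mach_number(680.58))   # 2.0
print(velocity(1) * 3.6)     # ~1225 km/h, matching the table row
```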
Solving simultaneous modular congruences
Let $\{n_i\}_{i=1}^k$ be a set of pairwise relatively prime integers- that is, $\mathsf{gcd}(n_i,n_j) = 1$ whenever $i \neq j$.
1. Show that the Chinese Remainder Theorem implies that for any $\{a_i\}_{i=1}^k \subseteq \mathbb{Z}$, there is a solution $x \in \mathbb{Z}$ to the simultaneous congruences $x \equiv a_i$ mod $n_i$ and this solution is unique mod $n = \prod n_i$.
2. Let $n_i^\prime = n/n_i$ for each $i$; note that $n_i^\prime$ and $n_i$ are relatively prime by assumption. Let $t_i$ be the inverse of $n_i^\prime$ mod $n_i$. Prove that the solution $x$ in part (a) is $x = \sum a_it_in_i^\prime$ mod $n$. (Note that these $t_i$ can be found quickly using the Euclidean Algorithm, and thus we can quickly find solutions to the system of congruences for any choice of the $a_i$.)
3. Solve the simultaneous system of congruences $x \equiv 1 \mod 8$, $x \equiv 2 \mod 25$, and $x \equiv 3 \mod 81$ and the system $y \equiv 5 \mod 8$, $y \equiv 12 \mod 25$, and $y \equiv 47 \mod 81$.
1. Since the $n_i$ are pairwise relatively prime, the ideals $(n_i)$ are pairwise comaximal. By the Chinese Remainder Theorem, the map $\varphi : \mathbb{Z} \rightarrow \prod \mathbb{Z}/(n_i)$ is surjective and has kernel $(\prod n_i)$. Consider the element $(\overline{a_i}) \in \prod \mathbb{Z}/(n_i)$; then there exists an element $x \in \mathbb{Z}$ such that $\varphi(x) = (\overline{x}) = (\overline{a_i})$. By the First Isomorphism Theorem for rings, $x$ is unique mod $n$.
2. It suffices to show that this $x$ is indeed a solution. To that end, consider $\varphi(x) = (\sum a_it_in_i^\prime)$. Note that the $j$th coordinate of $\varphi(x)$ is $\sum a_it_in_i^\prime$, taken mod $n_j$. By definition, $n_j$ divides $n_i^\prime$ for all $i \neq j$. Thus the $j$th coordinate of $\varphi(x)$ is $a_jt_jn_j^\prime \equiv a_j$, since $t_j$ is the inverse of $n_j^\prime$ mod $n_j$. Thus $\varphi(x) = (a_i)$ as desired.
3. For both of these examples, we have $n_1 = 8$, $n_2 = 25$, and $n_3 = 81$ (certainly these are pairwise relatively prime). Thus $n_1^\prime = 25 \cdot 81$, $n_2^\prime = 8 \cdot 81$, and $n_3^\prime = 8 \cdot 25$. We wish to invert $n_i^\prime$ mod $n_i$. Since $n_1^\prime \equiv 1$ mod 8, $t_1 = 1$. Since $n_2^\prime \equiv -2$ mod 25 and $25 - 2 \cdot 12 = 1$, $t_2 = 12$. Since $n_3^\prime \equiv 38$ mod 81 and $38\cdot 32 - 15 \cdot 81 = 1$, $t_3 = 32$.
Thus $x = \sum_{i=1}^3 a_i \cdot t_i \cdot n_i^\prime$ $= 1 \cdot 1 \cdot 25 \cdot 81 + 2 \cdot 12 \cdot 8 \cdot 81 + 3 \cdot 32 \cdot 8 \cdot 25$ $\equiv 4377$ mod $8 \cdot 25 \cdot 81$.
Similarly, $y \equiv 15437 \mod 8 \cdot 25 \cdot 81$.
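As a sanity check, here is a small self-contained C++ sketch of the constructive solution from part (b), using the extended Euclidean Algorithm to find the $t_i$; the function names are mine, and the two calls reproduce $x = 4377$ and $y = 15437$:

```
#include <iostream>
#include <vector>

// Extended Euclid: returns g = gcd(a, b) and finds x, y with a*x + b*y = g.
long long ext_gcd(long long a, long long b, long long &x, long long &y) {
    if (b == 0) { x = 1; y = 0; return a; }
    long long x1, y1;
    long long g = ext_gcd(b, a % b, x1, y1);
    x = y1;
    y = x1 - (a / b) * y1;
    return g;
}

// Constructive CRT from part (b): x = sum a_i * t_i * n_i' (mod n),
// where n_i' = n / n_i and t_i is the inverse of n_i' mod n_i.
long long crt(const std::vector<long long> &a, const std::vector<long long> &n) {
    long long N = 1;
    for (long long ni : n) N *= ni;
    long long x = 0;
    for (std::size_t i = 0; i < n.size(); i++) {
        long long Ni = N / n[i];            // n_i'
        long long t, y;
        ext_gcd(Ni % n[i], n[i], t, y);     // t = (n_i')^{-1} mod n_i
        t = ((t % n[i]) + n[i]) % n[i];
        x = (x + a[i] * t % N * Ni) % N;
    }
    return ((x % N) + N) % N;
}

int main() {
    std::cout << crt({1, 2, 3}, {8, 25, 81}) << '\n';   // 4377
    std::cout << crt({5, 12, 47}, {8, 25, 81}) << '\n'; // 15437
    return 0;
}
```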
# sigwin.blackmanharris class
Package: sigwin
Construct Blackman-Harris window object
## Description
Note: The use of `sigwin.blackmanharris` is not recommended. Use `blackmanharris` instead.
`sigwin.blackmanharris` creates a handle to a Blackman-Harris window object for use in spectral analysis and FIR filtering by the window method. Object methods enable workspace import and ASCII file export of the window values.
The following equation defines the symmetric Blackman-Harris window of length `N`:
$w\left(n\right)={a}_{0}-{a}_{1}\mathrm{cos}\left(\frac{2\pi n}{N-1}\right)+{a}_{2}\mathrm{cos}\left(\frac{4\pi n}{N-1}\right)-{a}_{3}\mathrm{cos}\left(\frac{6\pi n}{N-1}\right),\text{ }0\le n\le N-1$
The following equation defines the periodic Blackman-Harris window of length `N`:
$w\left(n\right)={a}_{0}-{a}_{1}\mathrm{cos}\frac{2\pi n}{N}+{a}_{2}\mathrm{cos}\frac{4\pi n}{N}-{a}_{3}\mathrm{cos}\frac{6\pi n}{N},\text{ }0\le n\le N-1$
The following table lists the coefficients:
| Coefficient | Value |
| --- | --- |
| `a0` | 0.35875 |
| `a1` | 0.48829 |
| `a2` | 0.14128 |
| `a3` | 0.01168 |
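For illustration only (this is not part of the `sigwin` API), a standalone C++ sketch of the symmetric equation above, using the tabulated coefficients, might look like this:

```
#include <cmath>
#include <cstdio>
#include <vector>

// Symmetric Blackman-Harris window of length N, per the equation above.
std::vector<double> blackmanharris(int N) {
    const double a0 = 0.35875, a1 = 0.48829, a2 = 0.14128, a3 = 0.01168;
    const double pi = 3.14159265358979323846;
    std::vector<double> w(N);
    if (N == 1) { w[0] = 1.0; return w; }  // a length-1 window is a single 1
    for (int n = 0; n < N; ++n) {
        double x = 2.0 * pi * n / (N - 1);
        w[n] = a0 - a1 * std::cos(x) + a2 * std::cos(2 * x) - a3 * std::cos(3 * x);
    }
    return w;
}

int main() {
    for (double v : blackmanharris(8)) std::printf("%.5f\n", v);
    return 0;
}
```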
## Construction
`H = sigwin.blackmanharris` returns a Blackman-Harris window object `H` of length 64.
`H = sigwin.blackmanharris(Length)` returns a Blackman-Harris window object `H` of length `Length`. `Length` must be a positive integer. Entering a positive noninteger value for `Length` rounds the length to the nearest integer. Entering a 1 for `Length` results in a window with a single value of 1.
## Properties
`Length`

Blackman-Harris window length. The window length must be a positive integer. Entering a positive noninteger value for `Length` rounds the length to the nearest integer. Entering a 1 for `Length` results in a window with a single value of 1.

`SamplingFlag`

The type of window returned, either `'symmetric'` or `'periodic'`. The default is `'symmetric'`. A symmetric window exhibits perfect symmetry between halves of the window. Setting the `SamplingFlag` property to `'periodic'` results in an N-periodic window. The equations for the Blackman-Harris window differ slightly based on the value of the `SamplingFlag` property. See Description for details.
## Methods
| Method | Description |
| --- | --- |
| `generate` | Generates Blackman-Harris window |
| `info` | Display information about Blackman-Harris window object |
| `winwrite` | Save Blackman-Harris window in ASCII file |
## Copy Semantics
Handle. To learn how copy semantics affect your use of the class, see Copying Objects in the MATLAB® Programming Fundamentals documentation.
## Examples
Default length `N = 64` Blackman-Harris window:
```
H = sigwin.blackmanharris;
wvtool(H)
```
Generate length `N = 128` periodic Blackman-Harris window, return values, and write ASCII file:
```
H = sigwin.blackmanharris(128);
H.SamplingFlag = 'periodic';
% Return window with generate
win = generate(H);
% Write ASCII file in current directory
% with window values
winwrite(H,'blackmanharris_128')
```
## References
Harris, Fredric J. "On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform." Proceedings of the IEEE®. Vol. 66, January 1978, pp. 51–83.
# How do you write 9,364 in scientific notation?
May 10, 2016
#### Answer:
$9364 = 9.364 \times {10}^{3}$
#### Explanation:
Choose the exponent of $10$ so that when the decimal point is shifted that many places to the left, the resulting value lies in the range $\left[1.0 , 10\right)$.
In our example, we need to divide $9364$ by ${10}^{3}$ to bring it into range, which gives the mantissa $9.364$.
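As a quick check, most languages can print a number in normalized scientific notation directly; for example, a minimal C++ sketch:

```
#include <cstdio>

int main() {
    std::printf("%.3e\n", 9364.0);  // prints 9.364e+03, i.e. 9.364 x 10^3
    return 0;
}
```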
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)
Published by Pearson
# Chapter 1-2 - Cumulative Review - Page 152: 12
#### Answer
$t=9$
#### Work Step by Step
Using the properties of equality, the solution to the given equation, $\dfrac{2}{3}t+7=13$, is \begin{array}{l}\require{cancel} \dfrac{2}{3}t=13-7 \\\\ \dfrac{2}{3}t=6 \\\\ 2t=3(6) \\\\ t=\dfrac{3(6)}{2} \\\\ t=\dfrac{3(\cancel{2}\cdot3)}{\cancel{2}} \\\\ t=9 .\end{array}
# Understanding the Iterative Algorithm in Data-Flow Analysis
This blog aims to help you understand the iterative algorithm and the worklist algorithm, which are the foundation of data-flow analysis and also the most confusing part for beginners (in this case, me). Much of the content in this blog was summarized from the Static Program Analysis lectures by Nanjing University. Thanks a lot to Prof. Yue Li and Prof. Tian Tan for presenting such an amazing course!
# Windows Study Notes (1): A First Look at the WoW64 Mechanism
WoW64 (Windows 32-bit on Windows 64-bit) is a compatibility mechanism provided by 64-bit Windows. WoW64 can be thought of as a 32-bit emulation environment created by the 64-bit Windows system, allowing 32-bit executables to run normally on a 64-bit operating system. The main goal of this article is to explore the initialization process of a WoW64 process.
### doreshnikov's blog
By doreshnikov, history, 4 weeks ago,
Hello Codeforces!
Once again, we are glad to invite you all to Codeforces Round #753 (Div. 3), the third division round held on Nov/02/2021 17:35 (Moscow time). This round was prepared (again) by me and MikeMirzayanov. The problems are different but our anticipation is the same :) Hope you will like the problems we've prepared!
Big thanks to MikeMirzayanov for great ideas as well as for helping me with writing good statements and better tests. I'm still a bit slow with some aspects of preparing problems so it's a really noticeable help for me. (UPD: as of this moment, it became much more noticeable, so, really, thanks a lot!)
Also special thanks for testing the round to 01aglupal, From_ITK18_With_Love, Mazaalai, GusMG, nizamoff, 2020akadaver, I_Remember_Olya_ashmelev, p0kemanbr, KoftaIQ and, as usual, to Gassa for valuable comments! And last but not least, thanks to everyone who'll be participating! This round contains 8 problems and is expected to have a decent level of difficulty for participants with ratings up to 1600. However, all of you who wish to take part and have a rating 1600 or higher, can register for the round unofficially.
The round will be hosted by the rules of educational rounds (extended ACM-ICPC). Thus, during the round, solutions will be judged on preliminary tests, and after the round there will be a 12-hour phase of open hacks. We tried to make the tests strong enough, but that doesn't at all guarantee that the open hacks phase will be pointless.
You will be given 8 problems and 2 hours to solve them. We remind you that the penalty for the wrong submission in this round (and the following Div. 3 rounds) is 10 minutes. Also please note that the number of problems has slightly increased since the last edit.
Remember that only the trusted participants of the third division will be included in the official standings table. As written at the link, this is a compulsory measure for combating unsporting behavior. To qualify as a trusted participant of the third division, you must:
• take part in at least two rated rounds (and solve at least one problem in each of them)
• not have a rating of 1900 or higher.
Regardless of whether you are a trusted participant of the third division or not, if your rating is less than 1600, then the round will be rated for you.
Good luck to everyone!
UPD: Editorial is out!
• +320
» 4 weeks ago, # | +5 I hope to become Expert in this round. Please, wish me luck :)
• » » 4 weeks ago, # ^ | 0 I hope to cross 1500 :)
• » » » 4 weeks ago, # ^ | -32 I hope to cross tourist;)
» 4 weeks ago, # | ← Rev. 3 → +2 Looking forward to great problems in this round! And I find that there are no testers in this round.
» 4 weeks ago, # | 0 i hope to add 50 points
• » » 4 weeks ago, # ^ | +6 you can make it :)
• » » » 4 weeks ago, # ^ | +1 Hey I don't see any changes in rating. When will they be updated?
• » » » » 4 weeks ago, # ^ | +1 Yes same thing I too wanted to ask
• » » » » » 4 weeks ago, # ^ | 0 this round is considered unrated in my contests don't know why.
• » » » » » » 4 weeks ago, # ^ | 0 how do you know that?
• » » » » » » » 4 weeks ago, # ^ | 0 Just go to your CONTESTS tab and see all unrated contests there this round will be there
• » » » » » » » 4 weeks ago, # ^ | 0 Yeah, just saw it.Same for me, it's listed under the unrated contests.
• » » » » » » 4 weeks ago, # ^ | 0 Are you sure that if it's listed there then it means it will not be rated for us? because I feel it's listed under the unrated contests because they didn't finish the rating calculation.
• » » » » » » » 4 weeks ago, # ^ | 0 I don't know actually just wait that's all we can do
• » » » » » » » » 4 weeks ago, # ^ | 0 Bro, Div 3 contests are rare for us beginners, and only about one happens in a month; if they make it unrated, where does that leave us?
• » » » » » » » » » 4 weeks ago, # ^ | +2 It's not unrated, guys; it always appears in the unrated section until the rating updates. This contest is rated, don't worry!!!!!
• » » » » » » » » » 4 weeks ago, # ^ | 0 Oh, Im so worry
• » » » » » » » » » 4 weeks ago, # ^ | ← Rev. 2 → 0 Such a relief to hear that, but do you know when is the update of rating?UPD: nevermind, +173 is here
• » » » » » » » » » 4 weeks ago, # ^ | 0 same
• » » » » » » » » » 4 weeks ago, # ^ | ← Rev. 3 → 0 wow, hi, someone from the same fanbase :))
• » » » » » » » » » 4 weeks ago, # ^ | 0 Hii :)) might as well practice English while we're at it :)))
• » » » » » » » » » 4 weeks ago, # ^ | 0 =)) add me as a friend, we can learn from each other
» 4 weeks ago, # | ← Rev. 2 → -21 Note the unusual time of the round.
• » » 4 weeks ago, # ^ | +2 After the third contest it isn't funny anymore
» 4 weeks ago, # | -15
» 4 weeks ago, # | +38 My first unrated round and this means a lot to me. It feels a lot better than I thought. I put a lot of effort into this in last 3 months and now this feeling is different kind of motivation to improve further. Codeforces has been sort of home to me. A big thank you to the best place on internet.
• » » 4 weeks ago, # ^ | +2 U cheated in codeforces round 742 div2. Any explanation for that?
• » » » 4 weeks ago, # ^ | ← Rev. 2 → +56 No explanation. There can't be any explanation for cheating ever. I still regret it and never did that again. I was deservedly skipped from that contest. And I can only say one thing and assure you that It will never ever happen again. I never should have done that. I'm Sorry.
• » » » » 2 weeks ago, # ^ | 0 where are you from???
• » » » 4 weeks ago, # ^ | 0 I think there is no way to cheat on codeforces, codeforces have too good security to cheat.
• » » » 4 weeks ago, # ^ | 0 how do you know that he cheated??
• » » » 4 weeks ago, # ^ | 0 What do you mean he cheated? Did he copy some other submission or something. How does cheating work on codeforces? I genuinely don't know — hence asking.
• » » 4 weeks ago, # ^ | -33 U cheated in codeforces round 743 div2. Any explanation for that?
• » » » 4 weeks ago, # ^ | ← Rev. 2 → +25 NO, I DIDN'T. I did it only once, I have accepted it, and I will never ever do it again.
• » » » 4 weeks ago, # ^ | +13 That contest was made unrated for everyone.
• » » » » 4 weeks ago, # ^ | +22 Yes, right. Due to long queue issues.
» 4 weeks ago, # | 0 Hope to cross 1400.
• » » 4 weeks ago, # ^ | -10 Sir, I have one doubt. Is it difficult to increase one's rating in a Div. 3 round after one has become Pupil? Because in the last Div. 3 round, I solved 4 problems and gained only +3. Moreover, the predictor was indicating -10.
• » » » 4 weeks ago, # ^ | +20 no, just solve more
• » » » 4 weeks ago, # ^ | 0 I think rank matters more than the number of problems solved.
• » » » 4 weeks ago, # ^ | 0 I gained +42 in last div 3 by solving 4 questions try to solve them quickly and without any wrong submission.
» 4 weeks ago, # | +14 Is it unrated?
• » » 4 weeks ago, # ^ | +2 for you, Yes
• » » » 4 weeks ago, # ^ | 0 so rude...
» 4 weeks ago, # | -12 My first unrated round!
• » » 4 weeks ago, # ^ | 0 me,too
• » » » 4 weeks ago, # ^ | 0 Hope to join you guys after today
• » » » » 4 weeks ago, # ^ | 0 +1
» 4 weeks ago, # | 0 Hello everyone! Good luck to everyone on the contest!
» 4 weeks ago, # | ← Rev. 2 → 0 hope i solve 5 question
» 4 weeks ago, # | 0 I hope to add +50 to remove Grey Tag :)
» 4 weeks ago, # | 0 I hope to cross 800 cause this is my first time and I'm new to coding ^_^
• » » 4 weeks ago, # ^ | 0 It's not so ez) but wish u luck
» 4 weeks ago, # | +4 Hope to solve 5 problems.
» 4 weeks ago, # | 0 Wish I could also comment My first unrated round. LOL!
• » » 4 weeks ago, # ^ | 0 How did you type like that?
• » » » 4 weeks ago, # ^ | -103 Source CodeWish I could also comment My first unrated round. LOL!
» 4 weeks ago, # | +11 I hope to return to monke in this round.
• » » 4 weeks ago, # ^ | +1 Every contest, I stray further from monke :(
» 4 weeks ago, # | 0 Div 3 contests are real fun! Hoping for a good one!!
» 4 weeks ago, # | ← Rev. 2 → 0 It's gonna be my first unrated div3 contest
» 4 weeks ago, # | 0 GL HF!
• » » 4 weeks ago, # ^ | 0 GG WP!
» 4 weeks ago, # | +26 Perfect. A round on my birthday where i can't lower my rating :D
• » » 4 weeks ago, # ^ | +15 Happy birthday!
• » » 4 weeks ago, # ^ | ← Rev. 2 → +3 Wishing you Good Luck!
• » » 4 weeks ago, # ^ | +3 Good luck!!! :)
» 4 weeks ago, # | +3 Hope to become pupil in this Round, wish me luck :)
» 4 weeks ago, # | 0 Guys please make div 3 contests on the weekends we can't attend them because of school and different time zones thanks
» 4 weeks ago, # | +24 As a tester, I want contribution. :)
» 4 weeks ago, # | 0 We tried to make tests strong enough but it doesn't at all guarantee that open hacks phase will be pointless. Thank you for your honesty.
» 4 weeks ago, # | +16 As a first time tester, I must say: most importantly, please upvote this comment, so that my contribution becomes non-negative :') Bonus #1: I have tested and hence cannot participate in the official round :D Bonus #2: Problems are very interesting. Make sure you read all the problems.
» 4 weeks ago, # | 0 Hopefully I will become Specialist after this contest.
• » » 4 weeks ago, # ^ | 0 wish luck
» 4 weeks ago, # | +17 As a tester, i feel problems are very interesting. Hope you get more rating in this contest.Have a nice day :).
• » » 4 weeks ago, # ^ | +3 Thank you!
» 4 weeks ago, # | +10 I'm +3 away from being green, wish me a good luck guys.
» 4 weeks ago, # | +15 At this point, it's really become necessary to remind this:note the usual start time
» 4 weeks ago, # | +12 is expected to have a decent level of difficulty for participants with ratings up to 1600. Such a nice word, decent. As I have to say, I'm always hoping to solve those decent problems in Div. 3 during the contest. But almost always, it turns out that those problems are too hard for me, or even for someone else with a rating higher than 1600, to pass QWQ. Wish me luck solving some of these decent problems in the upcoming Div. 3 round today.
» 4 weeks ago, # | +16 Hoping to recover points which i had lost on the previous round :)
» 4 weeks ago, # | +4 i guess that div3 is a great choice for the first round ever
» 4 weeks ago, # | +2 50 points away from expert, wish me good luck !
» 4 weeks ago, # | 0 Expert to be ♥
» 4 weeks ago, # | 0 Where is vovuh :(
» 4 weeks ago, # | +1 i have a feeling that my solution for D is wrong and gets ac
» 4 weeks ago, # | +12 GG crappy memory limit like FHC
» 4 weeks ago, # | +4 can't wait for the editorial, G is a new thing for me
» 4 weeks ago, # | ← Rev. 2 → +24 Infinite loop / massive stack memory usage case for MLE on test case 4 of F? Is there no way to get AC without building the solution iteratively?
• » » 4 weeks ago, # ^ | 0 Just stack memory usage. I had to stop using vector of vectors and use normal functions instead of std::function to get AC.
• » » 4 weeks ago, # ^ | ← Rev. 2 → -30 Accurate enough DFS implementations were accepted, most of the testers' solutions used DFS and got AC :)
• » » » 4 weeks ago, # ^ | +19 Anything that stands out to you as problematic in this solution? 134105154 The three cases are: in general not yet calculated, cycle found, and value already known. I can't really think of much that's removable except removing $on_cycle$ and using a reference to a common int for the cycle start node or something like that. Anyway, is there a reason for the constraints to be so high in this problem? I can't think of any incorrect solution that needs to be cut off that won't also fail for $n \times m \leq 3 \times 10^5$
• » » » » 4 weeks ago, # ^ | ← Rev. 3 → -27 As I see it:
- gridx and gridy arrays are unnecessary, there's no point in storing them explicitly
- solve() can be slightly optimized by making curval global (and x and y also, but it will make the code a bit less readable). Also, local variables can be inlined
I understand that these kinds of optimizations are not what you expect from Div. 3, but the main expected solution doesn't use dfs at all, so it's not like we wanted to fail dfs explicitly, we just didn't tune the ML for any dfs to pass. UPD1: (sorry, previous UPD wasn't true, fixed) UPD: We just re-checked your solution in Polygon with double ML after making some modifications and it passed, so I guess we should've made the ML larger. Sorry about that :(
• » » » » » 4 weeks ago, # ^ | +11 My solution failed because of that too. I considered that to be a problem. But I didn’t believe that it was a true reason to get ML. After the contest I wrote solution without dfs and it passed
• » » » » » 4 weeks ago, # ^ | 0 Just wondering, in Polygon, does increasing the ML also increase the stack limit (the option given to gcc -Wl,--stack=268435456)? I am debugging in gym and it doesn't seem like setting a higher memory limit changes the stack size. You still get a RTE/stackoverflow when you use above 250ish mb, which is making this really annoying to debug.
» 4 weeks ago, # | +65 I am completely appalled that you need to do a constant optimization in MEMORY on an official CF round. There are currently 15 PAGES of MLE solutions on F. Even using std::function at all gives MLE. Actually disgusting.
» 4 weeks ago, # | ← Rev. 3 → +10 Was it really that necessary to make the limits tight for problem F, so that scc with recursion would TLE/MLE ??? Was it ???
» 4 weeks ago, # | 0 How to solve D?
• » » 4 weeks ago, # ^ | +1 My idea was greedy: take the blue-marked numbers and the red-marked numbers in ascending order and keep a variable can = 1. Iterate through the blue numbers: if (blue_number >= can) then can++, else we cannot convert this number to any other number in the permutation, so ans = NO. Similarly for the red numbers: if (red_number <= can) then can++ for each red_number; if this too fails, then ans = NO. In all other cases ans = YES. A sketch follows below.
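A minimal C++ sketch of the greedy described above (the function and variable names are mine, and the input format is inferred from the test case quoted later in this thread):

```
#include <bits/stdc++.h>
using namespace std;

// Greedy check: blues (only decreasable) must cover 1..can in
// ascending order, then reds (only increasable) cover the rest.
bool feasible(vector<long long> blue, vector<long long> red) {
    sort(blue.begin(), blue.end());
    sort(red.begin(), red.end());
    long long can = 1;
    for (long long x : blue) {
        if (x >= can) can++;   // x can be decreased down to 'can'
        else return false;
    }
    for (long long x : red) {
        if (x <= can) can++;   // x can be increased up to 'can'
        else return false;
    }
    return true;
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<long long> a(n);
        for (auto &x : a) cin >> x;
        string s;
        cin >> s;
        vector<long long> blue, red;
        for (int i = 0; i < n; i++) (s[i] == 'B' ? blue : red).push_back(a[i]);
        cout << (feasible(blue, red) ? "YES" : "NO") << "\n";
    }
    return 0;
}
```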
» 4 weeks ago, # | 0 is G greedy?
• » » 4 weeks ago, # ^ | 0 Yep
• » » 4 weeks ago, # ^ | ← Rev. 2 → 0 Yup, identify the maximum prefix increase of a - b possible for the first $i$ nodes. Then as we go backwards make operations so the difference is below the max prefix possible and as close below it as we can get (so it lies above the min prefix possible). I think the intuition is clear, but I do hand wave away some details so maybe there is a small edge case I'm missing and I'll get hacked. Code:

```
#include <bits/stdc++.h>
#define int long long
using pii = std::pair<int, int>;
using namespace std;
const int maxn = 2e5 + 5;
int t, n, m, a[maxn], b[maxn], maxpref[maxn]; // max increase to a - b
int32_t main() {
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    cin >> t;
    for (int cases = 0; cases < t; cases++) {
        cin >> n >> m;
        int need = 0;
        for (int i = 1; i <= n; i++) {
            cin >> a[i] >> b[i];
            int takehi_a = min(a[i], m);
            int takehi_b = m - takehi_a;
            int takelo_b = min(b[i], m);
            int takelo_a = m - takelo_b;
            maxpref[i] = maxpref[i - 1] + (takehi_a - takehi_b);
            need += a[i] - b[i];
        }
        vector<pii> ops;
        for (int i = n; i >= 1; i--) {
            int takehi_a = min(a[i], m);
            int takelo_a = m - min(b[i], m);
            int target = need - maxpref[i - 1];
            // Swapping 1 from a to b changes the diff by an even amount
            if ((target & 1) != (m & 1))
                target--;
            // solve linear equations x + y = m and x - y = target, but constrain by limits we know
            int takea = max(min((m + target) / 2, takehi_a), takelo_a);
            int takeb = m - takea;
            ops.push_back({takea, takeb});
            need -= (takea - takeb);
        }
        cout << abs(need) << "\n";
        reverse(ops.begin(), ops.end());
        for (auto x : ops)
            cout << x.first << " " << x.second << "\n";
    }
    return 0;
}
```
• » » 4 weeks ago, # ^ | +12 A simple-ish way to think about G is as follows:Given values of a[i] and b[i], there is a minimum amount of each that the tester must eat (sometimes 0). Iterate once through and assign this minimum amount. This leaves a remaining 'variable amount' for each dish, and a starting difference. Iterate through again and assign this variable amount in such a way as to bring the difference back as close to 0 as possible.
• » » » 4 weeks ago, # ^ | +12 It feels so good when someone has done the exact same thing that you did and got AC .. Glad I could think this in time..
» 4 weeks ago, # | +21 Am I the only one for whom was the 2 hours too tight for this contest?
• » » 4 weeks ago, # ^ | 0 Nop
» 4 weeks ago, # | +14 such a bad div 3 round I have ever given. Authors div 3 is for newbies, so kindly make the problems for newbies and not for pros.
» 4 weeks ago, # | 0 Where is the solution to these questions?
» 4 weeks ago, # | 0 Someone please explain problem G, I don't understand :( Example test case:

```
input:
3 6
1 8
1 9
30 10

output:
7
1 5
1 5
6 0
```

Why is the first line = 7? @@
• » » 4 weeks ago, # ^ | 0 the minimum imbalance after eating for that testcase is 7
• » » » 4 weeks ago, # ^ | 0 Yeah out 7: 1 5 -> 1 5 -> 6 0 but abs( (1 + 1 + 6) — (5 + 5 + 0) ) = 4 or did I misunderstand
• » » » » 4 weeks ago, # ^ | +1 its the absolute value of what is left, not what is eaten
• » » » » » 4 weeks ago, # ^ | 0 well ~~
• » » 4 weeks ago, # ^ | +3 You need to maximize the balance of dishes after the taster eat some of them, not maximize the balance the dishes eaten by the taster.Really annoying statement :(
• » » » 4 weeks ago, # ^ | 0 oh, can you explain with an example pls :(
• » » » 4 weeks ago, # ^ | 0 I asked exactly for that while contest.
» 4 weeks ago, # | +13 Really annoying G for its unclear statement. The taster needs to maximize the balance of dishes WHICH IS LEFT AFTER EATING BY HIM, not eaten by him. It drops me from rk 40~ to 170 :(
• » » 4 weeks ago, # ^ | 0 doesnt the first testcase show what they meant?
• » » » 4 weeks ago, # ^ | 0 I didn't saw it until I finish my program. I think the statement is clear enough, but it doesn't.
» 4 weeks ago, # | +7 chad
• » » 4 weeks ago, # ^ | +14 You don't need Tarjan x) The graph is a functional graph so it's enough to just iterate while you don't cycle and keep track of the nodes you're visiting in a vector. Then, if you visit a node X you visited in the same run, there is a cycle starting at node X and you can recover the cycle with your vector. See my code for more details: https://codeforces.com/contest/1607/submission/134137875
» 4 weeks ago, # | 0 I wish , if i could have started this contest 1hr early on time :(
» 4 weeks ago, # | +19 the MLE of F is awful
» 4 weeks ago, # | +114 I believe, F should belong to an educational round, not to a div3 round. Like I am not against teaching people how to write dfs using stack (though I believe that the authors who design such problems are quite... strange), but asking a beginner to do this optimization? For me it looks like the best way to discourage them from doing cp.
• » » 4 weeks ago, # ^ | -7 It's true that DFS is one of the first things that come to mind when one sees this problem, but it's not the only thing that could be used...The ML wasn't set up to explicitly fail all DFS solutions (and I mean it), we just expected a solution without DFS as we know it so we didn't tune the ML for any DFS to pass.Since if you think about the board as a graph, each vertex has only one outcoming edge and you can walk through the graph with a single loop without even knowing that such thing as DFS exists. And that's what actually written in main solution.
• » » » 4 weeks ago, # ^ | 0 Actually there are some DFS solutions be able to be optimized enough to pass. Yet they are very rare compared to those get MLE.
• » » » 4 weeks ago, # ^ | ← Rev. 2 → +60 Well, that's great that you have a solution that does not need any additional optimization. But it seems you don't get the point, the thing is that failing recursive solutions (with correct time and space complexities) is not nice. Did you not think of dfs while setting up this problem? That's really strange, because that's the first thing that comes to mind.I believe that in beginners contest (in any contest, actually, but for beginners it's even more important) the authors should try to cut off solutions with bad complexity and allow solutions with good complexity to pass comfortably and I also belive that this does not hold for this problem.UPD. So you say that none of the testers solutions were close to ML, while having recursive dfs inside? Seems unbelievable. Or you mean that they were able to squeeze dfs into time limit? This is possible, of course, but really, you wanted 1600- rated people to optimize dfs?
• » » » » 4 weeks ago, # ^ | +75 Maybe the authors wanted to SendThemToHell
• » » » » 4 weeks ago, # ^ | +40 I understood your point, yes. The fact that in the end this task was challenging not because of it's algorithmic difficulty but because of the memory limit is kinda frustrating, this was not how it was intended.Well, I can't argue with the fact that this Div. 3 is not as well-adjusted as the previous one, sorry for that. We'll try to make the next one more pleasant to solve.
• » » » » » 4 weeks ago, # ^ | ← Rev. 2 → +3 I solved 7 problems in 1 hour. And couldn't optimize dfs to pass in the remaining hour of the contest :( Everything else is super. Thanks for the great contest :) Thanks for the interesting problems!
• » » » » 4 weeks ago, # ^ | ← Rev. 2 → +3 Yes, most of the recursive solutions fail just because of MLE. But my main point is that DFS solutions optimized heavily enough to pass are very rare; I did not mean that DFS is enough to pass. I think the authors made a miscalculation in using 256MB instead of 512MB, killing at least around half of the solutions. And some of my CM friends still find it very confusing to optimize further; some even spent an hour without being able to solve it.
• » » » 4 weeks ago, # ^ | +9 But is there a solution that even needs to be cut off? If not why set the constraints so high? Additionally you mentioned earlier that some testers had dfs solutions, were none of them close to the memory limit?
• » » 4 weeks ago, # ^ | -10 the graph is a functional graph so you can just use a loop: https://codeforces.com/contest/1607/submission/134137875 (although I used a DFS during the round and had a few problems)
• » » » 4 weeks ago, # ^ | +1 Is there a prerequisite technique on how to find the longest path in a functional graph? I don't get your solution; what do the while loops do?
• » » » » 4 weeks ago, # ^ | ← Rev. 12 → +4 First, why do I have that many downvotes :') ?!! I'll try to explain my solution step by step. First notice that in the graph, each node has at most one child. So basically, any connected component has at most ONE CYCLE. Indeed, let's assume you're currently constructing the connected component starting from node $1$. When we're at node $i$ we have two choices: we can either add an edge to node $i + 1$ (so the graph looks like a path and we don't have any cycle) or we can add an edge to some node $j$ such that $j < i$ and we'll create a cycle. Notice that after a cycle has been created, we can't add any more edges. Now let's say we found the cycles. For a given cycle, the length of the longest path is the same for each node of the cycle (it's the size of the cycle). So now we know that a functional graph looks like

```
          x
          |
          v
x -> x -> x -> CYCLE
```

Imagine it as linking some sort of directed tree (where all the edges are directed toward the root) to a cycle. About my code: what you can do is store for each node: 1) if it has been visited, 2) if we are visiting the node (this means the node is part of the path we are exploring right now), 3) the longest path starting from this node. In my code $ans[i][j] == 0$ if the node hasn't been visited, $ans[i][j] == -1$ if we're visiting the node, else it's the longest path starting from this node. Now let's iterate over each node $u$. If the node hasn't been visited, let's start a walk in the graph (basically we explore the unique path starting from node $u$). We'll keep track of a vector of all the nodes in the path ($curCycle$ in my code) and we'll also remember the length of the path ($cnt$ in my code). This is what I do in the first while loop. Let's say the child of our current node is $v$. If it's the first time we see it then we move to $v$ and keep exploring (don't forget to update $curCycle$ and $cnt$). If we already computed an answer for node $v$, we simply increment $cnt$ by this answer and break. Now, if $v$ is already part of our path (in my code it's $ans[i][j] == -1$), it means we cycled. So we're going to look for the last occurrence of $v$ in our array $curCycle$. All the nodes after this occurrence are part of the cycle and their answer should be updated accordingly. Notice that as we cycled, we can't expand our path anymore. Now we also need to update the answers of all the nodes in the path (nodes which are outside of the cycle). So we basically start again to walk from node $u$ and we set its answer to $cnt$. Then when we move to the child of the current node, $cnt$ should decrease because the length of the path is reduced by $1$. The time complexity is $O(N)$ where $N$ is the number of nodes (here $N = nm$). Indeed, we visit each node exactly once, because after a node has been visited, its answer is remembered and we'll never explore its path again. Essentially, finding cycles in a functional graph is the same as finding whether there is a cycle in a directed graph (see CSES Round Trip). The only difference is that: 1) As the graph is pretty simple we can use a while loop instead of a DFS. 2) We're actually finding ALL the cycles of the graph, because each connected component has at most one cycle. About the other part of the algorithm, imagine you "compress" the cycles into one big node with its answer = the length of the cycle.
We now have a DAG (and more specifically it's a kind of directed chain) so we can apply DP (here we have only one transition per node). The while loops are just a more convenient/efficient way to implement the algorithm. A few problems about functional graphs: Usaco silver, Swapity Swapity Swap; Usaco silver, Dance Mooves; Usaco gold, Exercise (well, this one is a bit less related, but it's an interesting problem). I hope my explanations were clear; if they weren't, just ask me and I'll do my best to explain :)
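For reference, a compact C++ sketch of this loop-based idea on a flattened graph (the names are mine, not taken from the linked submission; `next[v]` is the single outgoing edge of node `v`):

```
#include <bits/stdc++.h>
using namespace std;

// Longest walk length from every node of a functional graph (each node
// has exactly one outgoing edge), using a plain loop instead of DFS.
// ans[v] == 0: unvisited, -1: on the current path, > 0: final answer.
vector<long long> longestWalk(const vector<int>& next) {
    int n = next.size();
    vector<long long> ans(n, 0);
    for (int s = 0; s < n; s++) {
        if (ans[s] != 0) continue;
        vector<int> path;
        int u = s;
        while (true) {
            ans[u] = -1;
            path.push_back(u);
            int v = next[u];
            if (ans[v] == -1) {                  // closed a cycle at v
                int i = (int)path.size() - 1;
                while (path[i] != v) i--;        // cycle is path[i..end]
                long long len = (long long)path.size() - i;
                for (int j = i; j < (int)path.size(); j++) ans[path[j]] = len;
                for (int j = i - 1; j >= 0; j--) ans[path[j]] = ans[path[j + 1]] + 1;
                break;
            }
            if (ans[v] > 0) {                    // reached an already-solved node
                for (int j = (int)path.size() - 1; j >= 0; j--)
                    ans[path[j]] = ans[next[path[j]]] + 1;
                break;
            }
            u = v;
        }
    }
    return ans;
}

int main() {
    // 0 -> 1 -> 2 -> 0 (cycle of length 3), 3 -> 0 (tail)
    for (long long v : longestWalk({1, 2, 0, 0})) std::cout << v << ' ';  // 3 3 3 4
    return 0;
}
```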
• » » » » » 4 weeks ago, # ^ | 0 Thanks a lot! Really helpful.
» 4 weeks ago, # | +1 Why the memory limit for F is so tight ? As a beginner I find it very hard to optimize memory further more (though there are better algorithm but I cant just think of it)
• » » 4 weeks ago, # ^ | +3 The same for me. Seems recursive programming does not work. Have to do in a loop.
• » » » 4 weeks ago, # ^ | 0 Most of my friends using DFS failed due to MLE (and there are CMs among them too). Just one of them was able to pass, using Kosaraju instead of Tarjan.
• » » » » 4 weeks ago, # ^ | 0 wait, did you really compressed the graph using strongly connected components ?
• » » » » 4 weeks ago, # ^ | 0 You wrote it the opposite way, maybe; I believe he would have used Tarjan, as Kosaraju takes more memory. I too passed it in a recursive way, but I had to find cycles by simple DFS. And the code is still on the edge of MLE.
• » » » » 4 weeks ago, # ^ | +3 But isn't Kosaraju also dfs?
• » » » » » 4 weeks ago, # ^ | 0 I mean that he is the only one among us who used DFS and was able to pass lol
» 4 weeks ago, # | 0 It was timeforces.!
» 4 weeks ago, # | 0 Someone please tell me why my code is not working in problem D.134132886
• » » 4 weeks ago, # ^ | 0 You forgot return statement in the flag == 1 condition :/
• » » 4 weeks ago, # ^ | 0 You misunderstood the problem. The numbers can be completly out of range of 0..n, so the code does not work at all.
• » » » 4 weeks ago, # ^ | 0 No, they added specific conditions for 'B' and 'R'. Since 'B' can only be decreased by 1 and 'R' can only be increase by 1, it seems right to me.
• » » » » 4 weeks ago, # ^ | 0 Consider in example all red numbers bigger than n. Obviously output must be No.
• » » » » » 4 weeks ago, # ^ | 0 Sorry I don't get your point. This is exactly what is done in their code + what I explained.
• » » » » » » 4 weeks ago, # ^ | 0 No, the code does not check this. The var temp1 never gets incremented if values are out of range 1..n, so output is never No.
• » » » » » » » » 4 weeks ago, # ^ | ← Rev. 2 → +1 Parts of the code:

```
if (s[i] == 'R') {
    if (arr[i] > n) {
        flag = 1;
    }
    mp2[arr[i]]++;
}
```

and

```
if (flag == 1) {
    cout << "NO" <
```
• » » » » » » » » 4 weeks ago, # ^ | 0 Thanks. it works now.
• » » » » » » » » 4 weeks ago, # ^ | 0 Ah, ok, I did not see that ;)
» 4 weeks ago, # | ← Rev. 2 → 0 Can anyone help me understand why this is TLE on F? https://codeforces.com/contest/1607/submission/134130569I'm pretty sure it's just linear time complexity DFS with the board size, but why TLE?Is this because it's using recursive and cause stack overflow?
• » » 4 weeks ago, # ^ | +5 I see you're using std::map and std::set in your code, both of them will make total time complexity of code as: $t \times n \times m \times \log(n \times m)$ which will surely give TLE over all test cases :(
• » » » 4 weeks ago, # ^ | 0 even if it's using map and set, the complexity is 4*10^6 log (4*10^6) which is less than 100 millions. Why would that TLE over all test cases?
• » » » » 4 weeks ago, # ^ | 0 100 millions operations is at the border of the time limit per cases
» 4 weeks ago, # | -14 Div.3 sucks
» 4 weeks ago, # | ← Rev. 3 → 0 134133989 why my submission giving tle. please help. I had used multiset.
» 4 weeks ago, # | 0 How to solve B? :(
• » » 4 weeks ago, # ^ | 0 The key observation is that after 4 steps you are where you started.
• » » 4 weeks ago, # ^ | 0 Just observe all cases of n%4. Take any random case and try doing some observations. You will surely be able to do it.
» 4 weeks ago, # | +3 I really like it to solve so much problems as in div3 possible. But today there was also a big difficulty gap from E to F,G,H.
• » » 4 weeks ago, # ^ | 0 please see my solution also
» 4 weeks ago, # | 0 This input data is from Test Case #3 of problem D. Test case:

```
1
20
16 1 17 10 5 2 13 34 20 24 2 9 17 14 31 3 1 8 34 12
RRBRBBRBBBBRRRBRRRBR
```

Hi doreshnikov, could you please help me? I wanted to know what the logic behind this test case is. I have stress tested my solution on 10000 random inputs of array length 20, but my generator couldn't catch this.
• » » 4 weeks ago, # ^ | +3 Not sure what's so exceptional about this test. If you sort all the numbers by (color, value), you get something like thisAs you can see, all blue numbers can be decreased to the corresponding number from permutation and all red numbers can be increased to get the number from permutation (the first number in the row is the expected number from permutation).It could be the fact that there is a Blue number that you don't have to apply operations to (2), but I was sure a similar case was in the example test...
» 4 weeks ago, # | 0 WTF! Why do you put a blank line in the input?
• » » 4 weeks ago, # ^ | +5 If there was no blank line it would be hard to know which testcase is which. The extra blanks don't affect input
• » » 4 weeks ago, # ^ | +9 So it is easier to distinguish tests in a multitest when you read it
» 4 weeks ago, # | -30 B was trash
• » » 4 weeks ago, # ^ | +20 It was a pretty straightforward observation after dry running any testcase that we land at the starting position after every 4 jumps.
• » » » » 4 weeks ago, # ^ | -31 oh, is it so straightforward? Please teach me how to make observations. Observations suck!
• » » » » 4 weeks ago, # ^ | +20 Observations are quite important in the world of competitive programming :) it's pretty valid advice from yasserkhan45: if you can't see the answer immediately, experiment with some test cases. As is pointed out, a single test case was enough to see what happens in general, and a small modification was required for an odd starting position.
• » » » » » 4 weeks ago, # ^ | 0 What to do if I still can't see through, happens with me most of the times, observational questions are the ones that take up most of my time in a contest, for most people they are straightforward but for me :(, any advice on how to improve?
• » » » » » » 4 weeks ago, # ^ | +2 If observation doesn't work, sometimes it helps to write a solution that you know is slow and will not pass but is really easy to implement (in this case it's just to simulate the process).Either you'll find a way to optimize it later so it would get OK (not in this particular problem though) or you'll have a way to search for patterns in answer a bit faster. In IOI format it also may help you to get at least partial score.If nothing else, at least you'll have a solution you can stress-test your main solution with if something goes wrong with it.
• » » » » » » 4 weeks ago, # ^ | +4 There's no catch-all answer here and I don't want to reel off any cliches, but:
- Practice really does make an enormous difference here: the more questions you solve, the more your past experience can inspire the right ideas.
- Limits often provide a clue. The limits here were big, so it was clear there must be some sort of pattern that did not require us to iterate over all moves.
- Look for patterns. Here, if you choose a starting point of 0 and iterate for a few moves, you get [0, -1, 1, 4, 0, -5, 1, 8, 0, -9, 1, 12, 0, -13, 1, 16, ...]. It's clear that we keep getting back to 0, and that this happens every 4 moves — think about why. It's because every 4 moves, the first and last move go left, and the middle two moves go right. What's happening between those moves? Every other even move (n % 4 = 2) brings us back to 1 (it's easy to consider why this happens). If n % 4 = 1, we're subtracting n from 0. If n % 4 = 3, we're adding n to 1. So this gives us the complete set of cases [0, -n, 1, n+1] for the four possible values of n % 4. Then we add the starting position. If we start on an odd position, it turns out (by similar experimentation and consideration) that the complete set of cases is [0, n, -1, -(n+1)]. A sketch of this case analysis follows below.
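A minimal C++ sketch of this case analysis (the names are mine; the odd-start offsets follow the comment above):

```
#include <iostream>

// Position after n jumps starting at x0, using the n % 4 pattern above.
long long finalPos(long long x0, long long n) {
    long long r = n % 4, d;
    if (x0 % 2 == 0) {   // even start: offsets [0, -n, 1, n+1]
        d = (r == 0) ? 0 : (r == 1) ? -n : (r == 2) ? 1 : n + 1;
    } else {             // odd start: offsets [0, n, -1, -(n+1)]
        d = (r == 0) ? 0 : (r == 1) ? n : (r == 2) ? -1 : -(n + 1);
    }
    return x0 + d;
}

int main() {
    std::cout << finalPos(0, 7) << '\n';  // 8, matching the sequence above
    return 0;
}
```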
• » » » » » » » 4 weeks ago, # ^ | +8 you are right but my observation took a lot of time, I guess more practice will help, thanks for the advice :)
• » » » » » » » » 4 weeks ago, # ^ | 0 Good Luck Patrick, I'm afraid but your psychic powers won't help you much here at Codeforces :D
• » » » » » » » » » 4 weeks ago, # ^ | 0 Haha,Nice One,
• » » » » » » » 4 weeks ago, # ^ | 0 It took me almost 1 hour to solve problem B. After the contest I asked my friends why problem B was so hard (after B I solved 2 more problems in less than 20 min), and I was shocked that we can use n % 4. My solution was arithmetic progressions: if we start at an even number we move -1, +2, +3, -4, -5, +6, +7, etc. So I saw we have the progressions +2, 6, 10, ... and 3, 7, 11, ... and -4, -8, -12, ... I just don't know why this problem is B, because C is easier. In C you don't need to notice anything, just write easy code. In B you have to write a complicated arithmetic progression (which obviously will pass the 10^14 tests; after that I just didn't think about another solution), or notice n % 4 somehow.
• » » » » » » » » 4 weeks ago, # ^ | 0 That’s why you shouldn’t spend much time on one problem. You only read B and didn’t read other problems. Try to read next problems too, because they may be easier for you. If you just skipped B, you would have taken higher place on the contest. For example I solved problems in this order A B C D E H G and F after the contest.
• » » » » » » » » » 4 weeks ago, # ^ | 0 Thanks for the advice
• » » » 4 weeks ago, # ^ | +2 I still think its dumb. If you try to work out an idea instead of looking at the sequence and guessing you waste a bunch of time and get nowhere. To this point i have no idea why my solution works (will probably work it out now that i said this).
• » » 4 weeks ago, # ^ | 0 It was a trap :D Some problems just hit my blind spot so nicely. Got fixed on the idea of a combo of 4 arithmetic progressions. Lost a lot of time and all my morale. In retrospect, of course, it's so embarrassingly obvious. Oh well… Now I've learned. Again :)
» 4 weeks ago, # | ← Rev. 2 → 0 134139660: why is my submission giving TLE? Please help. I used a multiset. Please do not ignore.
• » » 4 weeks ago, # ^ | ← Rev. 2 → 0 Hello. Note that you are using multiset, so it's better to use s.lower_bound(number) here, since lower_bound(all(x), number) will be linear complexity. Your sol with this idea: 134140462
• » » » 4 weeks ago, # ^ | 0 Sir please tell me when to use this form of lowerbound and when to use other
• » » » » 4 weeks ago, # ^ | +1 lower_bound(all(v), x) — usd with vectors/arrays s.lower_bound(x) — used with sets/maps
• » » 4 weeks ago, # ^ | 0 Couldn't wrap my head around your solution. Here is the simple one that I have implemented (cpp code): We can decrease 'b' and increase 'r', so all the 'b' should come first and 'r' should come at the end, as we can increase them after performing the operations. If any of them doesn't respect the limit (1 to n), the answer will be NO.
• » » 4 weeks ago, # ^ | ← Rev. 2 → +3 I think more people would have checked your code if it was understandable. Did you hear about coding style?
• » » » 4 weeks ago, # ^ | 0 Ok i will keep that in mind from next time btw thanks
» 4 weeks ago, # | 0 Question B and C someone please explain approach of these :))
• » » 4 weeks ago, # ^ | +2 C: consider the array a[] to be sorted, so the first value is the smallest and gets removed first. Observe that all values always change by the same amount, so the relative order always stays the same, and the values are removed from left to right. When removing a[0], a[1] becomes a[1]-a[0], a[2] becomes a[2]-a[0], and so on. When removing a[1], a[2] becomes a[2]-a[0]-(a[1]-a[0]) = a[2]-a[1]; same for a[3], which becomes a[3]-a[1]. When removing a[2], a[3] becomes a[3]-a[2], and so on. So ans = max(a[0], max(a[i]-a[i-1]))
» 4 weeks ago, # | 0 I got runtime error in test 10 problem H. Can someone fix for mehttps://www.ideone.com/64cEbJ
» 4 weeks ago, # | 0 thank u very much for this hopeful contest.
» 4 weeks ago, # | +18 I guess this is because a pointer on a 64-bit system is 8 bytes :( ![](https://cdn.luogu.com.cn/upload/image_hosting/ad7o9r88.png)
» 4 weeks ago, # | +2 I tried solving F. Robot on the Board 2 using dfs with dp on the matrix using directions as edges and cells as nodes.My idea was to first find a single nodes for each simple cycle for which I used dfs1. Then I used dfs2 to set each nodes of every cycle to it's cycle length. Finally, I used dfs3 to get length of paths for the remaining nodes of the graph. I'm getting Memory limit exceeded due to some bug. Can anyone point out the issue here.Here is my submission #134144883.
• » » 4 weeks ago, # ^ | +11 When you have recursion(dfs) system reserves memory for that recursion. It is usually called stack memory. So your dfs uses additional O(N*M) memory. That’s why you got MLE, and me too. Try to solve this problem without recursion.Hint: all cells have only one outgoing edge
• » » » 4 weeks ago, # ^ | 0 I understood. Thanks.
• » » 4 weeks ago, # ^ | +3 My Solution got AC with DFS although it is also on the edge of MLE. You can have a look if u want .
» 4 weeks ago, # | ← Rev. 2 → +110 Memory limit was too tight for problem F. I used dfs and got MLE with a memory usage of 283MB using C++17(64). However, I submitted it for C++17 and got AC, only 161MB.
» 4 weeks ago, # | 0 I hope to become Expert today :) (unless any of my question is hacked :( ) Thanks for this wonderful round!
» 4 weeks ago, # | +1 Thank you so much for interesting & worth studying problems :) I really enjoyed it. p.s. Any editorials yet?
» 4 weeks ago, # | +11 when will system testing / hack rejudging start??
• » » 4 weeks ago, # ^ | 0 Normally, I saw rating changes on 9:00(UTC)-ish. Let us be patient
» 4 weeks ago, # | +11 I hacked 10 users!When will system test begin?
• » » 4 weeks ago, # ^ | +13 I hacked 10 users! — r/lethal
• » » 4 weeks ago, # ^ | +8 And I made my first ever hack...Kinda excited for my testcase to appear lmao
• » » » 4 weeks ago, # ^ | 0 rip whoever got TLE'd by my test lol
• » » 4 weeks ago, # ^ | 0 any tips for hacking?
• » » » 4 weeks ago, # ^ | 0 Hack the submissions whose times are near the bound. If the time limit is 1s, we can look for submissions in the 950~999ms range.
• » » » » 4 weeks ago, # ^ | ← Rev. 2 → 0 smart way to find victim byou don't use any tools for test case generation?
• » » » » » 4 weeks ago, # ^ | 0 Oh,i use C++ editor:(
• » » » » » » 4 weeks ago, # ^ | 0 haha. I like your reply.
» 4 weeks ago, # | ← Rev. 3 → -9 how to improve my logic
• » » 4 weeks ago, # ^ | 0 What do you mean?
» 4 weeks ago, # | -8 Can anyone tell how to get rid of MLE in test 4 in problem F. Link to my submission:- https://codeforces.com/contest/1607/submission/134177362I used Kosaraju's algorithm and then did dp in the condensed graph.
• » » 4 weeks ago, # ^ | ← Rev. 2 → 0 here's what I did: I was earlier using SCC to find cycles, but then I switched to using just DFS; an iterative DFS using a stack, instead of recursive DFS. My submission: 134172739
» 4 weeks ago, # | -8 This was my first ever contest. I was not able to solve all the problems and wanted to know the solutions of it. PLease tell me where can I find the tutorials
• » » 4 weeks ago, # ^ | 0 They aren't out just yet, they should be in a short while :)
» 4 weeks ago, # | -8 I don't know what the issue is with the 3rd question: the same logic in Java is giving TLE while running perfectly fine in C++. I used fast input-output methods in Java; still this happened.
» 4 weeks ago, # | -9 This was my first round, and I solved 7 problems but i'm still unrated :( Is this round unrated for me?
• » » 4 weeks ago, # ^ | +5 Rating changes are calculating now, please wait for some times. Hope that you can get high rating.
• » » » 4 weeks ago, # ^ | 0 Thanks!
• » » » 4 weeks ago, # ^ | +1 Who are you?
• » » » » 4 weeks ago, # ^ | +8 not an alt, I was a user for another PS website. I'm just not familiar for the Codeforces...
• » » » » » 4 weeks ago, # ^ | 0 No,i'm asking who are luogu_bot0.:(((
• » » » » » » 4 weeks ago, # ^ | +13 uh-oh. sorry.
• » » 4 weeks ago, # ^ | -8 gm alt?
• » » » 4 weeks ago, # ^ | 0 *chinese alt? spoilerjk
• » » » 4 weeks ago, # ^ | 0 He's obviously not a noob.
» 4 weeks ago, # | 0 How to do problem C (Minimum Extraction)? I did a brute force but got TLE. My solution
• » » 4 weeks ago, # ^ | ← Rev. 3 → 0 Time complexity of your solution is $O(n^2)$. There is an $O(n \log n)$ time solution for this problem:
1) sort the array
2) ans = -infinity
3) if array[0] is positive, ans = array[0]
4) for i in (1, n): ans = max(ans, array[i] - array[i-1])
Note there is a corner case when |array| == 1. A minimal sketch is below.
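A minimal C++ sketch of these steps (the function name is mine; it follows the max(a[0], max(a[i]-a[i-1])) formula discussed earlier in this thread):

```
#include <bits/stdc++.h>
using namespace std;

// After sorting, removing elements left to right turns each a[i] into
// a[i] - a[i-1], so the answer is the max of a[0] and all adjacent gaps.
long long minExtraction(vector<long long> a) {
    if (a.size() == 1) return a[0];   // the |array| == 1 corner case
    sort(a.begin(), a.end());
    long long ans = a[0];
    for (size_t i = 1; i < a.size(); ++i)
        ans = max(ans, a[i] - a[i - 1]);
    return ans;
}

int main() {
    cout << minExtraction({2, 3, 5}) << '\n';  // 2 (hand-checked example)
    return 0;
}
```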
• » » » 4 weeks ago, # ^ | ← Rev. 2 → 0 correction my solutions is $O(n \log n)$
• » » » 4 weeks ago, # ^ | 0 Thank you very much for replying can you please tell me why this algorithm works.
• » » » » 4 weeks ago, # ^ | 0 I recommend you to try sample test cases by hand.Write test cases, follow the steps, and get the idea why it works. full solution: 134092282
• » » » » » 4 weeks ago, # ^ | 0 Thank you for the help!
• » » 4 weeks ago, # ^ | +10 $1\leq n\leq 2\times 10^5$. And your code is not better than O(n^2). That must lead to TLE.
• » » 4 weeks ago, # ^ | 0 You must calculate the TIME COMPLEXITY before submitting.
• » » » 4 weeks ago, # ^ | 0 Thank you very much for replying. Actually, I didn't submit this solution in the contest; in fact, I couldn't solve it. Now that I am upsolving it, this is the only solution I could come up with. I wanted to know the actual approach or solution; that is why I asked, and I also gave the approach I came up with. I know it is O(n^2).
• » » » » 4 weeks ago, # ^ | 0 check editorial. It explains quite clearly.
» 4 weeks ago, # | 0 I solved a question in yesterday's contest but didn't get points till now!! Why??
» 4 weeks ago, # | 0 Is this contest unrated?
• » » 4 weeks ago, # ^ | 0 wait 2 more hrs
• » » » 4 weeks ago, # ^ | 0 Thanks
» 4 weeks ago, # | 0 Will this contest be rated for me? This is my first ever Codeforces contest. Any information on this will be really helpful. Thank you!
• » » 4 weeks ago, # ^ | +6 It is rated for youRegardless of whether you are a trusted participant of the third division or not, if your rating is less than 1600, then the round will be rated for you.
» 4 weeks ago, # | +1 is there anyone whose rating has been updated yet? I am still unrated, so I was wondering if I had to do something more in order to get a rating..
• » » 4 weeks ago, # ^ | ← Rev. 2 → -7 wait 2 more hrs if you have participated
• » » » 4 weeks ago, # ^ | -8 oh okk.... Thank you! :)
• » » » 4 weeks ago, # ^ | -8 3 hours ago...
• » » 4 weeks ago, # ^ | 0 Yeah,it's slow.But the only thing you can do is WAIT.
» 4 weeks ago, # | +8 Editorial? doreshnikov
• » » 4 weeks ago, # ^ | ← Rev. 2 → +8 Sorry, wasn't feeling well. I promised the previous time that editorial will come out sooner, but couldn't finish it faster this time :(ETA: half an hour probably, I hope...
» 4 weeks ago, # | +13 All problems in this round is with multi testcase. Interesting.
» 4 weeks ago, # | +14 Sorry, but when is the editorial available? Because last 2 problems I can't solve it T_T
» 4 weeks ago, # | 0 hahahahah i will get 500000000000000000000000 points by hard struggle
» 4 weeks ago, # | +11 What's the meaning of the memory limit of Problem F...
» 4 weeks ago, # | 0 I think A-E are good problems in div3,but F is obviously harder than E,it leads to the speed of solving the first five problems is particularly important.But anyway, I think it is a good round!
» 4 weeks ago, # | ← Rev. 2 → +8 n
» 4 weeks ago, # | ← Rev. 2 → 0 The test data for Problem H is not strong enough. After tasting, for the $i$-th dish, $a'_i$ should be in $[\max(0, a_i - m_i), a_i - \max(0, m_i - b_i)]$. In my first submission, I wrote it as $[\max(0, a_i - m_i), a_i]$. This is wrong because I ignored this type of situation: when $m_i > b_i$, the taster must eat some of $a_i$. But it got Accepted. Upd: It seems this bug surprisingly can't be hacked, so this has no bearing on the strength of the test data; sorry.
» 4 weeks ago, # | 0 I could not load the page during the contest (but I could visit other websites except CF), which made the experience a little painful. I wonder if anyone else had this problem yesterday.
» 4 weeks ago, # | -22 比赛结束了吗,难度咋样 [Is the contest over? How was the difficulty?]
• » » 4 weeks ago, # ^ | -10 What are you saying? Which language? Chinese or Japanese? Please use English (recommended) or Russian.
» 4 weeks ago, # | -37 what the fucking the problem bitch it is a hard problem! who made this fucking question bitch?
» 4 weeks ago, # | -18 if you like codeforces.com? then put +
• » » 4 weeks ago, # ^ | ← Rev. 2 → 0 or downvote this ↑ comment
• » » 3 weeks ago, # ^ | 0 OK. We know there are 18 people here who don't like it, or just don't like you.
» 4 weeks ago, # | 0 Good luck to everyone!
» 3 weeks ago, # | 0 Used binary search for E: link
» 3 weeks ago, # | 0 I have created a whatsapp group for discussing questions related Competitive programming It going to be very helpful for beginners and to get momentum into cp. link: https://chat.whatsapp.com/JHLDhLlpoR46aVptVpItNL
» 3 weeks ago, # | 0 I love China!! (I got too few points, so I need to shout out my anger)
» 2 weeks ago, # | ← Rev. 7 → +4 Hi
• » » 2 weeks ago, # ^ | ← Rev. 2 → 0 thanks
» 2 weeks ago, # | 0 It would be so interesting, I think.
# semidirect product between subgroup of general linear group and vector space in GAP
I am currently working on trying to get a solvable doubly transitive permutation group using GAP. So, I am trying to create the semidirect product of a subgroup of a general linear group and a vector space. Currently I am trying to work with the group GL(2,3) and a vector space V of dimension 2 over the field of 3 elements. I don't know if this is possible, since it seems like both parts of the semidirect product need to be groups, and I need a homomorphism from GL to V. I am new to GAP, so I don't know what I can do to achieve this.
For the special case of a semidirect product of a matrix group with its natural vector space you can use SemidirectProduct without homomorphism:
gap> matgrp:=GL(2,3);
GL(2,3)
gap> sdp:=SemidirectProduct(matgrp,GF(3)^2);
<matrix group of size 432 with 3 generators>
The result is the semidirect product as an affine matrix group; that is, the upper left corner is the matrix part and the last row (except for the last entry $$1$$) is the vector space part.
To get the permutation action on 9 points, you need to get the conjugation action (OnPoints) of the matrix part, together with the translation action (multiplication, OnRight) of the vector space part:
gap> normal:=Image(Embedding(sdp,2));;
gap> Size(normal);
9
gap> normalelms:=Elements(normal);;
gap> matrixpart:=Image(Embedding(sdp,1));;
gap> act1:=Action(matrixpart,normalelms,OnPoints);
Group([ (4,7)(5,8)(6,9), (2,7,6)(3,4,8) ])
gap> act2:=Action(normal,normalelms,OnRight);
Group([ (1,4,7)(2,5,8)(3,6,9), (1,2,3)(4,5,6)(7,8,9) ])
Together this gets the 2-transitive permutation action
gap> permrep:=ClosureGroup(act1,act2);
Group([ (4,7)(5,8)(6,9), (2,7,6)(3,4,8), (1,4,7)(2,5,8)(3,6,9) ])
gap> Size(permrep);
432
gap> Transitivity(permrep);
2
• Thank you for your response! I never realized I wouldn't immediately get the permutation group I wanted from taking the semidirect product. I appreciate the help! – LG74 Feb 13 '20 at 16:13
# Extra Border Around ID3DXSprite [SOLVED]
## Recommended Posts
I'm trying to use ID3DXSprite in DirectX 9.0 to draw animated sprites. However, when I try and draw sprites from my texture, I get an odd black border around my sprites. When drawing normally, there is a black line at the top. When rotated, the border extends all around the sprite. Let me demonstrate: This is my texture: Each sprite is 32 pixels wide and 57 pixels high (with a 1 pixel border extending around each frame). Here is my code for loading:
// Initialization
LPD3DXSPRITE pD3DXSprite;
LPDIRECT3D9 pD3D9;
LPDIRECT3DDEVICE9 pD3DDevice9;
pD3D9 = Direct3DCreate9(D3D_SDK_VERSION);
// get the display mode
D3DDISPLAYMODE d3ddm;
pD3D9->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &d3ddm);
// set the presentation parameters
D3DPRESENT_PARAMETERS d3dpp;
ZeroMemory(&d3dpp, sizeof(d3dpp));
d3dpp.BackBufferWidth = 1024;
d3dpp.BackBufferHeight = 768;
d3dpp.BackBufferCount = 1;
d3dpp.BackBufferFormat = d3ddm.Format;
d3dpp.Windowed = false;
d3dpp.EnableAutoDepthStencil = true;
d3dpp.AutoDepthStencilFormat = D3DFMT_D16;
d3dpp.Flags = D3DPRESENTFLAG_LOCKABLE_BACKBUFFER;
d3dpp.FullScreen_RefreshRateInHz = D3DPRESENT_RATE_DEFAULT;
d3dpp.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE; // for frame rate detection
pD3D9->CreateDevice(D3DADAPTER_DEFAULT,
D3DDEVTYPE_HAL, m_HWND,
behavior_flags,
&d3dpp, &pD3DDevice9);
D3DXCreateSprite(pD3DDevice9, &pD3DXSprite);
D3DXIMAGE_INFO d3dxImageInfo;
LPDIRECT3DTEXTURE9 m_pTexture;
D3DCOLOR colorkey = 0xFFFF00FF;
D3DXCreateTextureFromFileEx(Graphics_System->pD3DDevice9, "test.bmp", 128, 128, 1,
0, D3DFMT_UNKNOWN, D3DPOOL_DEFAULT, D3DX_DEFAULT,
D3DX_DEFAULT, colorkey, // Use a magenta color key
&d3dxImageInfo, NULL, &m_pTexture);
Here is my code for drawing:
// Texture coordinates:
RECT m_src_Rect;
int m_x_cells = 3; // Number of cells in the horizontal direction (3 x 2 cells means width is 3 cells)
int m_curr_frame = 0; // First frame, though this code has the same problem no matter what the frame
int m_height = 57;
int m_width = 32;
m_src_Rect.top = 1 + ((m_curr_frame / m_x_cells) * (m_height + 1));
m_src_Rect.left = 1 + ((m_curr_frame % m_x_cells) * (m_width + 1));
m_src_Rect.bottom = m_src_Rect.top + m_height;
m_src_Rect.right = m_src_Rect.left + m_width;
// Red background so it's easy to see what's going on
pD3DDevice9->Clear(0, NULL, D3DCLEAR_TARGET,
D3DCOLOR_XRGB(255, 0, 0), 1.0f, 0);
pD3DDevice9->BeginScene();
pD3DXSprite->Begin();
D3DXVECTOR2 m_vScale_Vector;
m_vScale_Vector.x = 1.0f;
m_vScale_Vector.y = 1.0f;
D3DXVECTOR2 m_vRotation_Center;
m_vRotation_Center.x = 0.5f * (float)m_width;
m_vRotation_Center.y = 0.5f * (float)m_height;
float m_Angle = 0; // Can set to whatever angle we desire
D3DXVECTOR2 m_vPosition;
m_vPosition.x = 300;
m_vPosition.y = 400;
DWORD m_Tint = D3DCOLOR_RGBA(255, 255, 255, 255);
pD3DXSprite->Draw(m_pTexture, &m_src_Rect, &m_vScale_Vector, &m_vRotation_Center, m_Angle,
&m_vPosition, m_Tint);
pD3DXSprite->End();
pD3DDevice9->EndScene();
pD3DDevice9->Present(NULL, NULL, NULL, NULL);
I don't understand why the black line exists at the top - my texture coordinates (in m_src_Rect) specify that the top of the sprite begins at pixel row 1, which should be fine: the top row of the texture (row 0) is black, but the row underneath it is magenta. Similarly, why is the border visible everywhere when I rotate the sprite? Thanks. EDIT: This problem's now been solved. As discussed below, ID3DXSprite uses bilinear texture filtering, so either a transparent border must be used around all the frames, or point filtering must be used. Thanks guys. [Edited by - Gauvir_Mucca on March 7, 2010 2:27:33 AM]
##### Share on other sites
Check your m_width and m_height variables: do these values include the border or not? If the border is included, you should adjust m_src_Rect.bottom and m_src_Rect.right.
##### Share on other sites
My m_width and m_height variables are correct.
If I try reducing them by 1 for purposes of calculating m_src_rect.bottom and m_src_rect.right, three things happen:
A) The top line is still always visible
B) The border still appears around the left and top of the sprite when rotated (though the right and bottom borders are no longer visible)
C) The right edge of the image is clipped by 1 pixel (hard to see with the all-blue rocket ship, but if I add surface details, you can see the right wing is clipped when I perform the above operation).
Based on my calculations, and the above evidence, I believe I can say that my m_width and m_height variables are correct. Even if they weren't, reducing them didn't solve anything.
I also included the original texture above so you can check for yourself.
Any other ideas?
[Edited by - Gauvir_Mucca on March 5, 2010 10:45:56 PM]
##### Share on other sites
Simple fix... DX sprites act weird at the edges: the sampling overlaps, top to bottom and left to right. Simply create a 1 pixel wide fully transparent border around the entire image and the offending lines will go away...
##### Share on other sites
Quote:
Original post by LancerSolurus: Simple fix... DX sprites act weird at the edges: the sampling overlaps, top to bottom and left to right. Simply create a 1 pixel wide fully transparent border around the entire image and the offending lines will go away...
Is this a well-documented bug/feature that MS acknowledges and suggests as the fix for?
What you suggest seems to work, but it seems like a dirty hack, designed to get around actual behavior that does not match up with the suggested behavior of the documentation.
It does not seem to me that every image SHOULD have to have a transparent border - this means, for instance, that you can't have sprite images bordering one another without any kind of border (transparent or not) between them. Additionally, a custom-written D3D-based Sprite wrapper using textured quads wouldn't have an arbitrary requirement like this, would it?
I'm just concerned that the reason this is happening is because I'm doing something wrong, and not because MS's DirectX 9 is inherently buggy and does not work entirely as documented. It seems to me that if DirectX 9 is fundamentally flawed in such a manner, then MS would at least acknowledge that (and warn their developers)...
##### Share on other sites
Quote:
If I try reducing them by 1 for purposes of calculating m_src_rect.bottom and m_src_rect.top, three things happen:
Quote:
B) The border still appears around the left and top of the sprite when rotated (though the right and bottom borders are no longer visible)
I think you're forgetting that you added +1 to the left and top. You should reduce m_src_Rect.bottom and m_src_Rect.right by 2. If that doesn't help, make sure your rectangle (the area enclosed in the outer border) is not wider than 98 pixels (rect.right-rect.left) and that it's not higher than 115 pixels.
##### Share on other sites
Any reason why you can't just get rid of the border in the image itself?
##### Share on other sites
Quote:
Original post by Gauvir_Mucca: Is this a well-documented bug/feature that MS acknowledges and suggests as the fix for?
It's not a "bug", it's a natural result of bilinear filtering. The only way to make sure that only the texels within your rectangle get sampled is to use POINT for your MagFilter and MinFilter. If you want bilinear filtering, you'll need to add a gutter around your image as others have suggested.
##### Share on other sites
Quote:
Original post by Mussi
Quote:
If I try reducing them by 1 for purposes of calculating m_src_rect.bottom and m_src_rect.top, three things happen: B) The border still appears around the left and top of the sprite when rotated (though the right and bottom borders are no longer visible)
A) Sorry, my earlier post was incorrect in describing my attempt at a fix. What I meant to write was:
Quote:
If I try reducing them by 1 for purposes of calculating m_src_rect.bottom and m_src_rect.right, three things happen:
So what I'm saying is, I changed the code to the following, per your suggestion:
m_src_Rect.top = 1 + ((m_curr_frame / m_x_cells) * (m_height + 1));
m_src_Rect.left = 1 + ((m_curr_frame % m_x_cells) * (m_width + 1));
m_src_Rect.bottom = m_src_Rect.top + m_height - 1;
m_src_Rect.right = m_src_Rect.left + m_width - 1;
Quote:
I think you're forgetting that you added +1 to the left and top. You should reduce m_src_Rect.bottom and m_src_Rect.right by 2.
I need to add +1 to the left and top because the border surrounds all the frames. Reducing m_src_Rect.bottom and m_src_Rect.right by 2 not only does not make sense, but does not help either - the clipping on the right is even more severe (since I'm essentially reducing the width and height by 2, instead of 1 or 0).
Quote:
If that doesn't help, make sure your rectangle (the area enclosed in the outer border) is not wider than 98 pixels (rect.right-rect.left) and that it's not higher than 115 pixels.
First, m_src_rect should only encompass one cell (i.e., one frame), not all six cells - so if by "rect.right-rect.left" you mean "m_src_rect.right-m_src_rect.left", then it should be 32 x 57.
I counted and made sure the entire six cells with their borders are 100 x 117 pixels. If I discount the outer borders (but keep the separating borders in the middle), the six cells are 98 pixels by 115 pixels.
That is, for a rectangle of 3 x 2 cells:
width = 1 (outer border) + 32 (first cell width) + 1 (first middle border) + 32 (second cell width) + 1 (second middle border) + 32 (third cell width) + 1 (outer border) = 100
height = 1 (outer border) + 57 (first cell height) + 1 (middle border) + 57 (second cell height) + 1 (outer border) = 117
Like I said, the image linked above is the actual texture I use - you can check for yourself if you don't believe me, but I've checked and re-checked that my sizes and numbers are correct.
Quote:
Original post by Aiwendil: Any reason why you can't just get rid of the border in the image itself?
Like I said, I COULD simply use transparent borders - but that seems like an awkward requirement and workaround. I specified a rectangle with top-left coordinates of (1, 1) to DirectX - so DirectX should draw my image with no regard for what kind of pixels are at (0, 0) or (1, 0) or anything else above (1, 1), since y = 1 should be the highest row of pixels it's using.
What if I had my sprite cells all next to each other without ANY border? In principle, it seems like I should be able to do that. But with this problem, the images will overlap slightly. If the top three rocket cells were blue and the bottom three were red, then attempting to draw the fourth cell, for example, would draw a mostly red rocket with a blue tip at the top (since the blue rocket in the first cell's position is bordering the top of the red rocket in the fourth cell's position).
Nothing in the documentation says your images cannot border one another in the texture. Likewise, nothing suggests your images must not only have a border, but that it be transparent as well.
Which leads me to believe that it isn't a requirement for accurate images, and that something wrong must be occurring. Otherwise, wouldn't this problem be mentioned somewhere by MS?
##### Share on other sites
EDIT: Double post, deleted.
##### Share on other sites
Quote:
Original post by MJP
Quote:
Original post by Gauvir_Mucca: Is this a well-documented bug/feature that MS acknowledges and suggests as the fix for?
It's not a "bug", it's a natural result of bilinear filtering. The only way to make sure that only the texels within your rectangle get sampled is to use POINT for your MagFilter and MinFilter. If you want bilinear filtering, you'll need to add a gutter around your image as others have suggested.
Ok, so I don't want bilinear filtering. I try calling this before rendering:
pD3DDevice9->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_POINT);
pD3DDevice9->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_POINT);
But my problem still remains. What am I doing wrong?
##### Share on other sites
Quote:
Original post by Gauvir_Mucca
Quote:
Original post by MJP
Quote:
Original post by Gauvir_Mucca: Is this a well-documented bug/feature that MS acknowledges and suggests as the fix for?
It's not a "bug", it's a natural result of bilinear filtering. The only way to make sure that only the texels within your rectangle get sampled is to use POINT for your MagFilter and MinFilter. If you want bilinear filtering, you'll need to add a gutter around your image as others have suggested.
Ok, so I don't want bilinear filtering. I try calling this before rendering:
*** Source Snippet Removed ***
But my problem still remains. What am I doing wrong?
Never mind, I need to make those calls AFTER the call to pD3DXSprite->Begin(). Any reason why? It seems like making the call to SetSamplerState() should affect everything afterwards (unless a later call contradicts it). Does pD3DXSprite->Begin() automatically set bilinear filtering?
##### Share on other sites
Quote:
Original post by Gauvir_Mucca: Never mind, I need to make those calls AFTER the call to pD3DXSprite->Begin(). Any reason why? It seems like making the call to SetSamplerState() should affect everything afterwards (unless a later call contradicts it). Does pD3DXSprite->Begin() automatically set bilinear filtering?
The reason is that ID3DXSprite::Begin() sets sampler states of its own, including enabling bilinear filtering.
##### Share on other sites
Quote:
Original post by Gauvir_Mucca
Quote:
Original post by Gauvir_Mucca
Quote:
Original post by MJP
Quote:
Original post by Gauvir_Mucca: Is this a well-documented bug/feature that MS acknowledges and suggests as the fix for?
It's not a "bug", it's a natural result of bilinear filtering. The only way to make sure that only the texels within your rectangle get sampled is to use POINT for your MagFilter and MinFilter. If you want bilinear filtering, you'll need to add a gutter around your image as others have suggested.
Ok, so I don't want bilinear filtering. I try calling this before rendering:
*** Source Snippet Removed ***
But my problem still remains. What am I doing wrong?
Never mind, I need to make those calls AFTER the call to pD3DXSprite->Begin(). Any reason why? It seems like making the call to SetSamplerState() should affect everything afterwards (unless a later call contradicts it). Does pD3DXSprite->Begin() automatically set bilinear filtering?
It sure does. There's a list of all sampler/render/texture states it sets in the documentation.
# How to "directly" obtain Maclaurin series of $\exp(x-1+\sqrt{x^2+1})$
Consider the Taylor expansion centered around $$x_0 = 0$$ for $$f(x) = e^{x-1 + \sqrt{x^2+1}} ~,\qquad x\in\mathbb{R}$$
The goal is to arrive at $$\sum c_k x^k$$ by hand, and I wonder if there is an efficient way to obtain the coefficients of each power $$x^k$$ directly?
Below are examples of three approaches I know of, which are NOT efficient because they involve cumbersome combinatorial identities or further rearrangements that are too complicated (to me).
1. Take the whole exponent into the formula $$e^z = \sum z^k / k!$$ where $$z = x-1 + \sqrt{x^2+1}$$
2. Split the exponent into two so that $$\displaystyle f(x) = \left( \sum \frac{ z_1^k }{ k! }\right) \left( \sum \frac{ z_2^m }{ m! }\right)$$ where $$z_1 = x-1$$ and $$z_2 = \sqrt{x^2+1}$$.
3. Back to the definition of Taylor expansion $$f(x) = \sum (x - x_0)^k f^{(k)}(x_0)\, /\, k!$$ and do the derivatives with chain rules, for every order.
Note that wherever $$\sqrt{x^2 + 1}$$ appears, it has to be expanded into power series $$1 + \frac12 x^2 - \frac18 x^4 + \ldots$$ sooner or later to yield the final result in the desired form $$f(x) = \sum x^k / k!$$
I consider these calculations indirect because the procedure itself by definition doesn't yield $$c_k$$ "directly", and one has to go through further rearrangements that are error-prone.
When one wants only the first few terms, these might be okay, but when it comes to exact expressions (in summations), I personally don't find them to be practical.
There are many posts on this site pertaining to finding series expansion of various functions. However, so far what I've seen are all "indirect" methods like described above. For example, this post.
One can always throw the expression to Wolfram Alpha if one cares only about the result. However, sometimes one needs the exact expression for some further steps of proving etc, that is, doing the analysis based on the series.
Thank you for your time.
I think that directly would be very difficult since it is already hard to find the general expression of the coefficients for the expansion of $$e^{f(x)}$$ $$\frac{e^{f(x)}}{e^{f(0)} }=1+x f'(0)+\frac{1}{2} x^2 \left(f''(0)+f'(0)^2\right)+\frac{1}{6} x^3 \left(f^{(3)}(0)+f'(0)^3+3 f'(0) f''(0)\right)+$$ $$\frac{1}{24} x^4 \left(f^{(4)}(0)+3 f''(0)^2+f'(0)^4+4 f^{(3)}(0) f'(0)+6 f'(0)^2 f''(0)\right)+O\left(x^5\right)$$
On the other side, composing first the Taylor series of $$f(x)$$ is in general simple. In your specific case, we can write $$\sqrt{x^2+1}=\sum_{k=0}^\infty \binom{\frac{1}{2}}{k} x^{2 k}$$ $$x-1+\sqrt{x^2+1}=x+\sum_{k=1}^\infty \binom{\frac{1}{2}}{k} x^{2 k}$$
$$\exp(x-1+\sqrt{x^2+1})=e^x \prod_{k=1}^\infty \exp\left(\binom{\frac{1}{2}}{k} x^{2 k} \right)$$ Expanding each term of the infinite product would again lead to an infinite product of series, and finding explicitly the coefficient of $$x^n$$ could be a real problem.
Edit
Back to the problem five years later, writing $$f(x) = e^{x-1 + \sqrt{x^2+1}}=\sum_{n=0}^\infty a_n x^n$$ the $$a_n$$ are defined by the recurrence relation $$a_n=\frac{(2 n-3)\, a_{n-1}-\left(n^2-5 n+7\right)\, a_{n-2}+2 (n-3)\, a_{n-3}}{(n-2)\, n}$$ where $$a_0=a_1=a_2=1$$.
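For what it's worth, the recurrence can be checked numerically; a hedged SymPy sketch (the series order 8 is an arbitrary choice):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x - 1 + sp.sqrt(x**2 + 1))

# Coefficients a_0..a_7 straight from the Taylor expansion
series_coeffs = sp.Poly(sp.series(f, x, 0, 8).removeO(), x).all_coeffs()[::-1]

# Coefficients from the claimed recurrence, with a_0 = a_1 = a_2 = 1
a = [sp.Integer(1), sp.Integer(1), sp.Integer(1)]
for n in range(3, 8):
    a.append(((2*n - 3)*a[n-1] - (n**2 - 5*n + 7)*a[n-2]
              + 2*(n - 3)*a[n-3]) / ((n - 2)*n))

print(series_coeffs)  # [1, 1, 1, 2/3, 7/24, ...]
print(a)              # should match term by term
```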
• Regarding your opening line ...it is already hard to find the general expression of... Would you mind take a look at my following question post? Mar 11 '19 at 7:47
• @CharlieMosby. Thanks to your comment, I went back to the problem and found the recurrence relation. I shall now have a look at your post. Thanks and cheers Mar 11 '19 at 8:26
• Thank you very much for the recurrence relation. It will be a good trip for me trying to understand how it works and how it came to be. Mar 11 '19 at 8:47
I'm trying to provide another way to do it, but I cannot say it is efficient enough.
Let $$f(x)=e^{x-1+\sqrt{x^2+1}}=\sum_{k=0}^\infty a_k x^k=a_0+a_1x+a_2x^2+\cdots$$ as usual. Since $$\dfrac d{dx}e^{x-1+\sqrt{x^2+1}}=\bigg(1+\dfrac x{\sqrt{x^2+1}}\bigg)e^{x-1+\sqrt{x^2+1}}=\sum_{k=0}^\infty(k+1)a_{k+1}x^k$$ We have $$\frac{x}{\sqrt{x^2+1}}e^{x-1+\sqrt{x^2+1}}=\sum_{k=0}^\infty [(k+1)a_{k+1}-a_k]x^k$$ Turning back to the Taylor series aspect: since at $$x=0$$ we have $$f(0)=1$$ and $$f'(0)=1$$, hence $$a_0=a_1=1$$, we can write $$\dfrac{x}{\sqrt{x^2+1}}f(x)=\sum_{k=1}^\infty [(k+1)a_{k+1}-a_k]x^k$$ By shifting the index we have $$\sum_{k=0}^\infty a_kx^k=\sqrt{x^2+1}\sum_{k=0}^\infty[(k+2)a_{k+2}-a_{k+1}]x^k$$ By using the binomial expansion of $$\sqrt{x^2+1}$$ and solving a system of simultaneous equations, we can solve up to $$a_n$$ for any natural number $$n$$, though you would need a powerful brain or a computer.
• I really like your creative approach. Thanks. There's a later answer with essentially the same observation, but yours is closer to my level. Feb 16 '19 at 5:27
You have $$f'(x)=f(x)\left(1+{x\over\sqrt{x^2+1}}\right)$$ and therefore $$\left({f'(x)\over f(x)}-1\right)^2={x^2\over x^2+1}\ .$$ It follows that $$f$$ satisfies the ODE $$(x^2+1)\bigl(f'^2-2ff'\bigr)+f^2=0\ .\tag{1}$$ Now plug $$f(x):=\sum_{k=0}^\infty a_k x^k$$ into $$(1)$$. Using $$f^2(x)=\sum_{j\geq0} a_jx^j\cdot \sum_{k\geq0} a_kx^k=\sum_{r\geq0}\left(\sum_{k=0}^r a_{r-k} a_k\right)x^r\ ,$$ and similarly for $$f'^2$$and $$ff'$$, you can obtain a recursion scheme for the $$a_k$$.
Don't you have the definition $$c_k = f^{(k)}(0)/k!$$ from Taylor's theorem? This gives you a "direct" way to compute $$c_k$$: just compute $$k$$ derivatives of $$f$$, plug in $$0$$, and divide by $$k!$$.
(Of course, computing $$k$$ derivatives of your given $$f$$ will get hairy very quickly, which is why people try the "indirect" methods as well.)
• Thanks for taking an interest in my question. I mentioned (in passing) in the post that: there are terms involving $\sqrt{x^2 + 1}$ and its derivatives which have to be expanded, producing sub-series for each order. This requires rearrangement or recognizing some obscure combinatorial identity (to arrive at the final "clean" power series). Feb 14 '19 at 4:32
• My apology....I just realized I mistakenly mixed things up in the comment above (and in the post). Indeed as you said (the very basic thing) that calculating $f^{(k)}(x)$ is merely taking repeated derivatives (albeit hairy). Evaluated at $x_0 = 0$ it is just a constant, and there's no extra expansion needed like I suggested. Feb 14 '19 at 5:02 | {} |
# How to set batch_size, steps_per epoch and validation steps
I am starting to learn CNNs using Keras. I am using the theano backend.
I don't understand how to set values to:
• batch_size,
• steps per epoch,
• validation_steps.
What should be the value set to batch_size, steps per epoch, and validation steps if I have 240,000 samples in the training set and 80,000 in the test set?
• What are your hardware specifications? It depends on that. Generally people use a batch size of 32/64, epochs of 10~15, and then you can calculate steps per epoch from the above. – Aditya Mar 30 '18 at 9:49
• batch_size determines the number of samples in each mini batch. Its maximum is the number of all samples, which makes gradient descent accurate, the loss will decrease towards the minimum if the learning rate is small enough, but iterations are slower. Its minimum is 1, resulting in stochastic gradient descent: Fast but the direction of the gradient step is based only on one example, the loss may jump around. batch_size allows to adjust between the two extremes: accurate gradient direction and fast iteration. Also, the maximum value for batch_size may be limited if your model + data set does not fit into the available (GPU) memory.
• steps_per_epoch the number of batch iterations before a training epoch is considered finished. If you have a training set of fixed size you can ignore it but it may be useful if you have a huge data set or if you are generating random data augmentations on the fly, i.e. if your training set has a (generated) infinite size. If you have the time to go through your whole training data set I recommend to skip this parameter.
• validation_steps similar to steps_per_epoch but on the validation data set instead on the training data. If you have the time to go through your whole validation data set I recommend to skip this parameter.
• What do you mean by "skipping this parameter"? When I remove the parameter I get When using data tensors as input to a model, you should specify the steps_per_epoch argument. – Nicolas Raoul Sep 27 '18 at 7:09
• According to the documentation, the parameter steps_per_epoch of the method fit has a default and thus should be optional: "the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined." Source: keras.io/models/model – Silpion Sep 28 '18 at 21:04
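Putting the question's numbers together, a minimal hedged Keras sketch (assuming a compiled model and in-memory NumPy arrays named x_train, y_train, x_test, y_test; the batch size of 64 is just a common choice):

```python
# Hedged sketch: 240,000 training and 80,000 test samples from the question.
batch_size = 64                          # common starting point; bounded by GPU memory
steps_per_epoch = 240_000 // batch_size  # 3750 batches = one full pass over training data
validation_steps = 80_000 // batch_size  # 1250 batches = one full pass over test data

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=15,
          validation_data=(x_test, y_test))
# With in-memory arrays Keras derives the step counts itself (len(data) / batch_size);
# steps_per_epoch and validation_steps matter mainly when feeding generators.
```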
There is an answer on GitHub:
1. model.fit_generator requires the input dataset generator to run infinitely (see the sketch after this list).
2. steps_per_epoch is used to generate the entire dataset once by calling the generator steps_per_epoch times
3. whereas epochs give the number of times the model is trained over the entire dataset.
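To make point 1 concrete, a minimal hedged sketch of such an infinite generator (array names are placeholders):

```python
import numpy as np

def batch_generator(x, y, batch_size):
    """Yield shuffled mini-batches forever, as fit_generator expects."""
    n = len(x)
    while True:                          # never raise StopIteration
        idx = np.random.permutation(n)
        for start in range(0, n, batch_size):
            sel = idx[start:start + batch_size]
            yield x[sel], y[sel]

# steps_per_epoch then tells Keras how many draws count as "one epoch", e.g.:
# model.fit_generator(batch_generator(x_train, y_train, 64),
#                     steps_per_epoch=len(x_train) // 64, epochs=10)
```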
From tensorflow_estimator/python/estimator/training.py
Stop condition:
In order to support both distributed and non-distributed configuration reliably, the only supported stop condition for model training is train_spec.max_steps. If train_spec.max_steps is None, the model is trained forever. Use with care if model stop condition is different. For example, assume that the model is expected to be trained with one epoch of training data, and the training input_fn is configured to throw OutOfRangeError after going through one epoch, which stops the Estimator.train. For a three-training-worker distributed configuration, each training worker is likely to go through the whole epoch independently. So, the model will be trained with three epochs of training data instead of one epoch.
## Do we ever need to convert pressure?
Volume: $\Delta S = nR\ln \frac{V_{2}}{V_{1}}$
Temperature: $\Delta S = nC\ln \frac{T_{2}}{T_{1}}$
Wilson Yeh 1L
Posts: 42
Joined: Fri Sep 29, 2017 7:06 am
### Do we ever need to convert pressure?
Is there a standard unit of pressure for the equation $\Delta S = nR\ln \frac{P_{1}}{P_{2}}$
Like say they give it in atm, kPa, bar, etc. does it matter if I just use those units straight up or do I have to convert it to a standard unit like atm? From the textbook it doesn't seem like I have to, but I'm really not sure. Thanks!
Carlos Gonzales 1H
Posts: 50
Joined: Fri Sep 29, 2017 7:05 am
Been upvoted: 1 time
### Re: Do we ever need to convert pressure?
In the equation that you mentioned, the pressure can be given in any units, because the units cancel out when the two pressures are divided.
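For a quick worked check that only the ratio matters (a sketch with made-up numbers): compressing one mole of gas from $P_{1}=2\ \text{atm}$ to $P_{2}=1\ \text{atm}$,
$\Delta S = nR\ln \frac{P_{1}}{P_{2}} = (1\ \text{mol})\,R\,\ln \frac{2\ \text{atm}}{1\ \text{atm}} = (1\ \text{mol})\,R\,\ln \frac{202.65\ \text{kPa}}{101.325\ \text{kPa}} = R\ln 2$
Either unit system gives the same number, since the units cancel inside the logarithm.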
### F Overview of the MatricesForHomalg Package Source Code
#### F.1 Rings, Ring Maps, Matrices, Ring Relations
Filename (.gd/.gi) and content:
- homalg: definitions of the basic GAP4 categories and some tool functions (e.g. homalgMode)
- homalgTable: dictionaries between MatricesForHomalg and the computing engines
- HomalgRing: internal and external rings
- HomalgRingMap: ring maps
- HomalgMatrix: internal and external matrices
- HomalgRingRelations: a set of ring relations
#### F.2 The Low Level Algorithms
In the following, CAS or CASystem means computer algebra system.
Filename (.gd/.gi) and content:
- Tools: the elementary matrix operations that can be overwritten using the homalgTable (and hence delegable even to other CASystems)
- Service: the three operations: basis, reduction, and syzygies; they can also be overwritten using the homalgTable (and hence delegable even to other CASystems)
- Basic: higher level operations for matrices (cannot be overwritten using the homalgTable)
#### F.3 Logical Implications for MatricesForHomalg Objects
Filename (.gd/.gi) and content:
- LIRNG: logical implications for rings
- LIMAP: logical implications for ring maps
- LIMAT: logical implications for matrices
- COLEM: clever operations for lazy evaluated matrices
#### F.4 The subpackage ResidueClassRingForHomalg
Filename (.gd/.gi) and content:
- ResidueClassRingForHomalg: some global variables
- ResidueClassRing: residue class rings, their elements, and matrices, together with their constructors and operations
- ResidueClassRingTools: the elementary matrix operations for matrices over residue class rings
- ResidueClassRingBasic: the three operations: basis, reduction, and syzygies for matrices over residue class rings
#### F.5 The homalgTable for GAP4 built-in rings
For the purposes of homalg, the ring of integers is, at least up till now, the only ring which is properly supported in GAP4. The GAP4 built-in capabilities for polynomial rings (also univariate) and group rings do not satisfy the minimum requirements of homalg. The GAP4 package Gauss enables GAP to fulfill the homalg requirements for prime fields, and ℤ / p^n.
Filename (.gi) and content:
- Integers: the homalgTable for the ring of integers
Introduction¶
This document is intended to be a core reference guide to the formats, naming convention and data quality flags used by the reference files for pipeline steps requiring them, and is not intended to be a detailed description of each of those pipeline steps. It also does not give details on pipeline steps that do not use reference files. The present manual is referred to by several other documentation pages, such as the JWST pipeline and JDocs.
Reference File Naming Convention¶
Before reference files are ingested into CRDS, they are renamed following a convention used by the pipeline. As with any other changes undergone by the reference files, the previous names are kept in header keywords, so the Instrument Teams can easily track which delivered file is being used by the pipeline in each step.
The naming of reference files uses the following syntax:
jwst_<instrument>_<reftype>_<version>.<extension>
where
• instrument is one of “fgs”, “miri”, “nircam”, “niriss”, and “nirspec”
• reftype is one of the type names listed in the table below
• version is a 4-digit version number (e.g. 0042)
• extension gives the file format, such as “fits” or “asdf”
An example NIRCam GAIN reference file name would be “jwst_nircam_gain_0042.fits”.
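As a tiny illustration of the pattern, a hypothetical Python helper (purely illustrative, not part of the JWST tooling):

```python
def reference_file_name(instrument, reftype, version, extension="fits"):
    """Hypothetical helper: build a name like 'jwst_nircam_gain_0042.fits'."""
    return f"jwst_{instrument}_{reftype}_{version:04d}.{extension}"

print(reference_file_name("nircam", "gain", 42))  # -> jwst_nircam_gain_0042.fits
```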
The HISTORY header keyword of each reference file includes details on specific processing undergone by the files before being ingested in CRDS.
Reference File Types¶
Most reference files have a one-to-one relationship with calibration steps, i.e. there is one step that uses one type of reference file. Some steps, however, use several types of reference files, and some reference file types are used by more than one step. The table below shows the correspondence between pipeline steps and reference file types, ordered by pipeline step.
Pipeline Step
Reference File Type (REFTYPE)
align_refs
ami_analyze
THROUGHPUT
assign_wcs
CAMERA
COLLIMATOR
DISPERSER
DISTORTION
FILTEROFFSET
FORE
FPA
IFUFORE
IFUPOST
IFUSLICER
MSA
OTE
SPECWCS
REGIONS
WAVELENGTHRANGE
background
WFSSBKG
WAVELENGTHRANGE
cube_build
CUBEPAR
RESOL
dark_current
DARK
dq_init
extract_1d
EXTRACT1D
APCORR
extract_2d
WAVECORR
WAVELENGTHRANGE
flatfield
FLAT
DFLAT
FFLAT
SFLAT
fringe
FRINGE
gain_scale
GAIN
ipc
IPC
jump
GAIN
linearity
LINEARITY
msaflagopen
MSAOPER
pathloss
PATHLOSS
persistence
PERSAT
TRAPDENSITY
TRAPPARS
photom
PHOTOM
AREA
ramp_fitting
GAIN
refpix
REFPIX
reset
RESET
rscd
RSCD
saturation
SATURATION
source_catalog
APCORR
ABVEGAOFFSET
straylight
REGIONS
superbias
SUPERBIAS
tso_photometry
TSOPHOT
Step Parameters Reference Types¶
When each Step is instantiated, a CRDS look-up, based on the Step class name and input data, is made to retrieve a configuration file. The reftype for such configuration files is pars-<class name>. For example, for the step jwst.persistence.PersistenceStep, the reftype would be pars-persistencestep.
Standard Required Keywords¶
At present, most JWST science and reference files are FITS files with image or table extensions. The FITS primary data unit is always empty. The primary header contains all keywords not specific to individual extensions. Keywords specific to a particular extension are contained in the header of that extension.
The required Keywords Documenting Contents of Reference Files are:
Keyword (sample value): Comment

- REFTYPE (WFSSBKG): Required values are listed in the discussion of each pipeline step.
- DESCRIP: Summary of file content and/or reason for delivery
- AUTHOR (Fred Jones): Person(s) who created the file
- USEAFTER (YYYY-MM-DDThh:mm:ss): Date and time after which the reference file will be used. The T is required. The time string may NOT be omitted; use T00:00:00 if no meaningful value is available.
- PEDIGREE: Options are 'SIMULATION', 'GROUND', 'DUMMY', 'INFLIGHT YYYY-MM-DD YYYY-MM-DD'
- HISTORY: Description of reference file creation
- HISTORY: DOCUMENT: Name of document describing the strategy and algorithms used to create the file.
- HISTORY: SOFTWARE: Description, version number, location of software used to create the file.
- HISTORY: DATA USED: Data used to create the file
- HISTORY: DIFFERENCES: How is this version different from the one that it replaces?
- HISTORY: If your text spills over to the next line, begin it with another HISTORY keyword, as in this example.
- TELESCOP (JWST): Name of the telescope/project.
- INSTRUME (FGS): Instrument name. Allowed values: FGS, NIRCAM, NIRISS, NIRSPEC, MIRI
- SUBARRAY: FULL, GENERIC, SUBS200A1, ... (XXX abstract technical description of SUBARRAY)
- SUBSTRT1 (1): Starting pixel index along axis 1 (1-indexed)
- SUBSIZE1 (2048): Size of subarray along axis 1
- SUBSTRT2 (1): Starting pixel index along axis 2 (1-indexed)
- SUBSIZE2 (2048): Size of subarray along axis 2
- FASTAXIS (1): Fast readout direction relative to image axes for Amplifier #1 (1 = +x axis, 2 = +y axis, -1 = -x axis, -2 = -y axis). SEE NOTE BELOW.
- SLOWAXIS (2): Slow readout direction relative to image axes for all amplifiers (1 = +x axis, 2 = +y axis, -1 = -x axis, -2 = -y axis)
Observing Mode Keywords¶
A pipeline module may require separate reference files for each instrument, detector, filter, observation date, etc. The values of these parameters must be included in the reference file header. The observing-mode keyword values are vital to the process of ingesting reference files into CRDS, as they are used to establish the mapping between observing modes and specific reference files. Some observing-mode keywords are also used in the pipeline processing steps. If an observing-mode keyword is irrelevant to a particular observing mode (such as GRATING for the MIRI imager mode or the NIRCam and NIRISS instruments), then it may be omitted from the file header.
The Keywords Documenting the Observing Mode are:
Keyword (sample value): Comment

- PUPIL (NRM): Pupil wheel element. Required only for NIRCam and NIRISS.
  NIRCam allowed values: CLEAR, F162M, F164N, F323N, F405N, F466N, F470N, GRISMV2, GRISMV3
  NIRISS allowed values: CLEARP, F090W, F115W, F140M, F150W, F158M, F200W, GR700XD, NRM
- FILTER (F2100W): Filter wheel element. Allowed values: too many to list here
- GRATING (G395M): Required only for NIRSpec.
  NIRSpec allowed values: G140M, G235M, G395M, G140H, G235H, G395H, PRISM, MIRROR
- EXP_TYPE (MIR_MRS): Exposure type.
  FGS allowed values: FGS_IMAGE, FGS_FOCUS, FGS_SKYFLAT, FGS_INTFLAT, FGS_DARK
  MIRI allowed values: MIR_IMAGE, MIR_TACQ, MIR_LYOT, MIR_4QPM, MIR_LRS-FIXEDSLIT, MIR_LRS-SLITLESS, MIR_MRS, MIR_DARK, MIR_FLATIMAGE, MIR_FLATMRS, MIR_CORONCAL
  NIRCam allowed values: NRC_IMAGE, NRC_GRISM, NRC_TACQ, NRC_TACONFIRM, NRC_CORON, NRC_TSIMAGE, NRC_TSGRISM, NRC_FOCUS, NRC_DARK, NRC_FLAT, NRC_LED
  NIRISS allowed values: NIS_IMAGE, NIS_TACQ, NIS_TACONFIRM, NIS_WFSS, NIS_SOSS, NIS_AMI, NIS_FOCUS, NIS_DARK, NIS_LAMP
  NIRSpec allowed values: NRS_TASLIT, NRS_TACQ, NRS_TACONFIRM, NRS_CONFIRM, NRS_FIXEDSLIT, NRS_AUTOWAVE, NRS_IFU, NRS_MSASPEC, NRS_AUTOFLAT, NRS_IMAGE, NRS_FOCUS, NRS_DARK, NRS_LAMP, NRS_BOTA, NRS_BRIGHTOBJ, NRS_MIMF
- DETECTOR (MIRIFULONG): Allowed values:
  FGS: GUIDER1, GUIDER2
  NIRISS: NIS
  NIRCam: NRCA1, NRCA2, NRCA3, NRCA4, NRCB1, NRCB2, NRCB3, NRCB4, NRCALONG, NRCBLONG
  NIRSpec: NRS1, NRS2
  MIRI: MIRIFULONG, MIRIFUSHORT, MIRIMAGE
- CHANNEL (12 for MIRI, SHORT for NIRCam): MIRI MRS (IFU) channel. Allowed values: 1, 2, 3, 4, 12, 34. NIRCam channel. Allowed values: SHORT, LONG
- BAND (MEDIUM): IFU band. Required only for MIRI. Allowed values are SHORT, MEDIUM, LONG, and N/A, as well as any allowable combination of two values (SHORT-MEDIUM, LONG-SHORT, etc.). (Also used as a header keyword for selection of all MIRI Flat files, Imager included.)
- READPATT (FAST): Name of the readout pattern used for the exposure. Each pattern represents a particular combination of parameters like nframes and groups. For MIRI, FAST and SLOW refer to the rate at which the detector is read.
  MIRI allowed values: SLOW, FAST, FASTGRPAVG, FASTINTAVG
  NIRCam allowed values: DEEP8, DEEP2, MEDIUM8, MEDIUM2, SHALLOW4, SHALLOW2, BRIGHT2, BRIGHT1, RAPID
  NIRSpec allowed values: NRSRAPID, NRS, NRSN16R4, NRSIRS2RAPID
  NIRISS allowed values: NIS, NISRAPID
  FGS allowed values: ID, ACQ1, ACQ2, TRACK, FINEGUIDE, FGS60, FGS840, FGS7850, FGSRAPID, FGS
- NRS_NORM (16): Required only for NIRSpec.
- NRS_REF (4): Required only for NIRSpec.
- P_XXXXXX: Pattern keywords used by CRDS for JWST to describe the intended uses of a reference file using or'ed combinations of values. Only a subset of P_pattern keywords is supported.
Note: For the NIR detectors, the fast readout direction changes sign from one amplifier to the next. It is +1, -1, +1, and -1, for amps 1, 2, 3, and 4, respectively. The keyword FASTAXIS refers specifically to amp 1. That way, it is entirely correct for single-amp readouts and correct at the origin for 4-amp readouts. For MIRI, FASTAXIS is always +1.
Tracking Pipeline Progress¶
As each pipeline step is applied to a science data product, it will record a status indicator in a header keyword of the science data product. The current list of step status keyword names is given in the following table. These status keywords may be included in the primary header of reference files, in order to maintain a history of the data that went into creating the reference file. Allowed values for the status keywords are ‘COMPLETE’ and ‘SKIPPED’. Absence of a particular keyword is understood to mean that step was not even attempted.
Table 1. Keywords Documenting Which Pipeline Steps Have Been Performed.
- S_AMIANA: AMI fringe analysis
- S_AMIAVG: AMI fringe averaging
- S_AMINOR: AMI fringe normalization
- S_BARSHA: Bar shadow correction
- S_BKDSUB: Background subtraction
- S_COMB1D: 1-D spectral combination
- S_DARK: Dark subtraction
- S_DQINIT: DQ initialization
- S_ERRINI: ERR initialization
- S_EXTR1D: 1-D spectral extraction
- S_EXTR2D: 2-D spectral extraction
- S_FLAT: Flat field correction
- S_FRINGE: Fringe correction
- S_FRSTFR: MIRI first frame correction
- S_GANSCL: Gain scale correction
- S_GRPSCL: Group scale correction
- S_GUICDS: Guide mode CDS computation
- S_IFUCUB: IFU cube creation
- S_IMPRNT: NIRSpec MSA imprint subtraction
- S_IPC: IPC correction
- S_JUMP: Jump detection
- S_KLIP: Coronagraphic PSF subtraction
- S_LASTFR: MIRI last frame correction
- S_LINEAR: Linearity correction
- S_MRSMAT: MIRI MRS background matching
- S_MSAFLG: NIRSpec MSA failed shutter flagging
- S_OUTLIR: Outlier detection
- S_PERSIS: Persistence correction
- S_PHOTOM: Photometric (absolute flux) calibration
- S_PSFALI: Coronagraphic PSF alignment
- S_PSFSTK: Coronagraphic PSF stacking
- S_PTHLOS: Pathloss correction
- S_RAMP: Ramp fitting
- S_REFPIX: Reference pixel correction
- S_RESAMP: Resampling (drizzling)
- S_RESET: MIRI reset correction
- S_RSCD: MIRI RSCD correction
- S_SATURA: Saturation check
- S_SKYMAT: Sky matching
- S_SRCCAT: Source catalog creation
- S_SRCTYP: Source type determination
- S_STRAY: Straylight correction
- S_SUPERB: Superbias subtraction
- S_TELEMI: Telescope emission correction
- S_TSPHOT: TSO imaging photometry
- S_TWKREG: Tweakreg image alignment
- S_WCS: WCS assignment
- S_WFSCOM: Wavefront sensing image combination
- S_WHTLIT: TSO white-light curve generation
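For example, these status keywords can be inspected with astropy (a hedged sketch; the file name and the subset of keywords checked are illustrative):

```python
from astropy.io import fits

header = fits.getheader("jwst_example_cal.fits")  # hypothetical file name
for key in ("S_DARK", "S_LINEAR", "S_JUMP", "S_RAMP"):
    # an absent keyword means the step was not even attempted
    print(key, header.get(key, "NOT ATTEMPTED"))
```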
Orientation of Detector Image¶
All steps in the pipeline assume the data are in the DMS (science) orientation, not the native readout orientation. The pipeline does NOT check or correct for the orientation of the reference data. It assumes that all files ingested into CRDS have been put into the science orientation. All header keywords documenting the observing mode (Table 2) should likewise be transformed into the DMS orientation. For square data array dimensions it’s not possible to infer the actual orientation directly so reference file authors must manage orientation carefully.
Table 2. Correct values for FASTAXIS and SLOWAXIS for each detector.
DETECTOR      FASTAXIS  SLOWAXIS
MIRIMAGE          1         2
MIRIFULONG        1         2
MIRIFUSHORT       1         2
NRCA1            -1         2
NRCA2             1        -2
NRCA3            -1         2
NRCA4             1        -2
NRCALONG         -1         2
NRCB1             1        -2
NRCB2            -1         2
NRCB3             1        -2
NRCB4            -1         2
NRCBLONG          1        -2
NRS1              2         1
NRS2             -2        -1
NIS              -2        -1
GUIDER1          -2        -1
GUIDER2           2        -1
Differing values for these keywords will be taken as an indicator that neither the keyword value nor the array orientation are correct.
P_pattern keywords¶
P_ pattern keywords are used by CRDS for JWST to describe the intended uses of a reference file using or'ed combinations of values.
For example, if the same NIRISS SUPERBIAS should be used for READPATT=NIS or for READPATT=NISRAPID, the definition of READPATT in the calibration s/w datamodels schema does not allow it. READPATT can specify one or the other but not both.
To support expressing combinations of values, CRDS and the CAL s/w have added “pattern keywords” which nominally begin with P_ followed by the ordinary keyword, truncated as needed to 8 characters. In this case, P_READPA corresponds to READPATT.
Pattern keywords override the corresponding ordinary keyword for the purposes of automatically updating CRDS rmaps. Pattern keywords describe intended use.
In this example, the pattern keyword:
P_READPA = NIS | NISRAPID |
can be used to specify the intent “use for NIS or for NISRAPID”.
Only or-ed combinations of the values used in ordinary keywords are valid for pattern keywords.
Patterns appear in a slightly different form in rmaps than they do in P_ keywords. The value of a P_ keyword always ends with a trailing or-bar. In rmaps, no trailing or-bar is used, so the equivalent of the above in an rmap is:
‘NIS|NISRAPID’
From a CRDS perspective, the P_ pattern keywords and their corresponding datamodels paths currently supported can be found in the JWST Pattern Keywords section of the CRDS documentation.
Currently all P_ keywords correspond to basic keywords found only in the primary headers of reference files and are typically only valid for FITS format.
The translation from these P_ pattern keywords is completely generic in CRDS and can apply to any reference file type, so they should be assumed to be reserved whether a particular type uses them or not. Defining non-pattern keywords with the prefix P_ is strongly discouraged.
Data Quality Flags¶
Within science data files, the PIXELDQ flags are stored as 32-bit integers; the GROUPDQ flags are 8-bit integers. The meaning of each bit is specified in a separate binary table extension called DQ_DEF. The binary table has the format presented in Table 3, which represents the master list of DQ flags. Only the first eight entries in the table below are relevant to the GROUPDQ array. All calibrated data from a particular instrument and observing mode have the same set of DQ flags in the same (bit) order. For Build 7, this master list will be used to impose this uniformity. We may eventually use different master lists for different instruments or observing modes.
Within reference files for some steps, the Data Quality arrays for some steps are stored as 8-bit integers to conserve memory. Only the flags actually used by a reference file are included in its DQ array. The meaning of each bit in the DQ array is stored in the DQ_DEF extension, which is a binary table having the following fields: Bit, Value, Name, and Description.
Table 3. Flags for the DQ, PIXELDQ, and GROUPDQ Arrays (Format of DQ_DEF Extension).
Bit  Value       Name              Description
0    1           DO_NOT_USE        Bad pixel. Do not use.
1    2           SATURATED         Pixel saturated during exposure
2    4           JUMP_DET          Jump detected during exposure
3    8           DROPOUT           Data lost in transmission
4    16          OUTLIER           Flagged by outlier detection
5    32          RESERVED
6    64          RESERVED
7    128         RESERVED
8    256         UNRELIABLE_ERROR  Uncertainty exceeds quoted error
9    512         NON_SCIENCE       Pixel not on science portion of detector
10   1024        DEAD              Dead pixel
11   2048        HOT               Hot pixel
12   4096        WARM              Warm pixel
13   8192        LOW_QE            Low quantum efficiency
14   16384       RC                RC pixel
15   32768       TELEGRAPH         Telegraph pixel
16   65536       NONLINEAR         Pixel highly nonlinear
17   131072      BAD_REF_PIXEL     Reference pixel cannot be used
18   262144      NO_FLAT_FIELD     Flat field cannot be measured
19   524288      NO_GAIN_VALUE     Gain cannot be measured
20   1048576     NO_LIN_CORR       Linearity correction not available
21   2097152     NO_SAT_CHECK      Saturation check not available
22   4194304     UNRELIABLE_BIAS   Bias variance large
23   8388608     UNRELIABLE_DARK   Dark variance large
24   16777216    UNRELIABLE_SLOPE  Slope variance large (i.e., noisy pixel)
25   33554432    UNRELIABLE_FLAT   Flat variance large
26   67108864    OPEN              Open pixel (counts move to adjacent pixels)
27   134217728   ADJ_OPEN          Adjacent to open pixel
28   268435456   UNRELIABLE_RESET  Sensitive to reset anomaly
29   536870912   MSA_FAILED_OPEN   Pixel sees light from failed-open shutter
30   1073741824  OTHER_BAD_PIXEL   A catch-all flag
31   2147483648  REFERENCE_PIXEL   Pixel is a reference pixel
Note: Words like “highly” and “large” will be defined by each instrument team. They are likely to vary from one detector to another – or even from one observing mode to another.
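To make the bit arithmetic concrete, here is a small illustrative Python sketch (the flag dictionary just transcribes a few rows of Table 3; this is not the pipeline's own implementation):

```python
# A few (value, name) pairs transcribed from Table 3
DQ_FLAGS = {1: "DO_NOT_USE", 2: "SATURATED", 4: "JUMP_DET",
            2048: "HOT", 65536: "NONLINEAR"}

def decode_dq(value):
    """Return the names of all flag bits set in a composite DQ value."""
    return [name for bit, name in DQ_FLAGS.items() if value & bit]

print(decode_dq(2051))  # 2048 + 2 + 1 -> ['DO_NOT_USE', 'SATURATED', 'HOT']
```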
Parameter Specification¶
There are a number of steps, such as OutlierDetectionStep or SkyMatchStep, that define what data quality flags a pixel is allowed to have to be considered in calculations. Such parameters can be set in a number of ways.
First, the flag can be defined as the integer sum of all the DQ bit values from the input images DQ arrays that should be considered "good". For example, if pixels in the DQ array can have combinations of 1, 2, 4, and 8 and one wants to consider DQ flags 2 and 4 as being acceptable for computations, then the parameter value should be set to "6" (2+4). In this case a pixel having DQ values 2, 4, or 6 will be considered a good pixel, while a pixel with a DQ value of, e.g., 1+2=3 or 4+8=12 will be flagged as a "bad" pixel.
Alternatively, one can enter a comma-separated or ‘+’ separated list of integer bit flags that should be summed to obtain the final “good” bits. For example, both “4,8” and “4+8” are equivalent to a setting of “12”.
Finally, instead of integers, the JWST mnemonics, as defined above, may be used. For example, all the following specifications are equivalent:
"12" == "4+8" == "4, 8" == "JUMP_DET, DROPOUT"
Note
• The default value (0) will make all non-zero pixels in the DQ mask be considered “bad” pixels and the corresponding pixels will not be used in computations.
• Setting to None will turn off the use of the DQ array for computations.
• In order to reverse the meaning of the flags from indicating values of the "good" DQ flags to indicating the "bad" DQ flags, prepend '~' to the string value. For example, in order to exclude pixels with DQ flags 4 and 8 for computations and to consider as "good" all other pixels (regardless of their DQ flag), use a value of ~4+8, or ~4,8. A string value of ~0 would be equivalent to a setting of None.
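As an illustration of how such a "good bits" value translates into a pixel mask, a minimal NumPy sketch (assuming integer DQ arrays; this is not the actual pipeline code):

```python
import numpy as np

good_bits = 4 + 8                       # "12" == "4+8" == "JUMP_DET, DROPOUT"
dq = np.array([0, 2, 4, 8, 12, 3])      # sample DQ values

# A pixel is good if every bit set in its DQ value is among good_bits
good = (dq & ~good_bits) == 0
print(good)   # [ True False  True  True  True False]
```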
# Factor by difference of squares Calculator
## Get detailed solutions to your math problems with our Factor by difference of squares step-by-step calculator. Practice your math skills and learn step by step with our math solver. Check out all of our online calculators here!
1
Solved example of factor by difference of squares
$\int\frac{4\sin\left(x\right)}{4\cos\left(x\right)^2-16}dx$
2
Taking the constant out of the integral
$4\int\frac{\sin\left(x\right)}{4\cos\left(x\right)^2-16}dx$
3
Solve the integral $\int\frac{\sin\left(x\right)}{4\cos\left(x\right)^2-16}dx$ applying u-substitution. Let $u$ and $du$ be
$\begin{matrix}u=\cos\left(x\right) \\ du=-\sin\left(x\right)dx\end{matrix}$
4
Isolate $dx$ in the previous equation
$\frac{du}{-\sin\left(x\right)}=dx$
5
Substituting $u$ and $dx$ in the integral and simplify
$4\int\frac{1}{-4u^2+16}du$
6
Factor the integral's denominator by $-4$
$4\int\frac{1}{-4\left(-4+u^2\right)}du$
7
Take the constant out of the integral
$4-\frac{1}{4}\int\frac{1}{-4+u^2}du$
8
Multiply $4$ times $-\frac{1}{4}$
$-\int\frac{1}{-4+u^2}du$
9
Rewrite $-4+u^2$ as $-\left(2+u\right)\left(2-u\right)$; this minus sign cancels the one in front of the integral
$\int\frac{1}{\left(2+u\right)\left(2-u\right)}du$
10
Rewrite the fraction $\frac{1}{\left(2+u\right)\left(2-u\right)}$ as $2$ simpler fractions using partial fraction decomposition
$\frac{1}{\left(2+u\right)\left(2-u\right)}=\frac{A}{2+u}+\frac{B}{2-u}$
11
Find the values of the unknown coefficients. The first step is to multiply both sides of the equation by $\left(2+u\right)\left(2-u\right)$
$1=\left(2+u\right)\left(2-u\right)\left(\frac{A}{2+u}+\frac{B}{2-u}\right)$
12
Multiplying polynomials
$1=\frac{A\left(2+u\right)\left(2-u\right)}{2+u}+\frac{B\left(2+u\right)\left(2-u\right)}{2-u}$
13
Simplifying
$1=A\left(2-u\right)+B\left(2+u\right)$
14
Expand the polynomial
$1=2A-Au+2B+Bu$
15
Assigning values to $u$ we obtain the following system of equations
$\begin{matrix}1=4A&\:\:\:\:\:\:\:(u=-2) \\ 1=4B&\:\:\:\:\:\:\:(u=2)\end{matrix}$
16
Proceed to solve the system of linear equations
$\begin{matrix}4A & + & 0B & =1 \\ 0A & + & 4B & =1\end{matrix}$
17
Rewrite as a coefficient matrix
$\left(\begin{matrix}4 & 0 & 1 \\ 0 & 4 & 1\end{matrix}\right)$
18
Reducing the original matrix to a identity matrix using Gaussian Elimination
$\left(\begin{matrix}1 & 0 & \frac{1}{4} \\ 0 & 1 & \frac{1}{4}\end{matrix}\right)$
19
The integral of $\frac{1}{\left(2+u\right)\left(2-u\right)}$ in decomposed fraction equals
$\int\left(\frac{\frac{1}{4}}{2+u}+\frac{\frac{1}{4}}{2-u}\right)du$
20
The integral of the sum of two or more functions is equal to the sum of their integrals
$\int\frac{\frac{1}{4}}{2+u}du+\int\frac{\frac{1}{4}}{2-u}du$
21
Apply the formula: $\int\frac{n}{ax+b}dx=\frac{n}{a}\ln\left|ax+b\right|$, where $a=-1$, $b=2$, $x=u$ and $n=\frac{1}{4}$
$\int\frac{\frac{1}{4}}{2+u}du-\frac{1}{4}\ln\left|-u+2\right|$
22
Substitute $u$ back for its value, $\cos\left(x\right)$
$\int\frac{\frac{1}{4}}{2+u}du-\frac{1}{4}\ln\left|-\cos\left(x\right)+2\right|$
23
Apply the formula: $\int\frac{n}{x+b}dx=n\ln\left|x+b\right|$, where $b=2$, $x=u$ and $n=\frac{1}{4}$
$\frac{1}{4}\ln\left|u+2\right|-\frac{1}{4}\ln\left|-\cos\left(x\right)+2\right|$
24
Substitute $u$ back for its value, $\cos\left(x\right)$
$\frac{1}{4}\ln\left|\cos\left(x\right)+2\right|-\frac{1}{4}\ln\left|-\cos\left(x\right)+2\right|$
25
As the integral that we are solving is an indefinite integral, when we finish we must add the constant of integration
$\frac{1}{4}\ln\left|\cos\left(x\right)+2\right|-\frac{1}{4}\ln\left|-\cos\left(x\right)+2\right|+C_0$
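The result can be verified by differentiation; for example, a quick SymPy check (illustrative, not part of the original solution):

```python
import sympy as sp

x = sp.symbols('x')
integrand = 4*sp.sin(x) / (4*sp.cos(x)**2 - 16)
F = sp.Rational(1, 4)*sp.log(sp.cos(x) + 2) - sp.Rational(1, 4)*sp.log(2 - sp.cos(x))

# Should print 0 if F is an antiderivative of the integrand
print(sp.simplify(sp.diff(F, x) - integrand))
```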
# How to interpret probability of a nonrepeating event?
I'm wondering about the meaning of ascribing probabilities to the outcomes of nonrepeating events. As a concrete example, here in the UK pollsters are currently predicting the result of the referendum on the UK's membership in the European Union. They say things such as "Britain will stay in the EU with 70% probability", but I just don't understand the intuition behind this statement. If we had a referendum every week, and if we could assume that the individual outcomes were independent, then this would mean that, in about 70% of the cases, people would vote to stay. In reality, however, we will actually have just one referendum, so there is no way to talk about 70% of the outcomes. Thus, the number "70%" seems completely arbitrary to me and it doesn't seem to carry much more information than, say, 42!
I assume that my question is related to What does actually probability mean? and the relationship between the frequentist and Bayesian interpretation of probability. I've read the Wikipedia page mentioned in that article, and I think I understand the frequentist interpretation and find it intuitive; however, it seems to me that this interpretation relies on events repeating. Moreover, I just can't get my head around the Bayesian interpretation. So I would really appreciate it if someone could clearly explain the following points:
• Does it at all make sense to talk of probability of outcomes of one-off events? Maybe people just use the word "probability" without really paying attention to its formal meaning?
• Am I right in thinking that the frequentist interpretation requires repeating events?
• Could someone please explain or point me to a nice paper/book explaining the intuition behind the Bayesian probability?
Please note that I'm not asking which interpretation is right; I'd like to understand the merits of either interpretation independently. Many thanks in advance!
• Minor observation: Take 10 balls, color 3 green and 7 red, place them in a box, shake, and take one out. If Britain based its decision on choosing a red ball, you would agree (perhaps) on the number $70\%$, even though this event is non-repeating. Of course, in principle, we could repeat the experiment over and over. – Michael Jun 15 '16 at 21:20
• The actual statement about Britain in the news might be based on random samplings of voter opinions from a small sample of the population and/or intuitions gained based on observations of similar historical events. – Michael Jun 15 '16 at 21:22
• I understand the second point. Thank you very much for the first comment: I'm finding it really illustrative! So is the interpretation of probability you're suggesting more frequentist or more Bayesian, and should I even care to understand the difference (in this specific care)? – Boris Jun 15 '16 at 21:32
• I do not know enough about the "frequentist versus Bayesian" camps to label myself either way. Do we need to have a 2-party system here? Perhaps we should introduce a "Green party." – Michael Jun 15 '16 at 21:51
There are a number of links you can Google, and most texts in probability discuss this. I remember a nice discussion in this direction in the intro of Bob Gallager's book Discrete Stochastic Processes. The main points I think are:
1) Probability does not need to have repeatable experiments. In the mathematical world, we just need to define an abstract set of possible outcomes and a corresponding probability measure that satisfies certain basic axioms.
2) To relate probability theory to the real world, the probability measures we use might be based on "relative frequency" concepts and real-world observations about repeated experiments. This is not required for the theory, but helps to explain why theoretical results are often useful in the real world.
3) The "Law of Large Numbers" is an interesting bridge between probability theory and relative frequency intuition. Within the theory itself, it is possible to define a notion of "independent and identically distributed experiments." The Law of Large Numbers says the time average over repetitions of independent and identically distributed experiments does indeed converge to the a-priori success probability of one such experiment.
• Thank you, I'll check the book out you're suggesting. – Boris Jun 15 '16 at 21:42
Unless you want to invoke quantum mechanics, the actual probability of a Yes vote is either $0$ or $1$, we just don't happen to know yet which it is. What is being asserted as being $70\%$ may actually be a "subjective probability". Based on whatever evidence they may have gathered, the pollsters might judge that a bet on the outcome at odds 70 to 30 would be a fair bet. This might involve a Bayesian analysis, or it might just be a number plucked out of the air.
• Thank you, I always had the impression that such numbers are somewhat arbitrary. – Boris Jun 15 '16 at 22:33
• What is meant by "the actual probability of a Yes vote" and why should it be either 0 or 1? – Michael Jun 15 '16 at 23:24
• I mean, according to a deterministic view of the universe, either it will happen or it won't. We'll know which when the votes are counted. In principle (according to classical physics) if we could measure precisely enough the positions and velocities of all particles on Earth (or close enough to affect us by voting day) and integrate their equations of motion with a powerful enough computer, we could predict with certainty what the result would be. – Robert Israel Jun 16 '16 at 0:23
• Now quantum mechanics actually says that outcomes are not completely determined: you could say the world's wave function on voting day is determined by the wave function now, but that may not be an eigenfunction of the observable "result of the vote", so the probability of a "Yes" outcome would not be $0$ or $1$. But this quantum mechanical probability, in an uncontrolled macroscopic situation such as ours, is practically impossible to measure, and certainly is not accessible to the pollsters. – Robert Israel Jun 16 '16 at 0:33
• Thanks. With that perspective, my guess is that a deterministic approximation of all particles in the universe would only be accurate for a fraction of a second, not long enough to predict an election. So the quantum mechanics, and any other as-yet-unknown factors (including free will?) would be non-negligible. – Michael Jun 16 '16 at 23:35
# The Shortest Wavelength Of The He⁺ Ion In The Balmer Series Is X; The Longest Wavelength In The Paschen Series Of Li²⁺ Is:
The shortest wavelength of the He⁺ ion in the Balmer series is x. The series limit corresponds to $$n_{1} = 2$$, $$n_{2} = \infty$$, with $$Z = 2$$:
$$\frac{1}{\lambda_{He}} = R_{H}\,2^{2}\left(\frac{1}{2^{2}} - \frac{1}{\infty^{2}}\right) = R_{H}\cdot 4\cdot\frac{1}{4} = R_{H}$$ $$\Rightarrow \lambda_{He} = \frac{1}{R_{H}} = x$$
The longest wavelength of the Li²⁺ ion in the Paschen series corresponds to the $$n = 4 \to 3$$ transition, with $$Z = 3$$:
$$\frac{1}{\lambda_{Li}} = R_{H}\,3^{2}\left(\frac{1}{3^{2}} - \frac{1}{4^{2}}\right) = R_{H}\cdot 9\cdot\frac{7}{144} = \frac{7R_{H}}{16}$$ $$\Rightarrow \lambda_{Li} = \frac{16}{7R_{H}} = \frac{16}{7}x$$
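The same arithmetic in a short illustrative Python sketch (with $R_H$ set to 1, so wavelengths come out in units of $x = 1/R_H$):

```python
def inv_wavelength(Z, n1, n2, R=1.0):
    """Rydberg formula: 1/lambda = R * Z^2 * (1/n1^2 - 1/n2^2)."""
    return R * Z**2 * (1/n1**2 - 1/n2**2)

lam_He = 1 / inv_wavelength(2, 2, float('inf'))  # shortest Balmer line of He+
lam_Li = 1 / inv_wavelength(3, 3, 4)             # longest Paschen line of Li2+
print(lam_He, lam_Li, lam_Li / lam_He)           # 1.0, 16/7 ~ 2.2857, 16/7
```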
Mat. Sb., contents of the issue:
- Systems of random variables equivalent in distribution to the Rademacher system and $\mathscr K$-closed representability of Banach couples (S. V. Astashkin), 3
- On a criterion for the topological conjugacy of a quasisymmetric group to a group of affine transformations of $\mathbb R$ (L. A. Beklaryan), 31
- On the Dirichlet problem for the Helmholtz equation on the plane with boundary conditions on an almost closed curve (R. R. Gadyl'shin), 43
- On the problem of control synthesis: the Pontryagin alternating integral and the Hamilton–Jacobi equation (A. B. Kurzhanskii, N. B. Melnikov), 69
- Birationally rigid Fano double hypersurfaces (A. V. Pukhlikov), 101
- An almost Spechtian variety of alternative algebras over a field of characteristic 3 (S. V. Pchelintsev), 127
- On the problem of the description of sequences of best rational trigonometric approximations (A. P. Starovoitov), 145
- Double canal hypersurfaces in the Euclidean space $E^n$ (M. A. Cheshkova), 155
## College Algebra 7th Edition
a.$\quad$ x and y
b.$\quad$ dependent
c.$\quad$ $x=3+z,\ y=5-2z,\ z=$ any real number
a. Leading variables correspond to leading entries in the reduced row-echelon form. x and y have a leading 1 in their corresponding columns; z does not. Answer: x and y.
b. Since the last row represents the equation 0=0, which is always true, the system has solutions. Since not all variables are leading, it is dependent. Answer: dependent.
c. Taking z to be any real number (since it has no corresponding leading entry), we back-substitute to find x and y: $x-z=3\quad\Rightarrow\quad x=3+z$ and $y+2z=5\quad\Rightarrow\quad y=5-2z$. Answer: $x=3+z,\ y=5-2z,\ z=$ any real number.
Lemma 27.23.3. Let $X$ be a scheme. There exists a cardinal $\kappa$ such that every quasi-coherent module $\mathcal{F}$ is the directed colimit of its $\kappa$-generated quasi-coherent subsheaves.
Proof. Choose an affine open covering $X = \bigcup _{i \in I} U_ i$. For each pair $i, j$ choose an affine open covering $U_ i \cap U_ j = \bigcup _{k \in I_{ij}} U_{ijk}$. Write $U_ i = \mathop{\mathrm{Spec}}(A_ i)$ and $U_{ijk} = \mathop{\mathrm{Spec}}(A_{ijk})$. Let $\kappa$ be any infinite cardinal $\geq$ the cardinality of any of the sets $I$, $I_{ij}$.
Let $\mathcal{F}$ be a quasi-coherent sheaf. Set $M_ i = \mathcal{F}(U_ i)$ and $M_{ijk} = \mathcal{F}(U_{ijk})$. Note that
$M_ i \otimes _{A_ i} A_{ijk} = M_{ijk} = M_ j \otimes _{A_ j} A_{ijk}.$
(see Schemes, Lemma 25.7.3). Using the axiom of choice we choose a map
$(i, j, k, m) \mapsto S(i, j, k, m)$
which associates to every $i, j \in I$, $k \in I_{ij}$ and $m \in M_ i$ a finite subset $S(i, j, k, m) \subset M_ j$ such that we have
$m \otimes 1 = \sum \nolimits _{m' \in S(i, j, k, m)} m' \otimes a_{m'}$
in $M_{ijk}$ for some $a_{m'} \in A_{ijk}$. Moreover, let's agree that $S(i, i, k, m) = \{ m\}$ for all $i$, $k$, and $m$ as above (that is, when $j = i$). Fix such a map.
Given a family $\mathcal{S} = (S_ i)_{i \in I}$ of subsets $S_ i \subset M_ i$ of cardinality at most $\kappa$ we set $\mathcal{S}' = (S'_ i)$ where
$S'_ j = \bigcup \nolimits _{(i, j, k, m)\text{ such that }m \in S_ i} S(i, j, k, m)$
Note that $S_ i \subset S'_ i$. Note that $S'_ i$ has cardinality at most $\kappa$ because it is a union over a set of cardinality at most $\kappa$ of finite sets. Set $\mathcal{S}^{(0)} = \mathcal{S}$, $\mathcal{S}^{(1)} = \mathcal{S}'$ and by induction $\mathcal{S}^{(n + 1)} = (\mathcal{S}^{(n)})'$. Then set $\mathcal{S}^{(\infty )} = \bigcup _{n \geq 0} \mathcal{S}^{(n)}$. Writing $\mathcal{S}^{(\infty )} = (S^{(\infty )}_ i)$ we see that for any element $m \in S^{(\infty )}_ i$ the image of $m$ in $M_{ijk}$ can be written as a finite sum $\sum m' \otimes a_{m'}$ with $m' \in S_ j^{(\infty )}$. In this way we see that setting
$N_ i = A_ i\text{-submodule of }M_ i\text{ generated by }S^{(\infty )}_ i$
we have
$N_ i \otimes _{A_ i} A_{ijk} = N_ j \otimes _{A_ j} A_{ijk}.$
as submodules of $M_{ijk}$. Thus there exists a quasi-coherent subsheaf $\mathcal{G} \subset \mathcal{F}$ with $\mathcal{G}(U_ i) = N_ i$. Moreover, by construction the sheaf $\mathcal{G}$ is $\kappa$-generated.
Let $\{ \mathcal{G}_ t\} _{t \in T}$ be the set of $\kappa$-generated quasi-coherent subsheaves. If $t, t' \in T$ then $\mathcal{G}_ t + \mathcal{G}_{t'}$ is also a $\kappa$-generated quasi-coherent subsheaf as it is the image of the map $\mathcal{G}_ t \oplus \mathcal{G}_{t'} \to \mathcal{F}$. Hence the system (ordered by inclusion) is directed. The arguments above show that every section of $\mathcal{F}$ over $U_ i$ is in one of the $\mathcal{G}_ t$ (because we can start with $\mathcal{S}$ such that the given section is an element of $S_ i$). Hence $\mathop{\mathrm{colim}}\nolimits _ t \mathcal{G}_ t \to \mathcal{F}$ is both injective and surjective as desired. $\square$
# Calculus, Volume 1: One-Variable Calculus with an Introduction to Linear Algebra
###### Tom M. Apostol
Publisher: John Wiley
Publication Date: 1967
Number of Pages: 688
Format: Hardcover
Edition: 2
Price: 145.95
ISBN: 0471000051
Category: Textbook
BLL Rating: The Basic Library List Committee considers this book essential for undergraduate mathematics libraries.
Historical Introduction.
Some Basic Concepts of the Theory of Sets.
A Set of Axioms for the Real Number System.
Mathematical Induction, Summation Notation, and Related Topics.
The Concepts of the Integral Calculus.
Some Applications of Integration.
Continuous Functions.
Differential Calculus.
The Relation between Integration and Differentiation.
The Logarithm, the Exponential, and the Inverse Trigonometric Functions.
Polynomial Approximations to Functions.
Introduction to Differential Equations.
Complex Numbers.
Sequences, Infinite Series, Improper Integrals.
Sequences and Series of Functions.
Vector Algebra.
Applications of Vector Algebra to Analytic Geometry.
Calculus of Vector-Valued Functions.
Linear Spaces.
Linear Transformations and Matrices.
Exercises.
## A Natural Limit Definition
Often, the first exposure one gets to rigorous mathematics is the definition of a limit. Let’s consider what this is for a sequence. We say $\lim_{n\rightarrow \infty} a_n = A$ if
$\displaystyle \forall \epsilon \in \mathbb{R}^+\quad\exists N \text{ s.t.}\quad\quad n > N \implies |a_n-A| \leq \epsilon$
This, at first sight, is ugly. It takes a while to even understand what it's saying, longer to see why it works, and much longer to apply it. It's intimidating, to say the least. I feel, however, that there is another version that makes the idea of limits simple and natural, giving deep insight into what a limit really is.
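Before moving on, it may help to see the quantifiers in action for a concrete sequence; a small sketch (not part of the original post) for $a_n = 1/n$, so $A = 0$, where $N = \lceil 1/\epsilon \rceil$ works:

```python
import math

# For a_n = 1/n and A = 0: given epsilon, take N = ceil(1/epsilon);
# then n > N implies |1/n - 0| = 1/n < 1/N <= epsilon.
for eps in (0.5, 0.1, 0.01):
    N = math.ceil(1 / eps)
    assert all(abs(1 / n - 0) <= eps for n in range(N + 1, N + 1000))
    print(f"epsilon = {eps}: N = {N} works")
```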
On the stability of the coupling of 3D and 1D fluid-structure interaction models for blood flow simulations
ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Tome 41 (2007) no. 4, pp. 743-769.
We consider the coupling between three-dimensional ($3$D) and one-dimensional ($1$D) fluid-structure interaction (FSI) models describing blood flow inside compliant vessels. The $1$D model is a hyperbolic system of partial differential equations. The $3$D model consists of the Navier-Stokes equations for incompressible Newtonian fluids coupled with a model for the vessel wall dynamics. A non-standard formulation for the Navier-Stokes equations is adopted to have suitable boundary conditions for the coupling of the models. With this we derive an energy estimate for the fully $3$D-$1$D FSI coupling. We consider several possible models for the mechanics of the vessel wall in the $3$D problem and show how the $3$D-$1$D coupling depends on them. Several comparative numerical tests illustrating the coupling are presented.
DOI : https://doi.org/10.1051/m2an:2007039
Classification : 65M12, 65M60, 92C50, 74F10, 76Z05
Keywords: fluid-structure interaction, 3D-1D FSI coupling, energy estimate, multiscale models
@article{M2AN_2007__41_4_743_0,
author = {Formaggia, Luca and Moura, Alexandra and Nobile, Fabio},
title = {On the stability of the coupling of 3D and 1D fluid-structure interaction models for blood flow simulations},
journal = {ESAIM: Mathematical Modelling and Numerical Analysis - Mod\'elisation Math\'ematique et Analyse Num\'erique},
pages = {743--769},
publisher = {EDP-Sciences},
volume = {41},
number = {4},
year = {2007},
doi = {10.1051/m2an:2007039},
zbl = {1139.92009},
mrnumber = {2362913},
language = {en},
url = {www.numdam.org/item/M2AN_2007__41_4_743_0/}
}
Formaggia, Luca; Moura, Alexandra; Nobile, Fabio. On the stability of the coupling of 3D and 1D fluid-structure interaction models for blood flow simulations. ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Tome 41 (2007) no. 4, pp. 743-769. doi : 10.1051/m2an:2007039. http://www.numdam.org/item/M2AN_2007__41_4_743_0/
Publications of Michaël Gillon (results 1-20 of 364; first entries shown)

A search for transiting planets around hot subdwarfs: I. Methods and performance tests on light curves from Kepler, K2, TESS, and CHEOPS
Van Grootel, Valérie; Pozuelos Romero, Francisco José; Thuillier, Antoine et al., in Astronomy and Astrophysics (in press)
Context. Hot subdwarfs experienced strong mass loss on the red giant branch (RGB) and are now hot and small He-burning objects. These stars constitute excellent opportunities for addressing the question of the evolution of exoplanetary systems directly after the RGB phase of evolution. Aims. In this project we aim to perform a transit survey in all available light curves of hot subdwarfs from space-based telescopes (Kepler, K2, TESS, and CHEOPS) with our custom-made pipeline SHERLOCK in order to determine the occurrence rate of planets around these stars as a function of orbital period and planetary radius. We also aim to determine whether planets that were previously engulfed in the envelope of their red giant host star can survive, even partially, as a planetary remnant. Methods. For this first paper, we performed injection-and-recovery tests of synthetic transits for a selection of representative Kepler, K2, and TESS light curves to determine which transiting bodies in terms of object radius and orbital period we will be able to detect with our tools. We also provide estimates for CHEOPS data, which we analyzed with the pycheops package. Results. Transiting objects with a radius $\lesssim 1.0\,R_\oplus$ can be detected in most of the Kepler, K2, and CHEOPS targets for the shortest orbital periods (1 d and shorter), reaching values as low as $\sim 0.3\,R_\oplus$ in the best cases. Sub-Earth-sized bodies are only reached for the brightest TESS targets and for those that were observed in a significant number of sectors. We also give a series of representative results for larger planets at greater distances, which strongly depend on the target magnitude and on the length and quality of the data. Conclusions. The TESS sample will provide the most important statistics for the global aim of measuring the planet occurrence rate around hot subdwarfs. The Kepler, K2, and CHEOPS data will allow us to search for planetary remnants, that is, very close and small (possibly disintegrating) objects.

Discovery of a young low-mass brown dwarf transiting a fast-rotating F-type star by the Galactic Plane eXoplanet (GPX) survey
Benni, P.; Burdanov, A. Y.; Krushinsky, V. V. et al., in Monthly Notices of the Royal Astronomical Society (2021), 505
We announce the discovery of GPX-1 b, a transiting brown dwarf with a mass of $19.7 \pm 1.6\,M_{\rm Jup}$ and a radius of $1.47 \pm 0.10\,R_{\rm Jup}$, the first substellar object discovered by the Galactic Plane eXoplanet (GPX) survey. The brown dwarf transits a moderately bright (V = 12.3 mag) fast-rotating F-type star with a projected rotational velocity $v\sin i_* = 40 \pm 10$ km s⁻¹. We use the isochrone placement algorithm to characterize the host star, which has effective temperature 7000 ± 200 K, mass $1.68 \pm 0.10\,M_\odot$, radius $1.56 \pm 0.10\,R_\odot$, and approximate age $0.27_{-0.15}^{+0.09}$ Gyr. GPX-1 b has an orbital period of ~1.75 d and a transit depth of 0.90 ± 0.03 per cent. We describe the GPX transit detection observations, subsequent photometric and speckle-interferometric follow-up observations, and SOPHIE spectroscopic measurements, which allowed us to establish the presence of a substellar object around the host star. GPX-1 was observed at 30-min integrations by TESS in Sector 18, but the data are affected by blending with a 3.4 mag brighter star 42 arcsec away. GPX-1 b is one of about two dozen transiting brown dwarfs known to date, with a mass close to the theoretical brown dwarf/gas giant planet mass transition boundary. Since GPX-1 is a moderately bright and fast-rotating star, it can be followed up by means of Doppler tomography.

(6478) Gault: physical characterization of an active main-belt asteroid
Devogèle, Maxime; Ferrais, Marin; Jehin, Emmanuel et al., in Monthly Notices of the Royal Astronomical Society (2021), 505
In 2018 December, the main-belt asteroid (6478) Gault was reported to display activity. Gault is an asteroid belonging to the Phocaea dynamical family and was not previously known to be active, nor was any other member of the Phocaea family. In this work, we present the results of photometric and spectroscopic observations that commenced soon after the discovery of activity. We obtained observations over two apparitions to monitor its activity, rotation period, composition, and possible non-gravitational orbital evolution. We find that Gault has a rotation period of P = 2.4929 ± 0.0003 h with a light-curve amplitude of 0.06 magnitude. This short rotation period close to the spin barrier limit is consistent with Gault having a density no smaller than ρ = 1.85 g cm⁻³ and its activity being triggered by the YORP (Yarkovsky-O'Keefe-Radzievskii-Paddack) spin-up mechanism. Analysis of the Gault phase curve over phase angles ranging from 0.4° to 23.6° provides an absolute magnitude of H = 14.81 ± 0.04, G1 = 0.25 ± 0.07, and G2 = 0.38 ± 0.04. Model fits to the phase curve find the surface regolith grain size constrained between 100 and 500 μm. Using relations between the phase curve and albedo, we determine that the geometrical albedo of Gault is $p_v = 0.26 \pm 0.05$, corresponding to an equivalent diameter of $D = 2.8^{+0.4}_{-0.2}$ km. Our spectroscopic observations are all consistent with an ordinary chondrite-like composition (S, or Q-type in the Bus-DeMeo taxonomic classification). A search through archival photographic plate surveys found previously unidentified detections of Gault dating back to 1957 and 1958. Only the latter had been digitized, which we measured to nearly double the observation arc of Gault. Finally, we did not find any signal of activity during the 2020 apparition or non-gravitational effects on its orbit.

Warm Jupiters in TESS Full-frame Images: A Catalog and Observed Eccentricity Distribution for Year 1
Dong, Jiayin; Huang, Chelsea X.; Dawson, Rebekah I. et al., in Astrophysical Journal Supplement Series (2021), 255
Warm Jupiters, defined here as planets larger than 6 Earth radii with orbital periods of 8-200 days, are a key missing piece in our understanding of how planetary systems form and evolve. It is currently debated whether Warm Jupiters form in situ, undergo disk or high-eccentricity tidal migration, or have a mixture of origin channels. These different classes of origin channels lead to different expectations for Warm Jupiters' properties, which are currently difficult to evaluate due to the small sample size. We take advantage of the Transiting Exoplanet Survey Satellite (TESS) survey and systematically search for Warm Jupiter candidates around main-sequence host stars brighter than the TESS-band magnitude of 12 in the full-frame images in Year 1 of the TESS Prime Mission data. We introduce a catalog of 55 Warm Jupiter candidates, including 19 candidates that were not originally released as TESS objects of interest by the TESS team. We fit their TESS light curves, characterize their eccentricities and transit-timing variations, and prioritize a list for ground-based follow-up and TESS Extended Mission observations. Using hierarchical Bayesian modeling, we find the preliminary eccentricity distributions of our Warm-Jupiter-candidate catalog using a beta distribution, a Rayleigh distribution, and a two-component Gaussian distribution as the functional forms of the eccentricity distribution. Additional follow-up observations will be required to clean the sample of false positives for a full statistical study, derive the orbital solutions to break the eccentricity degeneracy, and provide mass measurements.

Transit detection of the long-period volatile-rich super-Earth $\nu^2$ Lupi d with CHEOPS
Delrez, Laetitia; Ehrenreich, David; Alibert, Yann et al., in Nature Astronomy (2021)
Exoplanets transiting bright nearby stars are key objects for advancing our knowledge of planetary formation and evolution. The wealth of photons from the host star gives detailed access to the atmospheric, interior, and orbital properties of the planetary companions. $\nu^2$ Lupi (HD 136352) is a naked-eye ($V = 5.78$) Sun-like star that was discovered to host three low-mass planets with orbital periods of 11.6, 27.6, and 107.6 days via radial velocity monitoring (Udry et al. 2019). The two inner planets (b and c) were recently found to transit (Kane et al. 2020), prompting a photometric follow-up by the brand-new CHaracterising ExOPlanets Satellite (CHEOPS). Here, we report that the outer planet d is also transiting, and measure its radius and mass to be $2.56\pm0.09$ $R_{\oplus}$ and $8.82\pm0.94$ $M_{\oplus}$, respectively. With its bright Sun-like star, long period, and mild irradiation ($\sim$5.7 times the irradiation of Earth), $\nu^2$ Lupi d unlocks a completely new region in the parameter space of exoplanets amenable to detailed characterization. We refine the properties of all three planets: planet b likely has a rocky mostly dry composition, while planets c and d seem to have retained small hydrogen-helium envelopes and a possibly large water fraction. This diversity of planetary compositions makes the $\nu^2$ Lupi system an excellent laboratory for testing formation and evolution models of low-mass planets.

A transit timing variation observed for the long-period extremely low-density exoplanet HIP 41378 f
Bryant, Edward M.; Bayliss, Daniel; Santerne, Alexandre et al., in Monthly Notices of the Royal Astronomical Society (2021), 504
HIP 41378 f is a temperate 9.2 ± 0.1 R⊕ planet with period of 542.08 d and an extremely low density of 0.09 ± 0.02 g cm⁻³. It transits the bright star HIP 41378 (V = 8.93), making it an exciting target for atmospheric characterization including transmission spectroscopy. HIP 41378 was monitored photometrically between the dates of 2019 November 19 and 28. We detected a transit of HIP 41378 f with NGTS, just the third transit ever detected for this planet, which confirms the orbital period. This is also the first ground-based detection of a transit of HIP 41378 f. Additional ground-based photometry was also obtained and used to constrain the time of the transit. The transit was measured to occur 1.50 h earlier than predicted. We use an analytic transit timing variation (TTV) model to show the observed TTV can be explained by interactions between HIP 41378 e and HIP 41378 f. Using our TTV model, we predict the epochs of future transits of HIP 41378 f, with derived transit centres of $T_{C,4} = 2459355.087^{+0.031}_{-0.022}$ (2021 May) and $T_{C,5} = 2459897.078^{+0.114}_{-0.060}$ (2022 November).

Six transiting planets and a chain of Laplace resonances in TOI-178
Leleu, A.; Alibert, Y.; Hara, N. C. et al., in Astronomy and Astrophysics (2021), 649
Determining the architecture of multi-planetary systems is one of the cornerstones of understanding planet formation and evolution. Resonant systems are especially important as the fragility of their orbital configuration ensures that no significant scattering or collisional event has taken place since the earliest formation phase when the parent protoplanetary disc was still present. In this context, TOI-178 has been the subject of particular attention since the first TESS observations hinted at the possible presence of a near 2:3:3 resonant chain. Here we report the results of observations from CHEOPS, ESPRESSO, NGTS, and SPECULOOS with the aim of deciphering the peculiar orbital architecture of the system. We show that TOI-178 harbours at least six planets in the super-Earth to mini-Neptune regimes, with radii ranging from $1.152^{+0.073}_{-0.070}$ to $2.87^{+0.14}_{-0.13}$ Earth radii and periods of 1.91, 3.24, 6.56, 9.96, 15.23, and 20.71 days. All planets but the innermost one form a 2:4:6:9:12 chain of Laplace resonances, and the planetary densities show important variations from planet to planet, jumping from $1.02^{+0.28}_{-0.23}$ to $0.177^{+0.055}_{-0.061}$ times the Earth's density between planets c and d. Using Bayesian interior structure retrieval models, we show that the amount of gas in the planets does not vary in a monotonous way, contrary to what one would expect from simple formation and evolution models and unlike other known systems in a chain of Laplace resonances. The brightness of TOI-178 (H = 8.76 mag, J = 9.37 mag, V = 11.95 mag) allows for a precise characterisation of its orbital architecture as well as of the physical nature of the six presently known transiting planets it harbours. The peculiar orbital configuration and the diversity in average density among the planets in the system will enable the study of interior planetary structures and atmospheric evolution, providing important clues on the formation of super-Earths and mini-Neptunes.

Massive Search for Spot- and Facula-Crossing Events in 1598 Exoplanetary Transit Light Curves
Baluev, R. V.; Sokov, E. N.; Sokova, I. A. et al., in Acta Astronomica (2021), 71
We developed a dedicated statistical test for a massive detection of spot- and facula-crossing anomalies in multiple exoplanetary transit light curves, based on the frequentist p-value thresholding. This test was used to augment our algorithmic pipeline for transit light curves analysis. It was applied to 1598 amateur and professional transit observations of 26 targets being monitored in the EXPANSION project. We detected 109 statistically significant candidate events revealing a roughly 2:1 asymmetry in favor of spot-crossings over faculae-crossings. Although some candidate anomalies likely appear non-physical and originate from systematic errors, such asymmetry between negative and positive events should indicate a physical difference between the frequency of star spots and faculae. Detected spot-crossing events also reveal positive correlation between their amplitude and width, possibly due to spot size correlation. However, the frequency of all detectable crossing events appears just about a few per cent, so they cannot explain excessive transit timing noise observed for several targets.

CHEOPS observations of the HD 108236 planetary system: a fifth planet, improved ephemerides, and planetary radii
Bonfanti, A.; Delrez, Laetitia; Hooton, M. J. et al., in Astronomy and Astrophysics (2021), 646
Context. The detection of a super-Earth and three mini-Neptunes transiting the bright (V = 9.2 mag) star HD 108236 (also known as TOI-1233) was recently reported on the basis of TESS and ground-based light curves.
Aims: We perform a first characterisation of the HD 108236 planetary system through high-precision CHEOPS photometry and improve the transit ephemerides and system parameters.
Methods: We characterise the host star through spectroscopic analysis and derive the radius with the infrared flux method. We constrain the stellar mass and age by combining the results obtained from two sets of stellar evolutionary tracks. We analyse the available TESS light curves and one CHEOPS transit light curve for each known planet in the system.
Results: We find that HD 108236 is a Sun-like star with $R_\star = 0.877 \pm 0.008\,R_\odot$, $M_\star = 0.869^{+0.050}_{-0.048}\,M_\odot$, and an age of $6.7^{+4.0}_{-5.1}$ Gyr. We report the serendipitous detection of an additional planet, HD 108236 f, in one of the CHEOPS light curves. For this planet, the combined analysis of the TESS and CHEOPS light curves leads to a tentative orbital period of about 29.5 days. From the light curve analysis, we obtain radii of $1.615 \pm 0.051$, $2.071 \pm 0.052$, $2.539^{+0.062}_{-0.065}$, $3.083 \pm 0.052$, and $2.017^{+0.052}_{-0.057}\,R_\oplus$ for planets HD 108236 b to HD 108236 f, respectively. These values are in agreement with previous TESS-based estimates, but with an improved precision of about a factor of two. We perform a stability analysis of the system, concluding that the planetary orbits most likely have eccentricities smaller than 0.1. We also employ a planetary atmospheric evolution framework to constrain the masses of the five planets, concluding that HD 108236 b and HD 108236 c should have an Earth-like density, while the outer planets should host a low mean molecular weight envelope.
Conclusions: The detection of the fifth planet makes HD 108236 the third system brighter than V = 10 mag to host more than four transiting planets. The longer time span enables us to significantly improve the orbital ephemerides such that the uncertainty on the transit times will be of the order of minutes for the years to come. A comparison of the results obtained from the TESS and CHEOPS light curves indicates that for a V ~ 9 mag solar-like star and a transit signal of ~500 ppm, one CHEOPS transit light curve ensures the same level of photometric precision as eight TESS transits combined, although this conclusion depends on the length and position of the gaps in the light curve.
Light curves are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/646/A157

Refining the transit timing and photometric analysis of TRAPPIST-1: Masses, radii, densities, dynamics, and ephemerides
Agol, Eric; Dorn, Caroline; Grimm, Simon L. et al., in Planetary Science Journal (2021), 2
We have collected transit times for the TRAPPIST-1 system with the Spitzer Space Telescope over four years. We add to these ground-based, HST and K2 transit time measurements, and revisit an N-body dynamical analysis of the seven-planet system using our complete set of times from which we refine the mass ratios of the planets to the star. We next carry out a photodynamical analysis of the Spitzer light curves to derive the density of the host star and the planet densities. We find that all seven planets' densities may be described with a single rocky mass-radius relation which is depleted in iron relative to Earth, with Fe 21 wt% versus 32 wt% for Earth, and otherwise Earth-like in composition. Alternatively, the planets may have an Earth-like composition, but enhanced in light elements, such as a surface water layer or a core-free structure with oxidized iron in the mantle. We measure planet masses to a precision of 3-5%, equivalent to a radial-velocity (RV) precision of 2.5 cm/sec, or two orders of magnitude more precise than current RV capabilities. We find the eccentricities of the planets are very small; the orbits are extremely coplanar; and the system is stable on 10 Myr timescales. We find evidence of infrequent timing outliers which we cannot explain with an eighth planet; we instead account for the outliers using a robust likelihood function. We forecast JWST timing observations, and speculate on possible implications of the planet densities for the formation, migration and evolution of the planet system.

Abundance measurements of H₂O and carbon-bearing species in the atmosphere of WASP-127b confirm its supersolar metallicity
Spake, Jessica J.; Sing, David K.; Wakeford, Hannah R. et al., in Monthly Notices of the Royal Astronomical Society (2021), 500
The chemical abundances of exoplanet atmospheres may provide valuable information about the bulk compositions, formation pathways, and evolutionary histories of planets. Exoplanets with large, relatively cloud-free atmospheres, and which orbit bright stars provide the best opportunities for accurate abundance measurements. For this reason, we measured the transmission spectrum of the bright (V ∼ 10.2), large ($1.37\,R_{\rm J}$), sub-Saturn mass ($0.19\,M_{\rm J}$) exoplanet WASP-127b across the near-UV to near-infrared wavelength range (0.3-5 μm), using the Hubble and Spitzer Space Telescopes. Our results show a feature-rich transmission spectrum, with absorption from Na, H₂O, and CO₂, and wavelength-dependent scattering from small-particle condensates. We ran two types of atmospheric retrieval models: one enforcing chemical equilibrium, and the other which fit the abundances freely. Our retrieved abundances at chemical equilibrium for Na, O, and C are all supersolar, with abundances relative to solar values of $9^{+15}_{-6}$, $16^{+7}_{-5}$, and $26^{+12}_{-9}$, respectively. Despite giving conflicting C/O ratios, both retrievals gave supersolar CO₂ volume mixing ratios, which adds to the likelihood that WASP-127b's bulk metallicity is supersolar, since CO₂ abundance is highly sensitive to atmospheric metallicity. We detect water at a significance of 13.7σ. Our detection of Na is in agreement with previous ground-based detections, though we find a much lower abundance, and we also do not find evidence for Li or K despite increased sensitivity. In the future, spectroscopy with the James Webb Space Telescope will be able to constrain WASP-127b's C/O ratio, and may reveal the formation history of this metal-enriched, highly observable exoplanet.

SPECULOOS: Ultracool dwarf transit survey. Target list and strategy
Sebastian, Daniel; Gillon, Michaël; Ducrot, Elsa et al., in Astronomy and Astrophysics (2021), 645
Context. One of the most promising avenues for the detailed study of temperate Earth-sized exoplanets is the detection of such planets in transit in front of stars that are small and near enough to make it possible to carry out a thorough atmospheric characterisation with next-generation telescopes, such as the James Webb Space Telescope (JWST) or Extremely Large Telescope (ELT). In this context, the TRAPPIST-1 planets form a unique benchmark system that has garnered the interest of a large scientific community.
Aims: The SPECULOOS survey is an exoplanet transit survey targeting a volume-limited (40 pc) sample of ultracool dwarf stars (of spectral type M7 and later) that is based on a network of robotic 1 m telescopes especially designed for this survey. The strategy for brighter and earlier targets leverages on the synergy with the ongoing TESS space-based exoplanet transit survey.
Methods: We define the SPECULOOS target list as the sum of three non-overlapping sub-programmes incorporating the latest type objects ($T_{\rm eff} \lesssim 3000$ K). Programme 1 features 365 dwarfs that are small and near enough to make it possible to detail atmospheric characterisation of an 'Earth-like' planet with the upcoming JWST. Programme 2 features 171 dwarfs of M5-type and later for which a significant detection of a planet similar to TRAPPIST-1b should be within reach of TESS. Programme 3 features 1121 dwarfs that are later than M6-type. These programmes form the basis of our statistical census of short-period planets around ultracool dwarf stars.
Results: Our compound target list includes 1657 photometrically classified late-type dwarfs, with 260 of these targets classified, for the first time, as possible nearby ultracool dwarf stars. Our general observational strategy was to monitor each target between 100 and 200 h with our telescope network, making efficient use of the synergy with TESS for our Programme 2 targets and a proportion of targets in our Programme 1.
Conclusions: Based on Monte Carlo simulations, we expect to detect up to a few dozen temperate, rocky planets. We also expect a number of them to prove amenable for atmospheric characterisation with JWST and other future giant telescopes, which will substantially improve our understanding of the planetary population of the latest-type stars.
ln(φ) in Hyperbolic Functions
Another form of $\ln (\phi)$ in hyperbolic functions, specifically as the inverse hyperbolic cosecant of 2, or the inverse hyperbolic sine of 1/2:
$\displaystyle \boxed {\ln(\phi)=\mathrm{csch}^{-1}(2)= \mathrm{sinh}^{-1}\left ( \frac{1}{2} \right )= 0.481211825....}$
The key formula behind the appearance of $\ln(\phi)$ in hyperbolic functions comes from the following problem:
$\displaystyle \boxed {e^k-e^{-k}=1}$ atau $\displaystyle \boxed {e^{2k}-e^k-1=0}$,
$\boxed {k=\ln(\phi)}$ atau $\boxed {e^{k}=\phi}$
where $e$ is Euler's number and $\phi$ is the golden ratio.
Just as interesting, it can also be shown that $\ln (\phi)$ can be expressed as the following series expansion:
$\displaystyle \boxed {\ln(\phi)=\sum_{n=0}^{\infty} \frac{(-1)^n(2n)!}{2^{4n+1}(2 n+1)(n!)^2}= 0.481211825....}$
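Both claims are easy to check numerically; a short sketch:

```python
import math

phi = (1 + math.sqrt(5)) / 2
print(math.log(phi))    # 0.4812118250596035
print(math.asinh(0.5))  # same value: sinh^-1(1/2) = csch^-1(2) = ln(phi)

# Partial sums of the series above converge to the same number
total = 0.0
for n in range(20):
    total += ((-1)**n * math.factorial(2 * n)) / (
        2**(4 * n + 1) * (2 * n + 1) * math.factorial(n)**2)
print(total)            # ~0.4812118250596035
```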
Boy’s Surfaces
Boy's surface is a nonorientable surface found by Werner Boy in 1901. Boy's surface is an immersion of the real projective plane in 3-dimensional space without infinities and singularities, but it meets itself in a triple point (it self-intersects). The images below are generated using 3D-XplorMath, with the "optimal" Bryant-Kusner parametrization when $a= 0.5$, $-1.45 < u < 0$, and $0 < v < 2\pi$:
Other viewpoints :
Projective Plane
There are many ways to make a model of the Boy's surface using the projective plane; one of them is to take a disc and join together opposite points on its edge (note: in fact, this cannot be done in three dimensions without self-intersections), so the disc must pass through itself somewhere. The Boy's surface can be obtained by sewing a corresponding band (Möbius band) round the edge of a disc.
The $\mathbb{R}^{3}$ Parametrization of Boy’s surface
Rob Kusner and Robert Bryant discovered a beautiful parametrization of the Boy's surface for a given complex number $z$, where $\left | z \right |\leq 1$, giving the Cartesian coordinates $\left ( X,Y,Z \right )$ of a point on the surface.
In 1986, Apéry gave analytic equations for a general method of constructing nonorientable surfaces. Following this standard form, an $\mathbb{R}^{3}$ parametrization of the Boy's surface can also be written as a smooth deformation given by the equations:
$x\left ( u,v \right )= \frac {\sqrt{2}\cos\left ( 2u \right ) \cos^{2}\left ( v \right )+\cos \left ( u \right )\sin\left ( 2v \right ) }{D},$
$y\left ( u,v \right )= \frac {\sqrt{2}\sin\left ( 2u \right ) \cos^{2}\left ( v \right )-\sin \left ( u \right )\sin\left ( 2v \right ) }{D},$
$z\left ( u,v \right )= \frac {3 \cos^{2}\left ( v \right )}{D}.$
where
$D= 2-a\sqrt{2}\sin\left ( 3u \right )\sin\left ( 2v \right )$,
$a$ varies from 0 to 1.
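Apéry's parametrization above is straightforward to plot; a sketch with numpy and matplotlib, taking $a = 1$ and one common choice of parameter domain ($u, v \in [0, \pi]$, an assumption on my part):

```python
import numpy as np
import matplotlib.pyplot as plt

a = 1.0  # deformation parameter; a = 1 gives Boy's surface
u, v = np.meshgrid(np.linspace(0, np.pi, 200), np.linspace(0, np.pi, 200))

# The denominator D and coordinates from the equations above
D = 2 - a * np.sqrt(2) * np.sin(3 * u) * np.sin(2 * v)
x = (np.sqrt(2) * np.cos(2 * u) * np.cos(v)**2 + np.cos(u) * np.sin(2 * v)) / D
y = (np.sqrt(2) * np.sin(2 * u) * np.cos(v)**2 - np.sin(u) * np.sin(2 * v)) / D
z = 3 * np.cos(v)**2 / D

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(x, y, z, rstride=4, cstride=4, cmap="viridis")
plt.show()
```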
Parametric Breather Pseudospherical Surface
Parametric breather surfaces are in one-to-one correspondence with the solutions of a certain non-linear wave equation, the so-called sine-Gordon equation. It turns out that solutions to this equation correspond to unique pseudospherical surfaces, namely solitons. A breather surface corresponds to a time-periodic 2-soliton solution.
Parametric breather surface has the following parametric equations :
$x = -u+\frac{2\left(1-a^2\right)\cosh(au)\sinh(au)}{a\left(\left(1-a^2\right)\cosh^2(au)+a^2\,\sin^2\left(\sqrt{1-a^2}v\right)\right)}$
$y = \frac{2\sqrt{1-a^2}\cosh(au)\left(-\sqrt{1-a^2}\cos(v)\cos\left(\sqrt{1-a^2}v\right)-\sin(v)\sin\left(\sqrt{1-a^2}v\right)\right)}{a\left(\left(1-a^2\right)\cosh^2(au)+a^2\,\sin^2\left(\sqrt{1-a^2}v\right)\right)}$
$z = \frac{2\sqrt{1-a^2}\cosh(au)\left(-\sqrt{1-a^2}\sin(v)\cos\left(\sqrt{1-a^2}v\right)+\cos(v)\sin\left(\sqrt{1-a^2}v\right)\right)}{a\left(\left(1-a^2\right)\cosh^2(au)+a^2\,\sin^2\left(\sqrt{1-a^2}v\right)\right)}$
where $0 < a < 1$; $u$ controls how far the tip goes, and $v$ controls the girth.
When $a=0.4$, $-13 < u < 13$, and $-38 < v < 38$:
With orthographic projection :
Surface in $\mathbb{R}^{3}$ having constant Gaussian curvature $K= -1$ are usually called pseudospherical surfaces.
If $M \subset \mathbb{R}^{3}$ is a surface with Gaussian curvature $K= -1$ then it is known that there exists a local asymptotic coordinate system $(x,t)$ on $M$ such that the first and second fundamental forms are:
$dx^{2}+dt^{2}+2 \cos{q}~dx~dt$, and $2 \sin{q}~dx~dt$,
where $q$ is the angle between asymptotic lines (the x-curves and t-curves). The Gauss-Codazzi equations for $M$ in these coordinates become a single equation, the sine-Gordon equation (SGE):
$q_{xt}= \sin{q}$
The SGE is one of the model soliton equations.
• Chuu-Lian Terng. 2004. Lecture notes on curves and surfaces in $\mathbb{R}^{3}$.
# Convergence of a Trigonometric Sum
APCALC-BJBCUE
For which values of $x$ on the interval $(0, \pi)$ will the series below converge?
$$\sum _{ n=1 }^{ \infty }{ ( \sin { x } ) ^{ n } }$$
A
$(0, \pi)$
B
$(0,\cfrac{\pi}{2})$ or $(\cfrac{\pi}{2},\pi)$
C
$(0, \cfrac{\pi}{2})$
D
$(\cfrac{\pi}{2},\pi )$
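The series is geometric with ratio $\sin x$, so it converges exactly when $|\sin x| < 1$; on $(0, \pi)$ this excludes only $x = \pi/2$, where every term equals 1, so the answer is B. A quick numerical sanity check (not part of the original item):

```python
import math

def partial_sum(x, terms):
    # Partial sum of sum_{n=1}^{terms} sin(x)^n
    return sum(math.sin(x)**n for n in range(1, terms + 1))

print(partial_sum(1.0, 1000))          # stabilizes near sin(1)/(1 - sin(1)) ~ 5.31
print(partial_sum(math.pi / 2, 1000))  # 1000.0: the partial sums grow without bound
```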
# Atom Editor
### Combined Stream
47 thoughts
last posted Nov. 22, 2014, 12:43 a.m.
0
I'm encouraging people to share their thoughts here on Github's Atom Editor.
0
For this reason, we didn't build Atom as a traditional web application. Instead, Atom is a specialized variant of Chromium designed to be a text editor rather than a web browser. Every Atom window is essentially a locally-rendered web page.
0
Looks like they've open-sourced a lot of the libraries (80+) but not the whole thing.
0
One of my editor use-cases is dealing with multi-megabyte UTF-8 polytonic Greek in syntax-highlighted formats. I've filed bugs with other editors in the past where they've performed poorly with these files.
Will be interesting to see how Atom goes.
0
Atom is free during the beta period.
...suggests it won't be after.
0
cmd-shift-P to get the command palette. Sounds familiar.
0
Unsurprisingly, one of the first things I noticed opening up an existing project was the integrated Git support.
0
Uncaught Error: Atom can only handle files < 1MB, for now.
0
A lot to like if you're coming from ST*, notably:
• Ctrl-Shift-Down works as expected (and required for me).
• Colored git status in the side tree is very useful.
I would like to be able to hide *.pyc files from the tree though. Might dig into the tree view if I can find it.
0
Atom makes use of CSON which seems to be to JSON what CoffeeScript is to JavaScript.
Disturbingly, reading it seems to involve an eval:
result = coffee.eval(src, opts)
(see https://github.com/bevry/cson/issues/32 for details)
0
The extreme modularity of Atom reminds me a little of Eclipse. In fact, I wonder if it could be said that Atom is to the modern world of CoffeeScript, Node, etc what Eclipse is to the world of Java.
0
$ atom /path/to/project works out of the box. nice polish.
0
My first impression of Atom was it reminded me of Kod. I wonder if there is any connection?
0
To give some perspective on my thoughts about Atom here is my text editor background:
• first editor I ever used was EditPlus on Windows
• used jEdit after moving to OS X circa 2003
• have used TextMate since February 2006
• currently use TextMate 2
I've occasionally used vim, specifically on servers, and did attempt to use it full-time a few times, but was never successful.
I am a web developer so a text editor built on the same technology I build with everyday excites me.
0
This is exactly why Atom excites me.
0
I am waiting on pins and needles for my invite!
0
I love having the ability to configure Atom with a configuration file. It is in a format called CSON. Looks like JSON, but leans towards YAML.
0
I am not a huge fan of the default theme, but can get used to it. I might try looking for alternatives.
0
Ran apm from command line and it resulted in:
env: node: No such file or directory
After installing node.js from Homebrew everything worked.
0
Discovered that there are two themes. A UI theme and a syntax theme. The UI theme controls everything except the editor. The syntax theme controls the editor.
Looks like the Atom Light UI theme works nicely with the Atom Dark syntax theme.
0
The Base16 Tomorrow Dark theme is the most promising for me personally.
0
I figured out how to get split panes last night:
command+shift+P to get the command palette and then type split and you'll have options for various splits.
This will split the current file to view it at the same time in multiple views, which can be useful in its own right, but I generally prefer to have multiple related files open at the same time, so I just use this to get a new pane open; then, while a file is selected, any file you click on in your project nav will open in that pane.
It would be nice to have the panes setup before opening files.
0
Coffee Lint appears to be the only linting package available now.
0
I wonder how hard it would be to write a package for flake8.
0
So it seems that I'll need to reverse engineer some other packages to figure out how to write a plugin.
0
Both reverse-engineering the CoffeeScript and writing a Node.js package that executes a Python script will prove challenging.
0
I think the key will be figuring out how best to interop with Python (or any binary that can execute on your machine). Need to find an example of this to reduce experimentation.
0
Does Atom Editor have an equivalent of option-click-drag?
0
The Package Generator will be a helpful start. Reminds me of pinax-starter-app.
0
cmd-shift-w is the first big workflow change I need to make.
I am used to cmd-w closing the current tab and when no tabs remain it closes the window.
Due to the distinction Atom makes between a window and buffer (within a tab) the close action was promoted to its own key mapping.
I like the distinction and can certainly get used to it, but will require changing that muscle memory.
0
Protip: export EDITOR="atom -nw"
0
For anyone questioning Atom being an odd tangent for GitHub, here's how I see it:
reposted to Atom Editor by jtauber
0
cmd-D is handy for additively selecting multiple occurrences of a string for simultaneous editing of them.
repost from Atom Editor by brosner
0
Is there a way to update all packages with newer versions?
0
Did a bit of tweaking today. I am getting very comfortable with Atom.
0
I noticed that in the last couple versions of Atom, packages no longer showed if there was an update in the settings view.
It appears it has moved to apm upgrade or I missed it earlier. This works much better as you can upgrade all packages at once.
0
My current themes are:
• UI: spacegray-dark-ui
• Syntax: twilight-syntax-spacegray-ui
0
cmd-r has become my new best friend.
0
Made a start on a flake8 package last night. As soon as I have a prototype working I'll publish to Github so others can see/help.
0
Really want to contribute a thought on Atom, if I only had an invite :)
0
Looks like someone beat me to it. Check out atom-flake8.
0
It still a bit early for atom-flake8. It is crashing my editor and reports from others that it's not working too well when it doesn't crash. But good to see there is a start on it. Perhaps I'll fork it and help.
0
As of 0.71.0 the tree view can be shown on the right side. Win.
0
Here are a few packages I have found very useful recently:
0
There have been a few cases when I run apm upgrade and the latest plugin versions are not compatible with Atom. The required version hasn't been released by GitHub, causing a broken editor temporarily.
0
I've been frustrated with the lack of per-window environment variables. It renders my Go workflow nearly impossible.
In my most recent attempt to fix it, I discovered the init.coffee hook. This runs after each window is loaded. Perfect!
Take a look at my first stab at CoffeeScript to solve my problem:
This reads a file in the project directory and sets the environment. I added a quick hack for resolving relative paths inside the project.
Works like a charm!
## Appendix C: Compressed Image Formats
The compressed texture formats used by Vulkan are described in the specifically identified sections of the Khronos Data Format Specification, version 1.1.
Unless otherwise described, the quantities encoded in these compressed formats are treated as normalized, unsigned values.
Those formats listed as sRGB-encoded have in-memory representations of R, G and B components which are nonlinearly-encoded as R', G', and B'; any alpha component is unchanged. As part of filtering, the nonlinear R', G', and B' values are converted to linear R, G, and B components; any alpha component is unchanged. The conversion between linear and nonlinear encoding is performed as described in the “KHR_DF_TRANSFER_SRGB” section of the Khronos Data Format Specification.
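For reference, the per-component conversion has the following well-known form; a sketch only, since the “KHR_DF_TRANSFER_SRGB” section of the Data Format Specification remains the normative source:

```python
def srgb_to_linear(c: float) -> float:
    """Convert a nonlinearly-encoded component (R', G' or B') in [0, 1] to its
    linear value; alpha components are left unchanged."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
```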
### Block-Compressed Image Formats
Table 81. Mapping of Vulkan BC formats to descriptions

| VkFormat | Khronos Data Format Specification description |
| --- | --- |
| *Formats described in the “S3TC Compressed Texture Image Formats” chapter* | |
| VK_FORMAT_BC1_RGB_UNORM_BLOCK | BC1 with no alpha |
| VK_FORMAT_BC1_RGB_SRGB_BLOCK | BC1 with no alpha, sRGB-encoded |
| VK_FORMAT_BC1_RGBA_UNORM_BLOCK | BC1 with alpha |
| VK_FORMAT_BC1_RGBA_SRGB_BLOCK | BC1 with alpha, sRGB-encoded |
| VK_FORMAT_BC2_UNORM_BLOCK | BC2 |
| VK_FORMAT_BC2_SRGB_BLOCK | BC2, sRGB-encoded |
| VK_FORMAT_BC3_UNORM_BLOCK | BC3 |
| VK_FORMAT_BC3_SRGB_BLOCK | BC3, sRGB-encoded |
| *Formats described in the “RGTC Compressed Texture Image Formats” chapter* | |
| VK_FORMAT_BC4_UNORM_BLOCK | BC4 unsigned |
| VK_FORMAT_BC4_SNORM_BLOCK | BC4 signed |
| VK_FORMAT_BC5_UNORM_BLOCK | BC5 unsigned |
| VK_FORMAT_BC5_SNORM_BLOCK | BC5 signed |
| *Formats described in the “BPTC Compressed Texture Image Formats” chapter* | |
| VK_FORMAT_BC6H_UFLOAT_BLOCK | BC6H (unsigned version) |
| VK_FORMAT_BC6H_SFLOAT_BLOCK | BC6H (signed version) |
| VK_FORMAT_BC7_UNORM_BLOCK | BC7 |
| VK_FORMAT_BC7_SRGB_BLOCK | BC7, sRGB-encoded |
### ETC Compressed Image Formats
The following formats are described in the “ETC2 Compressed Texture Image Formats” chapter of the Khronos Data Format Specification.
Table 82. Mapping of Vulkan ETC formats to descriptions

| VkFormat | Khronos Data Format Specification description |
| --- | --- |
| VK_FORMAT_ETC2_R8G8B8_UNORM_BLOCK | RGB ETC2 |
| VK_FORMAT_ETC2_R8G8B8_SRGB_BLOCK | RGB ETC2 with sRGB encoding |
| VK_FORMAT_ETC2_R8G8B8A1_UNORM_BLOCK | RGB ETC2 with punch-through alpha |
| VK_FORMAT_ETC2_R8G8B8A1_SRGB_BLOCK | RGB ETC2 with punch-through alpha and sRGB |
| VK_FORMAT_ETC2_R8G8B8A8_UNORM_BLOCK | RGBA ETC2 |
| VK_FORMAT_ETC2_R8G8B8A8_SRGB_BLOCK | RGBA ETC2 with sRGB encoding |
| VK_FORMAT_EAC_R11_UNORM_BLOCK | Unsigned R11 EAC |
| VK_FORMAT_EAC_R11_SNORM_BLOCK | Signed R11 EAC |
| VK_FORMAT_EAC_R11G11_UNORM_BLOCK | Unsigned RG11 EAC |
| VK_FORMAT_EAC_R11G11_SNORM_BLOCK | Signed RG11 EAC |
### ASTC Compressed Image Formats
ASTC formats are described in the “ASTC Compressed Texture Image Formats” chapter of the Khronos Data Format Specification.
Table 83. Mapping of Vulkan ASTC formats to descriptions

| VkFormat | Compressed texel block dimensions | sRGB-encoded |
| --- | --- | --- |
| VK_FORMAT_ASTC_4x4_UNORM_BLOCK | 4 × 4 | No |
| VK_FORMAT_ASTC_4x4_SRGB_BLOCK | 4 × 4 | Yes |
| VK_FORMAT_ASTC_5x4_UNORM_BLOCK | 5 × 4 | No |
| VK_FORMAT_ASTC_5x4_SRGB_BLOCK | 5 × 4 | Yes |
| VK_FORMAT_ASTC_5x5_UNORM_BLOCK | 5 × 5 | No |
| VK_FORMAT_ASTC_5x5_SRGB_BLOCK | 5 × 5 | Yes |
| VK_FORMAT_ASTC_6x5_UNORM_BLOCK | 6 × 5 | No |
| VK_FORMAT_ASTC_6x5_SRGB_BLOCK | 6 × 5 | Yes |
| VK_FORMAT_ASTC_6x6_UNORM_BLOCK | 6 × 6 | No |
| VK_FORMAT_ASTC_6x6_SRGB_BLOCK | 6 × 6 | Yes |
| VK_FORMAT_ASTC_8x5_UNORM_BLOCK | 8 × 5 | No |
| VK_FORMAT_ASTC_8x5_SRGB_BLOCK | 8 × 5 | Yes |
| VK_FORMAT_ASTC_8x6_UNORM_BLOCK | 8 × 6 | No |
| VK_FORMAT_ASTC_8x6_SRGB_BLOCK | 8 × 6 | Yes |
| VK_FORMAT_ASTC_8x8_UNORM_BLOCK | 8 × 8 | No |
| VK_FORMAT_ASTC_8x8_SRGB_BLOCK | 8 × 8 | Yes |
| VK_FORMAT_ASTC_10x5_UNORM_BLOCK | 10 × 5 | No |
| VK_FORMAT_ASTC_10x5_SRGB_BLOCK | 10 × 5 | Yes |
| VK_FORMAT_ASTC_10x6_UNORM_BLOCK | 10 × 6 | No |
| VK_FORMAT_ASTC_10x6_SRGB_BLOCK | 10 × 6 | Yes |
| VK_FORMAT_ASTC_10x8_UNORM_BLOCK | 10 × 8 | No |
| VK_FORMAT_ASTC_10x8_SRGB_BLOCK | 10 × 8 | Yes |
| VK_FORMAT_ASTC_10x10_UNORM_BLOCK | 10 × 10 | No |
| VK_FORMAT_ASTC_10x10_SRGB_BLOCK | 10 × 10 | Yes |
| VK_FORMAT_ASTC_12x10_UNORM_BLOCK | 12 × 10 | No |
| VK_FORMAT_ASTC_12x10_SRGB_BLOCK | 12 × 10 | Yes |
| VK_FORMAT_ASTC_12x12_UNORM_BLOCK | 12 × 12 | No |
| VK_FORMAT_ASTC_12x12_SRGB_BLOCK | 12 × 12 | Yes |
#### ASTC decode mode
If the VK_EXT_astc_decode_mode extension is enabled, the ASTC decoding described in the Khronos Data Format Specification is modified by replacing or modifying the corresponding sections as described below.
Table 84. Mapping of Vulkan ASTC decoding format to ASTC decoding modes

| VkFormat | Decoding mode |
| --- | --- |
| VK_FORMAT_R16G16B16A16_SFLOAT | decode_float16 |
| VK_FORMAT_R8G8B8A8_UNORM | decode_unorm8 |
| VK_FORMAT_E5B9G9R9_UFLOAT_PACK32 | decode_rgb9e5 |
##### LDR and HDR Modes
Note: This replaces section 16.5 in the Khronos Data Format Specification.
The decoding process for LDR content can be simplified if it is known in advance that sRGB output is required. This selection is therefore included as part of the global configuration.
The two modes differ in various ways, as shown in ASTC differences between LDR and HDR modes.
Table 85. ASTC differences between LDR and HDR modes

| Operation | LDR Mode | HDR Mode |
| --- | --- | --- |
| Returned value | Determined by decoding mode | Determined by decoding mode |
| sRGB compatible | Yes | No |
| LDR endpoint decoding precision | 16 bits, or 8 bits for sRGB | 16 bits |
| HDR endpoint mode results | Error color | As decoded |
| Error results | Error color | Vector of NaNs (0xFFFF) |
The type of the values returned by the decoding process is determined by the decoding mode as shown in ASTC decoding modes.
Table 86. ASTC decoding modes

| Decode mode | LDR Mode | HDR Mode |
| --- | --- | --- |
| decode_float16 | Vector of FP16 values | Vector of FP16 values |
| decode_unorm8 | Vector of 8-bit unsigned normalized values | invalid |
| decode_rgb9e5 | Vector using a shared exponent format | Vector using a shared exponent format |
Using the decode_unorm8 decoding mode in HDR mode gives undefined results.
For sRGB, the decoding mode is ignored, and the decoding always returns a vector of 8-bit unsigned normalized values.
The error color is opaque fully-saturated magenta, (R,G,B,A) = (0xFF,0x00,0xFF,0xFF). This has been chosen as it is much more noticeable than black or white, and occurs far less often in valid images.
For linear RGB decode, the error color may be either opaque fully-saturated magenta (R,G,B,A) = (1.0,0.0,1.0,1.0) or a vector of four NaNs (R,G,B,A) = (NaN,NaN,NaN,NaN). In the latter case, the recommended NaN value returned is 0xFFFF.
When using the decode_rgb9e5 decoding mode in HDR mode, error results will return the error color because NaN cannot be represented.
The error color is returned as an informative response to invalid conditions, including invalid block encodings or use of reserved endpoint modes.
Future, forward-compatible extensions to ASTC may define valid interpretations of these conditions, which will decode to some other color. Therefore, encoders and applications must not rely on invalid encodings as a way of generating the error color.
Note: This replaces section 16.19 in the Khronos Data Format Specification.
Once the effective weight i for the texel has been calculated, the color endpoints are interpolated and expanded.
For LDR endpoint modes, each color component C is calculated from the corresponding 8-bit endpoint components C0 and C1 as follows:
If sRGB conversion is not enabled, or for the alpha channel in any case, C0 and C1 are first expanded to 16 bits by bit replication:
```c
C0 = (C0 << 8) | C0;
C1 = (C1 << 8) | C1;
```
If sRGB conversion is enabled, C0 and C1 for the R, G, and B channels are expanded to 16 bits differently, as follows:
```c
C0 = (C0 << 8) | 0x80;
C1 = (C1 << 8) | 0x80;
```
C0 and C1 are then interpolated to produce a UNORM16 result C:
```c
C = floor( (C0*(64-i) + C1*i + 32) / 64 )
```
If sRGB conversion is not enabled and the decoding mode is decode_float16, then if C = 65535 the final result is 1.0 (0x3C00); otherwise C is divided by 65536 and the infinite-precision result of the division is converted to FP16 with round-to-zero semantics.
If sRGB conversion is not enabled and the decoding mode is decode_unorm8, then the top 8 bits of the interpolation result for the R, G, B, and A channels are used as the final result.
If sRGB conversion is not enabled and the decoding mode is decode_rgb9e5, then the final result is a combination of the (UNORM16) values of C for the three color components (Cr, Cg, and Cb) computed as follows:
```c
int lz = clz17( Cr | Cg | Cb | 1 );
if (Cr == 65535) { Cr = 65536; lz = 0; }
if (Cg == 65535) { Cg = 65536; lz = 0; }
if (Cb == 65535) { Cb = 65536; lz = 0; }
Cr <<= lz;
Cg <<= lz;
Cb <<= lz;
Cr = (Cr >> 8) & 0x1FF;
Cg = (Cg >> 8) & 0x1FF;
Cb = (Cb >> 8) & 0x1FF;
uint32_t exponent = 16 - lz;
uint32_t texel = (exponent << 27) | (Cb << 18) | (Cg << 9) | Cr;
```
The clz17() function counts leading zeros in a 17-bit value.
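The specification does not supply an implementation; a minimal C sketch of such a helper might look like this (mine, for illustration only):

```c
#include <stdint.h>

/* Count leading zeros in a 17-bit value. The decode pseudocode above
   ORs in 1, so the argument is never zero and the result is at most 16. */
static int clz17(uint32_t x)
{
    int n = 0;
    x &= 0x1FFFF;  /* keep only the low 17 bits */
    for (uint32_t mask = 1u << 16; mask && !(x & mask); mask >>= 1)
        n++;
    return n;
}
```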
If sRGB conversion is enabled, then the decoding mode is ignored, and the top 8 bits of the interpolation result for the R, G and B channels are passed to the external sRGB conversion block and used as the final result. The A channel uses the decode_float16 decoding mode.
For HDR endpoint modes, color values are represented in a 12-bit pseudo-logarithmic representation, and interpolation occurs in a piecewise-approximate logarithmic manner as follows:
In LDR mode, the error result is returned.
In HDR mode, the color components from each endpoint, C0 and C1, are initially shifted left 4 bits to become 16-bit integer values and these are interpolated in the same way as LDR. The 16-bit value C is then decomposed into the top five bits, E, and the bottom 11 bits M, which are then processed and recombined with E to form the final value Cf:
```c
C = floor( (C0*(64-i) + C1*i + 32) / 64 )
E = (C & 0xF800) >> 11;
M = C & 0x7FF;
if      (M <  512)  { Mt = 3*M; }
else if (M >= 1536) { Mt = 5*M - 2048; }
else                { Mt = 4*M - 512; }
Cf = (E << 10) + (Mt >> 3)
```
This interpolation is a considerably closer approximation to a logarithmic space than simple 16-bit interpolation.
This final value Cf is interpreted as an IEEE FP16 value. If the result is +Inf or NaN, it is converted to the bit pattern 0x7BFF, which is the largest representable finite value.
If the decoding mode is decode_rgb9e5, then the final result is a combination of the (IEEE FP16) values of Cf for the three color components (Cr, Cg, and Cb) computed as follows:
```c
if      (Cr >  0x7C00) Cr = 0;
else if (Cr == 0x7C00) Cr = 0x7BFF;
if      (Cg >  0x7C00) Cg = 0;
else if (Cg == 0x7C00) Cg = 0x7BFF;
if      (Cb >  0x7C00) Cb = 0;
else if (Cb == 0x7C00) Cb = 0x7BFF;

int Re = (Cr >> 10) & 0x1F;
int Ge = (Cg >> 10) & 0x1F;
int Be = (Cb >> 10) & 0x1F;

int Rex = Re == 0 ? 1 : Re;
int Gex = Ge == 0 ? 1 : Ge;
int Bex = Be == 0 ? 1 : Be;

int Xm = ((Cr | Cg | Cb) & 0x200) >> 9;
int Xe = Re | Ge | Be;

uint32_t rshift, gshift, bshift, expo;
if (Xe == 0)
{
    expo = rshift = gshift = bshift = Xm;
}
else if (Re >= Ge && Re >= Be)
{
    expo = Rex + 1;
    rshift = 2;
    gshift = Rex - Gex + 2;
    bshift = Rex - Bex + 2;
}
else if (Ge >= Be)
{
    expo = Gex + 1;
    rshift = Gex - Rex + 2;
    gshift = 2;
    bshift = Gex - Bex + 2;
}
else
{
    expo = Bex + 1;
    rshift = Bex - Rex + 2;
    gshift = Bex - Gex + 2;
    bshift = 2;
}

int Rm = (Cr & 0x3FF) | (Re == 0 ? 0 : 0x400);
int Gm = (Cg & 0x3FF) | (Ge == 0 ? 0 : 0x400);
int Bm = (Cb & 0x3FF) | (Be == 0 ? 0 : 0x400);
Rm = (Rm >> rshift) & 0x1FF;
Gm = (Gm >> gshift) & 0x1FF;
Bm = (Bm >> bshift) & 0x1FF;

uint32_t texel = (expo << 27) | (Bm << 18) | (Gm << 9) | (Rm << 0);
```
#### Void-Extent Blocks
Note: This modifies section 16.23 in the Khronos Data Format Specification.
In the HDR case, if the decoding mode is decode_rgb9e5, then any negative color component values are set to 0 before conversion to the shared exponent format (as described in Weight Application).
#### Efficient Nonnegative Tucker Decompositions: Algorithms and Uniqueness
##### Guoxu Zhou, Andrzej Cichocki, Qibin Zhao, Shengli Xie
Nonnegative Tucker decomposition (NTD) is a powerful tool for the extraction of nonnegative parts-based and physically meaningful latent components from high-dimensional tensor data while preserving the natural multilinear structure of data. However, as the data tensor often has multiple modes and is large-scale, existing NTD algorithms suffer from a very high computational complexity in terms of both storage and computation time, which has been one major obstacle for practical applications of NTD. To overcome these disadvantages, we show how low (multilinear) rank approximation (LRA) of tensors is able to significantly simplify the computation of the gradients of the cost function, upon which a family of efficient first-order NTD algorithms are developed. Besides dramatically reducing the storage complexity and running time, the new algorithms are quite flexible and robust to noise because any well-established LRA approaches can be applied. We also show how nonnegativity incorporating sparsity substantially improves the uniqueness property and partially alleviates the curse of dimensionality of the Tucker decompositions. Simulation results on synthetic and real-world data justify the validity and high efficiency of the proposed NTD algorithms.
# Principle of Superposition and Wronskian
Assume that $$p$$ and $$q$$ are continuous and that the functions $$y_1$$ and $$y_2$$ are solutions of the differential equation $$y''+p(t) y'+q(t)y=0$$ on an open interval $$I$$. Prove that if $$y_1$$ and $$y_2$$ are zero at the same point in $$I$$, then they cannot be a fundamental set of solutions on that interval.
If $$y_1$$ and $$y_2$$ are zero at the same point, does this cause the determinant of the Wronskian matrix to be equal to zero?
If so, is this the reason why there cannot be a fundamental set of solutions on that interval?
Thanks in advance for any help.
• You answered your own question: the Wronskian determinant is zero. Mar 13 '19 at 19:14
• Recent similar question: math.stackexchange.com/q/3143942/115115 If you look for it, you will most probably find many more. Mar 13 '19 at 19:25
• To your last question: it is not that "there cannot be a fundamental set of solutions on that interval" (since there always is such a system), but "the solutions $y_1$ and $y_2$ do not form a fundamental system of solutions." Mar 13 '19 at 19:26
The Wronskian matrix $$W(y_1, y_2)$$ of two solutions $$y_1$$ and $$y_2$$ of
$$y'' + p(t)y'(t) + q(t)y(t) = 0 \tag 1$$
may be defined as
$$W(y_1, y_2) = \begin{bmatrix} y_1 & y_2 \\ y_1' & y_2' \end{bmatrix}, \tag 2$$
with determinant
$$\Delta_W = \vert W(y_1, y_2) \vert = \det \left (\begin{bmatrix} y_1 & y_2 \\ y_1' & y_2' \end{bmatrix} \right ) = y_1y_2' - y_2 y_1'; \tag 3$$
we calculate
$$\Delta_W' = y_1'y_2' + y_1y_2'' - y_2'y_1' - y_2y_1'' = y_1y_2'' - y_2y_1''; \tag 4$$
we may now use (1) in the form
$$y_i'' = -py_i' - qy_i, \; i = 1, 2, \tag 5$$
to transform (4) to
$$\Delta_W' = y_1(-py_2' -qy_2) - y_2(-py_1' - qy_1) = -py_1y_2' - qy_1y_2 + py_1'y_2 + qy_1y_2 = -p(y_1y_2' - y_1'y_2) = -p \Delta_W, \tag 6$$
a simple first order, linear ordinary differential equation for $$\Delta_W$$; the solutions of this equation are
$$\Delta_W(t) = \Delta_W(t_0) e^{-\int_{t_0}^t p(s)\; ds}; \; t_0, t \in I, \tag 7$$
which the reader may easily check. It follows from this equation and the uniqueness of solutions that if
$$\Delta_W(t_0) = 0, \tag 8$$
then
$$\Delta_W(t) = 0, \; \forall t \in I; \tag 9$$
thus if
$$y_1(t_0) = y_2(t_0) = 0, \tag{10}$$
it follows that
$$\Delta_W(t_0) = 0, \tag{11}$$
and (9) holds as well. Thus $$y_1$$ and $$y_2$$ cannot be a fundamental solution system for (1) on $$I$$, since fundamental systems are characterized by the non-vanishing of $$\Delta_W$$ everywhere, that is, by the linear independence of the columns of (2); but of course when the columns are linearly dependent then $$\Delta_W = 0$$.
Finally, as pointed out by user539887 in his comment to the question itself, it's not that a fundamental system doesn't exist but that $$y_1$$, $$y_2$$ does not form one. A fundamental solution system always exists, as may be seen by taking
$$W(t_0) = I = \begin{bmatrix} 1& 0 \\ 0 & 1 \end{bmatrix}, \tag{12}$$
for then
$$\Delta_W(t) \ne 0, \; \forall t \in I, \tag{13}$$
as follows from (7) since
$$\Delta_W(t_0) = 1 \tag{14}$$
in this case.
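As a concrete illustration (my addition, not part of the original answer): take $$p \equiv 0$$ and $$q \equiv 1$$, so (1) is $$y'' + y = 0$$ on $$I = \Bbb R$$. Then $$y_1(t) = \sin t$$ and $$y_2(t) = 2\sin t$$ both vanish at $$t_0 = 0$$, and indeed

$$\Delta_W(t) = y_1 y_2' - y_2 y_1' = 2\sin t \cos t - 2\sin t \cos t \equiv 0,$$

so this pair is not a fundamental system; the pair $$\cos t$$, $$\sin t$$, which realizes $$W(t_0) = I$$ as in (12), is.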
I feel really dumb, but I am stuck on this (changing direction of a sprite)
Recommended Posts
Hey, I am currently working on making a pong clone (for learning purposes). I have it to the point where both paddles and the ball are onscreen, both paddles can be controlled, and there are checks in place to make sure nothing goes off screen. However, when I added the ball, I ran into problems. I can get it to move at the start fine (as the code below shows), and it stops where it's supposed to (well, close enough anyways). The problem is I cannot for the life of me figure out how to get it to change direction. I know it's going to be something stupid, but I have been working on this for a while now and I feel I have gotten to the point where I should ask for help. Note: This project is being done with SFML in Code::Blocks, if that information makes any difference.
```cpp
#include <SFML/Graphics.hpp>
#include <iostream>
#include <cstdlib> // for EXIT_SUCCESS

int main()
{
    // Create the main rendering window
    sf::RenderWindow App(sf::VideoMode(800, 600, 32), "SFML Pong");
    App.SetFramerateLimit(60); // Limits framerate

    // Next 3 lines display window size in console
    std::cout << App.GetHeight();
    std::cout << "\n";
    std::cout << App.GetWidth();

    sf::Image bluePaddle;
    sf::Image redPaddle;
    sf::Image ball;

    // next 3 ifs load images and display an error message if there is a problem
    // (the first two checks were lost in transcription and are reconstructed here
    // by analogy with the surviving ball check; the file names are guesses)
    if (!bluePaddle.LoadFromFile("bluePaddle.png"))
    {
        std::cout << "Error, bluePaddle.png failed to load";
    }
    if (!redPaddle.LoadFromFile("redPaddle.png"))
    {
        std::cout << "Error, redPaddle.png failed to load";
    }
    if (!ball.LoadFromFile("ball.png"))
    {
        std::cout << "Error, ball.png failed to load";
    }

    // set blue paddle sprite and values (positions are guesses; originals lost)
    sf::Sprite bluePaddleSprite(bluePaddle);
    bluePaddleSprite.SetX(20);
    bluePaddleSprite.SetY(250);

    // set red paddle sprite and values
    sf::Sprite redPaddleSprite(redPaddle);
    redPaddleSprite.SetX(760);
    redPaddleSprite.SetY(250);

    // set the ball's sprite and values
    sf::Sprite ballSprite(ball);
    ballSprite.SetX(250);
    ballSprite.SetY(250);

    // Start game loop
    while (App.IsOpened())
    {
        // Process events
        sf::Event Event;
        while (App.GetEvent(Event))
        {
            // Close window : exit
            if (Event.Type == sf::Event::Closed)
                App.Close();

            // A key has been pressed
            if (Event.Type == sf::Event::KeyPressed)
            {
                // Escape key : exit
                if (Event.Key.Code == sf::Key::Escape)
                    App.Close();
            }
        }

        // Clear the screen
        App.Clear(sf::Color(0, 0, 0));

        // next 2 ifs: bluePaddle's border guards (collision detection, keeps it
        // in bounds); conditions and bodies lost in transcription, reconstructed
        if (bluePaddleSprite.GetPosition().y < 0)
        {
            bluePaddleSprite.SetY(0);
        }
        if (bluePaddleSprite.GetPosition().y > App.GetHeight() - bluePaddle.GetHeight())
        {
            bluePaddleSprite.SetY(App.GetHeight() - bluePaddle.GetHeight());
        }

        // next 2 ifs: redPaddle's border guards (same as blue)
        if (redPaddleSprite.GetPosition().y < 0)
        {
            redPaddleSprite.SetY(0);
        }
        if (redPaddleSprite.GetPosition().y > App.GetHeight() - redPaddle.GetHeight())
        {
            redPaddleSprite.SetY(App.GetHeight() - redPaddle.GetHeight());
        }

        //-> start of code dealing with ball. This bit will deal with ball movement/collision/etc
        ballSprite.Move(150 * App.GetFrameTime() * -1, 0);
        if (ballSprite.GetPosition().y < 0)
        {
            ballSprite.SetY(0.5);
        }
        if (ballSprite.GetPosition().y > App.GetHeight() - ball.GetHeight())
        {
            ballSprite.SetY(582);
        }
        // (an if-condition was lost in transcription here; the orphaned body remains)
        {
            ballSprite.Move(150 * App.GetFrameTime() * 1, 0);
        }
        //<- end of all the work with ball

        // this chunk provides the code for player control (movement)
        if (App.GetInput().IsKeyDown(sf::Key::W)) {
            bluePaddleSprite.Move(0, 150 * App.GetFrameTime() * -1);
        } else if (App.GetInput().IsKeyDown(sf::Key::S)) {
            bluePaddleSprite.Move(0, 150 * App.GetFrameTime() * 1);
        }

        // this bit is a tester for red before I put in AI, to make sure movement
        // works (tested working)
        if (App.GetInput().IsKeyDown(sf::Key::Up)) {
            redPaddleSprite.Move(0, 150 * App.GetFrameTime() * -1);
        } else if (App.GetInput().IsKeyDown(sf::Key::Down)) {
            redPaddleSprite.Move(0, 150 * App.GetFrameTime() * 1);
        }

        // Draw the paddles and the ball (the paddle draw calls were presumably
        // present in the original but lost)
        App.Draw(bluePaddleSprite);
        App.Draw(redPaddleSprite);
        App.Draw(ballSprite);

        // Display window contents on screen
        App.Display();
    }

    return EXIT_SUCCESS;
}
```
That is my full code. The piece of code in question is:
```cpp
ballSprite.Move(150 * App.GetFrameTime() * -1, 0);
if (ballSprite.GetPosition().y < 0)
{
    ballSprite.SetY(0.5);
}
if (ballSprite.GetPosition().y > App.GetHeight() - ball.GetHeight())
{
    ballSprite.SetY(582);
}
// (an if-condition was lost in transcription here; the orphaned body remains)
{
    ballSprite.Move(150 * App.GetFrameTime() * 1, 0);
}
```
There are a few other bugs, such as that although the ball stops, it stops at that point even if the paddle is not there (leading me to think I made it check the y value of the area the paddle is on, instead of just the paddle). However, my biggest problem right now is just getting the ball to change directions on contact. Thank you, and sorry for the trouble.
Share on other sites
I think you are looking for 2D Vector Reflection. :)
Share on other sites
Quote: *Original post by Litheon:* I think you are looking for 2D Vector Reflection. :)
If only I understood what any of that meant...
Share on other sites
What the other poster is getting at is that you should imagine the direction of anything as being a vector. A vector in 2D is just an x and a y.
If D is your direction vector, then -2*D means that your x and y are multiplied by -2.
The formula that is written on that link he posted is
Vect2 = Vect1 - 2 * WallN * (WallN DOT Vect1)
... you can use this formula; I won't confirm it correct.
But let's say you are using a vector-style direction:
if you hit a vertical wall you multiply your X by -1; if you hit a horizontal wall you multiply Y by -1.
If you want to change how the paddle ENDS react on the ball, make the distance from the paddle's center at which the ball hits a multiplier on the Y, so it plays more like the real pong game.
PS. I think this works only in settings like pong where the walls are straight vertical and straight horizontal... I believe it's just the transformation matrix { {-1,0},{0,1} }.
[Edited by - rsalazar on April 25, 2010 3:58:53 AM]
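To make the formula concrete, here is a minimal C++ sketch (my addition; the names and the unit-normal assumption are not from the thread):

```cpp
struct Vec2 { float x, y; };

float dot(const Vec2& a, const Vec2& b) { return a.x * b.x + a.y * b.y; }

// Reflect v off a wall with unit normal n: v' = v - 2 * n * (n . v)
Vec2 reflect(const Vec2& v, const Vec2& n)
{
    float d = dot(n, v);
    return { v.x - 2 * d * n.x, v.y - 2 * d * n.y };
}

// For Pong's axis-aligned walls this collapses to flipping one component:
//   vertical wall   (n = {+/-1, 0}): v.x = -v.x
//   horizontal wall (n = {0, +/-1}): v.y = -v.y
```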
Share on other sites
Turns out my base issue was having my variables for certain things in the wrong place. Now it works, at least as far as back-and-forth bouncing goes (other issues are present, but I can deal).
I had re-written the code with

```cpp
float v_x = 5;
float v_y = 0;
```

but had put them before the part dealing with the movement, thus putting them inside the loop, so they were constantly being re-created and redefined. I moved them outside of the loop and things work now.
I did learn a lot doing this though, and thanks for pointing me in the right direction as far as the math itself.
# when does cos^2x=sin^2x?
• Feb 4th 2010, 02:26 PM
Amberosia32
when does cos^2x=sin^2x?
when does cos^2 x=sin^2 x?
• Feb 4th 2010, 02:32 PM
Prove It
Quote:
Originally Posted by Amberosia32
when does cos^2 x=sin^2 x?
You should know from the Pythagorean Identity that
$\cos^2{x} + \sin^2{x} = 1$.
So $\cos^2{x} = 1 - \sin^2{x}$.
Substituting this into your original equation:
$\cos^2{x} = \sin^2{x}$
$1 - \sin^2{x} = \sin^2{x}$
$1 = 2\sin^2{x}$
$\sin^2{x} = \frac{1}{2}$
$\sin{x} = \pm\frac{1}{\sqrt{2}}$
$x = \left \{ \frac{\pi}{4}, \frac{3\pi}{4}, \frac{5\pi}{4}, \frac{7\pi}{4} \right \} + 2\pi n$, where $n$ is an integer representing the number of times you have gone around the unit circle.
• Feb 4th 2010, 02:48 PM
Quote:
Originally Posted by Amberosia32
when does cos^2 x=sin^2 x?
$Cos^2x=Sin^2x$
$Cos^2x-Sin^2x=0$
$\left(Cosx+Sinx\right)\left(Cosx-Sinx\right)=0$
$Cosx=Sinx,\ or\ Cosx=-Sinx$
As Cosx gives the horizontal co-ordinate and Sinx gives the vertical co-ordinate of the unit circle centred at the origin,
Cosx=Sinx at $\frac{\pi}{4}+2n\pi,\ \left(\pi+\frac{\pi}{4}+2n\pi\right)$ for $n=0,1,2,\ldots$
Cosx= -Sinx at $\left(\pi-\frac{\pi}{4}+2n\pi\right),\ \left(2{\pi}-\frac{\pi}{4}+2n\pi\right)$ for $n=0,1,2,\ldots$
• Feb 4th 2010, 05:43 PM
Soroban
Hello, Amberosia32!
Quote:
When does $\cos^2\!x \:=\:\sin^2\!x\;?$
We have: . $\sin^2\!x \:=\:\cos^2\!x$
Divide by $\cos^2\!x\!:\;\;\;\frac{\sin^2\!x}{\cos^2\!x} \:=\:1 \quad\Rightarrow\quad \left(\frac{\sin x}{\cos x}\right)^2 \:=\:1 \quad\Rightarrow\quad \tan^2\!x \:=\:1$ . $\Rightarrow\quad \tan x \:=\:\pm1$
Therefore: . $x \;=\;\frac{\pi}{4} + \frac{\pi}{2}n\;\;\text{ for any integer }n$
• Feb 4th 2010, 10:22 PM
pacman
cos^2 x = sin^2 x?
# Profile Likelihood
November 16, 2015
By
(This article was first published on Freakonometrics » R-english, and kindly contributed to R-bloggers)
Consider some simulated data
```> set.seed(1)
> x=exp(rnorm(100))```
Assume that those data are observed i.i.d. random variables with a Gamma distribution, $\Gamma(\alpha,\beta)$, with unknown parameters $\alpha$ and $\beta$ (the inline formulas were lost in extraction; this reading is consistent with the code below). The natural idea is to consider the maximum likelihood estimator.
For instance, consider some maximum likelihood estimator,
```> library(MASS)
> (F=fitdistr(x,"gamma"))
shape rate
1.4214497 0.8619969
(0.1822570) (0.1320717)
> F$estimate[1]+c(-1,1)*1.96*F$sd[1]
[1] 1.064226 1.778673```
Here, we have an approximate confidence interval for $\alpha$ (approximate since the maximum likelihood estimator has an asymptotically Gaussian distribution). We can use a numerical optimization routine to get the maximum of the log-likelihood function:
```> log_lik=function(theta){
+ a=theta[1]
+ b=theta[2]
+ logL=sum(log(dgamma(x,a,b)))
+ return(-logL)
+ }
> optim(c(1,1),log_lik)
$par
[1] 1.4214116 0.8620311
$value
[1] 146.5909```
And we have the same value.
Now, what if we care only about $\alpha$, and not $\beta$? Then we can use the profile likelihood. (The displayed formulas were lost in extraction; reconstructed from the code below.) The idea is to solve

$$\hat{\beta}(\alpha) = \underset{\beta}{\operatorname{argmax}}\ \log\mathcal{L}(\alpha,\beta),$$

i.e. to maximize the likelihood in $\beta$ for each fixed $\alpha$,

or, equivalently, to work with the profile log-likelihood

$$\log\mathcal{L}_p(\alpha) = \max_{\beta}\ \log\mathcal{L}(\alpha,\beta) = \log\mathcal{L}\big(\alpha,\hat{\beta}(\alpha)\big).$$
```> prof_log_lik=function(a){
+ b=(optim(1,function(z) -sum(log(dgamma(x,a,z)))))$par
+ return(-sum(log(dgamma(x,a,b))))
+ }
> vx=seq(.5,3,length=101)
> vl=-Vectorize(prof_log_lik)(vx)
> plot(vx,vl,type="l")
> optim(1,prof_log_lik)
$par
[1] 1.421094
$value
[1] 146.5909```
A few weeks ago, we mentioned the likelihood ratio test, i.e. the fact that twice the log-likelihood ratio is asymptotically chi-square distributed. (The displayed formulas were lost in extraction; reconstructed from context.) The analogous result can be obtained here, since

$$2\Big(\log\mathcal{L}_p(\hat\alpha) - \log\mathcal{L}_p(\alpha)\Big) \xrightarrow{d} \chi^2(1)$$

(the 1 comes from the fact that $\alpha$ is a one-dimensional coefficient). The (technical) proof can be found in Suhasini Subba Rao’s notes (see also Section 4.5.2 in Antony Davison’s Statistical Models). From that property, we can easily obtain a confidence interval for $\alpha$:

$$\Big\{\alpha : \log\mathcal{L}_p(\alpha) \ge \log\mathcal{L}_p(\hat\alpha) - \tfrac{1}{2}\, q_{\chi^2(1)}(0.95)\Big\}.$$
Hence, from our sample, we get the following 95% confidence interval,
```> abline(v=optim(1,prof_log_lik)$par,lty=2)
> abline(h=-optim(1,prof_log_lik)$value)
> abline(h=-optim(1,prof_log_lik)$value-qchisq(.95,1)/2)
> segments(F$estimate[1]-1.96*F$sd[1],
-170,F$estimate[1]+1.96*F$sd[1],-170,lwd=3,col="blue")
> borne=-optim(1,prof_log_lik)$value-qchisq(.95,1)/2
> (b1=uniroot(function(z) Vectorize(prof_log_lik)(z)+borne,c(.5,1.5))$root)
[1] 1.095726
> (b2=uniroot(function(z) Vectorize(prof_log_lik)(z)+borne,c(1.25,2.5))$root)
[1] 1.811809```
that can be visualized below,
`> segments(b1,-168,b2,-168,lwd=3,col="red")`
In blue, the interval obtained using the asymptotic Gaussian property of the maximum likelihood estimator; in red, the interval obtained using the asymptotic chi-square distribution of the log (profile) likelihood ratio.
# familiar with IGBT? how to use it in a strobe flash circuit?
#### rwmoekoe
Joined Mar 1, 2007
172
Hi everyone
A common strobe flash circuit with brightness preset has, until now, usually used a fast thyristor (such as the CR3JM) as the 'quench' switch.
You may refer to the circuit here:
(edited: the link doesn't work! please refer to the attachment in a later post)
This is a schematic of a strobe flash with brightness / intensity preset.
I'm only gonna describe the presetting concept roughly, hope it's ok with you all.
The circuit part below the main circuit will act as a (presettable) timer to stop the flash from firing. Yes, the intensity level is actually determined by the period during which the flash is allowed to fire.
The main switch that makes this stopping possible is the CR3JM fast thyristor. Normally, a common thyristor would fail to disengage; that's why a fast one is used instead.
The situation is like this:
#### hgmjr
Joined Jan 28, 2005
9,029
I too was unable to open the jpg file.
hgmjr
#### rwmoekoe
Joined Mar 1, 2007
172
The image shack link doesn't work.
Quote: *Do I understand correctly, are you looking for less-than-$0.02 per IGBT in quantities of a few hundred? Such is likely unachievable. If price is your only reason for using an IGBT, then stick with the thyristor.*
thx thingmaker3 for responding. The thyristor CR3JM is $6.00 per piece, sold in bulk of 300 pcs. I am still bargaining, and the latest quote was $5.75 a piece for a bulk of 100 pcs.
As a matter of fact, price is one reason; another is that I'm afraid the thyristor will be discontinued, because some people say it's already obsolete.
oh, and I'm attaching the pic here. thx a lot!
friendly,
robert
#### Attachments
• 81.3 KB Views: 188
#### thingmaker3
Joined May 16, 2005
5,073
Not 300 for 6, but 300 at 6 each! I understand now. If it were me, I'd go through the catalogs looking for the least expensive MOSFET or IGBT that will handle 120% or more of the voltage & current.
#### rwmoekoe
Joined Mar 1, 2007
172
Not 300 for 6, but 300 at 6 each! I understand now. If it were me, I'd go through the catalogs looking for the least expensive MOSFET or IGBT that will handle 120% or more of the voltage & current.
really? if you think so, my friend, i'll do just that.
can you help me with the schematic? i don't have any idea on using mosfet yet. in fact, the only mosfet i'm familiar with is the irfz44 and the series. only 40 volt max isn't it?
thx thingmaker3,
robert
#### thingmaker3
Joined May 16, 2005
5,073
# Bounding $\liminf_{n} n |f^n(x)-x|$
I solved an exercise in which the first part asks to prove that for any measure preserving measurable transformation $f:[0,1]\rightarrow [0,1]$ we have $$\liminf_{n} n |f^n(x)-x| \leq 1, \ \mbox{a.e.}$$
I can't prove the second part of the exercise: Let $\omega=(\sqrt{5}-1)/2$ and let $f:[0,1]\rightarrow[0,1]$ defined as $f(x)= (x+\omega) \pmod{1}$. Use this transformation to prove that there is no $c<\frac{1}{\sqrt{5}}$ such that $$\liminf_{n} n |f^n(x)-x| \leq c$$
-
• How did you prove the first part? If you take $f=0$ and $x=\frac{1}{2}$ the bound is not satisfied. – Lucien Feb 12 '13 at 16:34
I'm sorry! I forgot to say that it holds a.e. – badaui90 Feb 12 '13 at 16:50
If you take $f=0$, then still the bound is satisfied only for $x=0$, so not a.e. – Lucien Feb 12 '13 at 17:00
My bad again, I was thinking about the problem and forgot to say that $f$ also has to preserve the Lebesgue measure $\lambda$, i.e., $\lambda(f^{-1}(B))=\lambda(B)$ for all measurable sets $B$. – badaui90 Feb 12 '13 at 17:13
As long as we are dealing with $\omega \pmod 1$ it doesn't make any difference to deal with $\omega+1 = \frac{\sqrt{5}+1}{2}= \phi$. Therefore we can apply Hurwitz's theorem, which states: for every irrational number $\zeta$ there are infinitely many rationals $m/n$ such that $$\left| \zeta - \frac{m}{n} \right| \leq \frac{1}{\sqrt{5} n^2}$$ Moreover, $\sqrt{5}$ is the best constant you can get: if you replace it with an $A>\sqrt{5}$ and take $\zeta=\phi$, there are only finitely many rational numbers such that the property above holds with $A$ instead of $\sqrt{5}$.
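To spell out the link to the exercise (my addition, filling a step the answer leaves implicit): writing $f^n(x) = x + n\omega \pmod 1$, for every $x$ we have $|f^n(x)-x| \ge \min_{m\in\mathbb{Z}} |n\omega - m|$, hence

$$n\,|f^n(x)-x| \;\ge\; n^2\left|\,\omega - \frac{m_n}{n}\right|$$

for the best integer $m_n$. Given $c < \frac{1}{\sqrt 5}$, pick $A$ with $\sqrt 5 < A < \frac{1}{c}$; by the optimality part of Hurwitz's theorem, only finitely many $n$ satisfy $n^2\left|\omega - \frac{m}{n}\right| \le \frac{1}{A}$, so $\liminf_{n} n\,|f^n(x)-x| \ge \frac{1}{A} > c$ for every $x$.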
# Solve each of the following quadratic equations:
Question:
Solve each of the following quadratic equations:
$100 x^{2}-20 x+1=0$
Solution:
We write, $-20 x=-10 x-10 x$ as $100 x^{2} \times 1=100 x^{2}=(-10 x) \times(-10 x)$
$\therefore 100 x^{2}-20 x+1=0$
$\Rightarrow 100 x^{2}-10 x-10 x+1=0$
$\Rightarrow 10 x(10 x-1)-1(10 x-1)=0$
$\Rightarrow(10 x-1)(10 x-1)=0$
$\Rightarrow(10 x-1)^{2}=0$
$\Rightarrow 10 x-1=0$
$\Rightarrow x=\frac{1}{10}$
Hence, $\frac{1}{10}$ is the repeated root of the given equation.
The type of hybrid orbitals used by the chlorine atom in $ClO^{-}_2$ is
$(A)\;SP^3 \\ (B)\;SP^2 \\(C)\;SP \\(D)\;\text{none of these}$
We can calculate the number of orbitals involved in hybridisation by using the relation:

$$\text{Number of orbitals involved in hybridisation} = \frac{1}{2}(V+M-C+A)$$

where V = number of valence electrons of the central atom, M = number of monovalent atoms surrounding it, C = charge on the cation, and A = charge on the anion.

For $ClO_2^-$: number of orbitals involved in hybridisation $= \frac{1}{2}(7+0-0+1)=4$.
Since 4 orbitals are involved in hybridisation, it is $SP^3$ hybridised.
Hence A is the correct answer.
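As a quick sanity check of the relation (my addition, not part of the original answer): for $NH_4^+$ we get $\frac{1}{2}(5+4-1+0)=4$, again four orbitals and $SP^3$ hybridisation, matching the known tetrahedral geometry of the ammonium ion.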
## Bijective proofs
#### A fourth proof
Last month I described three proofs of the formula for the number of ways to choose k objects from a set of n, if repetition is allowed and order is not significant; it is the same as the number of choices of k objects from a set of n+k−1, with repetition not allowed but order not significant.
If I needed evidence that I am getting older, here it is. I completely forgot the simplest proof of all, which I have known for a long time!
This is a bijective proof, that is, we show that the two sets have the same cardinality by establishing a bijection between them.
It works very simply. Given a set of k objects from the set {1,…,n+k−1}, since the order of choice is not important, we can label them arbitrarily; call the smallest element the first, the next smallest the second, and so on. Now we produce a selection (possibly with repetitions) of k objects from {1,…,n} as follows: keep the first object; subtract one from the second; subtract two from the third; … subtract k−1 from the kth.
The inverse bijection is described in the same way, but replacing “subtract … from” by “add … to”. In a selection with repetition, ties are unimportant.
Here is how it works out with n = 4, k = 2.
Pairs without repetition:
12, 13, 14, 15, 23, 24, 25, 34, 35, 45
Pairs with repetition:
11, 12, 13, 14, 22, 23, 24, 33, 34, 44
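Here is a tiny script (my addition, not from the post) that implements the bijection for general n and k and reproduces the two lists above for n = 4, k = 2:

```python
from itertools import combinations

def with_repetition(combo):
    """Map a k-subset of {1, ..., n+k-1} (no repeats) to a weakly
    increasing k-selection from {1, ..., n}: keep the smallest element,
    subtract 1 from the next smallest, 2 from the one after, and so on."""
    return tuple(c - i for i, c in enumerate(sorted(combo)))

n, k = 4, 2
for combo in combinations(range(1, n + k), k):   # pairs from {1, ..., 5}
    print("".join(map(str, combo)), "->",
          "".join(map(str, with_repetition(combo))))
```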
I said back then that different proofs have the advantage that they generalise in different ways. Here is an example, which I used when I taught elementary probability.
The British National Lottery involves choosing 6 numbers from {1,…49} (without replacement). Many people believe that the chance of having two consecutive numbers in the winning selection is very small. What is it really?
There are ${49\choose 6}$ combinations altogether. Exactly the same bijection that we just saw matches the combinations with no two consecutive numbers with the combinations of 6 things from 49−6+1=44. So the answer to the problem is $1-{44\choose 6}/{49\choose 6}=0.495198449\ldots$, that is, very close to evens.
#### Bijections and permutations
“Bijective proofs” are sometimes called “combinatorial proofs”, a term I don’t like. If you have two sets An and Bn, with cardinalities f(n) and g(n), then one way to prove that f(n) = g(n) is to find a bijection between the two sets. But there are many others, no less combinatorial: for example, show that the generating functions for the sequences (f(n)) and (g(n)) are equal.
I think the advantage of bijective proofs (apart from the generic advantage of having another proof of something) lies in a different area. If you have a number of sets (say m of them) which you are trying to match up bijectively, the smallest amount of work you need to do is to find m−1 bijections forming a “minimal connector”, that is, the edges of a tree. But if you find extra bijections, you open up a new field of investigation.
Last year I wrote about Dima Fon-Der-Flaass, and mentioned the piece of work which brought him to my attention. To recap briefly: We have a finite partially ordered set P. Then there are bijections between the sets of down-sets, up-sets, and antichains in P, as follows:
• The set of maximal elements in a down-set is an antichain, and the set of elements lying below something in an antichain is a down-set (and these are inverse bijections).
• The set of minimal elements in an up-set is an antichain, and the set of elements lying above something in an antichain is an up-set (and these are inverse bijections).
• The complement of a down-set is an up-set, and vice versa.
Now, if we start at a particular down-set and go “round the triangle”, we obtain another down-set, not necessarily the same as the one we started with. So we have a permutation on the down-sets. Dima investigated this permutation, proving a “duality” for its cycles conjectured by Deza and Fukuda, and finding the lengths of the cycles in some cases.
More generally, if we have more than enough bijections to establish that our n sets are all of the same size, then we can compose them to find various permutations on one of the sets. Following Dima, I think it might be interesting to investigate these permutations in some cases.
One of the best worked-over areas for bijective proofs involves “Catalan objects”, things counted by the famous Catalan numbers. These include binary trees with n leaves; rooted plane trees with n edges; dissections of a convex n-gon; bracketings of a product of n factors; Dyck paths from the origin to (2n,0); and Young tableaux for a 2×n rectangle. There are so many known bijections here that we should be able to produce some interesting permutations!
#### Back to sampling
I have now given four proofs of the bijection between unordered samples without and with repetition.
You may recall that the third proof was a recursive proof inspired by the second. I have not checked whether the bijections produced by these two proofs are the same. Also, I think that the first and fourth proofs give the same bijection. (Would anyone like to check these assertions?)
However, the bijections produced by the second and fourth proofs are definitely different. Recall that in the second proof, we add k−1 “dummies” to the set {1,…,n}, to describe the repetitions in the sample. It seems natural to put the dummies at the end. If we do this in the case n = 4, k = 2, we obtain the following. (Remember that, for example, 15 translates into 11.)
Pairs without repetition:
12, 13, 14, 15, 23, 24, 25, 34, 35, 45
Pairs with repetition:
12, 13, 14, 11, 23, 24, 22, 34, 33, 44
So, applying one and the inverse of the other gives a permutation on the pairs of distinct elements which shifts cyclically the pairs with a given least element.
Problem. What happens for arbitrary n and k?
#### A small challenge
Here is a challenge you might enjoy, to find a bijective proof. In my post on the book Combinatorial Chance by David and Barton, I mentioned that the following sets of combinations of k from {1,…,n} (order unimportant, no repetition) have the same number of elements, for given m:
• the sets in which the difference between the greatest and least element is m;
• the sets in which the second largest element is m.
I have slightly re-formulated the observation, for simplicity.
Now the problem has two parts:
1. Find a bijective proof.
2. Formulate and prove an analogous result for combinations with repetition allowed.
## Introduction to Water
### Water
- Molecular formula = H2O
- Molecular mass = 18
- Melting point = 0°C
- Boiling point = 100°C
Water is a very common substance and is the monoxide of hydrogen. It only feebly dissociates into hydrogen and hydroxyl ions. It exists in all three states: ice (solid), water (liquid) and steam (gas).
#### Anomalous Behavior of water
Water exhibits many unusual properties.
1. It has a high specific heat, a high latent heat of fusion, a high latent heat of vaporization and a high dielectric constant.
2. Its melting point and boiling point are abnormally high compared with those of the hydrides of the other members of group VIA.
3. It has the higher density than ice. This means ice floats on water and this useful for aquatic lives. It has a maximum density at 4°C (1 gm/cc).
The anomalous property of water can be well understood in light of its structure.
Due to the strongly electronegative character of the oxygen atom, the water molecule is highly polarized. Therefore, intermolecular hydrogen bonds form between the oxygen of one water molecule and the hydrogen of another. This bonding leads to the association of several water molecules in both the liquid and solid states. In the absence of hydrogen bonding, water would exist in the gaseous state like H2S. Due to hydrogen bonding, water is in polymeric form, i.e. (H2O)n. Some extra energy is needed to break the hydrogen bonds, and hence hydrogen bonding is the reason for the unusual properties of water.
X-ray studies have shown that the water molecules in ice are arranged so as to form a loose, open, cage-like structure with vacant spaces, due to the interplay of hydrogen bonding. The oxygen atoms are situated tetrahedrally with respect to one another in this structure. Therefore, when ice is formed the volume increases, and ice has a lower density than water. When ice melts, some of the hydrogen bonds are broken and the open cage-like structure is partially destroyed, causing the water molecules to come closer to each other. This makes the water more compact and denser than ice at the melting point.
In the water molecule, the central oxygen atom undergoes sp3 hybridization. This leads to a tetrahedral shape in which the two H atoms lie at two corners of the tetrahedron and the remaining two corners are occupied by the two lone pairs of electrons of the oxygen atom. But as lone pair–lone pair repulsion is greater than lone pair–bond pair and bond pair–bond pair repulsion, the tetrahedral shape is distorted. Thus, the HOH bond angle decreases from 109.5° (the regular tetrahedral bond angle) to 104.5°. So, the water molecule has a distorted tetrahedral structure. If only the atomic nuclei are considered, it has an angular or bent (V-shaped) structure.
#### Types of Water
By investigating the dissolved salt ions, water is classified as soft water or hard water.
1. Soft water: Water which is free from soluble salt ions like Cl-, SO4- -, HCO3-, CO3- - of calcium (Ca+ +) or magnesium (Mg+ +) is called soft water. Soft water easily produces enough lather with ordinary soap; for example rain water, distilled water and demineralized water.
2. Hard water: Water containing dissolved salt ions like SO4--, Cl-, HCO3- of calcium (Ca++) or magnesium (Mg++) which does not produce enough lather easily with soap is called hard water.
How is hard water formed?
The property due to which water is unable to produce lather with soap is known as the hardness of water; it is caused by dissolved chlorides, sulphates (SO4- -) and bicarbonates (HCO3-) of calcium and magnesium. During rainfall, CO2 from the air dissolves in water to form carbonic acid.
$$CO_2+H_2O\longrightarrow{H_2CO_3}$$
As the water containing carbonic acid flows on the surface of the earth, it reacts with CaCO3 present in the rock to produce bicarbonate salts.
$$CaCO_3+H_2CO_3\longrightarrow{Ca(HCO_3)_2}$$
Similarly, water is contaminated with MgSO4, MgCl2, CaSO4 and CaCl2 salts when it passes over the beds of rocks.
Why is hard water unable to produce enough lather?
Soap contains sodium salt of higher fatty acid like stearic acid, palmitic acid, oleic acid etc. The general formula of the soap is given by RCOONa. When hard water comes in contact with soap, calcium or magnesium salts of fatty acid are formed which are insoluble in water.
$$2C_{17}H_{35}COONa+CaSO_4\longrightarrow{(C_{17}H_{35}COO)_2Ca+Na_2SO_4}$$
$$2C_{15}H_{31}COONa+MgCl_2\longrightarrow{(C_{15}H_{31}COO)_2Mg+2NaCl}$$
Until the Ca+ + and Mg+ + ions are precipitated, no lather is produced with the soap, and a lot of soap is wasted before lather is obtained from hard water.
Types of Hardness of Water
By knowing the nature of dissolved ions, hard water is classified as temporary hard water and permanent hard water.
Temporary Hardness
The hardness caused by the dissolution of soluble bicarbonates of calcium or magnesium in water is called temporary hardness. This type of hardness can be removed simply by heating, which is why it is called temporary.
Permanent Hardness
The hardness caused by dissolution of soluble sulphates and chlorides of magnesium or calcium in water is called permanent hardness. This type of hardness can't be removed by the simple method and needs special chemical methods. Therefore, such type of hardness is called permanent hardness.
#### Methods of Removal of Hardness
Removal of Temporary Hardness
The temporary hardness of water is due to the presence of soluble bicarbonates of calcium and magnesium. It can simply be removed by 1) boiling and 2) Clark's method.
1. By boiling: When water containing bicarbonates of calcium or magnesium is heated, insoluble CaCO3 or MgCO3 are formed which can be removed by the filtration process.
$$Mg(HCO_3)_2\xrightarrow\Delta{MgCO_3+H_2O+CO_2}$$
$$Ca(HCO_3)_2\xrightarrow\Delta{CaCO_3+H_2O+CO_2}$$
2. By Clark's method:It is a simple chemical method developed by Clark.
When lime water is added to temporary hard water, insoluble carbonates precipitate out. The precipitate is then removed by filtration.
$$Mg(HCO_3)_2+Ca(OH)_2\longrightarrow{MgCO_3↓+CaCO_3↓+2H_2O}$$
$$Ca(HCO_3)_2+Ca(OH)_2\longrightarrow{2CaCO_3↓+2H_2O}$$
The use of an excess of lime water will reverse the path of reaction by absorbing CO2 from the atmosphere.
Removal of Permanent Hardness
There are two chemical methods to soften the permanent hard water which are described below.
1. Using washing soda: When a calculated amount of washing soda (Na2CO3) is added to permanent hard water, the chlorides or sulphates of calcium or magnesium change into insoluble carbonates of calcium or magnesium, along with soluble sodium chloride (NaCl) and sodium sulphate (Na2SO4) salts. These soluble salts are separated by the distillation process.
$$MgCl_2+Na_2CO_3\longrightarrow{MgCO_3↓+2NaCl}$$
$$MgSO_4+Na_2CO_3\longrightarrow{MgCO_3↓+Na_2SO_4}$$
$$CaCl_2+Na_2CO_3\longrightarrow{CaCO_3↓+2NaCl}$$
$$CaSO_4+Na_2CO_3\longrightarrow{CaCO_3↓+Na_2SO_4}$$
Thus, Ca+ + or Mg+ + ions are removed as a residue.
2. By ion-exchange method: This method is quite useful and more reliable method. The principle of this method is simple. The ions which are responsible for the hardness of water are exchanged by certain less damaging ions present in some chemical compounds called ion-exchangers which may be an organic or inorganic compounds.
a) Using inorganic ion exchangers (permutit method): Hydrated sodium aluminum silicate, Na2Al2Si2O8.xH2O, is a large molecule or complex compound, known as permutit or zeolite complex.
When hard water containing CaSO4, CaCl2, MgSO4 or MgCl2 is treated with permutit compound, sodium ion of the compound is exchanged by Ca+ + or Mg+ + ions of hard water to produce calcium or magnesium aluminum silicate and soluble NaCl or Na2SO4 salts. Thus, formed calcium or magnesium aluminum silicates are insoluble which are left in the tank as a residue. Soluble NaCl and Na2SO4 are separated by the distillation process.
$$CaCl_2+Na_2Z\longrightarrow{CaZ↓ +2NaCl}$$
$$MgSO_4+Na_2Z\longrightarrow{MgZ↓+Na_2SO_4}$$
Regeneration of permutit: After some time, the whole of the Na2Al2Si2O8 gets changed into CaAl2Si2O8 or MgAl2Si2O8 and the sodium ions are regenerated by adding 10% NaCl solution to continue the reaction.
$$CaAl_2Si_2O_8+2NaCl\longrightarrow{Na_2Al_2Si_2O_8+CaCl_2}$$
$$MgAl_2Si_2O_8+2NaCl\longrightarrow{Na_2Al_2Si_2O_8+MgCl_2}$$
b) Using organic ion-exchangers: This is a more advanced method than the permutit process. In this method, big organic molecules having high molecular mass, a permeable molecular structure and attached acidic groups (-COOH, -SO3H) or basic groups (-OH, -NH2) are used. These are known as ion-exchange resins. Ion-exchange resins are superior to zeolites because they can remove all kinds of dissolved ions in water. The resulting water is known as deionised (demineralized) water.
The ion exchange resins which contain replaceable H+ ions are called cation exchange resins. For example, resins- COOH, resin- SO3H. Another types of synthetic organic ion-exchange resins which contain replaceable -OH group are called anion exchange resins. It is represented by resin- OH. When hard water is added to the tank containing the ion-exchange resin (resin-H), the cations of the water are exchanged by hydrogen of the resin-H.
$$CaCl_2+2resin\;H\rightleftharpoons{(resin)_2Ca+2H^++2Cl^-}$$
$$MgSO_4+2resin\;H\rightleftharpoons{(resin)_2\;Mg+2H^++SO_4^{-\;-}}$$
The above reactions are reversible. To make them irreversible, water is flowed down the column. Ca+ + and Mg+ + ions are trapped by the resin. The H+, Cl- and SO4- - ions pass into another tank which contains the anion exchange resin (resin-OH). The resin-OH reacts with the free SO4- - and Cl- ions, liberating free OH- ions.
$$Cl^-+resin-OH\rightleftharpoons{resin-Cl+OH^-}$$
$$SO_4^{-\;-}+2resin-OH\rightleftharpoons{(resin)_2SO_4+2OH^-}$$
To make the reaction irreversible, water is allowed to flow down the column.
The first two reactions occur in the cation-exchange tank and the second two reactions occur in the anion-exchange tank. The free H+ ions and free OH- ions thus formed combine together to produce water.
$$H^++OH^-\longrightarrow{H_2O}$$
Regeneration of resins: As the reaction proceed, all the H+ or OH- ions are consumed. In order to regenerate the resin, the entry of hard water is stopped and dilute HCl is introduced into the cation-exchange tank and dilute NaOH is added to the anion-exchange tank.
$$(resin)_2Ca+2HCl\longrightarrow{CaCl_2+2resin-H}$$
$$(resin)_2Mg+2HCl\longrightarrow{MgCl_2+2resin-H}$$
Similarly,
$$resin-Cl+NaOH\longrightarrow{NaCl+resin-OH}$$
$$(resin)_2SO_4+2NaOH\longrightarrow{Na_2SO_4+2resin-OH}$$
#### Solvent Property of Water
Water has a high dielectric constant (82). That is why many substances dissociate in it, and hence water is known as the universal solvent. In fact, water is a good solvent for ionic substances and a poor one for covalent substances. As it is polar in nature, it dissolves polar (mainly ionic) substances but cannot dissolve non-polar substances.
1. The solubility of ionic substances: Let us consider the dissolution of an ionic substance, say sodium chloride, in water. When NaCl crystals are placed in water, the partially positively charged hydrogen atoms of polar H2O molecules surround the negatively charged chloride ions. Similarly, the partially negatively charged oxygen atoms of water molecules surround the positively charged sodium ions a shown below.
In other words, Na+ ions and Cl- ions get hydrated as Na+ (H2O)m and Cl- (H2O)n from the action of NaCl with (n+m) water molecule. This process involves ion-dipole attractive interactions and energy is released, which is called hydration energy. This energy is used to break the lattice of the crystals and the ions pass into solution. If the hydration energy of an ionic solid is grater than the crystal energy (also called lattice energy which is the energy required to bind the ions in crystal) then it will dissolve, otherwise not. Thus PbSO4 is insoluble in H2O because its hydration energy is less than its crystal energy while the hydration energy exceeds the lattice energy in case of NaCl.
2. The solubility of some polar covalent substances: Some polar organic substances like alcohols, sugar, and carboxylic acids dissolve in water as inter molecular hydrogen bonding takes place between the function group of these compounds and polar water molecules.
The non-polar covalent substances like benzene, methane are not soluble in water as these molecules do not interact with water from H-bonds and again the energy released due to interaction is not sufficient enough to overcome the weak Vander Wall's force of attraction existing between the molecules of these covalent substances.
Detergents and Water Pollution
Synthetic detergents, which do not form any precipitate with water, are among the major pollutants of water. The alkyl benzene sulphonate from detergents is non-degradable and causes foaming. The phosphate ion of detergent is an essential plant nutrient whose presence in water promotes the growth of algae. Algae reduce the concentration of dissolved oxygen in water, due to which the growth of aquatic animals is retarded.
# For the Following Inequation, Represent the Solution on a Number Line: (4 − x)/2 < 3
For the following inequation, represent the solution on a number line:
$$\frac{4-x}{2} < 3, \quad x \in R$$
by kratos
$$\frac{4-x}{2} < 3, \quad x \in R$$
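The posted answer above merely restates the question, so here is a minimal worked solution (my addition, not from the original page):

$$\frac{4-x}{2} < 3 \;\Rightarrow\; 4-x < 6 \;\Rightarrow\; -x < 2 \;\Rightarrow\; x > -2$$

On the number line, this is an open (hollow) circle at $-2$ with the ray to its right shaded, i.e. the solution set is $\{x \in R : x > -2\}$.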
Algebra 1
$-4\frac{1}{12}\leq w\lt12\frac{1}{4}$
$-\frac{4}{3}\leq\frac{1}{7}w-\frac{3}{4}\lt1$

Using the addition property of inequality, add $\frac{3}{4}$ to each part:

$-\frac{4}{3}+\frac{3}{4}\leq\frac{1}{7}w-\frac{3}{4}+\frac{3}{4}\lt1+\frac{3}{4}$

$-\frac{7}{12}\leq\frac{1}{7}w\lt1\frac{3}{4}$

Using the multiplication property of inequality, multiply each part by 7:

$-\frac{7}{12}\times7\leq\frac{1}{7}w\times7\lt1\frac{3}{4}\times7$

$-4\frac{1}{12}\leq w\lt12\frac{1}{4}$
# Homework Help: How is "cross sectional area" different from "area"?
1. Jan 11, 2006
### untitledm9
How is "cross sectional area" different from "area"?
I don't understand what cross sectional area is, and there is no explanation about it in my textbook. I cannot find this anywhere and am really desperate right now. Can someone please help me?
Last edited: Jan 11, 2006
2. Jan 11, 2006
### Tide
Imagine cutting a (spherical!) grapefruit into two equal parts. The area of each of the flat circles you just created is called the cross sectional area. That differs from the (surface) area of the original grapefruit only in that you are calculating the area of different things (i.e. area is area!).
3. Jan 11, 2006
### untitledm9
So cross sectional area is the area of just part of an object?
4. Jan 11, 2006
### Tide
Not exactly. We could talk about the cross sectional area of that grapefruit without actually doing the cutting. It's the area it would have if we cut it.
With respect to hydrodynamics it is often useful to talk about things like the cross sectional area of a flow such as through a pipe. So, for example, if a fluid is flowing through a pipe the diameter of the pipe may vary from place to place and we can use the concept of cross sectional area to infer things like flow velocity or pressure at a location given the velocity and/or pressure at another location.
If the flow is steady then $\rho A v$ is a constant with $\rho$ being the mass density, A is the cross sectional area and v is the flow speed which simply says that the flow through any cross section is the same at any point along the pipe. We don't actually have to cut the pipe to make use of the concept of cross section.
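For instance (a worked example of my own, not from the thread): for an incompressible fluid, $\rho$ is constant, so $A_1 v_1 = A_2 v_2$. If the pipe's radius halves, the cross sectional area $A = \pi r^2$ drops by a factor of 4, and the flow speed must therefore quadruple: $v_2 = v_1 \frac{A_1}{A_2} = 4v_1$.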
Last edited: Jan 11, 2006
5. Jan 11, 2006
### untitledm9
Thank you so much for helping me. I was getting so frustrated that I couldn't find what cross sectional area is anywhere. Thank you!!!
6. Jan 11, 2006
### Tide
You are very welcome!
# Core Data code for manually creating a lot of entries from old ones
Does this look ok to everyone?
```objc
NSFetchRequest *oldFetchRequest = [[NSFetchRequest alloc] init];
NSEntityDescription *oldEntryEntity = [NSEntityDescription entityForName:@"Entry"
                                                  inManagedObjectContext:oldContext];
[oldFetchRequest setEntity:oldEntryEntity];

int numberOfEntries = [oldContext countForFetchRequest:oldFetchRequest error:nil];
int batchSize = 10;
[oldFetchRequest setFetchBatchSize:10];
int offset = 0;

while (numberOfEntries - offset > 0) {
    [oldFetchRequest setFetchOffset:offset];
    NSError *error;
    NSArray *entries = [oldContext executeFetchRequest:oldFetchRequest error:&error];

    for (NSManagedObject *entry in entries) {
        Entry *newEntry = [NSEntityDescription insertNewObjectForEntityForName:@"Entry"
                                                        inManagedObjectContext:newContext];
        [newEntry setupCreationDate:[entry valueForKey:@"creationDate"] withSave:NO];
        newEntry.entryID = [entry valueForKey:@"entryID"];

        NSMutableOrderedSet *newMediaSet = [[NSMutableOrderedSet alloc] init];
        NSOrderedSet *mediaSet = [entry valueForKey:@"media"];

        int i = 0;
        for (NSManagedObject *media in mediaSet) {
            Media *newMedia = [NSEntityDescription insertNewObjectForEntityForName:@"Media"
                                                            inManagedObjectContext:newContext];
            newMedia.positionInEntry = [NSNumber numberWithDouble:i + 1]; // Potentially needs changing
            newMedia.mediaID = [Entry generateString];

            MediaImageData *imageData = [NSEntityDescription insertNewObjectForEntityForName:@"MediaImageData"
                                                                      inManagedObjectContext:newContext];
            if ([newMedia.type isEqualToString:@"Image"]) {
                imageData.data = [media valueForKey:@"originalImage"];
            }
            else if ([newMedia.type isEqualToString:@"Movie"]) {
                NSURL *movieURL = [NSURL URLWithString:newMedia.movie];
                MPMoviePlayerController *moviePlayer = [[MPMoviePlayerController alloc] initWithContentURL:movieURL];
                [moviePlayer stop];
                UIImage *screenshot = [moviePlayer thumbnailImageAtTime:0.0 timeOption:MPMovieTimeOptionNearestKeyFrame];
                [moviePlayer stop];
                imageData.data = UIImageJPEGRepresentation(screenshot, 1.0);
            }
            newMedia.imageData = imageData;
            newMedia.entry = newEntry;
            i++;
        }
        newEntry.media = newMediaSet;
    }
    [newContext save:&error];
    offset = offset + batchSize;
}
```
The first big problem I see with this code is here:
```objc
int i = 0;
for (NSManagedObject *media in mediaSet) {
    // stuff
    newMedia.positionInEntry = [NSNumber numberWithDouble:i + 1];
    // more stuff
    i++;
}
```
First of all, if positionInEntry is a property intended to hold some sort of index or something for your Media class, then it should be of the same type that collections tend to use for their indexing: NSUInteger. Although... the object itself probably doesn't need to know what position it is in the array, and as this could change, it's probably best to just not have this property at all.
Next, i is defined as an int, and 1 is an int literal, so we're creating the NSNumber in the wrong way. We should be doing this (if we're going to continue to keep this positionInEntry property):
```objc
newMedia.positionInEntry = [NSNumber numberWithInt:i + 1];
```
Finally, we really shouldn't be doing this in this manner at all. If we want to know what index we're working with, we should use a traditional for loop.
```objc
for (int i = 0; i < [mediaSet count]; ++i) {
    NSManagedObject *media = mediaSet[i];
    // stuff
    newMedia.positionInEntry = @(i + 1);
    // more stuff
}
```
However, there's a slightly better option.
It is good to use a for-in loop where we can, because this actually is faster than a traditional for loop in Objective-C. We still don't want to rely on i, however, for a couple of reasons; the most important of which, I'll say right now, is readability. But there is this option:
```objc
for (NSManagedObject *media in mediaSet) {
    // stuff
    NSUInteger index = [mediaSet indexOfObject:media];
    newMedia.positionInEntry = @(index + 1);
    // more stuff
}
```
We can call indexOfObject: on an NSArray and it will return the first index it finds that object at. As a note, this method returns NSNotFound if the object isn't found in the array, but in a forin loop, this should hopefully never be the case.
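One further option (my addition, not part of the original review): NSOrderedSet's block enumeration hands you the index directly, so there is no per-iteration indexOfObject: lookup at all:

```objc
[mediaSet enumerateObjectsUsingBlock:^(NSManagedObject *media, NSUInteger idx, BOOL *stop) {
    // stuff ("stuff" stands for the per-item work in the review's snippets)
    newMedia.positionInEntry = @(idx + 1);
    // more stuff
}];
```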
# Nanoscale mapping of quasiparticle band alignment
## Abstract
Control of atomic-scale interfaces between materials with distinct electronic structures is crucial for the design and fabrication of most electronic devices. In the case of two-dimensional materials, disparate electronic structures can be realized even within a single uniform sheet, merely by locally applying different vertical gate voltages. Here, we utilize the inherently nano-structured single layer and bilayer graphene on silicon carbide to investigate lateral electronic structure variations in an adjacent single layer of tungsten disulfide (WS2). The electronic band alignments are mapped in energy and momentum space using angle-resolved photoemission with a spatial resolution on the order of 500 nm (nanoARPES). We find that the WS2 band offsets track the work function of the underlying single layer and bilayer graphene, and we relate such changes to observed lateral patterns of exciton and trion luminescence from WS2.
## Introduction
The construction of a two-dimensional (2D) electronic device, such as a pn-junction, can be envisioned using two strategies: The first is to smoothly join two 2D materials with different electronic properties, essentially following the established recipe for three-dimensional (3D) semiconductors. Alternatively, one can create junctions using a single uniform sheet of material placed over a suitably pre-patterned substrate1,2,3, exploiting the sensitivity of 2D materials to their environment via band alignment4,5, screening6,7,8,9, or hybridization10,11,12. This approach has several advantages, such as technical simplicity and the absence of a possibly defective interface13,14. However, the interaction between a 2D material and substrate is highly non-trivial and hitherto poorly understood: Even in the absence of hybridization or charge transfer, substrate-screening can lead to an asymmetric band gap change, creating a type II heterojunction within a single sheet of 2D material1. This environmental screening may even be employed to engineer the photoluminescence (PL) from excitons at the $${\bar{\mathrm{K}}}$$ valley of single layer (SL) semiconducting transition metal dichalcogenides (TMDs)15,16,17, as demonstrated by placing a SL TMD on a variable number of graphene layers18 or on conventional metals and insulators19. Moreover, strong many-body effects lead to a complex connection between the quasiparticle band structure and the optical properties. On one hand, even strong changes of the quasiparticle band structure might only have a very minor influence on the optical band gap, due to the interplay of the quasiparticle band gap size and exciton binding energy6. On the other hand, the quasiparticle band structure can greatly affect the formation of more complex entities such as trions20.
Here, we investigate the interplay of quasiparticle band alignments and optical properties in a lateral heterostructure of semiconducting SL WS2 placed on alternating areas of SL graphene (SLG) and bilayer graphene (BLG) grown on SiC. Since BLG has a tendency to nucleate at the step edges of SiC, we are able to study how the electronic structure and light–matter interaction varies on the nanoscale, due to the lateral change of the work function between SLG and BLG areas on SiC21,22. This demonstrates the possibility to utilize a specific substrate pattern to control the optoelectronic properties of an adjacent TMD. We directly visualize how the electronic structure changes at the complex heterogeneous atomic-scale interfaces present in our samples using nanoARPES; see illustration in Fig. 1a. This groundbreaking technique for electronic structure characterization provides three key new insights for the type of van der Waals heterostructure investigated here, which could not be accessed in conventional ARPES measurements that merely reveal the laterally averaged electronic structure (for example, in TMDs synthesized on metal substrates23,24 or graphene/SiC substrates25,26,27): (i) We can determine the energy- and momentum-dependence of band alignments at truly 2D interfaces, (ii) we obtain detailed spatially resolved information on how the electronic structure of a 2D semiconductor is modified around the one-dimensional (1D) SLG/BLG interface, and (iii) we can spatially disentangle the electronic dispersions of SL WS2 and few-layer (FL) WS2, and distinguish between islands of different orientations.
## Results
### Mapping of surface potential and electronic structure
Figure 1b shows the morphology and microscopic surface potential of WS2 islands on graphene measured by scanning Kelvin probe microscopy (SKPM) under ambient conditions. Triangular WS2 islands are observed with SL regions near the edges and FL areas towards the center. Alternating stripes of BLG and SLG are visible in both bare and WS2-covered areas. The strong contrast difference between WS2 placed on alternating stripes of BLG and SLG is caused by the large work function difference on the order of 100 meV28. The SL WS2 islands have a negligible influence on the relative work function difference between the underlying SLG and BLG, as confirmed by density functional theory calculations28.
Figure 1c–f presents the (E, k)-dependence of the topmost WS2 valence bands (VBs) measured from (500 × 500) nm2 areas on the sample using nanoARPES, extracted at the locations indicated with corresponding markers on the real space maps in Fig. 1g–i (for a large-scale overview see Fig. S2 in the Supplementary Material). Typically, a sharp and intense state is observed at $${\bar{\mathrm{\Gamma }}}$$ that can be assigned to the local VB maximum (VBM) of SL WS229,30. Upon close inspection, the binding energy of the VBM turns out to depend on the position within a WS2 island. In most cases, the VBM is found at either the energy shown in Fig. 1c or that in Fig. 1d. These two different energy regions have thus been marked by a blue and green box, respectively. In some areas, it is even possible to observe the simultaneous presence of two rigidly shifted SL WS2 VBs (Fig. 1e). The dispersion in Fig. 1f, on the other hand, is strikingly different from the other examples, showing a three-fold splitting with nearly equal intensity distribution between the split bands at $${\bar{\mathrm{\Gamma }}}$$. The WS2 islands tend to orient either with the $${\bar{\mathrm{\Gamma }}}$$$${\bar{\mathrm{M}}}$$ (see Fig. 1c–e) or the $${\bar{\mathrm{\Gamma }}}$$$${\bar{\mathrm{K}}}$$ (see Fig. 1f) high symmetry directions aligned with the underlying graphene, although we occasionally find other orientations.
Further insight into local variations in the dispersion is obtained by investigating the spatial intensity distribution of the split states at $${\bar{\mathrm{\Gamma }}}$$, as shown in Fig. 1g–i. These images correspond to real space maps of the photoemission intensity composed from the (E, k)-regions demarcated by boxes of the same color in Fig. 1c–f. The maps have been measured in scanning steps of 250 nm over a (4.5 × 4.5) μm2 area, thereby covering the edges of two adjacent WS2 islands of different orientations, as in the very similar region imaged by SKPM in the inset of Fig. 1b. The two SL WS2 VBs at different binding energy positions originate from distinct areas close to the edges where they give rise to the intense spots in Fig. 1g, h. The topmost split VB states (see magenta box in Fig. 1f) are concentrated towards the interior of the WS2 islands, where mainly FL structures occur, as evidenced by SKPM in Fig. 1b. In fact, the band structure in Fig. 1f is easily identified as being caused by multilayer splitting rather than simple shifts due to the visibly different effective mass (inverse curvature) of the topmost band.
We show that the shift between the VBs in Fig. 1c–e is correlated with the thickness of the underlying graphene by composing a real space map from the photoemission intensity of a BLG band. BLG is characterized by a splitting of the linear π-band near the $${\bar{\mathrm{K}}}$$ point as shown in Fig. 1j, k (see arrow in panel (k) for the second branch). Mapping the intensity from this second branch permits a straightforward identification of BLG stripes in Fig. 1l; this has been used to mark the gray-shaded boxes in all the real space maps. The BLG stripes are found to coincide with areas where the SL WS2 VB is shifted to lower binding energies, see Fig. 1d, h. Additional details of the correlation between graphene thickness and SL WS2 VB binding energy positions are discussed in Supplementary Notes 3 and 4 and Figs. S3 and S4.
The nanoARPES data from the VB can be complemented by angle-integrated core level spectra, in the expectation that the core level binding energy should track an offset in the VB alignment between different areas, at least in a simple single-particle picture. Figure 2 presents nanoscale W 4f core level measurements collected over the same area as the VB spectra used to construct Fig. 1c–e. Each of the spin–orbit split components consists of two peaks separated by 0.3 eV. By plotting the spatial distribution of the photoemission intensity at each of the peak energies marked by green and blue bars, we obtain the maps shown in Fig. 2b, c corresponding to the VB analysis in Fig. 1. The peak at a lower binding energy (see green bar in panel (a)) appears to coincide with BLG areas as seen in panel (b) while the peak at a higher binding energy (see blue bar in panel (a)) is concentrated on the SLG areas as seen in panel (c). The trend is thus consistent with the spatial VB maps in Fig. 1g, h.
### Exciton and trion luminescence
We turn to the consequences of this spatially heterogeneous electronic structure for the luminescence of excitons and trions in WS2 (ref. 31). PL mapping of a WS2 island, acquired under ambient conditions, is shown in Fig. 3a, where a stronger PL signal is observed on SL WS2 on BLG compared to SL WS2 on SLG. The energies of characteristic lines associated with SL WS2 on SLG and on BLG are identified in the PL spectra displayed in Fig. 3b. Detailed analysis by curve fitting to Lorentzian line shapes in Fig. 3c reveals an additional component for SL WS2 on BLG (bottom panel), attributed to charged exciton states (trions) at an energy of 1.90 eV, whereas the neutral exciton peak is found at 1.93 eV for both WS2 on SLG (top panel) and BLG. The position of the neutral exciton peak is shifted by ≈100 meV compared to SiO2-supported heterostructures of WS2 and graphene18, which may be explained by a change of doping (and thus screening) of the graphene on our SiC substrate. We note that patterns of exciton and trion luminescence have been observed in TMD flakes previously32,33 and interpreted in terms of a change in the chemical composition between the transition metal and chalcogen atoms22. Finally, Fig. 3a, b shows a weak PL response from the island's centre, which is ascribed to the presence of FL WS2 and the indirect band gap of this material.
### Determination of band offsets
In order to obtain more accurate values for the band offsets, we analyze energy distribution curve (EDC) cuts at $${\bar{\mathrm{\Gamma }}}$$ for the different structures. Figure 4a presents an EDC from the spectrum in Fig. 1e where a SL WS2 island straddles SLG and BLG stripes (see inset in Fig. 4a). Curve fitting of the peak positions reveals a binding energy shift of the WS2 of 0.29(5) eV between SLG and BLG, which matches the separation of the core level peaks in Fig. 2a. Performing a similar EDC analysis of the spectrum in Fig. 1f reveals that a splitting of 0.66(1) eV occurs between the states at lowest and highest binding energies, which matches the expected splitting of bilayer WS234. The additional peak at 1.91(2) eV between the bilayer WS2 bands is attributed to a SL region (see inset in Fig. 4b) on a BLG stripe. We observe binding energy variations of up to 70 meV between VB peak positions in SL WS2 on the same substrate regions, which is evident from the two different binding energies in SL WS2 on BLG in Fig. 4a, b. However, we did not find a systematic trend in these small binding energy shifts. We speculate that details in the chemical composition within each flake may give rise to shifts on this energy scale as demonstrated for WS2 synthesized on titania22.
The data in Fig. 1f also provides access to the $${\bar{\mathrm{K}}}$$ point of WS2, which is characterized by spin–orbit split bands that form the global VBM in SL WS2. $${\bar{\mathrm{K}}}$$ is not accessible in the other spectra in Fig. 1c–e because of the rotated Brillouin zone. The EDC fit in Fig. 4c yields a spin–orbit splitting of 0.42(4) eV, in agreement with previous studies of SL WS2 in van der Waals heterostructures20,30, and a VBM of 1.59(4) eV for SL WS2 on BLG. By rigidly correcting for the shift on SLG areas one would thus expect the VBM on those regions around a binding energy of 1.9 eV. Under the assumption that the direct quasiparticle band gap of SL WS2 on SLG and BLG is smaller than 2.4 eV measured on silica16, we can infer that our WS2 remains n-type doped in the entire sample, although the density of free electrons will be substantially higher in SL WS2 on SLG.
## Discussion
We have now tracked both the band offsets, W 4f core level energies and the excitonic spectrum across the SLG–BLG interface beneath SL WS2 and are thus in a position to explore the connection between these. The rigid VB and core level shifts of WS2 on SLG and BLG areas are consistent with an ideal 2D Schottky contact between WS2 and graphene. In order to see this, consider first a sketch of the band alignments for 3D metal–semiconductor junctions in Fig. 5a. The Schottky barrier height ϕB is set by the metal work function W and semiconductor electron affinity χ, i.e., ϕB = W − χ. Forming a metal–semiconductor contact leads to band bending with a depletion region towards the bulk of the semiconductor. For the interface between two 2D materials, this is irrelevant and the band offset is expected to follow the sketch in Fig. 5b, c for WS2 on SLG and BLG, respectively. The relevant quantity here is the work function change between SLG and BLG on SiC, such that the higher work function of BLG pushes the WS2 VBM closer to EF35, as observed in our data. The difference in Schottky barrier height between SLG and BLG areas results in a built-in bias Δϕ ≈ 0.3 eV that laterally conforms to the SLG/BLG patterns. The magnitude of Δϕ is similar to the SLG/BLG work function difference of 0.1–0.2 eV in ultra-high vacuum (UHV)35. We speculate that the slightly larger shift in our case can be attributed to a difference in dielectric screening between SLG and BLG, which may give rise to an asymmetric renormalization of the WS2 quasiparticle gap, effectively causing a variation in χ as well1,7. The interpretation of the band alignment in terms of a Schottky contact without Fermi level pinning relies on the quasi-freestanding nature of WS2 on graphene10,36. It is consistent with the absence of hybridization between graphene and WS2 bands in any of our spectra, as well as with the sharp VB features at $${\bar{\mathrm{\Gamma }}}$$, in contrast to the situation on metal substrates9,11,24.
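To spell out the band-offset bookkeeping implied by this picture (a short aside, with the electron affinity χ assumed identical on both regions, as the rigid shifts suggest):

$$\Delta \phi = \phi_B^{\mathrm{BLG}} - \phi_B^{\mathrm{SLG}} = (W_{\mathrm{BLG}} - \chi) - (W_{\mathrm{SLG}} - \chi) = W_{\mathrm{BLG}} - W_{\mathrm{SLG}},$$

so an ideal 2D Schottky contact predicts a band offset equal to the work function difference; the excess of the measured Δϕ ≈ 0.3 eV over the UHV work function difference of 0.1–0.2 eV is precisely what motivates the screening-induced variation of χ invoked above.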
The pattern of the PL signal in Fig. 3 may be interpreted in terms of the Schottky contact-induced band alignment. Superficially, it appears surprising that the exciton PL energy is nearly identical for the two sample regions with different band alignments. This, however, is well understood. A rigid band offset would not be expected to affect the quasiparticle band gap in the material, and even a screening-induced band gap renormalization would only be expected to have a minor effect on the exciton binding energy6. The change in band alignment can be used to explain the strongly increased trion signal in the BLG areas, as indicated in Fig. 5d. The more n-doped WS2 would have a strongly increased population of electrons in the conduction band, facilitating the formation of negatively charged electron–electron–hole (eeh) trions when the material is excited by light, as sketched in Fig. 5d (refs. 20,37). Our nanoARPES measurements suggest that trion formation would be expected in the SLG areas, whereas our PL measurements indicate that it is actually favored in the BLG areas (Fig. 3). This can still be understood in terms of Fig. 5d, combined with the knowledge that the PL maps were acquired under ambient conditions rather than in UHV. Under ambient conditions, the higher reactivity of SLG compared to BLG leads to the adsorption of impurities and a reversal of the work function difference in the SLG/BLG patterns compared to UHV, as explained in more detail in Supplementary Note 5 and Fig. S5 (ref. 28). Since the deposited WS2 largely tracks the work function of the underlying SLG/BLG21, this is accompanied by a reversal of the band alignment. We note that the band offset between the SLG and BLG regions also implies the existence of a 1D interface with lateral band bending in the WS2 VBs, but this is not observable in our experiments because the screening length of graphene on SiC is an order of magnitude smaller than our spatial resolution, as discussed in Supplementary Note 6 and Fig. S6.
The sharp 1D interfaces and the laterally varying band positions of WS2 between SLG/BLG areas demonstrate the concept of creating nanoscale devices from a single sheet of 2D material, placed on a suitably patterned substrate. Indeed our conclusions are applicable beyond the Schottky contacts studied here and we envision that similar properties can be induced on patterned insulating materials based on oxides or hexagonal boron nitride20,38,39. Particularly intriguing is the complex interplay between electronic and optical properties, that not only allows the confinement of electronic states but also that of more complex objects—such as trions—on the nanoscale, opening a promising avenue for engineering 2D devices.
## Methods
### Growth of WS2/graphene/SiC heterostructures
Graphene was synthesized on a semi-insulating (0001) 6H-SiC substrate etched in H2 at 200 mbar, during a temperature ramp from room temperature to 1580 °C to remove polishing damage. Graphene growth was carried out at 1580 °C for 25 min in Ar gas, at 100 mbar, and used as a substrate for subsequent WS2 growth. WS2 islands were synthesized on graphene/SiC at 900 °C by ambient pressure chemical vapor deposition. During the synthesis process, sulfur powders were heated up to 250 °C to generate sulfur vapor. Ar gas flow was used for carrying the sulfur vapor to react with WO3 powder.
### Scanning Kelvin probe microscopy
SKPM experiments were carried out in ambient conditions, using a Bruker Icon AFM and Bruker highly doped Si probes (PFQNE-AL) with a force constant ≈0.9 N/m and resonant frequency f0 of 300 kHz. Double-pass frequency-modulated SKPM (FM-SKPM) has been used in all measurements, with topography acquired first and the surface potential recorded in a second pass. An AC voltage with a lower frequency (fmod = 3 kHz) than that of the resonant frequency of the cantilever was applied to the tip, inducing a frequency shift. The feedback loop of FM-KPFM monitored the side modes, f0 ± fmod, and compensated the mode frequency by applying an offset DC voltage, equal to the contact potential difference, which was recorded to obtain the surface potential map. The FM-SKPM experiments in Supplementary Note 5 were carried out in ambient air and vacuum (pressure of 1 × 10−6 mbar) as described above using an NT-MDT NTEGRA Aura system.
### Photoluminescence mapping
PL spectroscopy mapping was carried out under ambient conditions using a Renishaw inVia confocal microscope-based system with a 532 nm laser line as the excitation wavelength (2.33 eV excitation energy). The laser beam was focused through a 100× microscope objective, with the PL signal recorded in back-scattering geometry, using integration time of 0.1 s/pixel and a lateral spacing of 0.3 μm to acquire the PL intensity maps.
### nanoARPES
Samples were transferred in air to the nanoARPES end-station at beamline I05 at Diamond Light Source, UK. Prior to measurements the samples were annealed up to 450 °C and kept under UHV conditions (pressure better than 10−10 mbar) for the entire experiment. Synchrotron light with a photon energy of 95 eV was focused using a Fresnel zone plate followed by an order sorting aperture placed 8 and 4 mm from the sample, respectively. The sample was aligned using the characteristic linear dispersion of the underlying graphene substrate as described in Supplementary Note 1. The standard scanning mode involved collecting photoemission spectra with a Scienta Omicron DA30 hemispherical analyzer by rastering the sample position with respect to the focused synchrotron beam in steps of 250 nm using SmarAct piezo stages. Areas with WS2 islands were found using coarse scan modes with larger step sizes as described in Supplementary Note 2. We measured multiple fast maps of the same area on the sample sequentially, and these were subsequently aligned and added together in order to remove possible intensity variations from lateral drifts. The total data acquisition times were typically on the order of 8 h for the scans presented here. The energy- and angular-resolution were set to 30 meV and 0.2°, respectively. The spatial resolution was determined to be (500 ± 100) nm using a sharp feature in the sample as described in Supplementary Note 7 and Fig. S7. The experiments were carried out with the sample at room temperature.
## Data availability
All data presented in this study are available from the corresponding authors upon reasonable request.
## References
1. Rösner, M. et al. Two-dimensional heterojunctions from nonlocal manipulations of the interactions. Nano Lett. 16, 2322–2327 (2016).
2. Wilson, N. R. et al. Determination of band offsets, hybridization, and exciton binding in 2D semiconductor heterostructures. Sci. Adv. 3, e1601832 (2017).
3. Huang, X. et al. Realization of in-plane p–n junctions with continuous lattice of a homogeneous material. Adv. Mater. 30, 1802065 (2018).
4. Baugher, B. W. H., Churchill, H. O. H., Yang, Y. & Jarillo-Herrero, P. Optoelectronic devices based on electrically tunable p–n diodes in a monolayer dichalcogenide. Nat. Nanotechnol. 9, 262 (2014).
5. Lee, C.-H. et al. Atomically thin p–n junctions with van der Waals heterointerfaces. Nat. Nanotechnol. 9, 676 (2014).
6. Komsa, H.-P. & Krasheninnikov, A. V. Effects of confinement and environment on the electronic structure and exciton binding energy of MoS2 from first principles. Phys. Rev. B 86, 241201 (2012).
7. Ugeda, M. et al. Giant bandgap renormalization and excitonic effects in a monolayer transition metal dichalcogenide semiconductor. Nat. Mater. 13, 1091–1095 (2014).
8. Čabo, A. G. et al. Observation of ultrafast free carrier dynamics in single layer MoS2. Nano Lett. 15, 5883 (2015).
9. Eickholt, P. et al. Spin structure of K valleys in single-layer WS2 on Au (111). Phys. Rev. Lett. 121, 136402 (2018).
10. Allain, A., Kang, J., Banerjee, K. & Kis, A. Electrical contacts to two-dimensional semiconductors. Nat. Mater. 14, 1195 (2015).
11. Dendzik, M. et al. Substrate-induced semiconductor-to-metal transition in monolayer WS2. Phys. Rev. B 96, 235440 (2017).
12. Shao, B. et al. Pseudodoping of a metallic two-dimensional material by the supporting substrate. Nat. Commun. 10, 180 (2019).
13. Wang, L. et al. One-dimensional electrical contact to a two-dimensional material. Science 342, 614–617 (2013).
14. Zhang, C. et al. Strain distributions and their influence on electronic structures of WSe2–MoS2 laterally strained heterojunctions. Nat. Nanotechnol. 13, 152–158 (2018).
15. Chernikov, A. et al. Exciton binding energy and nonhydrogenic Rydberg series in monolayer WS2. Phys. Rev. Lett. 113, 076802 (2014).
16. Chernikov, A. et al. Electrical tuning of exciton binding energies in monolayer WS2. Phys. Rev. Lett. 115, 126802 (2015).
17. Wang, G. et al. Colloquium: excitons in atomically thin transition metal dichalcogenides. Rev. Mod. Phys. 90, 021001 (2018).
18. Raja, A. et al. Coulomb engineering of the bandgap and excitons in two-dimensional materials. Nat. Commun. 8, 15251 (2017).
19. Drüppel, M., Deilmann, T., Krüger, P. & Rohlfing, M. Diversity of trion states and substrate effects in the optical properties of an MoS2 monolayer. Nat. Commun. 8, 2117 (2017).
20. Katoch, J. et al. Giant spin-splitting and gap renormalization driven by trions in single-layer WS2/h-BN heterostructures. Nat. Phys. 14, 355 (2018).
21. Giusca, C. E. et al. Excitonic effects in tungsten disulfide monolayers on two-layer graphene. ACS Nano 10, 7840–7846 (2016).
22. Kastl, C. et al. Effects of defects on band structure and excitons in WS2 revealed by nanoscale photoemission spectroscopy. ACS Nano 13, 1284 (2019).
23. Miwa, J. et al. Electronic structure of epitaxial single-layer MoS2. Phys. Rev. Lett. 114, 046802 (2015).
24. Dendzik, M. et al. Growth and electronic structure of epitaxial single-layer WS2 on Au (111). Phys. Rev. B 92, 245442 (2015).
25. Zhang, Y. et al. Direct observation of the transition from indirect to direct bandgap in atomically thin epitaxial MoSe2. Nat. Nanotechnol. 9, 111–115 (2014).
26. Miwa, J. A. et al. Van der Waals epitaxy of two-dimensional MoS2–graphene heterostructures in ultrahigh vacuum. ACS Nano 9, 6502–6510 (2015).
27. Zhang, Y. et al. Electronic structure, surface doping, and optical response in epitaxial WSe2 thin films. Nano Lett. 16, 2485–2491 (2016).
28. Giusca, C. E. et al. Water affinity to epitaxial graphene: the impact of layer thickness. Adv. Mater. Interfaces 2, 1500252 (2015).
29. Henck, H. et al. Electronic band structure of two-dimensional WS2/graphene van der Waals heterostructures. Phys. Rev. B 97, 155421 (2018).
30. Kastl, C. et al. Multimodal spectromicroscopy of monolayer WS2 enabled by ultra-clean van der Waals epitaxy. 2D Mater. 5, 045010 (2018).
31. Mak, K. F. & Shan, J. Photonics and optoelectronics of 2D semiconductor transition metal dichalcogenides. Nat. Photonics 10, 216 (2016).
32. Gutiérrez, H. R. et al. Extraordinary room-temperature photoluminescence in triangular WS2 monolayers. Nano Lett. 13, 3447–3454 (2013).
33. Bao, W. et al. Visualizing nanoscale excitonic relaxation properties of disordered edges and grain boundaries in monolayer molybdenum disulfide. Nat. Commun. 6, 7993 (2015).
34. Zeng, H. et al. Optical signature of symmetry variations and spin-valley coupling in atomically thin tungsten dichalcogenides. Sci. Rep. 3, 1608 (2013).
35. Mammadov, S. et al. Work function of graphene multilayers on SiC (0001). 2D Mater. 4, 015043 (2017).
36. Le Quang, T. et al. Scanning tunneling spectroscopy of van der Waals graphene/semiconductor interfaces: absence of Fermi level pinning. 2D Mater. 4, 035019 (2017).
37. Mak, K. F. et al. Tightly bound trions in monolayer MoS2. Nat. Mater. 12, 207 (2013).
38. Ulstrup, S. et al. Spatially resolved electronic properties of single-layer WS2 on transition metal oxides. ACS Nano 10, 10058 (2016).
39. Ulstrup, S. et al. Imaging microscopic electronic contrasts at the interface of single-layer WS2 with oxide and boron nitride substrates. Appl. Phys. Lett. 114, 151601 (2019).
## Acknowledgements
The authors thank Diamond Light Source for access to Beamline I05 (Proposal No. SI19260) that contributed to the results presented here. S.U. acknowledges financial support from VILLUM FONDEN under the Young Investigator Program (Grant No. 15375). This project has received funding from the European Union’s Horizon 2020 research and innovation programme Graphene Flagship under grant agreement No 785219. C.E.G. acknowledges financial support from the UK National Measurement System. J.A.M. and P.H. acknowledge support from the Danish Council for Independent Research, Natural Sciences under the Sapere Aude Program (Grant Nos. DFF-6108-00409 and DFF 4002-00029), and the Aarhus University Research Foundation. This work was supported by VILLUM FONDEN via the Centre of Excellence for Dirac Materials (Grant No. 11744). D.K.G. and R.L.M.-W. and work at NRL was supported by the Office of Naval Research. The authors thank Davide Curcio and Marco Bianchi for help with initial sample characterization.
## Author information
### Contributions
C.E.G., S.U. and P.H. conceived and planned the project. C.E.G. and O.K. performed and analyzed scanning Kelvin probe and photoluminescence measurements. D.K.G. and R.L.M.-W. prepared the graphene/SiC substrates. T.Z. and M.T. synthesized the WS2 islands on graphene/SiC. S.U., C.E.G., J.A.M., C.E.S., A.B., P.D., C.C. and P.H. performed the nanoARPES measurements. P.D. and C.C. developed and maintained the nanoARPES setup. S.U. and J.A.M. analyzed the nanoARPES data. All authors contributed to interpreting the data and writing the draft.
### Corresponding authors
Correspondence to Søren Ulstrup or Cristina E. Giusca.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Peer review information: Nature Communications thanks Bjarke Jessen and other anonymous reviewer(s) for their contribution to the peer review of this work.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Ulstrup, S., Giusca, C.E., Miwa, J.A. et al. Nanoscale mapping of quasiparticle band alignment. Nat Commun 10, 3283 (2019). https://doi.org/10.1038/s41467-019-11253-2
# Scoping constructs in Wolfram Language
NOTE: Please see the original version of this post and related discussion HERE. Cross-posted here per suggestion of Vitaliy Kaurov.
You will find a lot of information in this answer. I will add a few personal notes.
## Module
Use Module when you want to localize variables inside your function's body, and those variables will potentially acquire and/or change their values during the computation.
### Basic use
For example:
f[x_]:=Module[{y=x^2},y=y+x;{x,y}]
Here, the mutable variable (symbol) y is local to the Module and is, indeed, a symbol with a unique name. This is the closest you have in Mathematica to, say, local variables in C.
Module also has advanced uses. One of them is to create closures - functions with a persistent state. My third post in this thread illustrates many cases of that and has further references. One example I will steal from there: the following function will produce the next Fibonacci number on demand, and yet it will be as fast as the iterative loop implementation for generation of consecutive Fibonacci numbers (since Module is invoked only once, when the function is defined):
Module[{prev, prevprev, this},
reset[] := (prev = 1; prevprev = 1);
reset[];
nextFib[] := (this = prev + prevprev; prevprev = prev; prev = this)
];
reset[];
Table[nextFib[], {1000}]; // Timing
(*
---> {0.01, Null}
*)
One problem with persistence created with Module-variables is that one should not generally serialize such state (definitions), for example by saving the state via Save or DumpSave. This is because the uniqueness of names for Module-generated symbols is guaranteed only within a single Mathematica session.
Module also allows one to create local functions, which With does not (except pure functions). This is a very powerful capability, particularly useful for writing recursive functions, but not only. In the link mentioned above, there were examples of this. One problem with local functions created by Module is that these symbols won't be automatically garbage-collected when Module finishes (if they have DownValues, SubValues, or UpValues; OwnValues are fine), and so may lead to memory leaks. To avoid that, one can Clear these symbols inside Module before returning the result.
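To make the last point concrete, here is a minimal sketch of a Module-local recursive function that is cleared before the result is returned, so that the Temporary symbol can actually be garbage-collected:
fact[n_Integer?NonNegative] := Module[{f, res},
  f[0] = 1;
  f[k_] := k f[k - 1];
  res = f[n];
  Clear[f]; (* f acquired DownValues, so clear it to avoid the leak *)
  res
]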
## With
Use With to define local constants, which cannot be changed inside the body of your function.
### Basic use
For example,
f[x_,y_]:=With[{sum = x+y},{sum *x, sum *y}]
It is instructive to trace the execution of f. You will notice that sum gets replaced by its value very early on, before the body starts evaluating. This is quite unlike Module, where variable entries get replaced by their values in the process of evaluation, just as it would normally happen were the variables global.
On an advanced level, With can be used to inject some evaluated code deep into some expression which is otherwise unevaluated:
With[{x=5},Hold[Hold[x^2]]]
(*
Hold[Hold[5^2]]
*)
and is thus an important meta-programming tool. There are lots of uses for this feature, in particular one can use this to inject code into Compile at run-time right before compilation. This can extend the capabilities / flexibility of Compile quite a bit. One example can be found in my answer to this question.
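As a minimal illustration of such run-time injection (the exponent is spliced in as a literal before Compile ever sees the body):
With[{n = 5}, Compile[{{x, _Real}}, x^n]][2.]
(*
---> 32.
*)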
The semantics of With is similar to that of rule substitutions, but an important difference is that With cares about inner scoping constructs (during variable name collisions), while rules don't. Both behaviors can be useful in different situations.
### Module vs With
Both of these are lexical scoping constructs, which means that they bind their variables to their lexical occurrences in the code. Technically, the major difference between them is that you cannot change the values of constants initialized in With inside the body of With, while you can change the values of Module variables inside the body. On a deeper level, this is because With does not generate any new symbols. It does all the replacements before the body evaluates, and by that time no "constant symbols" are present at all, all of them having been replaced with their values. Module, on the other hand, does generate temporary symbols (which are normal symbols with the attribute Temporary), which can store a mutable state.
Stylistically, it is better to use With if you know that your variables are in fact constants, i.e. they won't change during the code execution. Since With does not create extra (mutable) state, the code is cleaner. Also, you have more chances to catch an occasional erroneous attempt in the code to modify such a constant.
Performance-wise, With tends to be faster than Module, because it does not have to create new variables and then destroy them. This however usually only shows up for very light-weight functions. I would not base my preference of one over another on performance boosts.
## Block
### Basic use
Block localizes the value of the variable. In this example, the body of Block never mentions i literally; it evaluates a, whose definition refers to i, and that evaluation still picks up the value set by Block.
a:=i
Block[{i=2},a]
{a,i}
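For reference, Block[{i = 2}, a] returns 2 here, while evaluating {a, i} afterwards gives {i, i}: the global i was restored to its previous, unassigned state, so a := i again evaluates to the bare symbol.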
Block therefore affects the evaluation stack, not just the literal occurrences of a symbol inside the code of its body. Its effects are much less local than those of lexical scoping constructs, which makes it much harder to debug programs that use Block extensively. It is not much different from using global variables, except that Block guarantees that their values will be restored to their previous values once the execution exits Block (which is often a big deal). Even so, this non-transparent and non-local manipulation of variable values is one reason to avoid using Block where With and/or Module can be used. But there are more (see below).
In practice, my advice would be to avoid using Block unless you know quite well why you need it. It is more error-prone to use it for variable localization than With or Module, because it does not prevent variable name collisions, and those will be quite hard to debug. One of the reasons people suggest to use Block is that they claim it is faster. While it is true, my opinion is that the speed advantage is minimal while the risk is high. I elaborated on this point here, where at the bottom there is also an idiom which allows one to have the best of both worlds. In addition to these reasons, as noted by @Albert Retey, using Block with Dynamic-related functionality may lead to nasty surprises, and errors resulting from that may also be quite non-local and hard to find.
One valid use of Block is to temporarily redefine some global system settings / variables. One of the most common such use cases is when we want to temporarily change the value of the $RecursionLimit or $IterationLimit variables. Note however that while using
Block[{$IterationLimit = Infinity}, ...]
is generally okay, using
Block[{$RecursionLimit = Infinity}, ...]
is not, since the stack space is limited and if it gets exhausted, the kernel will crash. A detailed discussion of this topic and how to make functions tail-recursive in Mathematica can be found, e.g., in my answer to this question.
It is quite interesting that the same ability of Block can be used to significantly extend the control the user has over namespaces/symbol encapsulation. For example, if you want to load a package but not add its context to the $ContextPath (maybe to avoid shadowing problems), all you have to do is
Block[{$ContextPath}, Needs["YourPackage`"]]
As another example, some package you want to load modifies some other function (say, System`SomeFunction), and you want to prevent that without changing the code of the package. Then, you use something like
Block[{SomeFunction}, Needs["ThatPackage`"]]
which ensures that all those modifications did not affect actual definitions for SomeFunction - see this answer for an example of this.
Block is a very powerful metaprogramming device, because you can make every symbol (including system functions) temporarily "forget" what it is (its definitions and other global properties), and this may allow one to change the order of evaluation of an expression involving that symbol(s) in non-trivial ways, which may be hard to achieve by other means of evaluation control (this won't work on Locked symbols). There are many examples of this at work, one which comes to mind now is the LetL macro from my answer to this question.
Another more advanced use of Block is to ensure that all used variables will be restored to their initial values, even in the case of an Abort or exception happening somewhere inside the body of Block. In other words, it can be used to ensure that the system will not find itself in an illegal state in the case of sudden failure. If you wrap your critical (global) variables in Block, it will guarantee you this.
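A small sketch of that guarantee (the assignment made by Block is undone even though the body aborts):
x = 1;
CheckAbort[Block[{x = 2}, Abort[]], x]
(*
---> 1
*)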
A related use of Block is when we want to be sure that some symbols will be cleared at the end. This question and answers there represent good examples of using Block for this purpose.
### Variable name conflicts
In nested scoping constructs, it may happen that they define variables with the same names. Such conflicts are typically resolved in favor of the inner scoping construct. The documentation contains more details.
### Block vs Module/With
So, Block implements dynamic scoping, meaning that it binds variables in time rather than in space. One can say that a variable localized by Block will have its value during the time this Block executes (unless further redefined inside of it, of course). I tried to outline the differences between Block and With/Module (dynamic vs lexical scoping) in this answer.
### Some conclusions
• For most common purposes of variable localization, use Module
• For local constants, use With
• Do not ordinarily use Block for introducing local variables
• All of the scoping constructs under discussion have advanced uses. For Module this is mostly creating and encapsulating non-trivial state (persistent or not). For With, this is mostly injecting inside unevaluated expressions. For Block, there are several advanced uses, but all of them are, well, advanced. I'd be worried if I found myself using Block a lot, but there are cases when it is indispensable.
Sander Huisman 7 Votes Thanks for sharing! Very informative. I mostly use With/Module, Block very rarely. For the case of some (sub)routine, I find one generally uses Module over With, because a routine generally has some temporary variable that needs to be stored/manipulated, so one can only use Module (not With, as the variables cannot be manipulated).
Joe Donaldson 3 Votes For differentiating between With, Module, and Block, I find this example helpful (from the docs for With, "Examples", "Properties and Relations"):

In[1]:= {Block[{x = 5}, Hold[x]], With[{x = 5}, Hold[x]], Module[{x = 5}, Hold[x]]}
Out[1]= {Hold[x], Hold[5], Hold[x$119582]}

In[2]:= ReleaseHold[%]
Out[2]= {x, 5, 5}
Leonid Shifrin 3 Votes It's a useful example, but to fully understand it, one needs to also understand the non-standard evaluation process and how garbage collection works, so it is by no means trivial. I would say that it illustrates some advanced uses of these constructs rather than really differentiates between them. I would rather use an example like this to illustrate a basic difference:

ClearAll[i, a];
i = 1;
a := i^2;
Module[{i = 2}, i = 3; a]
Block[{i = 2}, i = 3; a]
With[{i = 2}, i = 3; a]
(*
1
9
During evaluation of In[12]:= Set::setraw: Cannot assign to raw object 2.
1
*)
# With vs. Block example
One basic example that even new users often struggle with is inserting values of parameters into an analytic solution. Let's say we have something like this:
solution = DSolveValue[{y'[x] + y[x] == a Sin[x], y[0] == b}, y[x], x]
(* Out[1]= -(1/2) E^-x (-a - 2 b + a E^x Cos[x] - a E^x Sin[x]) *)
If you now want to insert values for a and b, the usual way is to use a replacement rule {a->aValue, b->bValue}. Nevertheless, users might try to insert values using
With[{a = 1, b = 1},
solution
]
(* Out[6]= -(1/2) E^-x (-a - 2 b + a E^x Cos[x] - a E^x Sin[x]) *)
which fails. As Leonid already wrote, With "does all the replacements before the body evaluates", and this makes the approach fail. Module cannot be used either, because it uses lexical scoping and introduces new variable names that only look like the original a and b. Block, however, can be used:
Block[{a = 1, b = 1},
solution
]
(* Out[7]= -(1/2) E^-x (-3 + E^x Cos[x] - E^x Sin[x]) *)
Although using replacement rules is in general the better approach, in some cases this provides a good alternative.
# Math Help - Level Curves
1. ## Level Curves
Function: $z=f(x,y)=5-x^2-y^2$
How would you draw a contour plot and label the level curves for z=0, z=1, z=2, z=3, z=4, z=5? Also, how do you find out what the largest and smallest z-values are? Would the smallest be z=0 and the largest be z=5?
2. For the level curves $z = c = 5 - x^2 - y^2$, we get $x^2 + y^2 = 5 - c$: concentric circles of radius $\sqrt{5-c}$, for $c = 0, 1, 2, 3, 4, 5$.
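Explicitly, the radii for $c = 0, 1, 2, 3, 4, 5$ are $\sqrt{5},\ 2,\ \sqrt{3},\ \sqrt{2},\ 1,\ 0$, so the circles shrink as $c$ grows, and the level "curve" for $c = 5$ degenerates to the single point at the origin.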
3. Originally Posted by eri123
Function: $z=f(x,y)=5-x^2-y^2$
How would you draw a contour plot and label the level curves for z=0, z=1, z=2, z=3, z=4, z=5? Also, how do you find out what the largest and smallest z-values are? Would the smallest be z=0 and the largest be z=5?
Because $x^2 + y^2 \geq 0$ for all $x,y$, the largest value of $z = 5 - (x^2 + y^2)$ is 5 (attained at the origin), while the smallest is $- \infty$, since the function is unbounded below.
# BMI index
Calculate the BMI (body mass index, an index indicating obesity, overweight, normal weight, or underweight) of a man weighing m = 71 kg with height h = 170 cm. The index is calculated according to the formula:
$BMI=\frac{m}{{h}^{2}}$
With the BMI index it is possible to compare people of different heights using the following categories:
| BMI | Category |
| --- | --- |
| below 18,5 | Underweight |
| 18,5 - 24,9 | Normal weight |
| 25,0 - 29,9 | Overweight |
| 30,0 - 34,9 | Obesity 1st grade |
| 35,0 - 39,9 | Obesity 2nd grade |
Correct result:
BMI = 24.6
#### Solution:
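Substituting the given values, with the height converted to metres (h = 170 cm = 1.70 m):

$BMI=\frac{m}{{h}^{2}}=\frac{71}{{1.70}^{2}}=\frac{71}{2.89}\doteq 24.6$

The man therefore falls into the Normal weight category.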
# Question about the proof that 'A graph with maximum degree at most k is (k + 1) colorable'
I'm trying to follow the MIT introductory mathematics for cs course.
In the reading on graph theory, the proof that a graph with maximum degree at most k is (k + 1) colorable is given as follows:
We use induction on the number of vertices in the graph, which we denote by n. Let P(n) be the proposition that an n-vertex graph with maximum degree at most k is (k + 1)-colorable.
Base case (n = 1):
1. A 1-vertex graph has maximum degree 0 and is 1-colorable, so P (1) is true.
Inductive step:
1. Now assume that P (n) is true, and let G be an (n + 1)-vertex graph with maximum degree at most k.
2. Remove a vertex v (and all edges incident to it), leaving an n-vertex subgraph, H. The maximum degree of H is at most k, and so H is (k + 1)-colorable by our assumption P (n).
3. Now add back vertex v. We can assign v a color (from the set of k + 1 colors) that is different from all its adjacent vertices, since there are at most k vertices adjacent to v and so at least one of the k + 1 colors is still available.
4. Therefore, G is (k + 1)-colorable. This completes the inductive step, and the theorem follows by induction.
Simple enough, but I'm confused as to why we need parts 1 and 2 of the inductive step.
That is to say, why do we need to start from an (n + 1)-vertex graph, remove a vertex and then add it back again? Can't we just define H from the start to be a graph of size n with a maximum degree of at most k? Doesn't step 3 work just the same?
• You could define $H$ as you wish, but then you would have to prove that $G$ can be constructed from $H$ by adding a vertex and some edges. You could say that $H$ is a graph that we can use to construct $G$ by adding a vertex, but then you have to prove that such graph $H$ exists. Starting from $G$ and then removing and re-adding that vertex is the simplest way which takes care of all such issues. – dtldarek Sep 30 '14 at 11:22
I think you're asking "Why can't we prove the $n+1$ node case by starting with an $n$-node graph and adding a vertex?" The answer is "Because that proves the $n+1$ node case only for graphs that can be constructed that way, which might, a priori, not be all of them." What steps 1 and 2 do is show how, given an $n+1$-node graph $G$, to find an $n$-node graph $H$ which, when a new node and the right edges are added, becomes the graph $G$ that you're interested in.
That seems completely trivial, and maybe in this case it actually is. But in general, for inductions that prove things on larger and larger sets, it's not always clear how to get from an element of the smaller set to every possible element of the larger set. Students often think that they see "an obvious way" that's in fact wrong.
As an example: consider the $n \times n$ symmetric matrices $M$ whose "outer" values (i.e., those for which one index is either 1 or $n$) are all "1"s, and whose inner values are integers other than one.
It's pretty clear that to get from a size-$n$ example of this to a size $n+1$ example, you can't just add a row at the bottom and a column at the right, but you might think, "yeah, but you could just add a row in the MIDDLE and a column at the corresponding position in the middle", with the condition that the inserted row start and end with 1 and have non-1s in the other places.
Clear? Pretty much. Unfortunately, it's also wrong. You can't get from the (only) $1 \times 1$ example of this kind of matrix, namely $[1]$ to the only $2 \times 2$ example (which is all $1$s) through this "obvious" step.
• Great example with matrices! – mattecapu Sep 30 '14 at 13:22
• Thank you, I think I've got it now. Would I be correct in saying that if you start by defining H as any n-node graph and then add a node to it, you are only proving P(n+1) for a specific case (the type of graph that can be created from graph H by adding a node). Whereas if you start by defining G as any n+1 graph and then remove and add a node, you are proving P(n+1) for any n+1 graph? – mallardz Sep 30 '14 at 15:49
• That's exactly right. And in this particular case, those two sets happen to coincide, and that's so obvious to some students that they never imagine that there could be a situation in which that DOESN'T happen... which is why I built that matrix example. – John Hughes Sep 30 '14 at 20:38
Infoscience
Presentation / Talk
# Cognitive Architecture for Mutual Modelling
In social robotics, robots need to be understood by humans, especially in collaborative tasks where they have to share mutual knowledge. For instance, in an educative scenario, learners share their knowledge and must adapt their behaviour in order to make sure they are understood by others. Learners display behaviours in order to show their understanding, and teachers adapt in order to make sure that the learners' knowledge is the required one. This ability requires a model of one's own mental states as perceived by others: "has the human understood that I (the robot) need this object for the task, or should I explain it once again?" In this paper, we discuss the importance of a cognitive architecture enabling second-order Mutual Modelling for Human-Robot Interaction in educative contexts.
#### Reference
• EPFL-TALK-216927
Record created on 2016-02-22, modified on 2017-05-12
Explain the following
1. ${\mathrm{CO}}_{2}$ is a better reducing agent below 710 K whereas CO is a better reducing agent above 710 K.
2. Generally sulphide ores are converted into oxides before reduction.
3. Silica is added to the sulphide ore of copper in the reverberatory furnace.
4. Carbon and hydrogen are not used as reducing agents at high temperatures.
5. Vapour phase refining method is used for the purification of Ti.
(a) As shown in the Ellingham diagram, which relates the Gibbs free energy of oxide formation to temperature, ${\mathrm{CO}}_{2}$ is a better reducing agent than CO below 710 K, while above 710 K CO becomes the better reducing agent.
(b) Generally, sulphide ores are converted into oxides (by roasting) before reduction, because the reduction of oxides can easily be done using C or CO, depending upon the metal ore and temperature.
(c) Silica is a flux added to the sulphide ore of copper in the reverberatory furnace; it combines with the iron oxide impurity, leading to the formation of slag: $\mathrm{FeO} + {\mathrm{SiO}}_{2} \rightarrow {\mathrm{FeSiO}}_{3}$ (slag).
(d) Carbon and hydrogen are not used as reducing agents at high temperatures because, at such temperatures, they readily react with metals to form carbides and hydrides, respectively.
(e) Vapour phase refining is used for the purification of Ti (the van Arkel method): crude titanium is heated with iodine to form volatile ${\mathrm{TiI}}_{4}$, which is then decomposed on a hot tungsten filament at about 1700 K, depositing pure titanium.
# Plotter Control (obsolete)
lw()
Name:
lw - laser writer graphical output (or HP pen plotter)
Syntax:
h.lw(file)
h.lw(file, device)
h.lw()
Description:
h.lw(file, device) opens a file to keep a copy of subsequent plots (file is a string variable or a name enclosed in double quotes). All graphs which are generated on the screen are saved in this file in a format given by the integer value of the device argument.
device =1
Hewlett Packard pen plotter style.
device =2
Fig style (Fig is a public domain graphics program available on the SUN computer). The filter f2ps translates fig to postscript.
device =3
Codraw style. Files in this style can be read into the PC program, CODRAW. The file should be opened with the extension, .DRA.
lw keeps copying every plot that goes to the screen into the file, until the file is closed with the command h.lw(). Note that erasing the screen with h.plt(-3) or a Control-e will throw away whatever is in the file and restart the file at the beginning. Therefore, the file always holds an accurate representation of the current graphic status of the screen.
After setting the device once, it remains the same unless changed again by another call with two arguments. The default device is 2.
Example:
Suppose an HP plotter is connected to serial port, COM1:. Then the following procedure will plot whatever graphics information happens to be on the screen (not normal text).
from neuron import h, gui
import os
# procedure for the HP-style plotter: print the current plot, then re-open the file
def hp():
    h.plt(-1)                   # properly terminate the last line drawing
    h.lw()                      # close temp, flushing the plot data to it
    os.system("cp temp com1:")  # send the finished plot out the serial port
    h.lw("temp")                # re-open temp for the next plot

h.lw("temp", 1)  # initial direct command: open temp in HP pen-plotter format
Notice that the above procedure closes a file, prints it, and then re-opens temp. The initial direct command makes sure the file is open the first time hp is called.
Warning
It is often necessary to end all the plotting with a h.plt(-1) command before closing the file to ensure that the last line drawing is properly terminated.
In our hands the HP plotter works well at 9600 baud and with the line `MODE COM1:9600,,,,P` in the autoexec.bat file.
# [ILUG] [OTish] Regex question...
Stephen Shirley diamond at csn.ul.ie
Mon Jan 7 18:01:52 GMT 2002
Mornin',
Say you were porting some files from dos to linux (woe is me),
and you wanted to run a searh+replace in vi (or sed, or awk or
*shudder* emacs) to convert
#include "foo\bar\apples.h"
to
#include "foo/bar/apples.h"
Now the best i can come up with is this (in vi):
%s/\(#include[^"]*"\)\([^\\]*\)\\\(.*\)/\1\2\/\3/cg
but that will only match and convert the first \. How do I get a regex
to match all the \'s in the line?
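For what it's worth, one standard trick (a sketch; assumes vi's :g command, or sed) is to run the substitution only on the matching lines, converting every backslash there in one pass:
:g/^#include/s/\\/\//g
or, outside the editor:
sed '/^#include/ s,\\,/,g' foo.c > foo.c.new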
Steve
--
"My mom had Windows at work and it hurt her eyes real bad" | {} |
# Circular motion
In physics, circular motion is movement at constant speed around a circle: a circular path or a circular orbit. It is one of the simplest cases of accelerated motion. Circular motion involves acceleration of the moving object by a centripetal force, which pulls the moving object towards the center of the circular orbit. Without this acceleration, the object would move inertially in a straight line, according to Newton's first law of motion. Circular motion is accelerated even though the speed is constant, because the direction of the velocity is constantly changing.
Examples of circular motion are: an artificial satellite orbiting the Earth in geosynchronous orbit, a stone which is tied to a rope and is being swung in circles (cf. hammer throw), a racecar turning through a curve in a racetrack, an electron moving perpendicular to a uniform magnetic field, a gear turning inside a mechanism.
A special kind of circular motion is when an object rotates around itself. This can be called spinning motion, or rotational motion.
Circular motion is characterized by an orbital radius r, a speed v, the mass m of the object which moves in a circle, and the magnitude F of the centripetal force. These quantities all relate to each other through the equation
$$F = \frac{m v^2}{r},$$
which is always true for circular motion.
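As a quick numeric check (illustrative values, not from the article): a stone of mass m = 0.5 kg swung on a rope of radius r = 1 m at speed v = 10 m/s requires a centripetal force of

$$F = \frac{0.5 \times 10^2}{1} = 50\ \mathrm{N}.$$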
## Mathematical description
Circular motion can be described by means of parametric equations, viz.
$$x(t) = R \cos \omega t, \qquad (1)$$

$$y(t) = R \sin \omega t, \qquad (2)$$

where R and ω are coefficients. Equations (1) and (2) describe motion around a circle centered at the origin with radius R. The derivatives of these equations are

$$\dot{x}(t) = - R \omega \sin \omega t, \qquad (3)$$

$$\dot{y}(t) = R \omega \cos \omega t. \qquad (4)$$

The vector $(x, y)$ is the position vector of the object undergoing the circular motion. The vector $(\dot{x}, \dot{y})$, given by equations (3) and (4), is the velocity vector of the moving object. This velocity vector is perpendicular to the position vector, and it is tangent to the circular path of the moving object. The velocity vector must be considered to have its tail located at the head of the position vector. The tail of the position vector is located at the origin.
The derivatives of equations (3) and (4) are
$$\ddot{x}(t) = -R \omega^2 \cos \omega t, \qquad (5)$$
$$\ddot{y}(t) = -R \omega^2 \sin \omega t. \qquad (6)$$
The vector $(\ddot{x},\ddot{y})$, called the acceleration vector, is given by equations (5) and (6). It has its tail at the head of the position vector, but it points in the direction opposite to the position vector. This means that circular motion can be described by differential equations, thus
$$\ddot{x} = -\omega^2 x,$$
$$\ddot{y} = -\omega^2 y,$$
or, letting x denote the position vector, circular motion can be described by a single vector differential equation
$$\ddot{\mathbf{x}} = -\omega^2 \mathbf{x}.$$
The quantity ω is the angular velocity.
## Deriving the centripetal force
From equations (5) and (6) it is evident that the magnitude of the acceleration is
$$a = \omega^2 R. \qquad (7)$$
The angular frequency ω is expressed in terms of the period T as
$$\omega = \frac{2 \pi}{T}. \qquad (8)$$
The speed v around the orbit is given by the circumference divided by the period:
$$v = \frac{2 \pi R}{T}. \qquad (9)$$
Comparing equations (8) and (9), we deduce that
$$v = \omega R. \qquad (10)$$
Solving equation (10) for ω and substituting into equation (7) yields
$$a = \frac{v^2}{R}. \qquad (11)$$
Newton's second law of motion is usually expressed as
$$F = m a$$
which together with equation (11) implies that
$$F = \frac{m v^2}{R}. \qquad (12)$$
## Kepler's third law
For satellites tethered to a body of mass M at the origin by means of a gravitational force, the centripetal force is also equal to
$$F = \frac{G M m}{R^2} \qquad (13)$$
where G is the gravitational constant, $6.67 \times 10^{-11}\ \mathrm{N\,m^2/kg^2}$. Combining equations (12) and (13) yields
$$\frac{G M m}{R^2} = \frac{m v^2}{R}$$
which simplifies to
$$G M = R v^2. \qquad (14)$$
Combining equations (14) and (10) then yields
$$\omega^2 R^3 = G M$$
which is a form of Kepler's harmonic law of planetary motion.
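As a quick numerical sanity check (a sketch added here, not part of the original article), the two acceleration formulas (7) and (11) agree for any choice of radius and period:

import math

# uniform circular motion: a = omega^2 R should equal v^2 / R
R, T = 2.0, 4.0                  # radius (m) and period (s), arbitrary values
omega = 2 * math.pi / T          # equation (8)
v = 2 * math.pi * R / T          # equation (9)
print(omega**2 * R, v**2 / R)    # both ≈ 4.9348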
Equations with x in the Denominator
Here you’ll learn how to get rid of a denominator containing variables. The good news is that the approach is the same as for getting rid of a denominator with numbers! To find the common denominator, you multiply all the different factors together once. Here are two examples:
Example 1
Solve the equation $\frac{2}{x}=3$
$$\begin{aligned} \frac{2}{x} &= 3 \\ x \times \frac{2}{x} &= 3 \times x \\ 2 &= 3x \\ \frac{2}{3} &= \frac{3x}{3} \\ x &= \frac{2}{3} \end{aligned}$$
Example 2
Solve the equation $\frac{x+1}{x}+2=\frac{3}{x+1}+3$ for $x$
The common denominator is $x\left(x+1\right)$. Multiply both sides of the equation by the common denominator:
$$x(x+1)\times\left(\frac{x+1}{x}+2\right) = x(x+1)\times\left(\frac{3}{x+1}+3\right).$$
This expands to
$$x(x+1)\times\frac{x+1}{x} + 2\,x(x+1) = x(x+1)\times\frac{3}{x+1} + 3\,x(x+1).$$
Here you can cancel some factors:
$$(x+1)(x+1) + 2x(x+1) = 3x + 3x(x+1).$$
Now the expression simplifies to
$$x^2 + 2x + 1 + 2x^2 + 2x = 3x + 3x^2 + 3x.$$
Isolate all the variables on one side and the constants on the other:
$$x^2 + 2x^2 - 3x^2 + 2x + 2x - 3x - 3x = -1.$$
This simplifies to
$-2x=-1.$
$$\frac{-2x}{-2} = \frac{-1}{-2} = \frac{1}{2}.$$
Therefore, $x=\frac{1}{2}$.
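As a quick check of Example 2 (a sketch using sympy, not part of the original lesson):

from sympy import Eq, solve, symbols

x = symbols("x")
equation = Eq((x + 1)/x + 2, 3/(x + 1) + 3)
print(solve(equation, x))   # [1/2], matching the answer above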
Mar 27, 2020
# Introduction
Rolling and re-rolling dice is our task for this week's Riddler Classic. Each time we roll, we replace the sides of the dice with the values of our previous roll. This makes for a tricky probability space, but a very fun Python class to write and some Markov chain analysis to crunch in order to solve it. Here's the full prompt.
You start with a fair 6-sided die and roll it six times, recording the results of each roll. You then write these numbers on the six faces of another, unlabeled fair die. For example, if your six rolls were 3, 5, 3, 6, 1 and 2, then your second die wouldn’t have a 4 on it; instead, it would have two 3s.
Next, you roll this second die six times. You take those six numbers and write them on the faces of yet another fair die, and you continue this process of generating a new die from the previous one.
Eventually, you’ll have a die with the same number on all six faces. What is the average number of rolls it will take to reach this state?
Extra credit: Instead of a standard 6-sided die, suppose you have an N-sided die, whose sides are numbered from 1 to N. What is the average number of rolls it would take until all N sides show the same number?
# Solution
In the game starting with a fair, six-sided die, it takes an average of 9.66 rolls to reach a single number on all sides.
| Sides | Expected Rolls |
| --- | --- |
| 2 | 2.0 |
| 3 | 3.85714 |
| 4 | 5.77931 |
| 5 | 7.71198 |
| 6 | 9.65599 |
| 7 | 11.60815 |
| 8 | 13.56629 |
| 9 | 15.52909 |
While I haven't explicitly derived a formula for an $N$-sided dice, it feels reasonable to speculate that the pattern of "two extra turns per additional side of the dice" could continue for some time, though it likely falls off in magnitude as we increase the number of sides substantially. I'll have to save the formal analysis for another time.
# Methodology
Starting with a six-sided dice is computationally difficult, but solvable. The key challenge is calculating the many combinations of probabilities for the intermediate positions of the game. For example, when we start with a standard dice, we could end up with over 46,000 different roll permutations! Of course, many of those permutations can be treated the same for the sake of our game (e.g. [3, 3, 4, 4, 5, 5] is the same as [5, 4, 3, 3, 4, 5]), but grouping the probabilities is not easy.
The workhorse of this week's puzzle is the Multinomial Distribution. The Multinomial Distribution lets us calculate the probability of various combinations of observed dice throws. For example, it can tell us the probability of observing $x_1$ ones, $x_2$ twos, $x_3$ threes, and so on, from six throws of a standard dice.
In a fairly simple case, suppose we have thrown the dice several times and now have two numbers remaining. Suppose those numbers are one and six, and we have 2 ones and 4 sixes on the sides of the dice. We can use the Multinomial Distribution to tell us the likelihood of observing different combinations of throws. In Python, we use it like this.
from scipy.stats import multinomial
# a six-sided dice with 2 ones and 4 sixes on the sides
# n is the number of rolls; p is the probability of rolling each unique value
dice = multinomial(n=6, p=[2/6, 4/6])
# what is the probability of rolling 6 sixes? Use the probability mass function
dice.pmf([0, 6]) # 0.08779149519890264
# what about rolling 6 ones?
dice.pmf([6, 0]) # 0.0013717421124828531
# what about rolling 4 ones and 2 sixes?
dice.pmf([4, 2]) # 0.0823045267489711
What we learn from this is that with this dice, we win the game roughly 9% of the time in one more turn. This represents the probability of rolling all 6 sixes plus the probability of rolling all six ones. Otherwise, we continue the game with the new number of ones and sixes according to our roll.
To make this easier, I wrote a class in Python called Dice, that handles all the probability calculations behind the scenes. A new game is started by creating a Dice with six sides, where each value on the sides shows up once:
# create a six-sided dice with one of each number
Dice([1, 1, 1, 1, 1, 1])
# or we could create a six sided dice with 2 ones and 4 sixes, like before
# the actual numbers don't matter - what matters is the number of unique values
Dice([2, 4])
# we can calculate the probability of moving from one Dice to another easily
Dice([2, 4]).transition_vector()
# output is below, where Dice([1, 5]) is added to Dice([5, 1]) for simplicity
# {<Dice(6,)>: 0.0891632373113855,
# <Dice(1, 5)>: 0.2798353909465022,
# <Dice(2, 4)>: 0.4115226337448561,
# <Dice(3, 3)>: 0.2194787379972568}
# we can also calculate the number of steps expected before the game ends
Dice([4, 2]).expected_value()
# we expect to have to roll 6.58 more times before the game ends
# 6.584840446909428
Ultimately, to answer the question for a six-sided Dice, we use just one line:
>>> Dice([1, 1, 1, 1, 1, 1]).expected_value()
9.655991483885606
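As a sanity check on that number, a quick Monte Carlo simulation (my own sketch, independent of the Dice class below) lands on the same value:

import random

def simulate(sides=6, trials=100_000):
    """Estimate the expected number of rolls by direct simulation."""
    total = 0
    for _ in range(trials):
        faces = list(range(sides))
        rolls = 0
        while len(set(faces)) > 1:
            faces = [random.choice(faces) for _ in range(sides)]
            rolls += 1
        total += rolls
    return total / trials

print(simulate())  # ≈ 9.66, matching the Markov chain answer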
# Full Code
Quite a bit of code for this week's solution. We have a utility function, multinomial_domain, that lists all the possible permutations of dice rolls we could see. But the heavy lifting is done by the Dice class, which has methods for calculating roll probabilities, state transitions, a complete directed graph of nodes and edges, a transition matrix, and ultimately an expected_value method, which returns the number of expected rolls to end the game.
At this point, the code works well for small dice, but for anything more than 10 sides, the brute force nature of calculating probabilities breaks down.
"""
Solution to the Riddler Classic from March 27, 2020
https://fivethirtyeight.com/features/can-you-get-the-gloves-out-of-the-box/
"""
from typing import Iterator, List, Tuple, Union
import networkx as nx
import numpy as np
from scipy.stats import multinomial
def multinomial_domain(n: int, k: int) -> Iterator[list]:
"""
Yields all lists of length k whose values sum to n. This comprises the
entire domain space of the multinomial distribution. For example, if we have
n=2 and k=3, we want to generate all lists of length 3 with values that sum
to 2, including all permutations which is
[[0, 0, 2], [0, 1, 1], [0, 2, 0], [1, 0, 1], [1, 1, 0], [2, 0, 0]]
Parameters
----------
n : int, the number of multinomial trials to run
k : int, the number of categories that could be chosen for each trial
Yields
------
x : a list of integers
Examples
--------
>>> list(multinomial_domain(n=2, k=3))
[[0, 0, 2], [0, 1, 1], [0, 2, 0], [1, 0, 1], [1, 1, 0], [2, 0, 0]]
>>> list(multinomial_domain(n=6, k=2))
[[0, 6], [1, 5], [2, 4], [3, 3], [4, 2], [5, 1], [6, 0]]
"""
# we solve this recursively, so if we have only one slot remaining, then we
# fill it with whatever value is left. Otherwise, we loop backwards through
# all permutations calling this function to fill lists of smaller sizes
if k == 1:
yield [n]
else:
for value in range(n + 1):
for permutation in multinomial_domain(n - value, k - 1):
yield [value] + permutation
class Dice:
"""
Models an n-sided dice for the purposes of solving the Riddler Classic.
We construct a dice by specifying the number of unique values written on the
sides. For example, if we call Dice([1, 1, 1, 1, 1, 1]), then it means we
have a dice with six sides and each side has a unique value. Dice([2, 2, 2])
also represents a six-sided dice, but with only three unique values.
We track the number of unique values rather than the values themselves
because it lets us calculate transition probabilities from one state to the
next. A transition probability tells us the probability that we end up with
a Dice with X unique values, given that we start with a Dice with Y uniques.
Behind the scenes we model this dice using a multinomial distribution, where
each side of the dice has an equal likelihood (but each value's probability
is in proportion to the number of occurrences on the sides of the dice.)
For example, suppose we have a standard 6-sided Dice. We create a new Dice
instance by calling Dice([1, 1, 1, 1, 1, 1]). We want to solve for the
expected number of throws until the Dice only has one unique value remaining
which is a Dice([6]) instance.
We call Dice([1, 1, 1, 1, 1, 1]).expected_value(), which gives the result
of 9.655991483885606, meaning it takes 9.66 turns on average to reach the
end of the game.
Parameters
----------
uniques : Tuple[int, ...] a tuple containing the number of unique values on
the sides of the dice. The sum of the values in uniques should match
the number of sides of the dice. For example, a six-sided dice with the
values [1, 2, 2, 4, 5, 5] would be passed as uniques=(1, 1, 2, 2)
Examples
--------
>>> # unique values are sorted and zeros are dropped upon instantiation
>>> Dice([2, 1, 2, 1, 0, 0]).uniques
(1, 1, 2, 2)
>>> # two dice with the same unique sides are equal
>>> a = Dice([1, 1, 2, 0, 1])
>>> b = Dice([0, 2, 1, 1, 1])
>>> a == b
True
>>> Dice([1, 1, 1, 1, 1, 1]).expected_value()
9.655991483885606
"""
def __init__(self, uniques: Union[List[int], Tuple[int, ...]]):
self.uniques = tuple(sorted(u for u in uniques if u > 0))
self.total_sides = sum(self.uniques)
self.unique_sides = len(self.uniques)
self.distribution = multinomial(
n=self.total_sides, p=[u / self.total_sides for u in self.uniques]
)
def __eq__(self, other) -> bool:
"""Ensure that two Dice with the same sorted unique sides are equal"""
return self.uniques == other.uniques
def __lt__(self, other) -> bool:
"""Sort Dice objects by their uniques tuple values"""
return self.uniques < other.uniques
def __hash__(self) -> int:
"""
The Dice hash is the same as the hash for the uniques tuple. We need to
define the hash for this object so it can be used as a dictionary key.
"""
return hash(self.uniques)
def __repr__(self) -> str:
"""String representation of the Dice object"""
return f"<Dice{self.uniques}>"
def domain_permutations(self) -> Iterator[list]:
"""
Yields all permutations of the domain of this dice. For example, suppose
we have a six-sided dice with only two numbers left. We yield all valid
orderings of the rolls we could get
"""
return multinomial_domain(n=self.total_sides, k=self.unique_sides)
def domain(self) -> set:
"""
Returns a set of sorted tuples that fully describe the possibility space
of rolling the Dice. For example, if we have a six-sided Dice with two
unique values on the sides, then the domain set is all permutations of
the numbers we could roll. We return Dice objects for each possibility.
Examples
--------
>>> # a dice with two unique sides and six sides total
>>> Dice((2, 4)).domain()
{<Dice(1, 5)>, <Dice(3, 3)>, <Dice(6,)>, <Dice(2, 4)>}
"""
return {Dice(p) for p in self.domain_permutations()}
def transition_vector(self) -> dict:
"""
Returns a dictionary of the domain set and the transition probabilities
to each one. Keys are Dice objects, and values are floats that sum to 1.
Examples
--------
>>> # a dice with two unique sides and six sides total
>>> vector = Dice((2, 4)).transition_vector()
>>> for key, probability in vector.items():
... print(f"{key}: {probability:.6f}")
<Dice(6,)>: 0.089163
<Dice(1, 5)>: 0.279835
<Dice(2, 4)>: 0.411523
<Dice(3, 3)>: 0.219479
"""
vector: dict = {}
for p in self.domain_permutations():
key = Dice(p)
vector[key] = vector.get(key, 0.0) + self.distribution.pmf(p)
return vector
def graph(self) -> nx.DiGraph:
    """
    Returns a directed graph mapping the transition from each Dice object to
    other Dice objects. Edge weights are transition probabilities.
    """
    # breadth-first traversal of every state reachable from this Dice;
    # the visited set guards against cycles such as (2, 4) <-> (1, 5)
    G = nx.DiGraph()
    queue, seen = [self], {self}
    while queue:
        dice = queue.pop()
        for new_state, probability in dice.transition_vector().items():
            G.add_edge(dice, new_state, weight=probability)
            if new_state not in seen:
                seen.add(new_state)
                queue.append(new_state)
    return G
def transition_matrix(self) -> Tuple[np.array, list]:
"""
Returns a square transition matrix that defines transition probabilities
from each Dice to each other dice. The matrix is sorted from high unique
values to low unique values, meaning the right-most column is the ending
state. Rows sum to one. Also returns the node index as an array.
Examples
--------
>>> values, idx = Dice((3, 3)).transition_matrix()
>>> values
array([[0.40252058, 0.20897634, 0.05358368, 0.33491941],
[0.27983539, 0.41152263, 0.21947874, 0.08916324],
[0.1875 , 0.46875 , 0.3125 , 0.03125 ],
[0. , 0. , 0. , 1. ]])
>>> idx
[<Dice(1, 5)>, <Dice(2, 4)>, <Dice(3, 3)>, <Dice(6,)>]
"""
G = self.graph()
nodelist = sorted(G.nodes)
return nx.to_numpy_array(G, nodelist=nodelist), nodelist
def expected_value(self) -> float:
"""
Return the expected number of throws before all sides of this dice have
the same value. Uses the Dice's directed graph to perform a markov chain
analysis of the time it takes to reach the end of the game.
Examples
--------
>>> Dice((1, 1, 1, 1, 1, 1)).expected_value()
9.655991483885606
>>> Dice((1, 5)).expected_value()
4.623000562655744
>>> Dice((2, 4)).expected_value()
6.584840446909428
>>> Dice((3, 3)).expected_value()
7.205027730889816
"""
if self.unique_sides == 1:
return 0.0
# here we solve the expected number of rolls to end with a single-value
# dice using a transition matrix and some linear algebra
M, nodelist = self.transition_matrix()
results = np.linalg.inv(np.eye(len(M) - 1) - M[:-1, :-1]).sum(1)
return results[nodelist.index(self)]
if __name__ == "__main__":
import doctest
doctest.testmod()
A rapid sand filter comprising a number of filter beds is required to produce $99$ MLD of potable water. Consider water loss during backwashing as $5\%$, rate of filtration as 6.0 m/h and length to width ratio of filter bed as $1.35$. The width of each filter bed is to be kept equal to $5.2$ m. One additional filter bed is to be provided to take care of break-down, repair, and maintenance. The total number of filter beds required will be
1. $19$
2. $20$
3. $21$
4. $22$
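A worked check (a sketch; it assumes the design flow is the demand grossed up by the 5% backwash loss, i.e. 99 × 1.05 MLD):

import math

demand = 99.0                         # MLD of potable water required
design_flow = demand * 1.05           # allow 5% for backwashing
rate = 6.0                            # filtration rate, m/h
width = 5.2                           # m
area = (1.35 * width) * width         # bed area, m^2 (length/width = 1.35)
per_bed = area * rate * 24 / 1000     # MLD filtered per bed (1 ML = 1000 m^3)
working = math.ceil(design_flow / per_bed)   # 19.78 -> 20 working beds
print(working + 1)                    # 21 total with one standby, option 3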
## A Poisson Geometry Version of the Fukaya Category
Is there any possibility of a Poisson geometry version of the Fukaya category? Given a Poisson manifold Y, objects could be submanifolds X with isolated singularities which have the property that TX is contained in NX maximally. The naive example would be something like the Poisson structure on $\mathbb{R}^2$ given by $(x^2 + y^2)\,\partial_x \wedge \partial_y$. Branes would in this case be curves with some nodal singularity at the origin.
The morphisms could still come from holomorphic disks with respect to the standard complex structure. In principle, it seems like in this example the Fukaya category could be defined in the standard way (although maybe there is something more subtle one should do with the morphisms?). For a brane L passing through the origin there should be some interesting multiplicative structure in the algebra A(L), owing to the fact that the brane is required to remain fixed at the origin. I would hope that the Hochschild cohomology could be related to the Poisson cohomology of the manifold, though I haven't studied my example yet. What obstructions arise when trying to construct this category?
This sounds very interesting to me. However I am a bit perplexed by the objects of your category being (sub)manifolds with isolated singularities. In the Fukaya-type categories I've seen, the objects are non-singular Lagrangian submanifolds, or immersed Lagrangian submanifolds... – Kevin Lin Mar 22 2010 at 20:29 I was just thinking of one way to generalize the condition of Lagrangian submanifold. Since in the above example the bilinear form is zero on the full vector space,the nodal sing somehow was meant to generalize being a maximal subspace on which the form vanishes. It seems like it might be better to think about leafwise constructions... or maybe not think about it at all. – Dan Mar 24 2010 at 18:14
The fundamental technique of symplectic topology is the theory of pseudo-holomorphic curves. One studies maps $u$ from a Riemann surface into a symplectic manifold, equipped with an almost complex structure tamed by the symplectic form, such that $Du$ is complex-linear. Numerous algebraic structures can be built from such maps: Gromov-Witten invariants, Hamiltonian Floer cohomology, Floer cohomology of pairs of Lagrangian submanifolds, and most elaborate of all, $A_\infty$-structures on Lagrangian Floer cochains (Fukaya categories).
Though the basic theory of pseudo-holomorphic curves makes sense on more general almost-complex manifolds, the presence of the symplectic structure is vital for Gromov compactness to be applicable. Without this, your curves are liable to vanish into thin air. None of the algebraic structures I mentioned have been developed on almost complex manifolds, nor on Poisson manifolds. It's conceivable that leafwise constructions can be made to work in the Poisson context, but there are basic analytic and geometric questions to be addressed.
There are situations where one might reasonably hope to find relations between Poisson geometry and symplectic topology, but in those situations it may be wise to go via intermediate constructions. For instance, a version of the derived Fukaya category of $T^{\ast} L$ was shown by Nadler to be equivalent to the derived category of constructible sheaves on $L$, and I'm told that that category is related to deformation quantization of $T^{\ast} L$ - something which truly does belong to Poisson geometry.
How about looking at the source-simply-connected symplectic groupoid of an integrable Poisson manifold, and then forming its Fukaya category? This symplectic groupoid is naturally attached to the Poisson manifold, so any invariant of it is a Poisson invariant. If M has the zero Poisson structure, the symplectic groupoid is T*M. If M is the dual g* of a Lie algebra, the groupoid is the cotangent bundle of the simply connected Lie group G. If M is symplectic, the groupoid is the fundamental groupoid of M (just $M \times M^{\mathrm{opp}}$ if M is simply connected).
### Theory:
Let us consider a group of social workers gathered at a place who plan to dig 100 pits to plant trees. If one person can dig one pit per day, then:
If there are $$20$$ workers, they will take $\frac{100}{20}$ $$=$$ 5 days.
If there are $$10$$ workers, they will take $\frac{100}{10}$ $$=$$ 10 days.
If there are $$5$$ workers, they will take $\frac{100}{5}$ $$=$$ 20 days.
Now, in this situation, are the number of workers and the number of days in direct proportion?
If your answer is NO, that is correct, because as the number of workers increases, the number of days decreases accordingly at the same rate.
Now we say that these quantities are in inverse proportion.
Let us denote the number of social workers as $$X$$ and the number of days as $$Y$$. Now observe the following table,
| Number of Social Workers $$X$$ | $$20$$ | $$10$$ | $$5$$ |
| --- | --- | --- | --- |
| Number of days $$Y$$ | $$5$$ | $$10$$ | $$20$$ |
From the table, we can observe that when the values of $$X$$ decrease, the corresponding values of $$Y$$ increase in such a way that the product $XY$ in each case has the same value, which is a constant (say $$k$$).
Derivation:
Consider each value of $$X$$ and the corresponding value of $$Y$$. Their products are all equal: $XY = 100 = k$ ($k$ is a constant), and this can be expressed as $XY = k$.
If $X_1$, $X_2$ are values of $$X$$ corresponding to the values $Y_1$, $Y_2$ of $$Y$$ respectively, then:
Therefore,
$$X_1 Y_1 = X_2 Y_2 = k \ \text{(constant)}, \quad \text{that is} \quad \frac{X_1}{X_2} = \frac{Y_2}{Y_1}.$$
Thus $$X$$ and $$Y$$ are in inverse proportion.
From the above table, we should take ${X}_{1}$ and ${X}_{2}$ from the values of $$X$$. Similarly, take ${Y}_{1}$ and ${Y}_{2}$ from the values of $$Y$$.
That is,
| Number of Social Workers $$X$$ | $X_1$ | $X_2$ | $X_3$ | $X_4$ |
| --- | --- | --- | --- | --- |
| Number of days $$Y$$ | $Y_1$ | $Y_2$ | $Y_3$ | $Y_4$ |
From the above table, we can learn that we need at least $$3$$ variables to find out the other value.
Do you know how to find out the values of ${Y}_{2}$, ${Y}_{3}$ and ${Y}_{4}$?
Step 1
Let's consider that $X_1$ and $Y_1$ are in series $$1$$, $X_2$ and $Y_2$ are in series $$2$$, and so on.
If the values of $X_1$, $X_2$ and $Y_1$ are given, then using these values we can find $Y_2$.
Step 2
Similarly, to find the value of $Y_3$, first make sure that you know the values of $X_2$, $X_3$ and $Y_2$.
If you don't know one of them, you first have to find that unknown value using the previous series' values.
After that, using the $$3$$ variables $X_2$, $X_3$ and $Y_2$, we can find the value of $Y_3$.
Step 3
Now that we know the value of $Y_3$, using the values of $X_3$, $X_4$ and $Y_3$ we can calculate the value of $Y_4$.
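The three steps can be condensed into a few lines (a sketch; the $X$ values come from the table above, with $X_4 = 4$ made up for illustration):

X = [20, 10, 5, 4]        # X1..X4
Y1 = 5
k = X[0] * Y1             # the constant product, k = X1*Y1 = 100
Y = [k // x for x in X]   # every Yi follows from Xi*Yi = k
print(Y)                  # [5, 10, 20, 25]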
# Execute Stage
We have reached the point where we need to process some data. That all happens in the execute stage, and it can get complex. We have a bunch of instructions, each trying to manipulate some bits, and we have some components that can help do all of that. Specifically, we have a nice ALU unit. The problem is that we need to route the various parts of instructions, all those items we decoded previously, to that ALU, or past that ALU, depending on what needs to happen.
The only way to figure all this out is to study the instructions we want to execute.
## Decoder Data
There are only a few different data items we need to decode in this set:
### Constants
The range of numbers we decode is set by the number of bits in the instruction. The only puzzle is figuring out which of those numbers is a two’s complement number, meaning a signed number.
#### Value ranges
• k16: 0 <= k16 <= 65535 (16 bits, unsigned)
• k8: 0 <= k8 <= 255 (8 bits, unsigned)
• k12: -2048 <= k12 <= 2047 (12 bits, signed)
• k7: -64 <= k7 <= 63 (7 bits, signed)
• A8: I/O port address (8 bits, unsigned)
The instructions where we are dealing with signed data are all involved in branching. All of those “k” constants will end up being added to the PC value.
The I/O instructions reference a port number, limited to a value between 0 and 63 (unsigned). We could keep that number separate from the other data items. However, in examining the instruction set, there is never a time when an instruction references both A and k, so we could deal with that port address as just another constant.
We also know that every instruction will alter the value in the PC by one, maybe more if we are referencing something in data memory. The PC will also be modified by branching instructions, and we have some math to do if that is the case.
All arithmetic operations involving either the PC register value or the SP register value are 16-bit operations. Perhaps we need to just let our ALU do 16-bit math if needed. We could also set up two ALU units, one for 16-bit math and another for 8-bit math.
Our decoder needs to tell us which instruction we have, and provide four other pieces of data: Rs, Rd, and two constants: “A” and “k”. Whoops, there are four variants of “k” in this set! All the branch-related variants are signed numbers, so the size is not really important; we will end up treating them as 16-bit signed numbers when we do our address calculations.
Note
The number of bits decoded for each constant limits the range of possible values. What we do need to do is note the sign of that constant. In our simulator, we will need to pay attention to the sign when we process the constant.
## Instruction Set Architecture
Up to this point in our processing, the action in each stage has been pretty generic, and would be very similar in any processor you wish to study. However, things can change as we work through the rest of the machine. The exact parts we select for the next stage, and how we organize them, are driven by the instruction set chosen, not the other way around. That is why this is called “instruction set architecture”.
Designers of chips study the applications the chip is intended to support. In doing that, they introduce instructions they feel will assist in those applications. Companies like Atmel, who designed the AVR family, came up with a set of instructions, then proceeded to manufacture a range of chips all supporting a subset of all available instructions. The idea was to offer chips best suited to a range of intended applications, not all of them. Customers could select the chip that best suited their needs from that set.
This whole idea is under attack now, since any company that can build a chip is all set up to build custom chips based on a customer's needs directly. There is some overhead cost involved with setting up the new chip design, but that is a software issue, not a hardware one. Switching software is easy enough that you can approach any chip manufacturer with a Verilog design you have put together and get anywhere from one prototype to 10,000 chips in short order!
That is scary to companies like Intel, who has spent years developing a complex chip designed to handle “all” processing needs. (You should know that attempts to use Pentium chips in consumer gadgets have failed to gain much attention. Today, ARM rules that market!)
## Routing our Data
For each instruction we have selected, we need to see where the bits need to go to get through the Execute Stage. Let's work on them one by one!
### BRNE
This instruction has a constant that will be added to the PC coming out of the decoder. If the decoder has done its job, the PC will already have been incremented, so all we need to do is add the constant k to the PC, but only if the Z flag tells us to do that. So, we need to route the PC to one side of our ALU, and k to the other. (Whoops! This is a 16-bit operation! We cannot use an 8-bit ALU without resorting to some trickery!) The final PC value leaves this stage and will be routed back to the input of the fetch stage later.
### DEC
We subtract a constant (one) from a register value in this one. All we need to do is route that register item to one side of the ALU, and come up with a constant (one) to route to the other side. The result will end up routed back to the register memory in the next step.
Note
We could avoid bothering the ALU if we just add a “decrement” unit to the execute stage. Those are simple to implement. We probably could use an “increment” unit as well.
### EOR
This is a classic two-operand ALU instruction. One register will be fed to one side of the ALU, the other to the opposite side. We need to send a code to the ALU telling it what operation we want (which we really needed to do in our previous instructions as well), then the final result simply leaves this stage to find its way back to the register memory later.
### IN/OUT
IN (and OUT) both reference a constant port number. These I/O ports are just normal 8-bit memory units, except when we work with the memory cells the actual data is either leaving the chip (OUT) or arriving at the chip (IN) from the outside world. These memory cells will be what we need to connect to our Graphical widget set to see interesting things happen.
Both of these instructions reference a register, but they do no processing. That means nothing here needs to reach the ALU. The data will either be written back into a register, or into the I/O memory cell on the next stage.
### LDI
This instruction has a constant that will be passed back into a register untouched by the ALU.
### RCALL/RET
These instructions manipulate the stack. They both alter the program counter as well. The RCALL instruction does this by adding the constant to that value. RET simply accesses the stack and puts that value back in the PC register. Just as we saw for BRNE, we need to use the ALU to do the math, so the PC and that constant need to be routed to the ALU. The stack pointer math is another increment or decrement operation, but this one needs to increment or decrement by two, since the stack will live at the top of the 8-bit data memory. Perhaps we need a special unit that can add or subtract one or two depending on our needs.
### RJMP
RJMP works exactly like RCALL, except it does not need to deal with the stack.
### STS
This instruction simply passes k along to the store unit, along with the register data. That data item will be written into the data memory at the address specified by k. No math involved.
## Summarizing
Let’s study the RTL definitions that tell us what needs to happen in this stage. We will ignore the basic need to add one to the PC for almost all of these, and let that happen in the decoder. That means that when the execute stage sees the PC register value, it has already been incremented by one.
| RTL | ALU16 | ALU8 |
| --- | --- | --- |
| Z == 0 ? PC <- PC + k7 | x | |
| Rd <- Rd - 1 | | x |
| Rd <- Rd ^ Rs | | x |
| Rd <- A8 | | x |
| Rd <- k8 | | |
| A <- Rs | | |
| [SP] <- PC, SP <- SP - 2, PC <- PC + k | x | |
| PC <- [SP], SP <- SP + 2 | | |
| PC <- PC + k | x | |
| [k16] <- Rs, PC <- PC + 1 | x | |
The only concern we really have to start this design is figuring out exactly what data items we will need to pass to the ALU:
• PC + k
• Rd op Rs
### Stack Management
Formally, the stack is just a piece of the data memory in our machine, and managing that stack is handled by setting up a stack pointer (the SP register, which we saw is implemented as two 8-bit registers: SPH and SPL). We normally do not manage these registers directly; instructions do that. For our simulator, we can invent a stack module and not worry about that register. The normal PUSH and POP stack operations can be set up, and we will not “execute” anything to update the stack pointer. We will leave that updating to the control unit.
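Here is one way such a stack module might look in the simulator (a sketch with invented names, assuming 16-bit return addresses stored high byte first):

class Stack:
    """PUSH/POP 16-bit return addresses at the top of data memory."""

    def __init__(self, memory, top):
        self.memory = memory
        self.sp = top                 # grows downward, two bytes per entry

    def push(self, value):
        self.sp -= 2
        self.memory[self.sp] = (value >> 8) & 0xFF   # high byte (SPH side)
        self.memory[self.sp + 1] = value & 0xFF      # low byte

    def pop(self):
        value = (self.memory[self.sp] << 8) | self.memory[self.sp + 1]
        self.sp += 2
        return value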
### ALU Operations
For the other ALU operations, we see that we need to feed the ALU from multiple sources. That means we need to select from the data provided by the decoder and route the proper items to the ALU inputs. Here is a typical setup:
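In simulator form, the setup amounts to two input multiplexors feeding one ALU (a sketch with names of my own invention, not the course's actual code):

def alu(op, a, b):
    """A tiny ALU covering the operations this instruction set needs."""
    if op == "ADD16":
        return (a + b) & 0xFFFF        # 16-bit add for PC + k
    if op == "SUB8":
        return (a - b) & 0xFF          # 8-bit subtract for DEC
    if op == "EOR8":
        return (a ^ b) & 0xFF
    raise ValueError(op)

def execute(instr, pc, rd, rs, k):
    """Route the decoded fields through the muxes to the ALU."""
    if instr in ("BRNE", "RJMP", "RCALL"):
        return alu("ADD16", pc, k)     # left mux: PC, right mux: constant
    if instr == "DEC":
        return alu("SUB8", rd, 1)      # right mux: the generated constant 1
    if instr == "EOR":
        return alu("EOR8", rd, rs)     # both muxes: register data
    return None                        # LDI, IN/OUT, STS bypass the ALU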
## Tracing Instructions
The best way to make sure all instructions can make it through the Execute Stage is to trace the flow of the data we need to move through that stage.
Here is a working diagram, showing a bunch of parts we need to hook together to make this stage work.
Warning
This is a work in progress. I will update this as the design stabilizes. Also note that this diagram is not broken up into distinct stages. We will add that after we get the basic data flow figured out.
Let’s focus in on just the part of this diagram concerned with the Execute Stage:
### BRNE
This instruction alters the value of PC, using the ALU (16-bit). Here is the flow we need:
Making sure the multiplexors are set correctly is a job for the controller. The decoder will have identified this instruction, providing the information needed to make this all happen when the execute stage is activated.
### DEC
Here is the data flow for this instruction:
I have seen a lot of diagrams where the multiplexor feeding the ALU has a constant value on one side. We can just teach our decoder to generate the needed constant value when it decodes this instruction.
### EOR
This is one of many simple two-register ALU operations. All generate an 8-bit result:
### IN/OUT
Both of these instructions simply move data between registers and ports. We will consider them during the final stage of processing.
### LDI
This instruction passes a constant on to the store stage, so there is not much to see here.
### RJMP
In this instruction, we need to modify the program counter by adding the constant to the current (updated) counter value. This operation is identical to that shown for BRNE so we do not need to see anything new here.
### RCALL
Again, this instruction is going to update the program counter. However, the current value in that (updated) program counter points to the next instruction. We will be saving that value, unmodified, on the stack. After that, we again update the current value of the program counter by adding the constant. This is another operation already covered by the BRNE setup.
### RET
This instruction will update the PC using data popped off of the stack. No processing will occur here.
### STS
In this last instruction, we will be taking a data item located in some register and storing it in a location defined by the 16-bit constant. There is no processing going on, so all we need to do is route these items through the execute stage to store, where the final actions will occur.
We have managed to put together a simple Execute Stage using only our ALU and two multiplexors. We have verified that we can route the signals where they need to go. All we need to do to complete this action is teach the control unit how to set the selectors that route the signals, then work through all the “tick” calls to make things happen.
# Analytic method of vector addition
If two vectors are represented in magnitude and direction by the two adjacent sides of a parallelogram drawn from a point, then their resultant is represented in magnitude and direction by the diagonal of the parallelogram drawn from the same point. This is the parallelogram rule of vector addition.
Suppose the vectors $\vec{A}$ and $\vec{B}$, inclined to each other at an angle $\theta$, are represented in magnitude and direction by the sides OP and OS of the parallelogram OPQS. Then, according to the parallelogram rule, the resultant of $\vec{A}$ and $\vec{B}$ is represented in magnitude and direction by the diagonal OQ. To find the magnitude of the resultant, we extend the side OP and drop a perpendicular QE from the point Q. Thus, in the right-angled triangle OEQ,
$$OQ^2 = OE^2 + EQ^2 = (OP + PE)^2 + EQ^2.$$
In the right-angled triangle PEQ,
$$PE = PQ\cos\theta, \qquad QE = PQ\sin\theta.$$
Now OP = A, PQ = OS = B and OQ = R, so the final equation will be
$$R^2 = (A + B\cos\theta)^2 + (B\sin\theta)^2 = A^2 + B^2 + 2AB\cos\theta,$$
that is,
$$R = \sqrt{A^2 + B^2 + 2AB\cos\theta}.$$
To find the direction of the resultant, say that the resultant $\vec{R}$ makes an angle $\alpha$ with the vector $\vec{A}$. Now OE = OP + PE = $A + B\cos\theta$ and, from triangle PEQ, QE = $B\sin\theta$, so
$$\tan\alpha = \frac{QE}{OE} = \frac{B\sin\theta}{A + B\cos\theta}.$$
Special cases

(1) When both vectors are in the same direction ($\theta = 0$, so $\cos\theta = 1$):
$$R = \sqrt{A^2 + B^2 + 2AB} = A + B, \qquad \tan\alpha = 0.$$
Thus the magnitude of the resultant is equal to the sum of the magnitudes of the two vectors, and $\vec{R}$ points in the same direction as $\vec{A}$ and $\vec{B}$.

(2) When both vectors are at a right angle to each other ($\theta = 90°$, so $\cos\theta = 0$):
$$R = \sqrt{A^2 + B^2}, \qquad \tan\alpha = \frac{B}{A}.$$

(3) When both vectors are in opposite directions ($\theta = 180°$, so $\cos\theta = -1$):
$$R = \sqrt{A^2 + B^2 - 2AB} = |A - B|, \qquad \tan\alpha = 0,$$
so the magnitude of the resultant vector $\vec{R}$ is equal to the difference of the magnitudes of the two vectors, and it points in the direction of the larger vector.

It is clear from the above that the resultant of two vectors is maximal when both vectors are in the same direction, and minimal when they are in opposite directions.
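A quick numerical check of the general formula (a sketch; the values are arbitrary):

import math

A, B, theta = 3.0, 4.0, math.radians(60)
R_formula = math.sqrt(A**2 + B**2 + 2*A*B*math.cos(theta))
# the same resultant from x and y components
Rx, Ry = A + B*math.cos(theta), B*math.sin(theta)
print(R_formula, math.hypot(Rx, Ry))   # both ≈ 6.083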
# Local scale invariance without conformal anomaly
I need to know whether conformal symmetry can be localized in the same manner that global symmetries like $$SU(2)$$ are localized, with gauge bosons popping up. (I assume the trace anomaly doesn't violate the scale invariance symmetry.)
If so, what is the particle that appears after localization? Is such a symmetry compatible with the rest of the Standard Model symmetries? (In other words, can it create any violation of the mixed vectorial current anomaly via triangle diagrams?)
• Gauging a spacetime symmetry is very different from gauging an internal symmetry. I would look up conformal supergravity. Nov 25 at 13:19
• You can't just "localize" Lorentz invariance and get gravity either - the usual procedure of "gauging" global internal symmetries produces a Yang-Mills theory, but the Einstein-Hilbert action of gravity is not of Yang-Mills type. Without specifying what exactly you mean by "localizing" here, this question is not really answerable. Nov 25 at 13:58
• I never said it's a Yang-Mills action. Localizing a global symmetry, no matter whether internal or a symmetry of spacetime, is viable, as it modifies the covariant derivative by the inclusion of a connection. I guess this is the only sense that can be made of the "localization" of a symmetry. @ACuriousMind Nov 25 at 14:13
• But you have to state what the action is in order to even have a symmetry, since a symmetry (global or local) is defined as a property of the action, after all. And just saying "replace the derivative by a covariant derivative" doesn't cut it - you need an additional term in the action that produces a non-trivial e.o.m. for the gauge field. Nov 25 at 14:22
• I barely guess that actions define the symmetries; in fact, symmetries uniquely define the action or Lagrangian! @ACuriousMind But for the sake of clarity let's take a scalar boson on a flat background. Trivially, the trace anomaly vanishes in two-dimensional flat spacetime. Is such a symmetry localizable in the sense that I meant? Why? Nov 25 at 14:43
Mathematical Innovation
The problem posed by the ancient mathematician Archimedes concerning the squaring of the circle using ruler and compass challenged me to take my time with it, just as it did many people in the past.
It is true that this well-known and still (unsolved) problem has troubled many mathematicians, and not only them, for more than 2000 years, and it will remain unsolved for the years to come.
This happens many times: a difficult maths problem ends up with an easy solution, and vice versa. This actually means that there is always a solution, unless the data of a problem are given incorrectly, just like this particular problem (our problem). Let's see the basic mistake, why the circle cannot be squared from its stated dimensions.
We know that if we take the diameter of a circle to be 1 m, its circumference is said to be (3.1415…) m, a transcendental (infinite) number. It is right that (3.1415…) is a transcendental and irrational number; this was proven by Ferdinand von Lindemann in 1882 (a) and Johann Heinrich Lambert in 1761 (b).
For this reason and only this, because of this (particular) irrational number (3.1415…) which was given wrongly, the circle cannot be squared. It is logical! Because it is not the real number π, the QUOTIENT. Since there is no theory or formula that can prove or check that π (3.1415…), the ratio of the circumference of a circle to its diameter, is what was given, there can be doubt as to whether it is (correct).
Theorem and Proof I.E. ( Ioannis Efthimiadis )
In order to square the circle we have to know its two basic factors: the real number π and its radius. The first sense of squaring the circle is that I check the number π, in connection with its radius, against the base of the square. The second sense is that they must have the same area. Therefore, up to today, they haven't managed to square the circle and to prove the number (3.1415…).
The number (3.1415…) is only an approximation to the real number π. The proven number π, given by the formula I.E. 2r-(2r/Φ)+r, is a sacred number: π I.E. 3.111...
Confirmation of π I.E. 3,111… with the formula I.E. 2r-(2r/Φ)+r
We take the radius of a circle to be 1 m, which covers an area of π I.E. 3.111… square metres (shape 1). We shall square the circle with the formula I.E. 2r-(2r/Φ)+r and we shall prove the real number π I.E. 3.111… in the following shapes (2), (3) and (4).
Then we take the radius of the circle, 1 m, and we double it: 2r = 2 m (shape 2).
We set the golden mean in the radius AO, 2r = 2 m, with the golden number Φ (shape 3).
The formula I.E. 2r-(2r/Φ)+r confirms π I.E. 3.111… in relation to the radius of the circle and the base of the square, so that the square has the same area as the circle.
Are there absolute reasons to prefer row/column-major memory ordering?
I've heard it said that "fortran uses column-major ordering because it's faster" but I'm not sure that's true. Certainly, matching column-major data to a column-major implementation will outperform a mixed setup, but I'm curious if there's any absolute reason to prefer row- or column-major ordering. To illustrate the idea, consider the following thought experiment about three of the most common (mathematical) array operations:
Vector-vector inner products
We want to compute the inner product between two equal-length vectors, a and x: $$b = \sum_i a_i x_i.$$ In this case, both a and x are "flat"/one-dimensional and accessed sequentially, so there's really no row- or column-major consideration.
Conclusion: Memory ordering doesn't matter.
Matrix-vector inner products
$$b_i = \sum_j A_{ij} x_j$$
The naive multiplication algorithm traverses "across" A and "down" x. Again, x is already flat so sequential elements are always adjacent, but adjacent elements in A's rows are most often accessed together (and I suspect this is likely true for more sophisticated multiplication algorithms like the Strassen or Coppersmith-Winograd algorithms).
Conclusion: Row-major ordering is preferred.
(If you let vectors have transposes you can define a left-multiplication of matrices, $$x^T A$$, in which case column-major does become preferable, but I think it's conceptually simpler to keep vectors transposeless and define this as $$A^T x$$.)
Matrix-matrix inner products
$$B_{ik} = \sum_{j} A_{ij} X_{jk}$$
One more time, the schoolbook algorithm traverses across A and down X, so one of those traversals will always be misaligned with the memory layout.
Conclusion: Memory ordering doesn't matter.
Strings
ASCII (or similar) strings are most frequently read across-and-down. There's a lot more to consider since a multidimensional array of characters could be ragged (different-length rows, e.g. in storing the lines of a book), but the usual traversal pattern at least suggests a preference for row-major ordering.
Conclusion: Row-major ordering is preferred.
Of course, this analysis is extremely crude and theoretical, but it at least suggests row-major ordering is a little more "natural" (from a performance perspective) for multidimensional arrays. Does this stand up to real-world examination? Are there any similar analyses that lean the opposite way and suggest an absolute advantage to column-major ordering?
• "it's conceptually simpler to keep vectors flat": er, there is no other possibility than "flat vectors" ! Aug 9 at 7:45
• @YvesDaoust Mmmm...sort of. Many resources keep vectors distinct from multidimensional arrays ("matrices") but permit a "vector transpose", and particularly MATLAB-y schools of thought consider them as "just" N-by-1 or 1-by-N matrices. Both of which are a sort of non-flatness. Incidentally, Julia put a lot of work in trying to establish a sane convention. Aug 9 at 13:16
• This is irrelevant. Though the matrix descriptor might distinguish between $1\times n$ and $n\times1$, the $n$ elements are still stored contiguously; it would be foolish to use a stride of $n$ (or any other stride). A vector can only be flat and a transpose leaves all elements in place. Aug 9 at 13:21
• Of course, but it does matter if you decide to allow left multiplication of a matrix (in which case column-major ordering is preferable), as I mentioned in the post. I don't mean flat as in contiguous, I mean flat as in "not having a transpose". Aug 9 at 13:24
• You should have read my answer. It shows that for matrix-vector multiplies, the storage order is indifferent. Aug 9 at 13:29
Whether row-major or column-major order is more efficient depends on the storage access patterns of a specific application.
The underlying principle of computing is that accessing storage in sequential locations tends to be the most efficient pattern possible, whereas accessing storage at disparate locations incurs an overhead in seeking to the data on each iteration, so organising the storage to suit the typical algorithms performed on the data by a particular application, can result in a performance gain.
It's also worth considering what we mean by rows and columns. By a "row" we typically mean a set of fields that relate to one logical/conceptual entity - a row contains fields (in a hierarchical relationship). By a "column", we typically mean a set of fields that share a common meaning or type, but where each field relates to separate logical/conceptual entities - a column is a cross-cut of fields taken from multiple logical entities.
I suspect row-major ordering tends more often to be the default, because it is more common for algorithms to want to access the related fields of the same logical entity at once, than it is for them to want to access fields with the same meaning but across different entities at once.
I suspect also, given the definition of rows and columns above, that row-major aligns with how programmers are most readily inclined to think about accessing data - it's most likely to accord with their mental model of how data is organised. Deviating to column-major is something you then do for a specific performance or algorithmic reason, not by default.
Similar answer to that of Steve.
The most appropriate storage order depends on the traversal patterns. But the programmer has some freedom to optimize the pattern in a way that is cache-friendly.
E.g. a matrix-vector multiply can be implemented as
• clear all $$b_r$$,
• loop on the matrix rows:
• loop on the matrix columns:
• accumulate $$A_{rc}x_c$$ to $$b_r$$.
or
• clear all $$b_r$$,
• loop on the matrix columns:
• loop on the matrix rows:
• accumulate $$A_{rc}x_c$$ to $$b_r$$.
These two versions trade vector cache-friendly accesses for matrix cache-friendly ones. For matrix-matrix products, there are yet more options.
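For concreteness, the two orderings look like this (a sketch in plain Python; in practice a BLAS library would make the choice for you):

def matvec_row_major(A, x):
    """Accumulate along rows: streams through A one row at a time."""
    b = [0.0] * len(A)
    for r in range(len(A)):
        for c in range(len(A[0])):
            b[r] += A[r][c] * x[c]
    return b

def matvec_column_major(A, x):
    """Accumulate along columns: streams through A one column at a time."""
    b = [0.0] * len(A)
    for c in range(len(A[0])):
        for r in range(len(A)):
            b[r] += A[r][c] * x[c]
    return b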
In full-fledged linear algebra libraries, the access patterns are so numerous and varied that every use case might have a different affinity for one storage order or the other, and an absolute preference is impossible.
The fact is that with current computer hardware, the time to access one cache line (often as large as 64 bytes) of consecutive bytes is practically the same as the time to access a single word (say 8 bytes) in a cache line, so it is much more efficient to access items that are stored in consecutive memory positions. And accessing items that are, say, exactly 4,096 bytes apart can be especially inefficient.
Many languages don't support two-dimensional arrays, they support arrays of arrays instead. The first index will specify a subarray. The second index will specify an element within that subarray. Since any objects (including arrays) are stored consecutively, changing the second index will access consecutive items in memory; changing the first index will access items far apart.
(I have seen hardware that actually cached small squares or rectangles in one cache line. So accessing diagonal elements, or more importantly small triangles of data, required fewer cache lines. This required an allocator that would allocate rectangular data.)
# Our Favourite Functions: Part I
Because we love functions!
Functions are rather important in maths. From the humble $y=$ constant to the exotic special functions, from straight lines to never-ending spirals, from those in one variable to those that live in multidimensional worlds, they’re pretty hard to escape. So we asked people what their favourite function is and you’ll find their contributions below. We’d love to hear what your one is, so get in touch at contact@chalkdustmagazine.com and you can also read Part II of this blog.
### Stream Function (Pietro Servini)
Source: Oliver Southwick
In incompressible fluid dynamics (where the density of the liquid or gas remains approximately constant), the stream function ($\psi$) is a very useful thing. The two dimensional version was introduced by Lagrange in 1781 and in Cartesian coordinates satisfies $u = \partial \psi/\partial y$ and $v = -\partial \psi/\partial x$, where $u$ and $v$ are the fluid velocities in the $x$ and $y$ directions respectively. Lines where $\psi$ is a constant are known as streamlines which, when plotted, show us the direction in which a fluid element will travel next and can result in beautiful pictures, such as this one modelling the formation of an eddy (centre in blue) as currents turn around the southern tip of Africa (in white).
### Hyperbolic Cosine
This is a weird looking function, $\cosh(x) = (e^x + e^{-x})/2$, and it has a funny name (hyperbolic cosine?!) but it turns out to be a function that you see drawn every day! If you draw this function, it looks like a U, and in fact it's identical to the shape you get if you hang a chain between two points. It's called a catenary (because maths needs more terms, right?) and the fact that this function turns up in other parts of maths as well (you first see it when solving certain differential equations), I think is really cool.
### Indicator Function (Rafael Prieto Curiel)
Number of Underground stations within a distance of 400 metres.
Sometimes the simplest things are the most interesting. Hence my favourite function simply counts things, but in a smart way: it tells us, from a particular set of objects, how many are at most a certain distance from a fixed point. It might sound trivial, since it only counts stuff, but its power is fantastic. For example, it can tell us, like in the image, how many London Underground stations are at a walkable distance of 400 metres from where we are. Or we could count the more than 100 billion stars in our galaxy, or the over 1,000 operational satellites that are orbiting the Earth. Or tell the difference between carbon (six electrons around the nucleus of an atom) and ununpentium (115). Me, I would love to know how many pubs are open until late and are close to my house, or how many mathematicians are currently living in London, and those numbers could be obtained just by evaluating the function wisely.
### Weierstrass Function (Anna Lambert)
My favourite function has infinitely many zigzags. It’s called the Weierstrass function, and is a classic feature of a first term analysis course. It’s written
\begin{equation*}
f(x) = \sum_{n=0}^{\infty}a^n \cos(b^n \pi x)
\end{equation*}
where $0 < a < 1$, $b$ is a positive odd integer and $ab > 1+\frac{3}{2}\pi$. It might not look like much, but it was the first known example of a function that is continuous but nowhere differentiable. What does that mean? Well, a function is continuous if you can draw it without taking your pen off the paper. It is differentiable if the slope of the function varies smoothly, but it will fail to be differentiable at a point if that point is sharp. For example, a zigzag is continuous, and differentiable everywhere except for at its zigs and zags. So for a function to be nowhere differentiable, every single point on the curve must be sharp. This is an incredibly weird concept to think about. Clearly all of these sharp points cannot be visible at once, but as you zoom in, you can see more and more zigs and zags. This is just like a fractal—as you magnify, the curve looks the same and reveals even more detail.
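A short numerical sketch, taking $a = 0.5$ and $b = 13$ (which satisfy the conditions above, since $ab = 6.5 > 1 + \tfrac{3}{2}\pi \approx 5.71$). Truncating the infinite sum at 30 terms is more than enough for plotting, and zooming in shows the same jagged structure at every scale:

```python
import numpy as np
import matplotlib.pyplot as plt

a, b = 0.5, 13.0   # 0 < a < 1, b a positive odd integer, ab > 1 + 3*pi/2

def weierstrass(x, terms=30):
    n = np.arange(terms, dtype=float)[:, None]
    return np.sum(a**n * np.cos(b**n * np.pi * x), axis=0)

# Zooming in around x = 0.3 reveals the same zigzags at every scale.
for width in (2.0, 0.1, 0.005):
    x = np.linspace(0.3 - width, 0.3 + width, 2000)
    plt.plot(x, weierstrass(x))
    plt.show()
```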
### Hat Function (Matthew Scroggs)
My favourite function is the piecewise linear hat function.
$$f(x)=\left\{ \begin{array}{cl} 0&\mathrm{if\ }x<x_{i-1}\\ \frac{x-x_{i-1}}{x_i-x_{i-1}}&\mathrm{if\ }x_{i-1}\leq x<x_i\\ \frac{x-x_{i+1}}{x_i-x_{i+1}}&\mathrm{if\ }x_i\leq x <x_{i+1}\\ 0&\mathrm{if\ }x\geq x_{i+1} \end{array} \right.$$
The function is zero outside the range $(x_{i-1},x_{i+1})$, one at $x_i$ and linear on the sections $(x_{i-1},x_i)$ and $(x_i,x_{i+1})$.
Partial differential equations (PDEs) are a type of equation telling us how various quantities are changing and are used to model a large variety of situations, including those in the fields of acoustics, electromagnetics and quantum mechanics. PDEs are often very hard (or even impossible) to solve, and so numerical methods that give a very good approximation to the solution are required.
One such method is the finite element method, which breaks the $x$-axis into lots of smaller sections and then uses functions defined on these sections to turn the difficult PDE into a set of simultaneous equations that is easier to solve. The piecewise linear hat function is the function most commonly used in this method, as the sketch below illustrates.
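A sketch of hat functions on a uniform mesh, together with the property that makes them useful for finite elements: any piecewise linear function on the mesh is a weighted sum of hats, the weights being its values at the nodes. (The mesh and the function being approximated here are chosen arbitrarily for illustration.)

```python
import numpy as np

nodes = np.linspace(0.0, 1.0, 6)   # x_0 < x_1 < ... < x_5

def hat(i, x):
    """Hat function centred at nodes[i]: equals 1 at x_i, 0 at every other node."""
    xl, xi, xr = nodes[i - 1], nodes[i], nodes[i + 1]
    y = np.zeros_like(x)
    left = (xl <= x) & (x < xi)
    right = (xi <= x) & (x < xr)
    y[left] = (x[left] - xl) / (xi - xl)
    y[right] = (x[right] - xr) / (xi - xr)
    return y

x = np.linspace(0, 1, 501)
# Piecewise linear interpolant of sin(pi x): weights are the nodal values.
u = sum(np.sin(np.pi * nodes[i]) * hat(i, x) for i in range(1, 5))
print(abs(u - np.sin(np.pi * x)).max())   # small interpolation error
```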
### Popcorn Function (Belgin Seymenoglu)
My favourite function is called Thomae’s function, but it has many other weird and wonderful names, such as the raindrop function, ruler function, Stars over Babylon and my personal favourite: the popcorn function.
$$f(x)=\left\{ \begin{array}{cl} \frac{1}{q}&\mathrm{if\ }x=\frac{p}{q}\mathrm{\ is\ rational,\ written\ in\ lowest\ terms\ with\ }q>0\\ 0&\mathrm{if\ }x\mathrm{\ is\ irrational} \end{array} \right.$$
What makes the popcorn function remarkable is that it is discontinuous on the rational numbers (or fractions), yet is continuous everywhere else (i.e. on the irrational numbers).
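On a computer every representable number is rational, so we can only evaluate the rational branch exactly. Here is a sketch using Python's `fractions` module, which automatically reduces to lowest terms with a positive denominator:

```python
from fractions import Fraction

def popcorn(x):
    """Thomae's function on exact rationals: f(p/q) = 1/q in lowest terms."""
    x = Fraction(x)                  # normalises to lowest terms, q > 0
    return Fraction(1, x.denominator)

print(popcorn(Fraction(3, 6)))   # 1/2, because 3/6 = 1/2 in lowest terms
print(popcorn(Fraction(22, 7)))  # 1/7
print(popcorn(5))                # 1 (integers have denominator 1)
```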
### Function Function (Matthew Wright)
Let $\mathbb{S}$ be the set of all strings of letters from the Roman alphabet. Then my favourite function is the map $F: \mathbb{S}\rightarrow \mathbb{S}$ such that
\begin{align*}
F(x)=\begin{cases} \textrm{fun}x &\textrm{ if } x=\textrm{ctions} \\
x &\textrm{ otherwise} \end{cases}
\end{align*}
Why do I like this function? Because it is the function which puts the fun in functions! (sorry…)
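And in the same spirit, a direct Python transcription:

```python
def F(x: str) -> str:
    return "fun" + x if x == "ctions" else x

print(F("ctions"))   # functions
print(F("maths"))    # maths
```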
Think we can do better? Email us at contact@chalkdustmagazine.com with a few sentences about your favourite function and we'll feature it in Part II of this blog in a few weeks' time.
# PoS(CHARM2016)005
Open charm physics with Heavy Ions: theoretical overview
A. Beraudo
Abstract
The peculiar role of heavy-flavour observables in relativistic heavy-ion collisions is discussed. Produced in the early stage, $c$ and $b$ quarks cross the hot medium arising from the collision, interacting strongly with it until they hadronize. Depending on the strength of the interaction, heavy quarks may or may not approach kinetic equilibrium with the plasma, tending in the first case to follow the collective flow of the expanding fireball. The presence of a hot deconfined medium may also affect heavy-quark hadronization, since heavy quarks can recombine with the surrounding light thermal partons, so that the final heavy-flavour hadrons inherit part of the flow of the medium. Here we show how to develop a complete transport setup describing heavy-flavour production in high-energy nuclear collisions, and we display some of the major results one can obtain with it. Information from recent lattice-QCD simulations, concerning both the heavy-flavour transport coefficients in the hot QCD plasma and the nature of the charmed degrees of freedom around the deconfinement transition, is also presented. Finally, we investigate the possibility that the formation of a hot deconfined medium even in small systems (so far, high-multiplicity p-Au and d-Au collisions) may also affect heavy-flavour observables.