Diversity in Technology and Open Source
All of the members of the [Pocoo Team][1] seem to be white men.

[1]: http://www.pocoo.org/team/#team

Is that a reflection of your hypocrisy? Or the difficulty of recruiting people who are not white men? I'm guessing it's mostly the latter.

You wrote:

> When you start an Open Source project today, in particular one which is further disconnected from frontend technologies there is a very high chance the organic community development will be everything but diverse.

and given the following, from [this comment][2] in this thread, which seems probably true:

> Major open source projects are disproportionately managed and staffed by people with full-time jobs at major software companies, and the process of obtaining and thriving in one of those jobs is not intrinsically color and gender blind, so this argument isn't persuasive.

[2]: https://news.ycombinator.com/item?id=14488000

we should be tempering our judgements of open source projects, e.g. that they're not welcoming to people who are not white men.

I'm confused as to what principle or principles you think should actually be adopted. Should all open source projects reflect the 'diversity' of the entire world?

Consider the following, from [this comment][3], also from this thread:

> LGBT people are overrepresented - almost double their percentage in the general population in fact (7% vs 4%).

[3]: https://news.ycombinator.com/item?id=14488689

*Assuming* that the above is true, is this a cause for concern? Would you similarly be concerned if it were true that, say, Asian people, or even just Asian men, were over-represented in technology or open source software projects? Is that not a cause for concern for you too?

It sure seems like the only cause for concern is that there are too many white men. Would anyone ever criticize an open source project for not having 'enough white people' or 'enough men', let alone 'enough white men'?

More from your post:

> What's worse is the longer you wait to try to get people involved in the project that would naturally not try to join the harder it will be. When your team is 4 men, the first woman which joins will make a significant impact. When your team is already 20 men you need to get a lot more women on board to have the same impact.

My problem with this is that you're pretty clearly, though implicitly, devaluing contributors who don't help your project meet your diversity quotas. Your team is six white men. Have you considered replacing your existing members with women or people of color? When someone contacts you, your team members, or other contributors to your projects, do you ask them to identify their race, ethnicity, sex, or gender so you can discourage white men from contributing? If not, don't you realize that every white man who joins your team or contributes to your projects is making your diversity problem worse? You're also implicitly bashing your team members and contributors for being the wrong kind of people, because you're telling them that their homogeneity is:

1. "not healthy for a project or a community to lack diversity"
2. Contributing to an "echo chamber"
3. Increasing the difficulty of future diversity
4. Hurting the project because they are relatively bad at "de-escalating arguments in bug trackers and mailing lists"
5. Hurting the project because they are relatively bad at "[taking] care of documentation"
6. Evidence that they are not "people that make software work in new cultural contexts (localization, globalization, internationalization, etc.)", i.e. that they are unable to understand or work with other "cultural contexts"
7. Evidence that they are not "people that care about user experience"

---

I'm sure you agree with me in thinking that everyone who wants to 'participate in technology' or contribute to an open source project should be able to do so. And moreover, people who *don't even realize that they would enjoy contributing to an open source project* should be given that knowledge – all else being equal, of course.

But that's the key *constraint* on how much marginal effort should be expended to recruit people who aren't already participating and contributing – all else is *not* equal. Everything is costly to some degree.

De-escalating arguments in bug trackers or mailing lists – let alone even *participating* in arguments – requires time and energy! And there's only a finite supply of either! And opportunity costs are real and pervasive – arguing with people *can* be satisfying, but it can also be incredibly aggravating!

Writing documentation – and editing it, maintaining it, re-organizing it, etc. – requires time and energy! Someone has to do it, and for most open source projects that means someone has to *voluntarily* do it. And this neglects the fact that 'localizing' or 'globalizing' that same documentation isn't even possible unless one knows at least two languages pretty well!

If you're going to "artificially bring balance" to your open source team, your open source project's contributors, or your conference, you're *restricting* the supply of possible people and thus *raising* the relative cost of whatever it is that you want done, whether it be writing documentation or providing user support in your issue tracker or mailing list.

I haven't personally observed any significant *and unfair* obstacles preventing people who are not white men from participating in open source projects, or 'technology' generally. But I'm *sure* they exist. Let's get rid of them. But first, let's actually identify them, and let's be careful about implying that every group of people that doesn't near-perfectly reflect the demographics of its wider community or country or whatever is guilty of overt racism, sexism, or other discrimination.
Ask HN: Why Use Social Media Logins?
As a "user", I love logging in with social media. How awesome is it that I can just click a button and be logged in instantly? No worrying about setting up a password, at least for most websites. I used to not trust it, but after becoming a developer I learned that it's usually just grabbing an email or a name, which is not too bad. Though sometimes it may grab more than that, such as your friends list. Somehow, that became okay and no one cares.

As a "developer", things are quite different. There are a few libraries of code on GitHub and you can find examples across the Internet. For the most part, these examples do work once you have set up some API backend on the social media platforms themselves.

The code would almost seem straightforward. However, that is not always the case, especially when a social media platform is ever-changing. For example, last week someone released a new product on Hacker News and went about his way, only to realize that people were attempting to log in with Facebook, which had recently upgraded from SDK 4 to SDK 5. Massive changes occurred. How embarrassing to go live, only for one of your social media buttons not to work! It's not his fault; it's Facebook that updated their code so older versions no longer work. Most developers are trying to focus on their own code and make sure it works. Now they have to worry about potential users being unable to get through the front door?

When I went to update Facebook SDK 4 to SDK 5 on one of my own platforms, using the very example that Facebook provided, I couldn't get it to work. I ended up removing Facebook from my registration and login screens. The three that remained were Google, Twitter, and LinkedIn.

After some careful testing, these three work for the most part. Except occasionally, if someone logs into Google and then tries to log in again real fast (say, because your code didn't remember them and keep them logged in), Google's token expires and it won't auto-generate a new one. That creates a problem: an error that you can't even catch, so your users are exposed to seeing it.

Twitter, too, seems to have some issues. Using the examples you can easily find on the Internet -- I'm no genius when it comes to this stuff, but I managed to copy some code, put it together, and make it work with my database -- this also came with problems: it keeps looping and acquiring a new OAuth token without ever going back to my website with the information. After spending nearly an hour trying to figure out what the problem was and scouring the Internet with very few answers, I gave up. So Twitter has now been removed.

LinkedIn is my last resort, but is offering only one option better at all?

The purpose of social media logins was to make logging in easier, but none of these social media platforms have beginner or even novice developers in mind. If any of the code changes while your web application is live, then you are screwed, scrambling to come up with a patch for it. Worse, you might not even find out until someone actually reports that your Facebook, Twitter, LinkedIn, or Google login isn't working. How would you know otherwise, since you probably only use one of them, or none?

I sought to find out: do we really need to use social media logins? In this day and age it is supposedly expected. The research I found claims roughly 60% usage of social media login buttons. But are they necessary?

I came across another article from a developer at MailChimp which makes a great point about social logins: while people use them, if people like your application or find it useful, they are going to register and log in with a username or email address, regardless of whether you offer social media logins or not.

Here is the article: https://blog.mailchimp.com/social-login-buttons-arent-worth-it/

Navigating to MailChimp's registration and login page: there is no social media presence.

I know there are some attempts at making life easier for developers out there, such as the Hybridauth social login PHP library ( https://hybridauth.github.io/ ), which may do a good job, but I've not used it yet. There are also non-free solutions out there where you pay for a script or a web company to handle it all for you. This too might be good, but why? Why isn't it easier for developers to set up social media buttons?

Maybe it is security concerns or whatever, but you would think that platforms as large as these would be better at giving you the email and username or whatever you need, and at making it easier. After all, it is technically free branding for them that you put on your website.

Anyone else struggle with social media logins, or just give up completely, or maybe you found a really easy solution that grants you peace of mind?
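For anyone wrestling with the same thing, here is a minimal sketch of the OAuth 2.0 redirect flow that all of these buttons boil down to, using Python's requests-oauthlib library and Google's documented endpoints. The client ID, secret, and callback URL are placeholders you'd get from the provider's developer console; treat this as an illustration, not production code.

    # pip install requests-oauthlib
    from requests_oauthlib import OAuth2Session

    # Placeholders: obtained by registering your app with the provider.
    CLIENT_ID = "your-client-id"
    CLIENT_SECRET = "your-client-secret"
    REDIRECT_URI = "https://example.com/oauth/callback"

    # Google's documented OAuth 2.0 endpoints.
    AUTH_URL = "https://accounts.google.com/o/oauth2/v2/auth"
    TOKEN_URL = "https://oauth2.googleapis.com/token"

    oauth = OAuth2Session(CLIENT_ID, scope=["openid", "email"],
                          redirect_uri=REDIRECT_URI)

    # Step 1: send the user to the provider's consent page.
    authorization_url, state = oauth.authorization_url(AUTH_URL)
    print("Visit this URL and authorize:", authorization_url)

    # Step 2: the provider redirects back to REDIRECT_URI with ?code=...&state=...
    # In a real app this arrives at your callback route; here we paste it in.
    redirect_response = input("Paste the full callback URL: ")

    # Step 3: exchange the code for a token, then fetch the user's profile.
    oauth.fetch_token(TOKEN_URL, client_secret=CLIENT_SECRET,
                      authorization_response=redirect_response)
    profile = oauth.get("https://openidconnect.googleapis.com/v1/userinfo").json()
    print(profile.get("email"))

Step 3 is roughly where the expired-token and looping-token failures described above tend to bite: if the stored state or token gets out of sync with what the provider expects, the exchange fails in ways the error messages rarely explain.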
Ask HN: How do I earn money as a teenage programmer?
Keep programming fun and do a normal job. Imagine that you decide to pay your car bills by working at some kind of car garage, either doing mechanics or serving customers. By being part of this business you will learn many skills that may not seem as important as the latest Tensorflow coolness, but are best learned now rather than later. Learn how to put the customer first, learn how to negotiate, learn how to survive being on your feet all day, take on responsibility, enjoy great camaraderie with the team, learn how to be an entrepreneur, learn how much effort a company has to put in to pay taxes, staff, and suppliers.

Sure, you will be too tired when you get home to do all that wonderful programming, but this is not a forever job; it is a job that gets you solid experience that may be more useful than you think.

For instance, imagine some fantastic Tesla gig comes to town. You want to be programming that centre console with some Tensorflow coolness. You are up against some other guy who wants to do the same. You just so happen to know how to sell a car because you have done it, you have also done it as part of a team, and you appreciate the nuances of it. Your idea of what shows on the centre console will be better than the other guy's because you have seen how customers behave on the showroom floor. So for you it is not just a programming job, it is about customer satisfaction and the bigger dream.

I give an automotive analogy here, but I recommend any 'normal job', whether in retail or in factories or an office; it matters not. Specialist sales is true retail; stacking shelves or sitting at a till is not what you want.

Essentially all software is for someone or some industry. Clearly there is 'plan9' exceptionalism, but the general deal is that software solves a real-world problem. So you can do normal jobs in this real world, to understand the world of the problems that the software is trying to solve. If you work in retail and learn how to put the customer first, that will come in handy if you have to do online sales stuff. Will they want the guy who sat in the basement programming, or the guy who spent time hard at work learning the core thing the hard way? I suspect the latter.

With this strategy you can keep programming fun. By that I mean not patching some legacy system that needs a complete rewrite that is organisationally impossible. It means not being micro-managed. Also, with 'normal' jobs, the hours may be long but you don't take your work home. With software there is no such separation; as with studying, there is always more you could be doing.

With a lot of normal work there is an aspect of making the world a better place and making a difference. If you find your work is valued by customers or the local community then there is job satisfaction that is quite hard to find if sat behind a screen.

Every business has pinch points, and these can often be automated by someone who can code. So in that apparently mundane factory you might see an opportunity to solve a problem or two, in code. It is for you to see these opportunities; however, they are everywhere, and you can develop a niche new product for your company. If you polish it beyond MVP you might be able to sell it across the sector. For instance, returning to the car analogy, you might find a common problem in a particular dealership where a product puts you through a hoop or two more than needed.

You could be bright and fresh to the problem and get it right for those too encultured in the old ways to see that better is possible. Having solved the problem for your original employer, you could then put a 'v2' version of your software out in a specialist marketplace, then learn how to support and sell a full commercial version of what you originally built. You can do this while keeping the original normal day job. In making such a creative solution out of thin air you have got on with the job and not stood around waiting, chicken-and-egg style, for someone to hire you.

Regarding creativity, there is a lot to be said for getting programming gigs in fiercely competitive creative industries. Here technical talent can be hard to find, particularly people willing to cross the line into being actually creative. It is easy to hide in the programming world and to be a 'dunno' when it comes to creative decision-making. But if you can straddle both then there are plenty of non-technical types wanting to give you work.
Ask HN: People who completed a bootcamp 3+ years ago: what are you doing now?
Attended a bootcamp (General Assembly) in 2014. Very positive outcome for me: I got a job offer the week I graduated, but like others have mentioned, a significant number of others had a harder time getting their first gig. Some gave up, and I don't blame them. I'm in San Francisco, where competition and opportunity are both very high.

Bootcamps have their flaws, but they are definitely filling a need in the tech sector. There are too many dev jobs and not enough devs, so while there's some saturation leaking from bootcamps, it's not because there aren't enough jobs; it's just very competitive, and companies generally do poorly at recruiting. Bootcamps are trying to fill a void, but they're not all equal.

For anyone thinking of attending a camp: look for camps that offer scholarships. They're hungry for students for a lot of reasons, but it's also indicative of a camp that really wants to offer you something and has managed to get the big companies to pay the way for you. Yes, that's how many of those scholarships work: the bootcamp networks with big companies like Google, who sponsor a set number of students (usually minority students). Whatever you think of the camp, that's a good sign they're trying to expand their offerings, and those camps will usually do a great job of helping you succeed.

Second, look at the more established camps. If you're a woman, Hackbright and Grace Hopper and the like are premier camps. Their programs are amazing. If you can get to one, consider those your best options for getting a quality education. For others, App Academy has a well-earned reputation; Hack Reactor is competitive; General Assembly is well established and has vast resources for students. I'd mention Dev Bootcamp, but as someone else said, they've changed somewhat over the years and I'm not sure where the quality lies there. I've worked with a lot of bootcamp grads from different camps over the past few years and I continue to mentor at these camps, so this is my firsthand experience with them.

Finally, be ready to study... not necessarily all day every day (people have families to attend to), but definitely for a *solid* 8-10 hours a day to get the most out of it (and try to take 1 or 2 days off; the brain needs a break to absorb all the things you learn, and it will be tempting to keep going without breaks).

In all cases, you're going to be surrounded daily by other students and developers of varying experience... this is the greatest benefit you reap from bootcamps. You have people you can go to hourly! Ask anyone who is self-taught how valuable it is to have this sort of access to getting your questions answered all day, every day. You're also going to be at a place that constantly networks with companies on your behalf. Regardless of how good the camp is at placing grads, the fact is they're already in the door, and it's a leg up for you to have them do a lot of the footwork to connect you. That brings me to networking: bootcamps are a great place to do it. There will be guest speakers and events to attend every week, and professionals on site whose daily job is to talk to companies, so that you know what companies want to hear.

Whatever you think of bootcamps, they're always a hotbed for networking and learning. If you go into it with goals and a learning mindset, and dedicate your mental resources for the 12-24 weeks you're there, you'll do well.

I must emphasize: make sure you set your goals beforehand and chase them tenaciously. One of the best things I did was have a mental timeline and SEVERAL acceptable outcomes that I'd be satisfied with. For example, my endgame was to get a job as a developer within 3 months of graduating. During that time I'd attend weekly workshops and network; and I set a schedule to study algorithms and build an app every day in any language (the idea was repetition: make hacking second nature, while studying algorithms was more about digging deep). I would have accepted working as a contractor, creating my own business, or being employed as a junior dev, and my study schedule made all of those equally likely outcomes. I focused on improving myself and establishing my own network, so that at the end of 3 months I'd be prepared to either strike out on my own or have a job. Two of those were in my control, and that was important. Nothing can be promised in a bootcamp no matter where you go, so it's important to set realistic expectations and to hold yourself accountable for the outcome.

It's a lot of hard work, but I found it enjoyable, productive, efficient, and just flat-out fun (I really enjoyed late nights with other poor students and all the creative ways we found to grow together), and I highly recommend it.
Ask HN: My company has been acquired and I'm kicked out. What should I do now?
As someone who feels like he's starting to climb out of a multi-year burnout episode, I can tell you that you are definitely not a failure, even though it definitely does feel like it.

I believe I know exactly how you feel, because up until a few days ago I also considered myself a failure, for similar reasons.

It's hard to compare apples to oranges of course, but in my case, just to give a bit of background: I quit my job while being part of a team that created a very successful product for a Fortune 500 company ("failure" #1), my girlfriend left me 3 months after that, after 6+ years of relationship ("failure" #2), I felt my startup wasn't working and I felt trapped, so I also left my startup, which didn't even have more than 3 customers ("failure" #3), and meanwhile I pretty much lost all my savings ("failure" #4).

So, to sum up: no girlfriend, no money, no job, no startup... no nothing. Just the 3 or 4 friends who endured my darkest moments.

After feeling like I had failed at everything important in my life, I felt lost. Like trying to sail the open sea during a cloudy night without navigation instruments, not even knowing which port I was supposed to be sailing for. I felt so lost that I sometimes wasn't sure if I had a boat at all, or if there even was any sea left to navigate.

I know this is pretty "cliché" advice, but I can tell you that the feeling *will* go away at some point. It might take *years*, but it will. You just have to let all the experiences that you lived through settle down, so you can start seeing a path (or paths) in your life again.

Do not underestimate the amount of information you just got slammed with that you haven't had the chance to process. And I don't mean only knowledge or skills; I mean emotionally.

Going through something as intense as having a company, and then (and this I can only imagine) having to sell it under such conditions, sets your brain and emotions into overdrive, just to be able to figure out what's going on, let alone to actually make anything of it.

Don't push it. Just let it rest, and things will start to get sorted in time. Of course, you still need to keep an eye open to avoid falling into a deep(er) depression, but other than that, I believe you just need to ride the wave. That's just part of the trip. And a necessary one at that.

In my case, I felt so disconnected from everyone else, because in my mind I was *that* guy who just can't make anything work, you know? "hey, look, he can't even keep a healthy relationship", "wow, that guy is *such* a failure, I mean, he just quit his job for some stupid dream! what a loser!", "incredible how stupid some people can be, right? I mean, who in his right mind would invest his life savings into such a stupid idea!?", and so on and so forth.

*However*, what you are not seeing (and will soon enough) is that after you process that boatload of experience, you will feel like the king of the world. You might still be in the gutter (hopefully not, but it is possible), but you will feel like you were at least able to fight some of your most powerful inner demons and come out of it alive. Maybe you didn't beat the hell out of every demon, but you certainly punched more than one, very hard and fast. And that feels *fucking great* once you realize what you just did.

You just got what I believe is the equivalent of a Master's degree. And I don't say that to be dismissive of people with actual degrees, but after the amount of stuff you had to do, what you had to prove to yourself and others, what you had to build (even without the slightest clue of how), I definitely consider it a GREAT achievement, regardless of the "tangible" outcome (i.e. money, sales, etc.).

Building things ex nihilo is one of the hardest things I've experienced, but it also gives you such a perspective on the world that, even though my personal relationships got strained and in some cases even broken, I have no job and I'm still in the process of getting interviews, I have no money (and even a bit of debt), no savings, and pretty much nothing to show for what I did the last 4 years... I'd still do it again.

And I believe that after the dust settles, you will believe you'd do it all over again too (and you just might!).

So just hang in there. Trust me, this will pass, and you will be much, MUCH stronger and wiser thanks to it.

I'd even venture to say that you will look back at this and remember it as one of the best experiences of your life. Not necessarily the most pleasant, but one of the best nonetheless.

Cheers!
Hard Questions
A (hopefully brief) attempt at responses -- I'm working on a more detailed one.

Q: How should platforms approach keeping terrorists from spreading propaganda online?

Briefly: consider this from an epidemiological perspective. There are infectious agents, hosts, and vectors of propagation. In public health, a combination of factors is used to limit the spread of disease, with exceedingly high effectiveness -- greater effectiveness than all of acute and therapeutic medicine, by a factor of about 85% to 15%; see Laurie Garrett's *The Coming Plague*.

Monitoring, inoculation, disruption, containment, elimination of breeding and development conditions, and avoiding *strengthening resistance to treatment* are all core elements.

The question of whether one side's terrorist is the other's freedom fighter also arises, as do questions over acceptable and unacceptable tactics in various forms of warfare.

Q: After a person dies, what should happen to their online identity?

This would be a very good thing to make a determination of whilst the person is still alive.

There is considerable prior art, and I strongly recommend researching the legal definition and practice of a *will*.

There's also a practice amongst librarians and academicians regarding access to personal writings, journals, etc., with consideration for both the deceased *and* those still living who might be affected by revelations.

Q: How aggressively should social media companies monitor and remove controversial posts and images from their platforms? Who gets to decide what's controversial, especially in a global community with a multitude of cultural norms?

Cultures vary tremendously in norms, and in what is considered acceptable or transgressive. Communications, online or otherwise, break down the barriers between such cultures.

One possible response is to resurrect at least some of those walls, at least in part. There's a notion from travel: "when in Rome...". There's also the travel trope of the ugly tourist -- British, American, German, and of late Japanese, Chinese, or Russian. The issues cut both ways, for the traveller and the native.

The dislocation of online space in violating a sense of "whose space is this" is a severe one. That was amongst the more toxic elements of Google's exceedingly ill-conceived Anschluss of Google+ and YouTube. Not only were privacy norms (enshrined only a few years earlier in YouTube's own privacy guidelines) violated, but members of each community found themselves overwhelmed by "intruders" from the other.

De-globalising the community would seem a partial response.

Q: Who gets to define what's false news -- and what's simply controversial political speech?

Briefly: someone who's exceedingly good at it. And reasonably unbiased.

Non-briefly: this is among the fundamental philosophical dilemmas. There is considerable prior art, there are authorities, and they should be consulted (and questioned). This is not a greenfield. Making a list of those authorities and references, and sharing it, *along with the discussion*, should help.

Epistemology, justice, the Scientific Method, the history of science (where it has and hasn't succeeded, and at what rates), the history of free expression (and the limits placed upon it), including J.S. Mill (who did *NOT* coin the expression "the marketplace of ideas" -- that was a free-market advocate, Francis Wrigley Hirst), and more.

Q: Is social media good for democracy?

Wrong question.

*Every single change in communications and media has had profound impacts upon, and fundamentally changed, the societies in which it occurred.* See Elizabeth Eisenstein, Marshall McLuhan, and others who've written on the social impacts of communications. And by every, I mean *going back to speech itself*, as well as writing, clay tablets, paper, print, radio, film, the phonograph, television, the Internet, and mobile.

Facebook has to face the fact that it and Google are the two largest media institutions *in all of history*. Their reach is on the order of *billions* of people. Contrast that with the most-published books ever: a few billion copies for the Bible and Mao's Little Red Book, 500 million for *Don Quixote*. By contrast, "Gangnam Style" has been viewed over 2 billion times on YouTube alone.

That is great power. Spider-Man on line 3 with a word about responsibility.

*Social media is going to change democracy. Full stop.* It may end it. It may only interrupt it, as radio did in spurring on fascism. We want to look to history, psychology, sociology, anthropology, economics, communications studies, information theory, and more, to get a sense of where the hell this is headed. Of late it's been more than a bit concerning.

Q: How can we use data for everyone's benefit, without undermining people's trust?

Wrong question. It presumes the answer, then poses the question.

Briefly: 1) respect people's boundaries, generally, and 2) consider the public welfare, overall.

Non-briefly: this is among the fundamental philosophical dilemmas. There is considerable prior art, there are authorities, and they should be consulted (and questioned). This is not a greenfield. Making a list of those authorities and references, and sharing it, *along with the discussion*, should help.

Q: How should young internet users be introduced to new ways to express themselves in a safe environment?

Not solely by a party whose self-interests fail to align with those of the young. Which would exclude Facebook, amongst the other present Internet giants: FAAMG -- Facebook, Amazon, Apple, Microsoft, Google.

The risks of indoctrination at a young age are exceedingly great. This is a role I'd like to see placed *outside* the control of any of the major participants, to the extent possible.

Again: a partial response. There are questions not being asked by FB which should be, and much more might also be said. I see serious limitations to this approach, and will be voicing criticisms.

*But for all that, I applaud the initiative and approach, and hope that it evolves into an exceptionally necessary discussion. Facebook has outshone the other principal participants in this space, and I truly hope they step up to the challenge.*

I've alluded to prior art and works. In 2015 I suggested on a G+ thread that Google compile a bibliography or syllabus, *make it required reading for all employees and contractors*, and *share it with the public*. I'll extend that suggestion to Facebook as well.

https://plus.google.com/+YonatanZunger/posts/cKot7AKmtty
Ask HN: How do you organize your files
From a recent backup, there are

417,361 files

in my main collection of files for my startup, computing, applied math, etc.

All those files are well enough organized.

Here's how I do it, and how I do related work more generally (I've used these techniques for years, and they are all well tested).

(1) Principle 1: For the relevant file names, information, indices, pointers, abstracts, keywords, etc., to the greatest extent possible, stay with the old 7-bit ASCII character set in simple text files that are easy to read by both humans and simple software.

(2) Principle 2: Generally use the hierarchy of the hierarchical file system, e.g., Microsoft's Windows HPFS (high performance file system), as the basis (*framework*) for a *taxonomic hierarchy* of the topics, subjects, etc. of the contents of the files.

(3) To the greatest extent possible, I do all reading and writing of the files using just my favorite programmable text editor, KEdit, a PC version of the editor XEDIT written by an IBM guy in Paris for the IBM VM/CMS system. The macro language is Rexx, from Mike Cowlishaw of IBM in England. Rexx is an especially well designed language for the string manipulation needed in scripting and editing.

(4) For more, at times I make crucial use of Open Object Rexx, especially its function to generate a list of directory names, with standard details on each directory, for all the names in one directory subtree.

(5) For each directory x, have in that directory a file x.DOC that has whatever notes are appropriate for a good description of the files, e.g., abstracts and keywords for the content, the source of the file (e.g., a URL), etc. Here the file type of an x.DOC file is just simple ASCII text, not a Microsoft Word document.

There are some obvious, minor exceptions, that is, directories with no file named x.DOC from me. E.g., directories created just for the files used by a Web page when downloading a Web page are exceptions and have no x.DOC file.

(6) Use Open Object Rexx for scripts for more on the contents of the file system. E.g., I have a script that, for a current directory x, displays a list of the (immediate) subdirectories of x and the size of all the files in the subtree rooted at each subdirectory. So, for all the space used by the subtree rooted at x, I get a list of where that space is used among the immediate subdirectories of x.

(7) For file copying, I use Rexx scripts that call the Windows commands COPY or XCOPY with carefully selected options. E.g., I do full and incremental backups of my work using scripts based on XCOPY.

For backup or restore of the files on a bootable partition, I use the Windows program NTBACKUP, which can back up a bootable partition while it is running.

(8) When looking at or manipulating the files in a directory, I make heavy use of the DIR (directory) command of KEdit. The resulting list is terrific, and common operations on such files can be done with commands to KEdit (e.g., sort the list), selecting lines from the list (say, all files x.HTM), deleting lines from the list, copying lines from the list to another file, or using short macros written in Kexx (the KEdit version of Rexx), often from just a single keystroke, to do other common tasks, e.g., run Adobe's Acrobat on an x.PDF file or have Firefox display an x.HTM file.

More generally, with one keystroke, I can have Firefox display a Web page where the URL is the current line in KEdit, etc.

I wrote my own e-mail client software. Then, given the date header line of an e-mail message, one keystroke displays the e-mail message (or warns that the date line is not unique, though it always has been).

So, I get to use e-mail message date lines as 'links' in other files. So, if some file T1 has some notes about some subject and some e-mail message is relevant, then, sure, in file T1 I just include the date line as a link.

This little system worked great until I converted to Microsoft's Outlook 2003. If I could find the format of the files Outlook writes, I'd implement the feature again.

(9) For writing software, I type only into KEdit.

Once I tried Microsoft's Visual Studio, and for a first project, before I'd typed anything particular to the project, I got 50 MB or so of files, nearly none of which I understood. That meant that whenever anything went wrong, for a solution I'd have to go mud wrestling with at least 50 MB of files I didn't understand; moreover, understanding the files would likely have been a long side project. No thanks.

E.g., my startup needs some software, and I designed and wrote that software. Since I wrote it in Microsoft's Visual Basic .NET, the software is in just simple ASCII files with file type VB.

There are 24,000 programming language statements.

And there are about 76,000 lines of comments for documentation, which is IMPORTANT.

So, all the typing was done into KEdit, and there are several KEdit macros that help with the typing.

In particular, for documentation of the software I'm using -- VB.NET, ASP.NET, ADO.NET, SQL Server, IIS, etc. -- I have 5000+ Web pages of documentation, from Microsoft's MSDN, my own notes, and elsewhere.

So, at some point in the code where some documentation is needed for clarity, I have links to my documentation collection, each link with the title of the documentation. Then one keystroke in KEdit will follow the link, typically having Firefox open the file of the MSDN HTML documentation.

Works great.

The documentation is in four directories, one for each of VB, ASP, SQL, and Windows. Each directory has a file that describes each of the files of documentation in that directory. Each description has the title of the documentation, the URL of the source (if from the Internet, which is the usual case), the tree name of the documentation in my file system, an abstract of the documentation, relevant keywords, and sometimes some notes of mine. KEdit keyword searches on this file (one for each of the four directories) are quite effective.

(10) Environment Variables

I use Windows environment variables and the Windows system clipboard to make a lot of common tasks easier.

E.g., the collection of my files of documentation of Visual Basic is in my directory

H:\data05\projects\software\vb\

Okay, on the command line of a console window, I can type

G VB

and then have that directory current.

Here 'G' abbreviates 'go to'!

So, to command G, the argument 'VB' acts like a short nickname for directory

H:\data05\projects\software\vb\

Actually that means that I have -- established when the system boots -- a Windows environment variable MARK.VB with value

H:\data05\projects\software\vb\

I have about 40 such MARK.x environment variables.

So, sure, I could use the usual Windows tree-walking commands to *navigate* to directory

H:\data05\projects\software\vb\

but typing

G VB

is a lot faster. So, such nicknames are justified for frequently used directories fairly deep in the directory tree.

Environment variables

MARK.TO

MARK.FROM

are used by some other programs, especially my scripts that call COPY and XCOPY.

So, to copy from directory A to directory B, I navigate to directory A and type

MARK FROM

which sets environment variable

MARK.FROM

to the directory tree name of directory A. Similarly for directory B.

Then my script

COPYFT1.RXS

takes as argument the file name and does the copy.

My script

COPYFT2.RXS

takes two arguments, the file name of the source and the file name to be used for the copy.

I have about 200 KEdit macros and about 200 Rexx scripts. They are crucial tools for me.

(11) FACTS

About 12 years ago I started a file FACTS.DAT. The file now has 74,317 lines, is

2,268,607

bytes long, and has 4,017 *facts*.

Each such *fact* is just a short note, on average

2,268,607 / 4,017 = 565

bytes long and

74,317 / 4,017 = 18.5

lines long.

And that is about

12 * 365 / 4,017 = 1.09

that is, an average of right at one new fact a day.

Each new fact has its time and date and a list of keywords, and is entered at the end of the file.

The file is easily used via KEdit and a few simple macros.

I have a little Rexx script to run KEdit on the file FACTS.DAT. If KEdit is already running on that file, then the script notices that and just brings the existing instance of KEdit editing the file to the top of the Z-order -- this way I get single-threaded access to the file.

So, such facts include phone numbers, mailing addresses, e-mail addresses, user IDs, passwords, details for multi-factor authentication, TODO list items, and other little facts about whatever I want help remembering.

No, I don't need special software to help me manage user IDs and passwords.

Well, there is a problem with the taxonomic hierarchy: for some files, it might be ambiguous which directory they should be in. Yes, some hierarchical file systems permit a file to be listed in more than one directory, but AFAIK the Microsoft HPFS file system does not.

So, when it appears that there is some ambiguity about which directory a new file should go in, I use the x.DOC files for those directories to enter relevant notes.

Also my file FACTS.DAT may have such notes.

Well, (1)-(11) is how I do it!
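For anyone who wants the flavor of the FACTS.DAT idea without KEdit and Rexx, here is a minimal sketch in Python of the same scheme as I understand it from the description above: timestamped, keyword-tagged notes appended to one plain ASCII file, plus a keyword search. The record layout here is my own guess for illustration, not the author's actual format.

    # facts.py -- a minimal sketch of the FACTS.DAT scheme described above.
    # The exact record format is an assumption; only the idea (append-only
    # plain-text facts with a date and keywords) comes from the post.
    import sys
    from datetime import datetime

    FACTS_FILE = "FACTS.DAT"

    def add_fact(keywords, text):
        """Append one fact: date line, keyword line, body, blank separator."""
        with open(FACTS_FILE, "a", encoding="ascii") as f:
            f.write(datetime.now().strftime("%Y-%m-%d %H:%M:%S") + "\n")
            f.write("KEYWORDS: " + ", ".join(keywords) + "\n")
            f.write(text.rstrip() + "\n\n")

    def search(term):
        """Print every fact whose text or keywords contain the term."""
        term = term.lower()
        with open(FACTS_FILE, encoding="ascii") as f:
            for fact in f.read().split("\n\n"):
                if term in fact.lower():
                    print(fact + "\n")

    if __name__ == "__main__":
        if sys.argv[1] == "add":        # facts.py add "kw1,kw2" "note text"
            add_fact(sys.argv[2].split(","), sys.argv[3])
        elif sys.argv[1] == "search":   # facts.py search term
            search(sys.argv[2])

The append-only, one-file design is what makes the scheme durable: any editor's grep-like search (KEdit's in the original, git grep, or the function above) is enough to retrieve a fact.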
Ask HN: How does the ACA affect you and your work?
Here's a take from an individual:

I was recently unemployed (January)[0]. I signed up for COBRA right away. Within about a month, I had offers, and I ended up narrowing it down to two.

I was having a really difficult time choosing between the two jobs. One was a local company, not large but not small, that was doing really exciting things but would end up with me taking an "in the office" position (I had been working at home for about a decade up to now). The other was a very small outfit, out of state, which was directly in my unique area of expertise. The salary offers were the same, but the smaller business was so small that they didn't offer insurance, so I'd have to buy it on "the exchange".

I ran the numbers. I was paying $1,500/mo[1] for COBRA coverage (family) and discovered that because I had signed up for COBRA, I couldn't join an exchange plan until November. This alone was a deal breaker. I went back to the small business and asked for additional compensation to cover it (and it was offered). I ran the numbers again. Going off of this year's rates, I could get a plan for around $1,200/yr. This "plan" came with a $12,000 deductible. Incredibly, it was *not* eligible for a Health Savings Account. And the way the laws are set up, *all* of the monthly costs of coverage and the $12,000 spent toward medical coverage would be after taxes until they reached 10% of my annual earnings. Very quickly, a well-paying job became a huge downgrade in salary from my prior position. And that was this year's rate. I anticipated at least a 10% increase come November, and that's probably low.

At the other job, I had three options for very good healthcare coverage that was the cheapest I've ever encountered. One of them was a high-deductible plan with the *minimum deductible required* to qualify for a Health Savings Account[2]. The cost of the plan was such that despite their initial offer being exactly on par with what I was making at my prior employer, I ended up taking home a few thousand more every year. And the coverage is *better* than my last employer's (and mountains better than the $12,000 plan ... from the same health insurer ... the *exact* same health insurer -- same state, same administrative unit, same brand).[3]

[0] I mention this only for context, not as a judgement of the company. They took great care of me, and the circumstances behind the layoff had to do with changing the focus of the company to things that I wasn't working on. The unique nature of the position I filled made it impossible for me to effectively transition to this new kind of work without moving to another country... something I cannot do.

[1] For those who are unfamiliar, COBRA allows you to keep your former employer's insurance at the same price it was at your former employer, but it's got one big gotcha: your employer usually pays a percentage of that monthly cost. In my case it was somewhere around 80%. It was a good deal while I was employed, but turned into a pretty terrible one when I was no longer employed. Companies often continue to pay/fund part of the COBRA costs for a period of time after an employee is let go, and because I'm not entirely sure what I agreed to keep confidential in the NDA that I signed when I was let go, and out of respect for my former coworkers/managers, I am neither confirming nor denying that they helped out here. I certainly wouldn't have expected them to if they did, and wouldn't have been upset if they didn't. Sufficiently vague enough?

[2] HSAs are pre-tax accounts that carry over to the next year (they work a lot like a 401k that allows you to take money out any time for qualified medical expenses). I love them, both because they have always saved me money in the long run and on the principle of them. I hate our broken healthcare system. It's the worst kind of "fake capitalist" arrangement there is. Ask yourself what the last prescription you filled *actually* cost, or that last trip to the doctor. Most people have no idea what doctor's visits, medications, hospital trips, and medical tests cost, so there's no shopping around and therefore no reason for "the system" to optimize prices. A high-deductible plan changed my behavior immediately. For unimportant things, like "where I fill my prescriptions", I do my research (and *man* does it take time -- try to get the price of a drug out of the drugstore in less than an hour) and I fill things where I know they are cheapest. I used to just go to urgent care when I got sick, never realizing it's twice the price and half the quality (at least where I'm from). I never realized that I could call my physician and get a same-day appointment on weekdays (I've *never* been turned down, and though I end up having to wait a little bit, it's never as long as the wait at urgent care).

[3] I just want to clarify: I kept the discussion about healthcare due to the post topic. This certainly wasn't the *only* reason I went with my current employer. They are really doing awesome things, and I was very excited to have been given an offer here. I'm also very glad that I took it. I can't say that I wouldn't have chosen them anyway, simply because the work looked (and turned out to be) exciting. I mean that, though. I *really can't* say. The healthcare issue was such a huge thing that it took the other option completely off the table.
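To make the "I ran the numbers" step concrete, here is a back-of-the-envelope sketch in Python using the figures quoted above (COBRA at $1,500/mo, an exchange plan near $1,200 with a $12,000 deductible, and the 10%-of-income threshold before medical costs become deductible). The salary and marginal rate are made-up placeholders, and this ignores plenty of real tax detail; it only illustrates the shape of the comparison.

    # Rough worst-case annual cost comparison, using the post's figures.
    # SALARY and MARGINAL_RATE are illustrative placeholders, not advice.
    SALARY = 120_000
    MARGINAL_RATE = 0.25

    cobra = 1_500 * 12                      # $18,000/yr to keep COBRA

    exchange_premium = 1_200                # the post's quoted exchange rate
    exchange_deductible = 12_000
    out_of_pocket = exchange_premium + exchange_deductible

    # Medical costs are deductible only above 10% of income, so in the
    # worst case most of this is paid with after-tax dollars.
    deductible_portion = max(0, out_of_pocket - 0.10 * SALARY)
    tax_relief = deductible_portion * MARGINAL_RATE
    exchange_worst_case = out_of_pocket - tax_relief

    print(f"COBRA, full year:          ${cobra:,.0f}")
    print(f"Exchange plan, worst case: ${exchange_worst_case:,.0f}")

Under these assumed numbers, a bad health year on the exchange plan costs nearly as much as a full year of COBRA, and almost all of it after taxes, which is the "huge downgrade in salary" the post describes.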
How do you keep track of work tasks
You may be talking about something else... I'm a beginner (first job), but I tend to make heavy use of comments and commit messages (this may be due to the fact that I buy pens and notebooks in bulk and have a tendency to take notes).

Comments:

I write them so I can just do a git grep and get a lot of information. Read: they tend to follow a certain format.

TODOs are for code I intend to add. Others will probably read them and might do them, or tell me not to do them because of a reason I wasn't aware of.

    # TODO: Refactor this so it doesn't make that many copies.
    # Something along these lines:
    # def foo(arg1, arg2, *, bar=None):
    #     ...
    # This way we can do x, y, z. Profiling shows spam
    # spends most of its time doing x instead of y.
    # Investigate why. Maybe do it à la PEP 666.

However, I sometimes open a ticket, assign it to myself, add a proposed implementation off the top of my head in the ticket body, then just reference the ticket in the TODO so as not to clutter the code (the information is captured nonetheless).

I also have two files, "musing" and "refactoring". The first is where I toy with things that, somehow, always manage to save the day when I need them: custom data structures, utilities, tools to make things easier. I suck, so I try really hard to make my code *really* easy to use, and I write code that lets me be as lazy as possible (and learn a few things doing it).

"refactoring" is for when I don't want to mess with someone's code but I see a way of doing something that I'm not certain is better, and that portion is not a priority. I'll reference the file and the function and make it better. It almost always is better, because I have the luxury of perspective that the person who implemented it first didn't have, whether that person is me in the past or another human.

    # QUESTION: Why does foo instantiate CoolStuff with arg
    # when bar does it too in yo.py?

A question is useful in many cases: when the information is with someone else but I don't want to unplug from the code, I just capture it in a question and continue working. I do a git grep periodically to see if my questions are answered (I might have gained more insight into a subject, or talked with another person, or just got some sleep). The questioning not being lost is what's important, because I think code is just answering questions.

You can also append the person's name or something.

If I do something that's not obvious, I add comments on what I was trying to fix, how I fixed it, and why that way instead of another. This way a reader might get their answer. I add "# NOTE:" for emphasis.

If I do a temporary thing, I also use warnings so that running the code flags it, and I prefix functions with _throwaway (this way I can git grep them).

Commit messages:

They tend to include what was changed and why (which issue/wart it fixed, why it was a wart/issue -- because is it really a bug or was it intended? -- and how it was fixed), plus the next steps I'm planning to take after that commit.

Goes like this:

    X feature is working now.

    Problem was that foo did xyz but didn't commit to the db
    and condition Y wasn't met yet.
    Change affects foo method in waw.py.

    Next steps: Implement Z feature and refactor W so it uses
    memoryview instead of making all those copies in z.py.

When I'm the only one working on the code, I go insane with comments, etc. When the code involves other people, I tend to have a branch where I keep a lot of stuff, but will push a more mentally sane branch for others to use.

This allows me to:

- Have a set of questions I can ask and get answers to. If I'm talking with someone, I do a git grep to see if their name comes up, and ask them a question I wanted to ask when I was doing x, y, or z.

- Know what the next steps are.

- Have context about my own stuff.

- Leave brain crumbs so there's always something to work on.

- Hopefully be someone they can have unprotected code with.
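As a sketch of how greppable this convention makes things, here's a small Python wrapper around git grep that collects the markers into one report. The marker names come from the comments above; the script itself is just an illustration:

    # grep_markers.py -- collect TODO/QUESTION/NOTE comments from a git repo.
    # The markers follow the convention described above; this is a sketch.
    import subprocess

    MARKERS = ["TODO:", "QUESTION:", "NOTE:"]

    def grep(marker):
        """Return 'path:line:text' matches for one marker, via git grep -n."""
        result = subprocess.run(
            ["git", "grep", "-n", "# " + marker],
            capture_output=True, text=True,
        )
        # git grep exits 1 when nothing matches; an empty list is fine here.
        return result.stdout.splitlines()

    if __name__ == "__main__":
        for marker in MARKERS:
            hits = grep(marker)
            print(f"== {marker} ({len(hits)} found) ==")
            for hit in hits:
                print(" ", hit)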
Is it unethical for me to not tell my employer I've automated my job?
note: before posting I realized logfromblammo said what I'm trying to say, much shorter and better: https://news.ycombinator.com/item?id=14657981 -- but now that I've already rambled so much I don't want to just throw it away, so here goes nothing.

> Is this the kind of example you want to set for your son?

Yes. I can nearly touch the very smart and decent person behind that post (which I didn't fully read because you bolded this and I had to get my opinion out before reading on :P)

Use a lot of your time on that son, and some of it on helping people here and there who don't have much time. Spend little money and lots of time! You can answer your son's questions, you can play with him... don't sacrifice that luxury light-heartedly. Don't spend that penny without turning it over lots; it's the first of that nature you've got, and many people don't even know a person who had one.

Of course, as others said, also learn interesting things and keep your eyes peeled for a job that would have meaning to you, one you can be 100% straight about to everyone involved. But I assume you're already doing that anyway.

This stroke of luck might not last forever, but it *is* a stroke of luck IMHO. From the sound and content of your post, I'd say a nice thing happened to a nice person who put in the work to deserve it. Nothing unethical that I can see. If they want it automated, they can hire a programmer. Wanting to have it automated by someone for data-entry wages -- now *that's* unethical. So if you want ethics, calculate a generously low programmer salary for 6 months, then coast along some more until they've paid you that much.

One thing I'm sure of: suffering 40 hours a week *when there is no need* is about the worst example you could set for your son. IMO, of course. His father, at least for a moment, is free from bondage, but also free from the delusions that often come with "aristocracy" (for lack of a better word; I just mean that most people who "live the easy life" pay for it dearly in ways they don't even register). That's as rare as it is beautiful. Take the advice of anyone who has never tasted this with a grain of salt. Especially if you use the free time to seek out things you can do or create that are interesting to you -- I don't believe in relaxation or entertainment that much, I love being focused and busy, but I do believe in autonomy and voluntary cooperation.

*Everybody* should... well, okay, 2 hours a month wouldn't be enough by a long shot, but I do believe that the levels of work and subsistence for *all* people on Earth could and should be compatible with a dignified, strong personality. But we're really programmed to not even want that, to not even recognize it as the *minimum* responsible adults should settle for, and rather to belittle it as utopian. Yeah, it's a hard problem, but it doesn't get easier by working on unrelated gimmicks instead.

As you said yourself, the company already gets out of you the end result it wanted for that money. Now they get the *bonus* of you improving yourself and the world, and spending more time with your son than you otherwise could. At least on a human level, anyone who doesn't see this as an added bonus to be happy about is petty. This makes the world much better than saving the company one job's salary would, which often is just pissing money down the drain. You didn't take this job with the intent of automating it, and you probably started trying without even knowing whether it would work, because you like coding. And then you *knew* that they wouldn't just say "good on you, enjoy the time with your son". I know I'm trying a bit hard here, but if you squint you might say you have to "lie" to get them to "do the right thing".

> *You cannot strengthen one by weakening another; and you cannot add to the stature of a dwarf by cutting off the leg of a giant.*

-- Benjamin Franklin Fairless

This is true. And yet, if you let them, they would do it. To be fair, I know none of the people involved, but for a general "they" this is too often true. And nothing would be gained; only something would be lost, and you would have lost the most.

I say you got lucky; it's yours. Use a lot of it selflessly, but use it! Maybe ask a lawyer for advice -- don't be reckless, of course. But *if* the only danger here is your conscience being infected with the general pathology of society, rectify that. Fuck survivor guilt, you know? Good for everyone who gets as far away from the prison system (in the sense of System of a Down) as they can. Don't leave us in the ditch, but never get dragged back in either.
2D Syntax
I love attempts at visual programming. As a kid I used to pore over the then-fashionable ads for CASE (Computer-Aided Software Engineering) tools in DDJ and elsewhere and imagined them to do far more than they actually did... Also attempts like Amiga Vision [1]<p>One of the software engineers I like to go a bit fanboy-ish about is Wouter van Oortmerssen, who I first got familiar with because of Amiga E, but who has a number of interesting language experiments [2], one of which includes a visual language named Aardappel [3] that also used to fascinate me.<p>There are a number of problems with these that have proven incredibly hard to solve (that this Racket example does tolerably well on, probably because it doesn&#x27;t go very far):<p>1. Reproduction. Note how the Amiga Vision example is presented as a video - there is not even a simple way of representing a program in screenshots, like what you see for the examples of Aardappel, which at least has a linear, 2D representation. That made Amiga Vision work as a tool, but totally fail as a language. This is even a problem for more conventional languages on the fringe, like APL, which uses extra symbols that most people won&#x27;t know how to type. The Racket example does much better in that it can be reproduced in normal text easily.<p>2. Communication. We talk (and write) about code all the time. Turns out it&#x27;s really hard to effectively communicate about code if you can&#x27;t read it out loud easily, or if drawing is necessary to communicate the concepts. Ironically, if you can&#x27;t read the code out easily, it becomes hard for people to visualise it too, even if the original representation is entirely visual. This example does ok in that respect - communicating a grid is on the easier end of the spectrum.<p>3. Tools. If it needs special tools for you to be effective, it&#x27;s a non-starter. This Racket example is right on the fringes of that. You could do it, but it might get tedious to draw without tooling (be it macros or more). On the other hand the &quot;tool&quot; you&#x27;d need to be effective is limited enough that you could probably implement it as macros for most decent editors.<p>I spent years experimenting with ways around these, and the &quot;best&quot; I achieved was a few principles to make it easier to design around those constraints:<p>A visual language needs a concise, readable textual representation. You need to be able to round-trip between the textual representation and whatever visual representation you prefer. This is a severe limitation - it&#x27;s easy to create a textual representation (I had prototypes serialising to XML; my excuse is it was at the height of the XML hype train; I&#x27;m glad I gave that up), but far harder to make one that is readable enough, as people need to be able to use it as a &quot;fallback&quot; when visual tools are unavailable, or in contexts where they don&#x27;t work (e.g.
imagine trying to read diffs on Github while your new language is fringe enough for Github to have no interest in writing custom code to visualise it; which also brings up the issue of ensuring the language can easily be diffed).<p>To do that in a way people will be willing to work with, I think you need to specify the language down to how comments &quot;attach&quot; to language constructs, because you&#x27;ll need to be able to &quot;round-trip&quot; comments between a visual and textual representation reliably.<p>It also needs to be transparent how the visual representation maps to the textual representation in all other aspects, so that you can pick one or the other and switch between the two reasonably seamlessly, and so that you are able to edit the code when you do not have access to the visual tool, without surprises. This makes storing additional information very tricky - e.g. allowing manual tweaks to visual layout that&#x27;d require lots of state in the textual representation that people can&#x27;t easily visualise.<p>Ideally, a visual tool like this will not be language specific (or programming specific) - one of the challenges we face with visual programming, or even languages like APL that use extra symbols, is that the communications aspect is hard if we cannot quickly outline a piece of code in an e-mail, for example.<p>While having a purely textual representation would help with that, it&#x27;s a crutch. To &quot;fix&quot; that, we need better ways of embedding augmented, not-purely-textual content in text without resorting to images. But that in itself is an incredibly hard problem, to the extent that e.g. vector graphics support in terminals was largely &quot;forgotten&quot; for many years before people started experimenting with it again, and it&#x27;s still an oddity that you can&#x27;t depend on being supported.<p>Note that the one successful example in visually augmenting programming languages over the last 20-30 years has been a success not by changing the languages, but by working within these constraints and partially extracting visual cues by incremental parsing: syntax highlighting.<p>I think that is a lesson for visual language experiments - even if you change or design a language with visual programming in mind, it needs to be sort-of like syntax highlighting, in that all the necessary semantic information is there even when tool support is stripped away. We can try to improve the tools, but then we need to lift the entire toolchain starting with basic terminal applications.<p>[1] <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=u7KIZQzYSls" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=u7KIZQzYSls</a><p>[2] <a href="http:&#x2F;&#x2F;strlen.com&#x2F;programming-languages&#x2F;" rel="nofollow">http:&#x2F;&#x2F;strlen.com&#x2F;programming-languages&#x2F;</a><p>[3] <a href="http:&#x2F;&#x2F;strlen.com&#x2F;aardappel-language&#x2F;" rel="nofollow">http:&#x2F;&#x2F;strlen.com&#x2F;aardappel-language&#x2F;</a>
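To make the round-trip principle concrete, here is a minimal sketch in TypeScript (invented names, no real tool implied) of the invariant I mean: the text is the source of truth, any visual editor is just a view over it, and parse(serialize(x)) must give back x, comments included.<p><pre><code>
&#x2F;&#x2F; Sketch of the round-trip property a visual language needs.
&#x2F;&#x2F; The textual form is canonical; a GUI is just another view of it.
interface Cell {
  op: string;        &#x2F;&#x2F; node label, what a visual editor would draw
  comment?: string;  &#x2F;&#x2F; comments must survive the round-trip too
  kids: Cell[];
}

&#x2F;&#x2F; One node per line; indentation encodes nesting, so plain diffs work.
function serialize(c: Cell, depth = 0): string {
  const pad = &quot;  &quot;.repeat(depth);
  const note = c.comment ? &quot; ;&quot; + c.comment : &quot;&quot;;
  const lines = [pad + c.op + note];
  for (const k of c.kids) lines.push(serialize(k, depth + 1));
  return lines.join(&quot;\n&quot;);
}

&#x2F;&#x2F; Exact inverse of serialize, so editing the text offline is always safe.
function parse(text: string): Cell {
  const stack: Cell[] = [];
  let root: Cell = { op: &quot;&quot;, kids: [] };
  for (const line of text.split(&quot;\n&quot;)) {
    const depth = (line.length - line.trimStart().length) &#x2F; 2;
    const [head, note] = line.trim().split(&quot; ;&quot;);
    const cell: Cell = { op: head, comment: note, kids: [] };
    if (depth === 0) root = cell;
    else stack[depth - 1].kids.push(cell);
    stack[depth] = cell;
  }
  return root;
}
</code></pre> If that invariant holds right down to comments, then diffs, e-mails and code review keep working even when the visual tool is absent.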
Thoughts on Insurance
I have come to believe that the insurance industry in the U.S. has become part of the cause of expensive medical care. I am not opposed to the idea of insurance, but here are some of the problems I see in how it works today.<p>I do not believe that insurance companies actually negotiate the price down. I have seen too often that the price goes up when I produce proof of insurance. While it may seem counter-intuitive, insurance companies have perverse incentives to push prices up. Even the most well-meaning insurance company will opt for certainty over potential savings. But I don&#x27;t think they are well-meaning to begin with, and I don&#x27;t think it is even structurally beneficial to them to reduce costs. For an example of where lower costs do benefit insurers, see the price of identical healthcare purchases outside the U.S. Products manufactured inside the U.S. magically become cheaper in Belgium and Nigeria.<p>The insurance industry has worked with the medical industry to obfuscate the cost and price of healthcare. This is in both industries&#x27; interests. If I pay $100 for a doctor to splint a broken finger, I&#x27;m happy I had insurance to make it so cheap. I&#x27;ll let my insurance company pay the other $1700 and I may never even look at the bill that shows a total cost of $1800. I won&#x27;t see that they charged $25 for the popsicle stick, $25 for the tape, $5 for the pen used to fill out the medical chart, $30 for the receptionist, $75 for the nurse, $300 for the PA, $200 for the xray, $200 for the radiologist, $50 for the ibuprofen, etc. If I actually had to pay that bill, I would argue it line by line. I would absolutely refuse to pay large portions of the bill. $5 pens that they reuse for the next patient are obviously an unethical ripoff. But I don&#x27;t have to pay those line items, so I don&#x27;t fight it. I just have to pay a $1500 premium every month. And I&#x27;d be crazy to refuse to pay that.<p>We all know that insurance premiums are high because healthcare costs so much. That, we are told, is because of lawsuits and freeloaders. This is how price obfuscation shifts the battle. We don&#x27;t question anybody&#x27;s integrity over the $5 pen because the discussion is all about lawsuits and freeloaders. If I had to pay the bill myself, I&#x27;d ask why I got billed for 4 x-rays, but only received 3. That 4th one that didn&#x27;t turn out because the radiologist put the cartridge in backwards? I&#x27;m not paying for that. I&#x27;d ask why the receptionist seems to be making $500&#x2F;hr. I&#x27;d ask what the lab fee was for. I&#x27;d comb my bill and question everything. $1800 is a lot of unplanned expense for me. $100? Not so much.<p>If I had to pay my medical bill in full then file for reimbursement, I&#x27;d fire a company with 90-day turnarounds and switch to the insurance company with 5-day turnarounds. And the next time I got stung by a bee, I&#x27;d maybe not run to the emergency room. Yes, that could lead to bad outcomes. But the current practice has its own bad outcomes.<p>If I had to keep fighting an insurance company for timely reimbursement and started noticing that I pay $1500x12 every year to cover 80% of my ~$6000 medical expenses, I&#x27;d start getting all kinds of ideas. I would think about putting $1000 a month into an HSA and look for a $40000 deductible policy.
I&#x27;d think about pooling my HSA with 100 of my closest friends to start a healthcare co-op where I could borrow up to 1x my balance for 1 year at low interest to pay for big medical bills and then get a $60k deductible policy instead. Then, in five or six years, when I&#x27;ve built up $30k, I&#x27;d drop my monthly payment into my HSA to $500, and just let the balance earn interest. Then, when the unthinkable happens, I drain the account, borrow another $30k, file a claim on everything else and spend the next year paying back the $30k and lining up contingencies for any event that might happen before I can build my HSA back up. I&#x27;d start daydreaming about pooling my $12k&#x2F;year with others to build and staff a very small clinic that could take xrays and treat bee stings. Just 100 founders paying the same $1000&#x2F;month could generate $1.2M per year. That could secure a 15-year $3M bond to build the clinic. It would go a long way toward supplies and staffing, too. One could even imagine that members get free services (ignoring the $1000&#x2F;month) while non-members could pay $1800 for a broken finger.<p>In other words, I&#x27;d figure out how to rely as little as possible on insurance companies. And the money I now spend to fund their paperwork and profit would stay in my account.
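For what it&#x27;s worth, the arithmetic in that daydream holds up. A throwaway TypeScript check with the numbers from this comment (toy numbers, nothing more):<p><pre><code>
&#x2F;&#x2F; Back-of-the-envelope check of the co-op numbers above.
const members = 100;
const monthlyContribution = 1000;
const poolPerYear = members * monthlyContribution * 12; &#x2F;&#x2F; $1,200,000

&#x2F;&#x2F; What one member pays and gets today, per the figures in this comment.
const premiumsPerYear = 1500 * 12;          &#x2F;&#x2F; $18,000 in premiums
const typicalExpenses = 6000;               &#x2F;&#x2F; annual medical spend
const insurerPays = typicalExpenses * 0.8;  &#x2F;&#x2F; ~$4,800 covered

console.log(poolPerYear, premiumsPerYear, insurerPays);
&#x2F;&#x2F; 1200000 18000 4800
</code></pre> The gap between those last two numbers is the whole argument.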
Ask HN: How far should parents control their children?
People write books in attempts to answer these questions and it&#x27;s pretty hard to provide detailed answers here, but I&#x27;ll chime in a bit...<p>All children are people, and all people are different in how they learn and react, and what intrigues and inspires them.<p>My advice is to first understand your child is a person and to treat them like one, as opposed to a piece of personal property or a &quot;dog&quot; you can command to do tricks.<p>Study: Yes, you have to &quot;make them do things&quot; they don&#x27;t want to do, like take a bath, clean their room, do their homework, and help with household chores, because they&#x27;d rather not.<p>When our youngest was 17 he was failing his Senior year in High School and I pulled him aside and explained that if he had visions of sitting in his bedroom playing video games all day after he got out of school, what he&#x27;d find was me putting all his stuff on the curb and turning his bedroom into my home office, and he could go find someone else to house and feed his lazy ass. He bucked up, graduated, and is now a manager at the company he works for.<p>Religion: We live in the &quot;Bible Belt&quot; where a lot of what is preached comes from &quot;Ministers&quot; who are more interested in what&#x27;s in the &quot;collection plate&quot; than teaching the lessons Jesus laid out. I stopped attending &quot;Church&quot; when I was a teen because I realized this, but since most all of my children&#x27;s friends went to church they wanted to go as well, and my wife and I let them and gave them rides so they could attend, and then we&#x27;d discuss what they were told and taught afterwards. It wasn&#x27;t long before they started seeing contradictions between the sermons they heard and what Jesus said and taught, and my wife and I would go over these with them.<p>A good example is &quot;gay marriage&quot;. Jesus said &quot;there is no marriage in heaven&quot; and to &quot;love one another&quot;, and they all grew up around a few gay couples that are close friends of ours and knew they were loving and kind and caring people who were hurt by the huge anti-gay stance of the evangelicals we&#x27;re surrounded by. By the end of High School they didn&#x27;t want to go to church anymore because of the hate and vitriol they saw promoted there, but they also studied the Bible on their own to better understand the teachings of Christ and to defend themselves and their beliefs, and by graduation they knew and understood scripture better than many of the adults that were &quot;teaching&quot; it to them.<p>Life Lessons: We gave each of our kids their first car and told them they&#x27;d be buying their 2nd one themselves. They all hated their first car because it wasn&#x27;t a brand new sporty car their friends would be envious of, and they all smashed that first car up while being stupid and then whined when we didn&#x27;t buy them another. And then they all worked hard, bought their own 2nd car, and not one of them has crashed their cars up since, or complained their car wasn&#x27;t &quot;cool&quot; enough to be seen in.<p>Internet: I&#x27;ve been making web sites and web software since the mid-1990s so they all grew up with it. I explained that there&#x27;s a lot of crap on the internet and they&#x27;d be wise not to waste their time on it. And I told them I could, and would, track the sites they visited and pull the plug if they got stupid on it.
And I explained that even if I was not tracking them, the access providers they use and sites they visit are tracking them, so they needed to be aware that others might find out what they do there and expose them, and they&#x27;d better keep that in mind because they&#x27;d have to deal with that all on their own. But I never put any restrictions on what they could find there because they&#x27;d have none as adults and could find it somewhere else anyway.<p>Sex, Drugs, and Rock &amp; Roll: Yeah, you have to deal with that too. Our approach was to teach them that it&#x27;s fine to party but don&#x27;t make a career out of it. Don&#x27;t aspire to be the most drunk or do the most drugs at the party, and know that doing hard drugs is always a stupid choice. All our children grew up seeing friends, family, and acquaintances do stupid shit to get high and when high, and the consequences that ensued as a result, and we didn&#x27;t shield them from that. We were very up front about it: &quot;Uncle Joe is in jail again because he&#x27;s a dumbass who&#x27;s spent his life chasing drugs and that&#x27;s why we&#x27;re not bailing him out, and we won&#x27;t bail you out either if that&#x27;s the path you take, so don&#x27;t expect or ask us to.&quot;<p>None of them wanted to be like &quot;Uncle Joe&quot; and none of them went that route.<p>The short of it is, raise your kids to be adults from the get go.
Beyond Bitcoin: Truly Decentralized Banking
This is really confused. Nothing has intrinsic value, but certain things have objectively-measurable properties that make them more suitable to be used as money.<p>Scarcity, fungibility, divisibility and portability are some of these properties. Mass adoption is another (because money effectively has &quot;network effects&quot;), but if a society widely uses a particular form of money when a better substitute is available, they will very likely switch at some point (see: dollarization).<p>Bitcoin and gold are both superior to fiat in terms of scarcity. (The author is correct that the gold supply grew massively at certain points in history, but there&#x27;s very little chance of this happening again any time soon. That said, there&#x27;s still a lot of Au in the earth&#x27;s crust left to dig up, and with modern mining techniques the global stocks grow at about 1.5% a year.) Anyone with a lot of usd (or rmb or euros or whatever) puts it somewhere less prone to inflation (stocks, bonds, real estate). This means there&#x27;s always a lot of fiat currency looking for a home - though people at the bottom don&#x27;t see this over-abundance of money, it is visible in things like the crazy prices for premium real estate and the crazy size of tech acquisitions (everyone else is investing their money in FB&#x2F;AMZN&#x2F;GOOG&#x2F;AAPL, so those guys have huge cash hoards with few opportunities to put them to use - so they buy many startups).<p>On top of that, bitcoin is highly divisible and very portable (it&#x27;s easier to send btc around the world than gold or usd). A couple of years ago I saw nothing that could stop it long term. I&#x27;ve since become convinced that its weird supply formula makes the price inherently, permanently volatile. Long term, the finite supply limit will incentivise everyone to hoard, causing the price to skyrocket - but if everyone hoards, the amount traded will become so low that even a small amount leaving the hoards (as hoarders suddenly spend some of their overpriced btc) causes a big price drop. This bouncing up and down incentivises people to stay away.<p>Gold, on the other hand, has ties to economic reality - if the price increases gold miners can mine more, and vice versa. This natural adjustment in supply dampens price swings. (A toy comparison of the two supply schedules follows at the end of this comment.) Unfortunately gold needs to be kept in a vault and can&#x27;t be transferred easily (that said, for non-technical users, securing one&#x27;s bitcoins appears equally difficult).<p>If someone made a cryptocurrency with a gold-like supply rate, that would have real world domination potential. Maybe some startup founder with the connections to pull this off is reading this - if so, take a look here for more theory: <a href="https:&#x2F;&#x2F;keithweinereconomics.com&#x2F;2013&#x2F;12&#x2F;28&#x2F;the-theory-of-interest-and-prices-in-paper-currency&#x2F;" rel="nofollow">https:&#x2F;&#x2F;keithweinereconomics.com&#x2F;2013&#x2F;12&#x2F;28&#x2F;the-theory-of-in...</a> (his argument is more complicated than those you may have encountered in libertarian&#x2F;goldbug circles. It&#x27;s not simply that the scarcity of gold means it will be more valuable long-term - it&#x27;s the arbitrages that modulate the supply and thereby make it suitable as a store of value).<p>Writing this comment has gotten me thinking, so here are a few other thoughts on how this will pan out.
I&#x27;ve read many different thinkers who write about such topics, and spent much time evaluating and integrating, so hopefully some stray reader will find this of value.<p>The other thing I&#x27;ve come to realise is that economics is only one aspect of human society. It has a kind of permanent, fixed reality, despite being based on human decisions and actions, because the incentives humans face are universal and eternal (everyone needs to save, everyone needs to trade; even in a socialist dictatorship semi-regular economic activity continues on the black market). But the ideas that spread in a culture also count. If most people believe that the usd is money, and bitcoin and gold are for weirdos, then the usd can remain money for a long, long time. I thought that over time people would see the bitcoin price constantly rising (though unstably), and realise that they needed to buy in. (First the risk-takers and early adopters, then the more adventurous asset managers investing 0.5% of their funds, then the early majority, and so on.) But with the recent price spike, I saw that though many people suddenly wanted to pile in, they had no understanding of, and little interest in, bitcoin&#x27;s fundamentals. They buy now and hope to sell before the next crash. Maybe some entrants will become long-term holders and help drive the price up - but not many. The btc market cap is around $50bn - tiny in the world of finance. If&#x2F;when it approaches $1tn, things will get interesting. Our current world economic order (BW2) is a hacky fix from the 70s on an agreement made in the 40s. It suited the great powers of the time and was based on certain assumptions. Now the assumptions are proving wrong and the balance of power is shifting.<p>I don&#x27;t believe in the mass automation&#x2F;sustainable energy&#x2F;basic income future most of the tech crowd is looking forward to. I foresee two likely futures - one, a return to 19th century great power politics, as Putin hopes for - definitely not a utopia, but hopefully a world where at least some countries realise that freedom is the path to prosperity and advancement. The other, a darker world, where dictatorships (both in socialistic and nationalistic flavours) become dominant on every continent. In both scenarios the usd is likely to lose its status as the global reserve currency, but in neither scenario is bitcoin guaranteed to be the replacement.
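As promised above, a toy comparison of the two supply schedules (made-up units and a crude halving stand-in; this is the shape of the argument, not real emission data):<p><pre><code>
&#x2F;&#x2F; Toy shapes of two money-supply schedules (not real emission data).
let capped = 0;      &#x2F;&#x2F; fixed-cap coin: issuance decays toward a hard limit
let reward = 50;
let goldLike = 100;  &#x2F;&#x2F; gold-like stock: grows ~1.5% a year
for (let year = 1; year &lt;= 40; year++) {
  if (year % 4 === 0) reward = reward &#x2F; 2; &#x2F;&#x2F; crude stand-in for halvings
  capped += reward;
  goldLike *= 1.015;
}
&#x2F;&#x2F; capped flattens out, which rewards hoarding; goldLike keeps growing,
&#x2F;&#x2F; because supply can respond to price, which is what dampens the swings.
console.log(Math.round(capped), Math.round(goldLike));
</code></pre>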
Why We Chose Typescript
Gotta comment ... I dove into TypeScript about a year ago and dropped it. I saw the value, but because of a large number of libraries and custom components of my own, switching purely to TypeScript wasn&#x27;t easy and I was in a hurry.<p>Fast forward several months and I picked it up again. I&#x27;ve now been writing <i>everything</i> that I would have done in JS in TypeScript and have built several applications using both Angular and React entirely in TypeScript. I&#x27;ve also sold my teammates on switching to the language.<p>When I was (stuck) writing JavaScript, the frustration factor was high for me. I&#x27;d get nebulous errors[0], hunt around the line it referenced, swear a little, and trace the code back to the cause. I would wildly estimate that one out of ten attempts at blindly running my code would succeed[1]. Even my unit tests, which had a higher degree of success since they were testing much less, still had a <i>much</i> higher fall-over[2] rate than I get in typed languages that I enjoy. The addition of types, which adds a little overhead, flipped that over. I am <i>still surprised</i> every time I refresh a page that uses code I&#x27;m modifying and <i>it loads</i>. The reduction in time spent debugging (and swearing) makes me enjoy the language more every day that I use it. It&#x27;s even left me longing, when I am writing code in other languages (mostly Java&#x2F;C# these days), for features I have come to really enjoy (union types, intersection types and, to a lesser extent, the duck-typing nature of the language[3]).<p>Since crapping on <i>any</i> language, or feature&#x2F;lack of feature of a language, tends to become a religious war fought with verbal violence, allow me to admit a few points: Traditionally, I avoided JavaScript and jobs related to it. Personally, I hate the language. This means I&#x27;ve spent considerably less time researching all of the best practices&#x2F;techniques for surviving those cases where I <i>have</i> to write JavaScript. I started in C and Pascal and prior to a few months ago spent 99% of my time in typed languages. I am an advocate for unit testing[4], but I find test-driven development requires me to work backward and it&#x27;s less productive for me. I&#x27;d imagine that if I went <i>all in</i> with TDD, I might see fewer of these problems, but many of the best practices for JS development are also best practices in the languages I am more proficient in, and despite following them, JS&#x27;s design led to them being less effective at reducing bug frequency. Yes, I could just be yelling &#x27;get off my lawn&#x27; because I&#x27;m unwilling to change[5]. But I&#x27;ve also worked closely with highly-skilled JS developers who could rapidly produce incredible things as a result of its flexible, dynamic typing. Incredibly, though, one of those &#x27;huge JavaScript advocates&#x27; was the one who told me to give TypeScript a chance late last year. Though he would always fight me on the &quot;dynamic vs. static&quot; thing, his argument was that TypeScript&#x27;s type system was light-weight enough to keep out of his way while strict enough to lower the frequency of self-inflicted foot bullet-holes (paraphrased). Really, though, ...
two nulls, asinine boolean implicit conversions necessitating code like the double-bang, and the === &#x2F; !== operators[6] should be enough.<p>[0] Often depending on whatever framework I was using, but I&#x27;ve rarely found one that returns an error that results in a really <i>obvious</i> &#x27;oh, I know what I did to cause that&#x27;<p>[1] I like to check that a component renders visually appropriately and often do a quick check before I&#x27;ve written all of the required unit&#x2F;integration tests to make sure it&#x27;s rendering accurately (right data&#x2F;right result).<p>[2] As in, something fails badly enough to stop execution rather than just failing on an assert for an incorrect result.<p>[3] It&#x27;s a love-hate thing for me -- the result is being able to reduce boilerplate making mostly-compatible types interact, but the down-side is that the compiler giving a pass to &quot;A=B&quot; when &quot;A&quot; has at least the properties of &quot;B&quot; results in some subtle bugs that have already bitten me more than once.<p>[4] Though I don&#x27;t buy that TDD (either before writing the code or after) solves most of the issues of dynamic typing. I&#x27;ve had more than a few tests fail because of a type-related issue...in the test.<p>[5] Except that I <i>love</i> learning new languages and &#x27;keeping up&#x27; and have found that as I&#x27;ve aged, I can pick up new languages far more quickly than I could in my early 20s... [plug]RUST![&#x2F;plug] I&#x27;m also not terribly old, nor terribly sensitive about being called an oldster.<p>[6] I don&#x27;t recall who, but someone was once reading code out loud and said &quot;if action <i>fuckin&#x27; equals</i> &#x27;ADD&#x27; and payload.Length <i>doesn&#x27;t fuckin&#x27; equal</i> zero&quot;, adding in the <i>fuckin&#x27;</i> every time he encountered the &quot;really, really [not] equals&quot; operator. So that is how I mentally read those. I&#x27;ll never forgive him (sorry for the swears ... and doubly sorry if you end up reading code like this as a result).
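For anyone who hasn&#x27;t tried the language, here is a tiny taste of two of the features mentioned above; a sketch, nothing project-specific:<p><pre><code>
&#x2F;&#x2F; Union types: the compiler forces every alternative to be handled.
type LoadResult =
  | { kind: &quot;ok&quot;; data: string }
  | { kind: &quot;err&quot;; message: string };

function render(r: LoadResult): string {
  if (r.kind === &quot;ok&quot;) return r.data; &#x2F;&#x2F; narrowed to the ok branch here
  return &quot;failed: &quot; + r.message;      &#x2F;&#x2F; narrowed to the err branch here
}

&#x2F;&#x2F; Strict equality plus explicit checks replace the double-bang dance,
&#x2F;&#x2F; and the type system tracks which of the &quot;two nulls&quot; you may be holding.
function hasText(s: string | null | undefined): boolean {
  return s !== null &amp;&amp; s !== undefined &amp;&amp; s !== &quot;&quot;;
}
</code></pre>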
What is a file? (2011)
&gt; The first suggestion for a way forward is perhaps the most obvious: it entails rethinking the role of metadata. [...] But metadata is also now becoming central to what users understand as a file, though they might not always think of tags, comments, playlist information and so forth as metadata. For what a file is is now often bound up with the things added to it, not only by the originating user but by others too.<p>&gt; Consider for example, behaviours reported by [5]. In their study of teenagers and their virtual possessions, participants reported that part of the value of photos posted on Facebook was the metadata associated with them: comments and ‘likes’ were so pertinent that they were sometimes printed out alongside photos and pasted onto bedroom walls as a collection. This materialisation of the digital is indicative of a difficulty associated with the current technological landscape.<p>&gt; It is not clear how one would digitally export a Facebook photo in order to view it in this way with another computer program or application, and this remains so despite recent innovations in the Facebook service. Yet it is not surprising that users should want to treat these entities in the way they treat a file. If they can upload their photos to Facebook, and given that they do so the photos are file-like objects, why can they not download them again, while retaining the value they have accrued, but still with the benefits of file-like properties? Although it is now easier for users to export their data from Facebook, these exports, once represented simply as ‘a file’ on a hard disk, lose their potency.<p>Certainly what we don&#x27;t need is more metadata attributes on files.<p>IMO one should either<p>1. Create a simple file format that bundles the contents of a post. For example a zip file with the media, comments, likes and such, or<p>2. Have the post (for example as JSON) and media in separate files and store the references to the other files in the post, sort of like in HTML. Perhaps have the computer system be able to extract such references and let you easily operate on files that &quot;belong together&quot;.<p>(I&#x27;ve sketched what option 1 could look like at the end of this comment.)<p>They even mentioned databases and relationships earlier, and grouping files together in different ways.<p>---<p>&gt; This bundle, this new ‘file’ type, is not merely a complex data type; the important thing from the users’ point of view is that it is a mirror of the social life that the file enables.<p>I have no idea what they are trying to say here.<p>If you create a file format like I said that contains all of the data that made up the original post, then you can represent that at a later point and you can choose to render it just like Facebook would. Surely that&#x27;s exactly what the users want?<p>---<p>&gt; However, this immediately raises complexities. For instance, images posted to Facebook might be copied not only by the person who posted them, but also by others. In these circumstances, should these others be able to copy the metadata, the tags as well as the thing-itself? If so, what of the rights of the owner or, if you prefer, the maker of the initial file? When people copy an originating file, would they be creating a new file or would their new entity be a version of the original one? Is there an order of precedence that we are proposing and ought this to be reflected in the concept of a file that might apply?<p>Bits don&#x27;t have color.
<a href="http:&#x2F;&#x2F;ansuz.sooke.bc.ca&#x2F;entry&#x2F;23" rel="nofollow">http:&#x2F;&#x2F;ansuz.sooke.bc.ca&#x2F;entry&#x2F;23</a><p>Trying to accurately track origin of a post is going to lead to nothing but trouble.<p>Don&#x27;t try to build a technical solution for something that is not a technical problem. If someone breaks copyright laws you take them to court and sort it out there.<p>&gt;It seems to us that there is a distinction that ought to be made between things that are put on the web, which the originator wants to have file-like properties (even as that thing develops a social life once on the web), and those things that are posted that the user does not want to have file-like properties. The properties we are thinking of have to do with questions like whether ‘ making a copy’ means making a copy, a version of the thing itself, or having and owning (as it were) the originating thing itself and all that has ensued in that thing’s social life.<p>WHAT??? Just, WHATT??? Are they purposely trying to ruin the internet? It&#x27;s not up to one person to decide in which manners others copy it or not. Once again, if someone is doing something illegal, take them to court. And if they&#x27;re not doing something illegal, don&#x27;t try and restrict what other people are trying to this.<p>Fuck this. No, really. I&#x27;m done reading that paper.
What is a file? (2011)
When I first started using computers, I did so on Microsoft Windows 95. The first applications that I used were Netscape Navigator and MS Paint, as well as a few games. Being a child when I was introduced to computers, I did not have any notion about what was going on inside of the computer at all. All I knew was that there was a screen, a mouse and a keyboard, and that I could click on things on the screen and I could type on the keyboard.<p>The first time I was confronted with the notion of a file was certainly when I had painted something in MS Paint and I had clicked the save button. I think I had been told to not click &quot;My Computer&quot;, which makes sense -- you don&#x27;t want a child to accidentally move, delete or rename files on your computer. Hence I had no notion of the file system. All that was known to me was the desktop, the start menu, and a select few programs accessible through either icons on the desktop or in the start menu.<p>I had played a couple of games on the computer, and in those I could save the game and then the next time I could resume the game someplace near to where I had last been -- or rather, I could have my father help me resume the game. So when I managed to save the painting I sort of expected that it would just show up on screen the next time I started MS Paint. When it didn&#x27;t I was befuddled for a moment, but I just concluded that I didn&#x27;t understand what had happened and didn&#x27;t give much more thought to it. I think this is pretty typical of how most children treat situations that confuse them.<p>This is user level 0. You are able to move the cursor and to type a little bit on the keyboard and to run some specific programs, but that&#x27;s it.<p>During the next few years I learned how to work with files in MS Paint and other specific applications.<p>However, not all files are equal. If I try to open a file that was made in one application with another application it will often either result in an error message or in garbage on the screen.<p>This TIED my notion of a <i>file</i> to <i>specific programs, and to the content that is shown on screen</i> for the LONGEST time. It is a bit difficult to explain what I mean here, but I think that to the majority of the population as a whole, this is what a file is to them. They view a file as an icon that you can open in a SPECIFIC program. And they call that file a &quot;&lt;name of program&gt; file&quot;, or they call it by the extension, but they have no idea, or they have the wrong idea, about what the contents of the file are. If you give a regular Windows user two files which both have, say, a .dat extension (a commonly used generic file extension for data), then they will think that those two files necessarily must be of the same kind &quot;somehow&quot; and that they are to be opened, both of them, with some specific, unknown program. This is bad and harmful in my opinion.<p>Likewise, I was quite confused for the longest time about &quot;My Documents&quot; and &quot;My Pictures&quot;. I was confused by why things were being put in &quot;My Documents&quot; by default when those things were not things that I considered to be &quot;documents&quot;.<p>Furthermore it was quite mystical to me for a long time how &quot;My Computer&quot; could be on the desktop at the same time as my desktop was a folder within &quot;My Computer&quot; itself. This however is not a <i>huge</i> deal.
Just yet another thing that didn&#x27;t make sense to me while I was trapped in the graphical representation of the system.<p>The paper argues that a different abstraction should be used than the hierarchical file system. I agree to some extent, but not for the same reason perhaps.<p>I think that the desktop metaphor is inherently harmful as a first introduction to computing. The desktop is fine ONCE you&#x27;ve understood how the system works from a bit of a different point of view (though perhaps NOT <i>necessarily</i> a <i>lower</i> level as such), but until then the desktop metaphor will trick you into believing very many things that are simply not true, and which are going to come back and bite you in ways like those mentioned in the paper.<p>As for doing away with the hierarchical file system, I agree. Throw it out. I enjoy Unix, but I don&#x27;t hold the hierarchical file system particularly dear. In fact, I think Unix has some very powerful ideas, and it sucks a lot less than Windows, but Unix is just a local optimum and nothing more.<p>---<p>Finally, on a bit of a different note, I&#x27;d like to state how I tend to think of files now.<p>To me, a file is data. Often that data will have been structured in a particular way, and sometimes it will have been structured in no particular way. A valid python program has a structure that allows the python interpreter to execute it. A text file that someone wrote using a plain text editor has a certain encoding but no structure beyond that. A file that was created by putting random data into it will not have any structure. A file that was corrupted will have some data that does not conform to the intended structure.<p>I am aware that the order of the bytes, and their property of being one single, continuous unit, in persistent storage might not match the order that is presented to applications by the operating system, but in my use of computers this has not yet mattered so I choose to ignore that fact. So I too am living a bit of a &quot;lie&quot; with regards to how I think about files, I admit that.<p>Some files are not really files, but they are convenient because they offer you a simple interface to some useful functionality. I am speaking of course of &#x2F;dev&#x2F;urandom and friends.<p>Regular files are representations of &quot;something&quot;. Like a text, or a photo, or anything else that you can create a meaningful representation of. You can load the data into a program as long as that program has been programmed to understand the structure that is used in the file that stores your representation of your data. If the program that you would like to use does not understand that file format, you convert the file to some other format if an acceptable conversion is possible with an existing piece of software. If none exists, you implement it yourself, either into the program that you are using, because it&#x27;s open source as is the vast majority of the rest of your software, or as a standalone program that does just conversion; in both cases you are able to do this because the file format is sufficiently simple, or at least the subset of the format that you need is, and the format is open. If you are using proprietary software (including using multiple pieces of proprietary software) in combination with proprietary file formats, then either<p>1. The proprietary piece(s) of software is&#x2F;are able to do <i>absolutely everything</i> that you need to do (or at least, you think so), or<p>2.
The data is not sufficiently important to you to warrant building better tools yourself, or<p>3. It&#x27;s so difficult to build these tools that you don&#x27;t have an alternative. For example, the data might come from very complicated equipment that you couldn&#x27;t build yourself even if given a million years to do so, and the data is so complex that you aren&#x27;t able to understand it from inspection nor from reverse engineering, or<p>4. The requirements changed (see also point one about thinking that the software did everything that you needed it to do), or<p>5. You&#x27;ve done fucked up. You didn&#x27;t do your research and now you&#x27;re stuck with this. (See also point one about thinking that the software did everything that you needed it to do.)<p>Anyway, once you&#x27;ve loaded your data into a piece of software that is able to read it, something happens, and it brings us back to what we were talking about.<p>---<p>Once you&#x27;ve loaded your data into a piece of software that is able to read it, or &quot;opened a file&quot; as is often the way that this is achieved, I would argue that you are not operating on a file. You are working on the data that was in the file. This is a very important distinction.<p>When you hit save, you are not saving the file. You are requesting that parts of the application state be written to persistent storage. Again, a very important distinction.<p>If people knew this and understood this, computers would make more sense to them. They are right in the paper that the mismatch between what developers, computer scientists and others think of as files, and what everyone else takes them to be, is the cause of a lot of the kinds of problems that people have when they are dealing with files.<p>But where did the users gain their misinformed ideas about files from? I&#x27;ve said it already: I think the desktop metaphor is to blame. Again, it&#x27;s an ok metaphor <i>as long as it&#x27;s not the only metaphor</i>.
What’s in a Continuation (2016)
Continuation based list flattening in TXR Lisp:<p><pre><code> This is the TXR Lisp interactive listener of TXR 181. Quit with :quit or Ctrl-D on empty line. Ctrl-X ? for cheatsheet. 1&gt; (defun yflatten (obj) (labels ((flatten-rec (obj) (cond ((null obj)) ((atom obj) (yield-from yflatten obj)) (t (flatten-rec (car obj)) (flatten-rec (cdr obj)))))) (flatten-rec obj) nil)) yflatten 2&gt; (yflatten &#x27;(1 (2 3 (4) . 5) 6)) #S(sys:yld-item val 1 cont #&lt;intrinsic fun: 1 param&gt;) </code></pre> Oops; we need the obtain macro to work with a function which yields:<p><pre><code> 3&gt; (obtain (yflatten &#x27;(1 (2 3 (4) . 5) 6))) #&lt;interpreted fun: lambda (: resume-val)&gt; 4&gt; [*3] 1 5&gt; [*3] 2 6&gt; [*3] 3 7&gt; [*3] 4 8&gt; [*3] 5 9&gt; [*3] 6 10&gt; [*3] nil 11&gt; [*3] nil </code></pre> No continuation passing here: a real stack where we can have unwind-protect.<p>Let&#x27;s do it again --- but this time let&#x27;s trace the function. For this we break out flatten-rec into a top-level function we can trace.<p><pre><code> (defun flatten-rec (obj) (cond ((null obj)) ((atom obj) (yield-from yflatten obj)) (t (flatten-rec (car obj)) (flatten-rec (cdr obj))))) (defun yflatten (obj) (flatten-rec obj) nil) </code></pre> Now:<p><pre><code> 1&gt; (trace yflatten flatten-rec) nil 2&gt; (obtain (yflatten &#x27;(1 (2 3 (4) . 5) 6))) #&lt;interpreted fun: lambda (: resume-val)&gt; 3&gt; [*2] (yflatten ((1 (2 3 (4) . 5) 6)) (flatten-rec ((1 (2 3 (4) . 5) 6)) (flatten-rec (1) #S(sys:yld-item val 1 cont #&lt;intrinsic fun: 1 param&gt;)) 1 4&gt; [*2] nil) (flatten-rec (((2 3 (4) . 5) 6)) (flatten-rec ((2 3 (4) . 5)) (flatten-rec (2) 2 5&gt; [*2] nil) (flatten-rec ((3 (4) . 5)) (flatten-rec (3) 3 6&gt; [*2] nil) (flatten-rec (((4) . 5)) (flatten-rec ((4)) (flatten-rec (4) 4 7&gt; [*2] nil) (flatten-rec (nil) t) t) (flatten-rec (5) 5 8&gt; [*2] nil) nil) nil) nil) (flatten-rec ((6)) (flatten-rec (6) 6 9&gt; [*2] nil) (flatten-rec (nil) t) t) t) t) nil </code></pre> At this point, the flattening is done. What if we keep calling it?<p><pre><code> 10&gt; [*2] nil) (flatten-rec (nil) t) t) t) t) nil 11&gt; [*2] nil) (flatten-rec (nil) t) t) t) t) nil </code></pre> It&#x27;s just sputtering now, repeating the slice of execution spanning from several nestings deep into flatten-rec, up to the delimiting prompt, because no new continuation is captured.
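For readers who don&#x27;t know TXR Lisp, here is a rough analogy in TypeScript using generator functions. Only an analogy: a generator suspends just its own chain of frames and is one-shot, so it cannot reproduce the &quot;sputtering&quot; replay of a captured continuation shown above.<p><pre><code>
&#x2F;&#x2F; Yield-based list flattening, analogous to yflatten above.
&#x2F;&#x2F; Nested arrays stand in for conses; dotted pairs have no equivalent.
function* flatten(obj: unknown): Generator&lt;unknown&gt; {
  if (Array.isArray(obj)) {
    for (const x of obj) yield* flatten(x); &#x2F;&#x2F; delegate into sublists
  } else if (obj !== null) {
    yield obj; &#x2F;&#x2F; an atom: hand it out and suspend right here
  }
}

const it = flatten([1, [2, 3, [4], 5], 6]);
console.log(it.next().value); &#x2F;&#x2F; 1
console.log(it.next().value); &#x2F;&#x2F; 2, and so on; after 6 it is done for good
</code></pre>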
Antisocial Coding: My Year at GitHub
&gt; &quot;I was well aware of GitHub&#x27;s very problematic past, from its promotion of meritocracy in place of a management system&quot;<p>I don&#x27;t see what&#x27;s wrong with meritocracy as opposed to an unspecified &quot;management system&quot;.<p>--<p>&gt; Feature releases such as these are frequently promoted on the GitHub blog, and the product manager on my team encouraged me to write a post announcing what I had shipped. Since it was so important to me personally, I wrote an impassioned piece talking about how this feature closed a security gap that had directly affected and provided an abuse vector against me. The post also served as an announcement to the world of the new team and the kinds of problems that we were charged with solving.<p>&gt;<p>&gt; The post was submitted for editorial review. It was decided that the tone of what I had written was too personal and didn&#x27;t reflect the voice of the company. The reviewer insisted that any mention of the abuse vector that this feature was closing be removed. In the midst of my discussions with the editorial team, trying to reach a compromise, a (male) engineer from another team completely rewrote the blog post and published it without talking to me.<p>GitHub was correct here. Feature announcements on the company blog should remain neutral in tone. In addition, the published feature announcement seems to mention the motivations behind the feature quite well:<p>&gt; Previously, anyone could automatically add other developers to their repositories without explicit permission. This model openly provided some users with opportunities to harass members of our community by inviting them to offensive or attention-seeking repositories.<p>So, it was rewritten to a neutral explanation of why this feature is a benefit to those who have had experiences like that of the author.<p>--<p>&gt; In addition to my development work, I had started weekly mentoring sessions with one of my teammates (a recent boot camp graduate) on Ruby and Rails fundamentals that she had not been exposed to in her program. When I talked to my manager about how she was progressing, I was told to stop the formal mentoring and allow this person to &quot;learn at her own pace, without any pressure from you.&quot; I was mystified: mentoring is an essential part of being a senior engineer, and this teammate seemed to be benefiting from it.<p>No quote was given from the manager telling the author to stop mentoring entirely, just that they wanted the _formal_ mentoring to stop (by formal, I assume it to mean it was initiated by the author, not the student). It&#x27;s possible the manager&#x27;s intention was as quoted: to see how this &quot;recent boot camp graduate&quot; was able to grow on her own.<p>--<p>&gt; I was very disappointed at this 101 mistake, and sadly opened an issue referencing the question.<p>This has a presumptuous tone to it.<p>--<p>&gt; The same day that I had this review, I got some devastating personal news. I have bipolar depression and was already in a bad place mentally, so I found myself feeling crushed and hopeless. In an attempt to deal with things I ended up taking a dangerously high dose of my anti-anxiety medication. When I reached out to my therapist for help, she recommended that I go to the emergency room. This was the start of an eight day ordeal involving involuntary commitment to a mental health facility. 
I shared this experience on Twitter and won&#x27;t rehash it here, but suffice it to say that I was severely traumatized by what happened to me in the hospital.<p>I read the &quot;experience on Twitter&quot;. It sounds to me like the author&#x27;s judgement was impaired (re: the author&#x27;s self-described overdose of anti-anxiety medication) and the hospital&#x2F;psychiatrist believed they were enough of a risk to themselves to warrant admitting them as a patient.<p>--<p>&gt; Thursday and Friday were not good days. I had a lot of trouble focusing. I was making simple mistakes and in some cases doing the wrong work. Friday afternoon I reached out to my boss to tell her that I was having trouble and that I didn&#x27;t know what to do. She suggested that I take medical leave, but I told her what my therapist had said about the importance of getting back to normal life. My manager was adamant that if I couldn&#x27;t work at full capacity that I had no choice but to take medical leave.<p>&gt; ...<p>&gt; The following week I had scheduled conferences to attend, so my focus on work was put on hold.<p>&gt; ...<p>&gt; After the meeting I messaged her and shared the more personal aspects of what I was going through, the trauma that I had experienced in the hospital and its lingering effects on my mental health. I was told that I should have accepted the offer of medical leave, and she said she felt like I was trying to manipulate her by sharing my feelings in the hopes of influencing the PIP. I was dismayed.<p>The author &quot;had a lot of trouble focusing&quot;. It was recommended multiple times that she take medical leave. Instead, the author&#x27;s &quot;focus on work was put on hold&quot; in order to attend conferences. I&#x27;m as suspicious of PIPs as the next person, but this seems pretty cut-and-dried to me as unsatisfactory work performance.<p>--<p>Regarding the firing on grounds of lack of empathetic communication, perhaps it was related to why this person was hired in the first place?<p>&gt; They wanted to offer me a job. They had just created a team called Community &amp; Safety, charged with making GitHub more safe for marginalized people and creating features for project owners to better manage their communities.<p>--<p>&gt; I think back on the lack of options I was given in response to my mental health situation and I see a complete lack of empathy.<p>As stated above, they provided the option of medical leave. The author chose not to take it.<p>&gt; In the past several months GitHub has fired at least three transgender engineers and many more cisgender women.<p>Why were they fired? How many others were fired in the past several months? What&#x27;s the ratio there?<p>&gt; In a return to its meritocratic roots, the company has decided to move forward with a merit-based stock option program despite criticism from employees who tried to point out its inherent unfairness.<p>Again, what&#x27;s unfair about rewarding based on merit? Should people who contribute relatively little compared to others get just as much reward? Why would the higher-performers bother to burden themselves in that case?<p>&gt; So yes, in looking back over my year at GitHub I see that there was, in fact, a real problem with empathy.<p>&gt;<p>&gt; But that problem wasn&#x27;t mine.<p>So in the end, the author seems to absolve themselves of responsibility. This shows a lack of growth; in my opinion you should _always_ try to fault yourself if you&#x27;re going to fault others.
Antisocial Coding: My Year at GitHub
I am not a Ruby developer so I don&#x27;t really know Coraline, but there were quite a few things in this blog post which I found disturbing:<p>&gt; In the midst of my discussions with the editorial team, trying to reach a compromise, a (male) engineer from another team completely rewrote the blog post and published it without talking to me.<p>The entire paragraph about her writing a blog post has nothing to do with gender, so why is it relevant to explicitly try to correlate it with gender in a situation where someone has done something which she wasn&#x27;t happy about? It reads as sexist to me as an outsider.<p>&gt; In addition to my development work, I had started weekly mentoring sessions with one of my teammates... When I talked to my manager about how she was progressing, I was told to stop the formal mentoring and allow this person to &quot;learn at her own pace, without any pressure from you.&quot;<p>Yes, mentoring is an essential part, but mentoring is not becoming someone&#x27;s teacher. That is very weird indeed. Here she says that she was additionally mentoring someone on a weekly basis, which implies that she was giving some sort of lessons to another team member outside of regular development work. This is not mentoring AFAIK. Mentoring is something you do along the way as you work with someone together. You offer help when help is needed, you give advice when advice is requested, you keep your ears open and chip in with help or information when you see someone is struggling, you lead and teach by example and not by lessons.<p>I can totally understand if the manager was thinking that this person should not feel the pressure of a more senior developer in such a situation. She thinks the developer was benefitting, but how does she even know that? Perhaps the more junior developer was too shy to speak up or felt intimidated by a senior employee telling him&#x2F;her what and how to do things.<p>&gt; Discussions were directed to comments on issues and pull requests.<p>I can see how this was in some situations difficult, but I (as someone who thinks of himself as very open minded) can also totally see why it might be beneficial:<p>- A discussion on an open issue or pull request provides automatic documentation - It can be found and read by anyone - It fosters more of a culture where anyone feels invited to contribute if they think they have relevant information - It doesn&#x27;t get lost among other conversations. In Slack or in real life you might talk about 3 issues at the same time and the history of one issue is totally swamped among the chat of all other issues. In GitHub each issue&#x2F;PR has its own discussion. - A discussion is often far more civilised when it is done in an issue or PR than in real life or in Slack, which is a huge benefit to establish a positive work culture IMHO.<p>Again, I can agree that there are also downsides to it, but as someone who prides themselves on being an open minded person I would have hoped that she would have looked a bit further than just the downsides and also tried to make an effort and understand why it was done like this at GitHub. Being open minded, looking at the positives and trying to foster a positive attitude is one of the most important skills in an employee in my opinion.<p>&gt; I asked my manager what had happened to upset her and was told that it was the feedback I provided on the gender question. I read back to her the body of the issue that I had opened and asked what I should have done differently.
She responded that she didn&#x27;t know, that my wording seemed direct but non-confrontational, but that I was forbidden to interact any further with the author of the survey.<p>I have really no foundation for this assumption, but it really reads as though her tone in approaching other co-workers was rather direct and harsh, which from my personal experience is never a good way to get people on your side. Especially with people who you have never met or rarely seen, I think it is crucial to put some thought into how certain things are phrased. At the end of the day you don&#x27;t know the other person, don&#x27;t know how they will read it and react to it. If you have a genuine interest in getting the best out of this together, rather than just trying to show off that you are more knowledgeable than someone else, then you would certainly phrase things more friendly.<p>Say what you want, but I have 10+ years of experience as well and I know that there was never a situation where I was not able to get my point across in a VERY obviously friendly tone.<p>&gt; Starting in December, in my weekly one-on-one meetings with my manager, we would review all of my written communication (issues, pull requests, code reviews, and Slack messages) to talk about how I could improve. It felt ridiculous but I went along with it, and did my best to address my manager&#x27;s feedback and concerns.<p>What attitude is that? A good manager will try to get the best out of every reportee and focus on personal strengths&#x2F;weaknesses. What is ridiculous about trying to improve communication skills? Honestly that is such a bad attitude, as if anyone would ever be such a communication god that there&#x27;s nothing to improve anymore. That sentence alone is very negative IMHO.<p>--<p>Obviously it is very sad and unfortunate that she had to go through mental health issues and the loss of her grandmother, but it really feels like her manager was trying to work with her to get through all these situations while Coraline was doing her own thing. I think it was very responsible of her manager to ask her to take medical leave. Imagine what would have happened if she hadn&#x27;t. If her manager had been like &quot;alright, just continue working then&quot;, then later she could have sued GitHub for not taking her mental health problems seriously and firing her based on that. Her manager was extremely open about her weaknesses, was putting effort into working through them with her and taking the right and responsible steps to get her back on track, but honestly there&#x27;s only so much you can do before you have to let someone go...
Hard Truths about Programming
From the outset: programming is hard. Anyone who says it is easy is missing the salient point of programming.<p>That salient point is that you as the programmer have to gain an understanding of the target field you are programming for. Programming is about solving problems for someone, not just writing code to a specification.<p>A lot of the comments, so far, are talking about programming languages, programming tools, programming frameworks.<p>These are NOT programming, these are the tools you use to program an adequate solution to your problem. Unless you are continually gaining knowledge in every field that you are programming for, you remain a novice. This does not mean that you have to be a subject matter expert, but it does mean that you have to gain enough knowledge in that field to be able to provide a solution for the subject matter experts (or others that will be dealing with your solution in that field).<p>Too often, I have found that &quot;so-called gun programmers&quot; have not only NOT understood the field they are providing a solution for, they dictate what that field is supposed to &quot;put up with&quot;.<p>I have spent nearly 40 years programming and I have come across many people who can churn out code much faster than I, but many of those cannot provide an adequate solution for the problem at hand. They cannot and have not been taught to think outside the &quot;box&quot;.<p>I have also worked with many who cannot churn out the code but what they do give you is (at the minimum) an adequate solution to the problem at hand. They effectively solve the problem as it actually is.<p>Many years ago now, I took over the maintenance of a small system that was being used by 6 or so people. By the time I was finished, it was handling 60+ people simultaneously in at least 7 different fields for a single telecommunications project. The reason I tell this is for the following:<p>The base system in use was that venerable old girl, MS-Access 2000. This was one of the constraints of the system.<p>The original programmer was a highly paid foreigner (British) who basically told the subject matter experts that this was how they would have to work.<p>Part of the process I undertook was to find out what and how they needed to work, and to rebuild the system to do it appropriately to the needs of the project. The satisfaction and appreciation were very encouraging.<p>Once this was seen, each of the other functional areas wanted to get on board. The application was then expanded to include these additional groups.<p>I got various complaints from various people about how hard some of their tasks were because MS-Access could not do what they needed. They had been told this by management. Finding out what they needed became the incentive for providing a solution. For one person, that meant I cut their after-hours report production down from 3 to 3.5 hours every night to about 3 minutes (including production of reports, formatting, emailing to international management, emailing to national management and local report printing). His wife and daughter were very happy to get him home at night at a reasonable hour.<p>Programming is understanding the problem at hand and providing a workable, efficient solution using the tools at hand. This means that you have to be able to understand the subject field with enough detail to provide that solution and make it easy for the end user to use.<p>Early in my working life, I had people who encouraged this mindset and multi-discipline learning.
Not everyone is capable of this, or even wants to do this. This then leads to the shmozzle that is the industry today; its continuation is the outworking of many of the major IT industry players, who are interested only in the easiest way to make a profit.<p>Programming is hard because you have to become a &quot;jack of all trades&quot; as well as an expert in your own. You have to be able to document all the assumptions that have controlled your development activity, the &quot;whys and wherefores&quot;, the paths taken and in some cases the paths not taken and why. Too many &quot;programmers&quot; think that documentation is not really needed. They are happy to show off their &quot;tricks&quot; to enhance their reputation as good or great programmers, but anything too hard to do is left for some other shmuck. Those of us who are tasked with expanding, maintaining, correcting or even rebuilding applications find that the lack of intelligible documentation just makes the entire process that much harder. It does then behoove us to provide said information.<p>I have extended family members still in their teens who are recognised as brilliant programmers, yet they fail to appreciate that knowing programming languages, frameworks, tools and toolsets is only the beginning of the process of becoming a good and maybe, in the future, a brilliant programmer. My job as an old programmer is to expand their multi-disciplinary education in as many ways as possible, so that they become much better than me.<p>Programming is hard and it is not what a lot of programmers believe it to be. It is much, much more.
Still locked out of my AWS account
I had two experiences getting locked out of accounts.<p>First was an old pre-Atlassian BitBucket one that just broke due to shenanigans with Atlassian accounts integration or SOMETHING. But big props to them. I complain and I get it fixed super quick. Just how it should be. Solid 4 out of 5 (5 is for guys who managed to not lock my account due to weird mergers; I&#x27;m even OK with the weird &quot;don&#x27;t use FRex, from now on login with your email: smtsmt@gmail.com&quot; I get on attempting &#x27;FRex&#x27; + password).<p>Second is Twitter. FUCK THEM so hard. Excuse my French but I have no other words for how idiotic this situation is; it&#x27;d make a saint mad.<p>I make a Twitter account using my secondary&#x2F;side gmail account that has been phone verified and 2FA&#x27;d using Google Auth for Android, verify my Twitter account by clicking the link in the email they sent there, connect it to my YouTube account that has been phone verified, send out the welcoming tweet they propose (something like &quot;Hello Twitter!&quot;; I think it was just a button press or a combo box to pick from but I might be wrong now) and I get banned for (exact wording may vary) &#x27;suspicious&#x2F;possibly automated activity&#x27; (mhmm... these huge botnets of phone verified 2FA gmail and YT accounts operating out of EU IPs... good job catching me Twitter).<p>I could of course act like a good peon and provide them a phone number and be graciously allowed by the Twitter Heavenly Emperors to use my 10-minute-old account. I write to their support via some super idiotically hidden panel of theirs on Twitter while still in my &#x27;locked&#x27; account and... I get an automated (!) email to my gmail (!!) telling me in steps how to just fuck off and enter my phone number (!!!) and to ask for help if I don&#x27;t have it unlocked after providing a phone number and waiting a few minutes (&#x27;fucktastic&#x27; was the word of the day that day, seriously, that made my day). I wrote another one, telling them to shove it (in kinder words and with zero profanity, but firmly making it clear I&#x27;d not provide my number on an account I did literally nothing on and want to use for YouTube connectivity to a verified channel, an account created on a 2FA and phone verified gmail account and verified by clicking the link in the email) and got another bot email and no reply since then (about a month ago). Total human replies: 0. Bot replies: 3+ (see below). And I&#x27;m the one running an automated operation in here.<p>And the cherry on top: I still get trending political BS tweets (because that&#x27;s what&#x27;s trending where I live every week) sent to my social tab in gmail and can&#x27;t disable it since my account is locked and throws me to a &#x27;provide a number&#x27; screen that only has &#x27;help&#x27; (blabbering about how I must be the one in the wrong here but if I provide a phone number...) and &#x27;log out&#x27; available. Good fucking riddance. I truly dodged a bullet by using my alternate gmail!<p>And all this on a service that has users that are outright bots, Nazis, terrorists (ISIS itself), hacktivists (you can argue some of it is a positive force for change or securing up but it&#x27;s still highly illegal and often done just for lulz) and the like.<p>Of course I&#x27;m not going to give in to this BS. 
I can sort of understand Google&#x2F;YouTube with their stuff, and it actually helped me once by requiring SMS verification when my kinda weak old password got cracked&#x2F;guessed, but what Twitter did is downright dumb extortion (&quot;gib phon number! gib, gib, don&#x27;t write support requests! 1st gib!&quot;) or them being idiots (what did I do that&#x27;s suspicious exactly... make a Twitter account in 2017?) and grossly neglecting their users (0 human replies, ever). Twitter fortunately would just be a nice-to-have for my side hobby of YTing and I have the privilege of saying &quot;fuck no&quot; to them for this and shitting on them on every occasion, but if this was my main gmail it&#x27;d do me in for weeks before I recovered all of my stuff.<p>There are horror stories on YT too, see Millbee (let&#x27;s plays) or I Hate Everything a.k.a. IHE (critique&#x2F;shitposting), banned overnight (Millbee for a nip snip in an anime game, despite all the nudity, GONE SEXUAL and borderline CP on YT going unpunished, and IHE for &#x27;community guidelines&#x27; for a video of smashing a film DVD that was later hand judged as not in violation), both returned after a social shitstorm but with no apology, explanation, nothing. I bet if I were a higher-up in some company and had a company account tweet what BS I went through it&#x27;d all suddenly be fixed in a jiffy with no need for my phone number. But what are internet and real-life rank-and-file tech nobodies supposed to do...?
Ask HN: How to prepare for an Engineering Manager interview?
I applaud your willingness to take on engineering management. Having made the move myself about 4 years ago, it&#x27;s an often unappreciated form of contributing, but I find it highly rewarding. Most new engineering managers seem to get promoted from within, since it allows them to leverage the respect they&#x27;ve earned inside the organization as an engineer. In no particular order, here are a few recommendations.<p>- Think back to the managers you&#x27;ve had and think about what they did well and what didn&#x27;t work as well. The more you can talk about and have an opinion about what makes a good manager, the more you can show your desire and ability to become a good manager. Before becoming a manager, I spent a year going through my career and really looking in depth at my previous managers so that when I started I could try to use that to be better myself. What I found is that this made me very good at managing down and I was very popular with my team, but managing up was somewhat of a problem. So when you look back at your history and your previous managers, be sure to look at not only how your managers interacted with you and your teammates but also how your managers interacted with their managers and the rest of the org. This can be harder to see, since you&#x27;re not a part of those interactions, but if you think back, you might remember at least some part of that.<p>- The most important part of being a manager, from my experience, is being able to deliver feedback. The more effortless and clear you are, the more easily you can provide frequent and minor course corrections as well as provide natural encouragement of desired behavior. It also makes firing&#x2F;disciplining employees easier. For one, if you&#x27;re giving constant feedback, those instances are much less frequent since employees can make those course corrections. But when it does become necessary, it&#x27;s not a surprise. Either there was some major incident or there&#x27;s been a long build-up where suggestions&#x2F;warnings have been repeatedly ignored. When I&#x27;ve interviewed other managers, I&#x27;ve looked for their ability to deliver feedback and, crucially, their ability to notice the things they should be giving feedback about. Many managers, especially new managers, just don&#x27;t have the awareness to constantly be looking for small course corrections or the feel for when an employee needs a bit of emotional buoying that can come with positive feedback. Hopefully in your mentorship and lead dev experience, you&#x27;ve developed some of that awareness, so the more you can talk about that, the more you&#x27;ll show you&#x27;re ready. As far as delivering feedback, there&#x27;s a lot of theory on the right way to do that, but it also requires practice. Read up on that and then find a friend who&#x27;s willing to help and role-play a few different feedback scenarios. You&#x27;ll quickly get better with practice.<p>- I&#x27;m going to expand your question beyond the interview because I think it will help you with your interview. Because if you get hired, that&#x27;s not the end of it. It&#x27;s not a case of showing that you can do the work, getting hired and then just organically becoming good at it. Once you get hired, that&#x27;s when you need to start diving into the theory behind the discipline of engineering management. If you can internalize that, then you&#x27;ll be able to convey to your potential employer at the interview your willingness to work to become better. 
Try to show your interviewer that you have a plan for learning how to be a great manager and the concrete steps you&#x27;ll take to achieve that goal. Because if you have zero experience and they know that going into the interview, that&#x27;s the most they can expect from you.<p>- Not every engineer actually enjoys management. Many engineers really like knowing all the little details and have a hard time stepping away from that level of knowledge and only knowing the larger building blocks. If you can talk about your excitement to work at that higher level and willingness to give up that lower level, you&#x27;ll at least convince them that you really want the job. Make sure that this is actually true, because it&#x27;s hard to fake. But if you can show that enthusiasm, you&#x27;ll subtly make a better impression.<p>- Lastly, try to stress areas of being a manager that you&#x27;re already good at. For instance, as a lead developer, you&#x27;ve probably interviewed a lot of engineers. If you&#x27;re great at hiring&#x2F;recruiting, it makes being a manager a lot easier. If you can show that you&#x27;re able to bring great engineers into their organization, that alone makes you a great hire. Another thing you&#x27;ve probably done is write 360 reviews for other engineers. If you can find one that you&#x27;re particularly proud of, remove all identifying information from it, print it out and bring it to your interview as an example of the kind of thinking you&#x27;ll bring to their organization.<p>Best of luck in the interview!
Lisp's mysterious tuple problem
TXR Lisp, working with native Win32&#x2F;Win64 &quot;tuples&quot; like WNDCLASS, POINT, MSG, RECT and PAINTSTRUCT.<p>This is an almost line for line translation of the MSDN &quot;Your First Windows Program&quot; C demo:<p><pre><code> (typedef LRESULT int-ptr-t) (typedef LPARAM int-ptr-t) (typedef WPARAM uint-ptr-t) (typedef UINT uint32) (typedef LONG int32) (typedef WORD uint16) (typedef DWORD uint32) (typedef LPVOID cptr) (typedef BOOL (bool int32)) (typedef BYTE uint8) (typedef HWND (cptr HWND)) (typedef HINSTANCE (cptr HINSTANCE)) (typedef HICON (cptr HICON)) (typedef HCURSOR (cptr HCURSOR)) (typedef HBRUSH (cptr HBRUSH)) (typedef HMENU (cptr HMENU)) (typedef HDC (cptr HDC)) (typedef ATOM WORD) (typedef LPCTSTR wstr) (defvarl NULL cptr-null) (typedef WNDCLASS (struct WNDCLASS (style UINT) (lpfnWndProc closure) (cbClsExtra int) (cbWndExtra int) (hInstance HINSTANCE) (hIcon HICON) (hCursor HCURSOR) (hbrBackground HBRUSH) (lpszMenuName LPCTSTR) (lpszClassName LPCTSTR))) (defmeth WNDCLASS :init (me) (zero-fill (ffi WNDCLASS) me)) (typedef POINT (struct POINT (x LONG) (y LONG))) (typedef MSG (struct MSG (hwnd HWND) (message UINT) (wParam WPARAM) (lParam LPARAM) (time DWORD) (pt POINT))) (typedef RECT (struct RECT (left LONG) (top LONG) (right LONG) (bottom LONG))) (typedef PAINTSTRUCT (struct PAINTSTRUCT (hdc HDC) (fErase BOOL) (rcPaint RECT) (fRestore BOOL) (fIncUpdate BOOL) (rgbReserved (array 32 BYTE)))) (defvarl CW_USEDEFAULT #x-80000000) (defvarl WS_OVERLAPPEDWINDOW #x00cf0000) (defvarl SW_SHOWDEFAULT 5) (defvarl WM_DESTROY 2) (defvarl WM_PAINT 15) (defvarl COLOR_WINDOW 5) (deffi-cb wndproc-fn LRESULT (HWND UINT LPARAM WPARAM)) (with-dyn-lib &quot;kernel32.dll&quot; (deffi GetModuleHandle &quot;GetModuleHandleW&quot; HINSTANCE (wstr))) (with-dyn-lib &quot;user32.dll&quot; (deffi RegisterClass &quot;RegisterClassW&quot; ATOM ((ptr-in WNDCLASS))) (deffi CreateWindowEx &quot;CreateWindowExW&quot; HWND (DWORD LPCTSTR LPCTSTR DWORD int int int int HWND HMENU HINSTANCE LPVOID)) (deffi ShowWindow &quot;ShowWindow&quot; BOOL (HWND int)) (deffi GetMessage &quot;GetMessageW&quot; BOOL ((ptr-out MSG) HWND UINT UINT)) (deffi TranslateMessage &quot;TranslateMessage&quot; BOOL ((ptr-in MSG))) (deffi DispatchMessage &quot;DispatchMessageW&quot; LRESULT ((ptr-in MSG))) (deffi PostQuitMessage &quot;PostQuitMessage&quot; void (int)) (deffi DefWindowProc &quot;DefWindowProcW&quot; LRESULT (HWND UINT LPARAM WPARAM)) (deffi BeginPaint &quot;BeginPaint&quot; HDC (HWND (ptr-out PAINTSTRUCT))) (deffi EndPaint &quot;EndPaint&quot; BOOL (HWND (ptr-in PAINTSTRUCT))) (deffi FillRect &quot;FillRect&quot; int (HDC (ptr-in RECT) HBRUSH))) (defun WindowProc (hwnd uMsg wParam lParam) (caseql* uMsg (WM_DESTROY (PostQuitMessage 0) 0) (WM_PAINT (let* ((ps (new PAINTSTRUCT)) (hdc (BeginPaint hwnd ps))) (FillRect hdc ps.rcPaint (cptr-int (succ COLOR_WINDOW) &#x27;HBRUSH)) (EndPaint hwnd ps) 0)) (t (DefWindowProc hwnd uMsg wParam lParam)))) (let* ((hInstance (GetModuleHandle nil)) (wc (new WNDCLASS lpfnWndProc [wndproc-fn WindowProc] hInstance hInstance lpszClassName &quot;Sample Window Class&quot;))) (RegisterClass wc) (let ((hwnd (CreateWindowEx 0 wc.lpszClassName &quot;Learn to Program Windows&quot; WS_OVERLAPPEDWINDOW CW_USEDEFAULT CW_USEDEFAULT CW_USEDEFAULT CW_USEDEFAULT NULL NULL hInstance NULL))) (unless (equal hwnd NULL) (ShowWindow hwnd SW_SHOWDEFAULT) (let ((msg (new MSG))) (while (GetMessage msg NULL 0 0) (TranslateMessage msg) (DispatchMessage msg)))))) </code></pre> Lisp doesn&#x27;t have a tuple problem; it 
has a &quot;clueless programmers blogging about it&quot; problem.<p>An ounce of lie does damage that a pound of truth is needed to repair.
Bitcoin – Potential Network Disruption on July 31st
I wish there was a concise way to explain what is going on, but there really isn&#x27;t. I&#x27;m going to do my best though.<p>Bitcoin is a consensus system. This means that the goal is to have everyone believe the exact same thing at all times. Bitcoin achieves this by having everyone run identical software which is able to compile a list of transactions, and from there decide what money belongs to which person.<p>As you can imagine, it&#x27;s a problem if you have $10, and Alice believes she owns that $10, Bob believes he owns that same $10, and Charlie believes that the money was never sent to either of them. These three people can&#x27;t interact with each other, because they can&#x27;t agree on who owns the money. Spending money has no meaning here.<p>In Bitcoin, there are very precise rules that define how money is allowed to move around. These rules are identical on all machines, and because they are identical for everyone on the network, nobody is ever confused about whether or not they own money.<p>Unfortunately, there are now 3 versions of the software floating around (well... there are more. But there are only 3 that seem to have any real traction right now, though even that is hard to be certain about). Currently, all versions of the software have the exact same set of rules, but on August 1st, one of those versions of the software will be running a different set of rules. So, depending on which versions people run, they may not be able to agree on the ownership of money. If you are running one version, and your friend is running another, your friend may receive that money, or they may not. This is of course a bad situation for both of you, and it&#x27;s even worse if you are working with automated systems, because an automated system likely has no idea that this is happening, and it may have no way to fix any costly mistakes.<p>It gets worse. The version of the software that is splitting off actually has the power to destroy the other two versions of the software. I don&#x27;t know how to put this in simple terms either.<p>In Bitcoin, it is possible to have multiple simultaneous histories. As long as all of the histories are mathematically correct (that is, they follow all of the formal rules of Bitcoin), you know which history is the <i>real</i> history based on how much work is behind it. The history with the most work wins. If the history is illegal, you ignore it no matter how much work is behind it.<p>So, this troublemaker version of the software (the UASF version) has a compatible set of rules with the other 2 versions. Basically, everything that it does, the other versions see as valid. So if its history has the most work, the other versions will treat that history as the one true history. The thing is, this troublemaker version of the software is stubborn, and so even if the histories of the other two versions have more work, it&#x27;ll ignore them and focus only on its own version of history.<p>So, the dramatic &#x2F; problematic situation happens if the UASF software initially has less work in its history. What&#x27;ll happen is a split, and two different versions of Bitcoin will exist at the exact same time. But then, if the UASF software ends up with more work after some period of time (days, weeks, etc.), the other versions of the software will prefer its version of history over their own.<p>Basically, what happens there is that entire days, or weeks, etc. of history get completely obliterated. 
The UASF history becomes canonical, and the histories built by the other versions all get destroyed. Miners lose all of their money, people who accepted payments lose those payments, people who made payments get those payments back. Basically a lot of chaos where people end up losing probably millions and millions of dollars.<p>----<p>I hope that helps. This whole situation is screwed up, and really the best thing to do is to put your coins in a cold wallet (one that you control, not an exchange), and then just not send or receive any coins for a few weeks. Let the dust settle, and then resume using Bitcoin once it&#x27;s clear that the turmoil is over.<p>----<p>The most likely situation here is that nothing interesting happens at all. My personal opinion is that the vast majority of people who matter in Bitcoin aren&#x27;t even paying attention to the drama, and something dramatic is really only possible if the majority of Bitcoin users opt in to doing something. I don&#x27;t think that&#x27;s the case at all, which means essentially nothing interesting is going to happen.<p>But, I could be wrong. There&#x27;s a non-zero chance that something very unfortunate happens, and there&#x27;s a pretty easy way to isolate yourself: don&#x27;t send or receive any Bitcoins starting July 31st, and don&#x27;t resume until it&#x27;s clear that the storm has passed. It&#x27;ll likely take less than a week to come to a well-defined conclusion.
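To make the chain-selection rule above concrete, here is a minimal Python sketch of the &quot;most work among legal histories&quot; idea. This is an illustration under stated assumptions -- blocks are reduced to a hypothetical work figure plus a validity flag -- and not Bitcoin Core&#x27;s actual implementation.<p><pre><code>
from collections import namedtuple

# Hypothetical, highly simplified block: just the proof-of-work behind it
# and whether it follows the consensus rules. Real blocks carry far more.
Block = namedtuple(&#x27;Block&#x27;, [&#x27;work&#x27;, &#x27;valid&#x27;])

def best_chain(chains):
    # An illegal history is ignored, no matter how much work is behind it.
    legal = [c for c in chains if all(b.valid for b in c)]
    # Among legal histories, the one with the most total work wins.
    return max(legal, key=lambda c: sum(b.work for b in c), default=None)
</code></pre>
The UASF situation described above is then the case where one group of nodes applies this rule only to the histories it considers legal, so the groups can follow different chains for a while and later reconverge destructively.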
To Grow Faster, Hit Pause
Okay, I read all that OP.<p>To me, the OP has a huge gap, so huge that what it left out is more important than all it put in.<p>What it left out has a name; in the academic fields of organizational behavior and public administration, the left-out part is called <i>goal subordination</i>.<p>The definition is an employee, often a middle manager, behaving according to what they see as their own narrow interests even if that behavior is fairly obviously against the interests of the company. So, such an employee <i>subordinates</i>, that is, gives lower priority to, the goals of the company than to their own personal goals.<p>So, a common part of goal subordination is to fight with people inside the company down the hall instead of competing with people outside the company.<p>So, essentially the company is in some version of internal war, civil war. Then cliques form; people become loyal to the cliques; and the cliques fight each other.<p>E.g., the OP mentioned who gets invited to meetings. Well, commonly that is based more on cliques and goal subordination than on what was mentioned in the OP.<p>One of the techniques of internal clique war is gossip, sometimes called yentas. So, there can be lots of whispering. When a clique wants to attack a person, the gossip can get that person accused, tried, convicted, and punished all without the person knowing anything about it. About all the punished person knows is that they are not invited to meetings; they are not on distribution lists; they don&#x27;t get e-mail; any e-mail they write is ignored; routine communications are avoided; they are avoided in the offices; they are treated as if they are well known to be helpless idiots, etc. It was all done by gossip from cliques at war.<p>Beyond gossip, another technique for a clique to attack a person is for the clique to have a team that, one person at a time, drops by the person&#x27;s office to talk. The talk is never very substantive. The person is reluctant to be rude and throw the people out, but, net, due to the team of the clique, there&#x27;s no way the person can get any work done in their office. One approach for the person is to look serious and busy and just to say, right away, &quot;I can&#x27;t stop now.&quot;<p>Another consequence is that an employee with some really good ideas and work can, then, be seen by everyone else as a threat and, then, attacked by gossip, sabotage, etc. The old advice that the nail that sticks up gets beaten down is part of this. So, people deliberately avoid doing their best work. E.g., maybe in an old piece-work shop, the employee that is a high performer and exceeds their quota and gets an award one month has their tires slashed the next month. Much the same thing can happen without piece work.<p>E.g., maybe in some aspect of production and operations the company is wasting money. So some employee sees an opportunity to save the waste and help the company, say, works out some math (say, as in operations research), writes the corresponding software, runs the software and demonstrates some significant cost savings, writes a paper showing the work and the savings, distributes the paper, develops a dozen foils, and announces a talk to explain. Suddenly he can discover that lots of managers, especially his own, can call and say that the meeting can&#x27;t be held because they have a conflict in their schedule. 
It can be the case that even the CEO can feel uncomfortable because this employee is starting to look essential and, maybe, by threatening to leave, could <i>hold up</i> the company, be an exception to the compensation plan, etc.<p>Another one: in the OP, a decision with low impact that can be reversed is to be made at low levels. Okay. Except, lots of employees have learned that any instance of anything that can be regarded as a mistake can be used by the cliques, gossip, internal wars, etc. to attack and destroy the person who made a fast decision. Maybe with two weeks more study, there was an alternative that would have cost $10 less: Presto, bingo, the person can be accused of wasting money. Even if the person chips in the $10 from their own billfold, the accusation still stands. So, lots of employees just will NOT make a fast decision. Instead, they want everything thoroughly studied, in a paper report big enough to be a door stop, and approved by a committee of a dozen people. More generally, such a person will do everything they can to avoid anything like responsibility or to do anything where they could be blamed or accused of a mistake. That&#x27;s one of the main reasons companies grind to a halt and one of the main opportunities for startups until they start doing the same thing.<p>One of the issues is cheating going on. E.g., maybe some part of the operations needs copper tubing so has a big supply that gets used right along for lots of projects. Well, at times copper tubing is expensive. So, maybe the relevant manager has not implemented anything like inventory control over the copper tubing. Then that manager is running a personal cash-and-carry midnight copper tubing supply business. Anyone who starts to ask about anything at all related gets threatening scowls and, thus, learns just to f&#x27;get about copper tubing.<p>Or, some manager has two secretaries. One of them is busy all the time, and that&#x27;s the stated reason for the need for the other secretary. This other secretary comes in late, leaves early, and takes long lunch hours on Tuesday and Thursday, is not around on Friday and Monday, and avoids any scheduled meetings on Wednesday because it ruins two weekends. She has a great figure, gorgeous hair, 6&quot; high heels, short skirts, and spends most of her time at her desk reading romance novels or doing her nails. Been known to happen.<p>Sure, the OP has lots of nice stuff. Sure, with all that nice stuff, everyone working effectively and cooperatively, joining hands, singing Kumbaya, sounds good. But I suspect that the dysfunctional issues I&#x27;ve mentioned are, in reality, more important. And when a lot of stock compensation is on the line, the dysfunctional issues can become much more common; people can fight like mad dogs for a little more in stock options.<p>One response is that the CEO can surround himself with people of long-time, unquestioned loyalty, and, then, there is a <i>palace guard</i> that is really running the place.<p>Finally, the CEO may just divide the work into departments, divisions, etc., have for each of those some accurate-enough quantitative measures of performance, insist that the managers accomplish their performance goals, and otherwise largely ignore the small stuff. If some manager has a secretary doing her nails but otherwise is doing great on his performance numbers, then great -- the CEO takes the results of the good performance to the bank or the BoD and otherwise relaxes. 
Ugly situation, but so is a lot of clique internal war, dysfunctional goal subordination, etc.<p>Let&#x27;s see: This OP was from First Round Capital. No doubt they have some BoD seats. Then, as BoD members, they need to pay attention to goal subordination as described here. As for the singing-Kumbaya stuff in the OP, f&#x27;get about that.
Ask HN: Does success in work bring you happiness?
What I&#x27;m about to say goes completely against what society and the majority of those engaging in virtue signaling claim is the key to happiness.<p>I am quite happy at the moment, and it started back in 2004 when I wrote off my family and commanded them to never contact me again. It turns out removing negativity in your life, whatever the source, no matter how well intentioned you may be in helping someone, goes a long way to being blissfully happy. It is said that &quot;you&quot; are the average of the 5 people you spend the most time with. So consider if your relationships are a positive or negative influence on your life. Remove the negative influences; no one is immune from being removed, despite what society tries to feed you about how important &quot;family&quot; is.<p>In 2008 I went to the CTO of the company I was working for at the time, told him that I was planning to quit even though I had just started 3 months earlier, and proceeded to explain how my manager could be doing their job better. I listed out how I would run things. A week later I had my manager&#x27;s job and a $13k raise, several months after that another $20k raise. Needless to say, the student loan debt that had plagued me since graduating in 1999 was paid off in 5 months. As was the rest of my debt. Never underestimate how not having any debt can lead to real happiness.<p>In 2011, at 36, I quit the last &quot;real&quot; job I&#x27;ve had. I was not and am still not independently wealthy. I have no family to rescue me if I go broke. At the time I was planning to make an iPhone game. Six months in, while I was coming up to speed on Objective-C and drawing graphics, the job I quit needed help desperately, so I threw out a price of $7500 a week. To my surprise they went for it. So I put the game on hold and worked for 9 months, accumulating $240k for the year. The money really did make me happy, because of how quickly it piled up. No scrimping and saving and gradually building wealth. Thinking of doing that makes me want to honestly eat a bullet. The old... yeah, save, work 40 years, 2 weeks vacation a year, plus having holidays when the rest of the country does too... die two years into retirement thing. No thanks... Anyways, 9 months in, they try to hire me full time as the director of software engineering. 5 years earlier that would have been a dream job. But I really didn&#x27;t want a &quot;job&quot; anymore. So I quit, took a 10-day vacation to Cozumel with my girlfriend, and when I got back spent 2 years working on my game.<p>I was just about to release the game and then Apple announced new iPad and iPhone resolutions. So much rework, especially artwork. Then an old co-worker needed help; I told him I would if I could work from home. I was living on Lake Tahoe at the time and no way was I going back to the Bay. Especially since I was on the Nevada side and there was no way I was paying California a dime in income tax (luckily it was a New York company, so they don&#x27;t try to tax you out of state until you&#x27;ve made $1 million). The last year I was there I paid $18,600 to California for NOTHING. I got no benefit for that tax I paid to the state, despite anyone who would argue with me to the contrary. As a note, I currently live in Wyoming, and there is nothing more I want from the state; no income tax is glorious.<p>Anyway, long story short, consulting gigs where I work 100% from home drop in my lap every year or two. 
I make so much money on those that it pays for 2-3 years of not working.<p>The key to happiness is not working (for a client or a job; I like to work on projects of my own that have nothing to do with software), while simultaneously having money to do or buy whatever I want (within reason).<p>I never want to commute to a job ever again. After breaking up with my girlfriend of 5 years I have no interest in getting into another relationship. It&#x27;s like &quot;I&#x27;ve been there, done that&quot; and I just don&#x27;t have an interest anymore. When I&#x27;m working on my own projects I get so wrapped up in them I lose track of the time; I don&#x27;t know what day of the week it is. I might talk to the neighbors or chat with an old friend once a week. I may not talk to or see another human being for a week and it doesn&#x27;t bother me at all. It might be 10 days before I drive somewhere; it&#x27;s amazing how long a car lasts when you barely use it.<p>As a side note, I have no interest in charity; it does nothing for me. It&#x27;s like the part that&#x27;s supposed to fill me with joy is missing with regards to that. I don&#x27;t want to contribute to society or do anything that makes the world a better place. And yet my happiness, contentedness, blissfulness has not lessened since quitting my last job in 2011.<p>So, contrary to the frequently parroted &quot;secret&quot; to happiness that involves sacrifice, family, children, being part of a &quot;team&quot;, I&#x27;m here to let you know that some of us have found happiness doing the opposite...
Fast pentomino puzzle solver ported from Forth to Python
The declarative programming language Prolog is very well suited for solving such tasks, because it ships with built-in backtracking and several methods such as <i>constraints</i> that help to significantly prune the search space. Here is a Prolog solution:<p>First, let us describe the possible tiles as 0&#x2F;1 matrices, where 0 means an empty cell:<p><pre><code> tile([[1], [1], [1], [1], [1]]). tile([[0,1,1], [1,1,0], [0,1,0]]). tile([[1,1,0], [0,1,1], [0,1,0]]). etc. (remaining facts left as an exercise) </code></pre> Before we continue, let us run two brief validity checks over this data. First, how many tiles did we define:<p><pre><code> ?- findall(T, tile(T), Ts), length(Ts, L). </code></pre> The system answers with:<p><pre><code> L = 18. </code></pre> This matches what the article states. Next, let us ask Prolog: Have we accidentally introduced a typo somewhere, i.e., is there a tile that is <i>not</i> a pentomino?<p><pre><code> ?- tile(T), L #\= 5, append(T, Ls), include(=(1), Ls, Ones), length(Ones, L). </code></pre> The system answers: No. Thus, we can proceed.<p>I am now posting a full solution, using CLP(FD) constraints:<p><pre><code> polyominoes(M, N, Rows, Vs) :- matrix(M, N, Rows), same_length(Rows, Vs), Vs ins 0..1, transpose(Rows, Cols), phrase(all_cardinalities(Cols, Vs), Cs), maplist(call, Cs). all_cardinalities([], _) --&gt; []. all_cardinalities([Col|Cols], Vs) --&gt; { pairs_keys_values(Pairs0, Col, Vs), include(key_one, Pairs0, Pairs), pairs_values(Pairs, Cs) }, [sum(Cs,#=,1)], all_cardinalities(Cols, Vs). key_one(1-_). matrix(M, N, Ms) :- Squares #= M*N, length(Ls, Squares), findall(Ls, line(N,Ls), Ms0), sort(Ms0, Ms). line(N, Ls) :- tile(Ts), length(Ls, Max), phrase((zeros(0,P0),tile_(Ts,N,Max,P0,P1),zeros(P1,_)), Ls). tile_([], _, _, P, P) --&gt; []. tile_([T|Ts], N, Max, P0, P) --&gt; tile_part(T, N, P0, P1), { (P1 - 1) mod N #&gt;= P0 mod N, P2 #= min(P0 + N, Max) }, zeros(P1, P2), tile_(Ts, N, Max, P2, P). tile_part([], _, P, P) --&gt; []. tile_part([L|Ls], N, P0, P) --&gt; [L], { P1 #= P0 + 1 }, tile_part(Ls, N, P1, P). zeros(P, P) --&gt; []. zeros(P0, P) --&gt; [0], { P1 #= P0 + 1 }, zeros(P1, P). </code></pre> As pointed out in the sibling, this models the task as an <i>exact cover</i> problem.<p>For a 6x10 puzzle, you can see what the rows look like with the following query:<p><pre><code> ?- polyominoes(6, 10, Rows, Vs), maplist(portray_clause, Rows). </code></pre> Here are the first few lines emitted by the query:<p><pre><code> [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1]. [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0]. [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0]. [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0]. [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0]. </code></pre> Each line describes a possible placement of one of the tiles, where 1 denotes which of the cells are covered. 
Note that each element of the rows above represents one of the positions of the whole board. Therefore, each list has 6x10 = 60 elements. The exact cover problem states that each of these cells is covered exactly once. Therefore, the essential constraint in this puzzle is:<p><pre><code> sum(Cs, #=, 1) </code></pre> This is a CLP(FD) constraint that constrains the <i>sum</i> of all integers in Cs to 1.<p>Now, let us use Prolog to <i>generate</i> solutions. For this, we use label&#x2F;1 to trigger the built-in backtracking:<p><pre><code> ?- polyominoes(6, 10, _, Vs), label(Vs). </code></pre> This yields, in at most a few seconds:<p><pre><code> Vs = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0] . </code></pre> It shows which of the placements need to be picked to cover the whole board. Further solutions can be generated on backtracking.<p>To display solutions in a more meaningful way, we can use the following query:<p><pre><code> ?- polyominoes(6, 10, Rows, Vs), label(Vs), pairs_keys_values(Pairs0, Vs, Rows), include(key_one, Pairs0, Pairs1), pairs_values(Pairs1, Selected), maplist(portray_clause, Selected). </code></pre> Here is the exact cover, i.e., the first solution:<p><pre><code> [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0]. [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]. [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]. [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]. 
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]. [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0]. [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]. [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]. [0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]. [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]. [0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]. [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]. </code></pre> Note that if you sum columns, the sum of each column is 1, i.e., each cell is covered exactly once.
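As a quick cross-check of that exact-cover property, here is a minimal Python sketch (where <i>selected</i> stands for the 0&#x2F;1 placement rows printed above, or any other candidate selection of 60-element rows):<p><pre><code>
# Verify an exact cover: summing the selected placement rows
# column by column must give 1 for every one of the 60 board cells.
def is_exact_cover(selected):
    return all(sum(col) == 1 for col in zip(*selected))
</code></pre>
Transposing with zip(*selected) turns the rows into per-cell columns, so the check reads just like the sum(Cs,#=,1) constraint in the Prolog program.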
Ways a VC says no without saying no
Here I consider just early stage information technology VC -- later stage and bio-medical can be much different.<p>Yup, from my experience, the OP matches what a lot of VCs do.<p>One thing for an entrepreneur to do is to read some remarks from a VC or their firm about what their interests are. Then, when their interests well cover my startup, I write them and explain how their interests cover my startup. So, sure, I rarely hear back with anything, and what I do get is nearly always just as in the OP.<p>So, then I get pissed: (A) They said what their interests were; (B) I wrote them showing how their interests covered my startup, but (C) they ignored my contact. Bummer. So I used to, sometimes, wait a week or two and then write them, say that they were so unresponsive that there would be no way we could work together successfully, and state that I withdrew my application.<p>Since then, in part I <i>wised up</i>. By process of elimination, I began to conclude some basic facts about VCs.<p>(1) Mostly their stated &quot;interests&quot; don&#x27;t much matter.<p>(2) They actually do have some interests and these are nearly universal across VCs and their firms: They are interested in traction, significant and growing rapidly, especially in a large market.<p>(3) Really, the situation is essentially as in the old Hollywood line, &quot;Don&#x27;t call us. We&#x27;ll call you.&quot; Or, really, VCs want to learn about the startup from existing buzz, virality, etc. They want to see the product&#x2F;service, play with it, and try to estimate how successful it will be in the market.<p>(4) For a first step, for a VC, (1)-(3) is about all that matters.<p>Actually, (1)-(4) seem to be so astoundingly uniform that they must have some common cause. My guess at the common cause is the larger LPs, e.g., pension funds; they insist on (1)-(3).<p>For me, I&#x27;m a sole, solo founder, toilet cleaner, floor sweeper, ..., computer repair technician, systems administrator, ..., programmer, user interface designer, data base administrator, software designer, product manager, CTO, COO, and CEO with a tiny <i>burn rate</i>. Some venture funding could have made some of the work go faster, but really I haven&#x27;t needed venture funding and don&#x27;t really need it now.<p>But with all the above, there is a surprising situation: My burn rate is so low that I can continue self-funding until my Web site is live. Then, if users like my work, soon I&#x27;ll have enough revenue from routine efforts running ads that I will have plenty of free cash for <i>organic</i> growth without equity funding. If I get that growth, then I&#x27;ll have a <i>life style</i> business with, again, plenty of free cash for more organic growth.<p>About that time, some VCs will learn about my startup and give me a call. They will expect that my company has about five co-founders, each with maxed-out personal credit cards, and has a business bank account close to $0.00. They will assume that the company and each of the co-founders is just desperate for an equity check on just any terms, say, because each of the co-founders has a pregnant wife. Then the VCs will believe that they can play hard to get, strike a hard bargain, and grab control of my company for next to nothing.<p>At that time I will check my computer, confirm the name of their VC firm, and let them know the date, long before, when I sent them a description of my company that they ignored. 
So, I&#x27;d inform them that they were too late, that my plane had already left the runway, and that no tickets were for sale.<p>So, now sometimes I write VCs just for fun, so that if my startup does work and they do call me, then I can tell them that I wrote them and they ignored my contact!<p>To me a biggie point is that apparently the VCs want nothing to do with any business planning, crucial core <i>secret sauce</i> technology, etc. To me, such things are the keys to the big successes the VCs must have to get the investment returns their LPs have in mind when they invest in VCs. Further, such planning and special technology are the keys to the many amazing technology successes of US national security.<p>Well, again, apparently VCs want to wait for traction significant and growing rapidly. Maybe that approach will usually be okay for VCs: At least apparently the VCs believe that on the way to a big company, a startup will nearly always need some equity capital.<p>But for a sole, solo founder with a tiny burn rate and writing software, the VCs can miss out: That is, by the time the VCs want to invest, the founder will no longer want or need the investment.<p>A big example of such a sole, solo founder success was the Canadian romantic matchmaking site Plenty of Fish.
Ways a VC says no without saying no
In the last few years, for early stage information technology venture capital, the situation has been changing radically:<p>A blunt fact is that the VCs very much need big wins, commonly, say, 30% ownership in a company with exit value $1+ billion. Moreover, even more seriously, to get their limited partners (LPs) excited, they need some ~30% ownership in another Microsoft, Apple, Cisco, Google, or Facebook. That&#x27;s just the facts of life. To pass the giggle test, that&#x27;s the game they are playing, the business they have chosen.<p>We need to keep in mind that, beyond Moore&#x27;s law and the Internet, the examples Microsoft, Apple, Cisco, Google, and Facebook don&#x27;t have a lot in common. So, we can&#x27;t hope to extract much in the way of predictive patterns by just external empirical observations.<p>So, if VCs or anyone is to find <i>another Microsoft, ..., Facebook,</i> they will have to look deeper than just patterns from external observation.<p>Also, we should keep in mind, say,<p><a href="http:&#x2F;&#x2F;www.kauffman.org&#x2F;newsroom&#x2F;2012&#x2F;07&#x2F;institutional-limited-partners-must-accept-blame-for-poor-longterm-returns-from-venture-capital-says-new-kauffman-report" rel="nofollow">http:&#x2F;&#x2F;www.kauffman.org&#x2F;newsroom&#x2F;2012&#x2F;07&#x2F;institutional-limit...</a><p>and<p><a href="http:&#x2F;&#x2F;www.avc.com&#x2F;a_vc&#x2F;2013&#x2F;02&#x2F;venture-capital-returns.html#disqus_thread" rel="nofollow">http:&#x2F;&#x2F;www.avc.com&#x2F;a_vc&#x2F;2013&#x2F;02&#x2F;venture-capital-returns.html...</a><p>on the average venture capital return on investment. One-word summary: the average return is poor, not high enough to excite LPs.<p>Here is a hint at the nature of the radical change: At<p><a href="http:&#x2F;&#x2F;a16z.com&#x2F;2014&#x2F;07&#x2F;30&#x2F;the-happy-demise-of-the-10x-engineer&#x2F;" rel="nofollow">http:&#x2F;&#x2F;a16z.com&#x2F;2014&#x2F;07&#x2F;30&#x2F;the-happy-demise-of-the-10x-engin...</a><p>Sam Gerstenzang, &quot;The Happy Demise of the 10X Engineer&quot;<p>with in part<p>&quot;This is the new normal: fewer engineers and dollars to ship code to more users than ever before. The potential impact of the lone software engineer is soaring. How long before we have a billion-dollar acquisition offer for a one-engineer startup?&quot;<p>So, a solo founder building a company worth $1 billion?<p>Of course, there is <i>half</i> of an example -- the Canadian, Internet based, romantic matchmaking service Plenty of Fish with a solo founder, with two old Dell servers, $10 million a year in revenue, all just from ads from Google. He added people and sold out for $500+ million. So, his ~$500 million is half of the $1 billion A16Z mentioned.<p>So, what are the causes of the radical changes?<p>(1) Cheap Hardware.<p>From any historical comparison, within computing or back to steamships, now computer hardware is cheap, dirt cheap; transistors are cheap; so are compute cycles, floating point operations, main memory sizes, hard disk space, solid state disk space, internal data rates, LAN and Internet data rates, etc. Dirt cheap.<p>(2) Infrastructure. It used to be that an information technology startup could expect to have to build or at least wrestle with lots of infrastructure. 
Now, quite broadly, getting the needed infrastructure is much easier and cheaper.<p>So, nearly any room in the industrialized world with a cable TV connection can be a quite active server farm, because the rest of the infrastructure -- a local Internet service provider, a static IP address, a domain name, and plenty of Internet data rate for a quite serious business -- is right at hand.<p>Of course, the big quantum leap in easy infrastructure is the cloud, from, say, Amazon, Microsoft, etc.<p>(3) Software. Now software is much easier. There is a lot of open source software, excellent software for quite reasonable prices, etc. And really it&#x27;s much easier just to write new applications level software. Web pages, graphics, database operations, algorithms, etc., all are much easier.<p>So, with (1)-(3), a solo founder with a good idea for a startup to be worth $1+ billion can for darned little cash write the software, bring up the idea as a Web site, run ads, get publicity, and, if users come, get good revenue.<p>It&#x27;s easy to argue that at current ad rates, a server costing less than $1500, kept busy, could generate monthly revenue of $200+ K for an investment by the founder of basically just their own time. Such a solo founder with that revenue, then, will just laugh at any suggestion that he should take an equity check, form a Delaware C-corporation, and report to a BoD. Instead he will just form an LLC and remain 100% owner.<p>Then, the main issue now is the evaluation of the basic <i>idea</i> of the sole founder. Or if the idea is really good and VCs wait until there is traction significant and growing rapidly, then the VCs will be too late. Or, the solo founder wrote the software, has one server from less than $1500 in parts connected to the Internet, has a static IP address and a domain name, has done and is doing some publicity things, and otherwise is running the business each month for not much more than pocket change, for less than a lot of people spend on McDonald&#x27;s or pizza or Chinese carryout. Literally. So, the founder&#x27;s startup is just dirt cheap to run. If enough users like the site to keep the server busy, then the founder is getting maybe $200 K a month in revenue, plenty to grow the size of the server farm, and in a few months buy a nice house, for cash, put several nice new cars in the garage, for cash, and spend an hour each afternoon in the nice infinity in-ground pool. Then a VC calls and wants to invest $10 million for 30% of the business and have the founder report to a BoD of a Delaware C-corp. -- we&#x27;re talking LOL.<p>Does that situation happen very often yet? Nope. But now it is just such situations that the VCs desperately need in order to get a significant fraction of ownership in $1+ billion exit values.<p>Or, put very bluntly, the VCs desperately need really exceptional startups. For Microsoft, ..., Facebook, there are no visible patterns. The founders no longer need big bucks for a team of developers, expensive servers, and communications data rate.<p>Net, for the projects the VCs must have, by the time they want to invest according to their old rules, a solo founder with a good idea has already got plenty of revenue for rapid organic growth and a life style business and won&#x27;t accept an equity check.<p>Again, so far there are not a lot of examples of such solo founder startups, but the radical change and the big deal for the VCs is that it is just such startups that stand to be the exits the VCs desperately need. 
So, for the next Facebook, etc., by the time the VCs call the founder, all they will hear back are laughs, and the VCs will have to push back their chairs, think a little, and realize that they just missed out. The VCs will see that, really, there has been a radical change and they must make some radical changes or just miss out and go out of business.<p>So, finally, we discover that the core idea is what is crucial, because for a good idea a solo founder can do the rest alone with essentially just his own time as the investment. So, to evaluate startups, the VCs must evaluate the idea at just the idea stage and just hope that the founder will accept a check.
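As a back-of-the-envelope check of the one-server ad revenue arithmetic above -- a sketch under an assumed effective ad rate; the $2 RPM figure here is illustrative, not a quoted market rate:<p><pre><code>
# Assumed effective ad revenue per 1000 page views (RPM); illustrative only.
rpm_dollars = 2.0
target_monthly_revenue = 200_000

views_per_month = target_monthly_revenue / rpm_dollars * 1000
views_per_second = views_per_month / (30 * 24 * 3600)

print(views_per_month)   # 100,000,000 page views per month
print(views_per_second)  # roughly 38.6 sustained requests per second
</code></pre>
A few dozen sustained requests per second of mostly cached pages is within reach of one commodity box, which is the sense in which the claim is arithmetically coherent; whether the traffic and the ad rate actually materialize is the hard part.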
How economists rode maths to become our era’s astrologers
I&#x27;m torqued at the economists, but from what I saw in the OP I should be torqued at the OP, also.<p>The article seemed to say that <i>econometrics</i> or some such was not within their criticism and that, instead, they were criticizing the -- pure, theoretical, mathematical, Nobel-prize seeking, etc.? -- economists or some such. Uh, it was not too clear just which brand, type, style, category, etc. of economists they were criticizing.<p>Okay, I&#x27;ll defend some of the economists!<p>The field of economics is a train wreck, a theoretical, empirical, practical, intellectual, scientific, academic train wreck. E.g., they still have no F = ma (Newton&#x27;s second law), and their predictions are as bad as those for the weather.<p>Why? Sure: likely, like the weather, the economy is darned complicated.<p>Next, it&#x27;s tough to get good data about the economy, e.g., the usual US Department of Labor, etc. statistics are crude approximations to reality.<p>E.g., for the crash of 2008, it was mostly a fairly closely held secret just how bad so many of the mortgages were, and those bad mortgages were the key to the crash. So, for the 2008 crash, which was over 8 years ago and which we are still pulling out of -- that is, it has been 2&#x2F;3rds as long as the Great Depression -- we were missing clear views of just the basic data. Outrageous. E.g., at<p><a href="http:&#x2F;&#x2F;www.pbs.org&#x2F;wgbh&#x2F;pages&#x2F;frontline&#x2F;oral-history&#x2F;financial-crisis&#x2F;richard-kovacevich&#x2F;" rel="nofollow">http:&#x2F;&#x2F;www.pbs.org&#x2F;wgbh&#x2F;pages&#x2F;frontline&#x2F;oral-history&#x2F;financi...</a><p>see the <i>Frontline</i> interview of Wells Fargo CEO Richard Kovacevich with in part:<p>&quot;... when they came to me, I would say: &#x27;This is toxic waste. We&#x27;re building a bubble. We&#x27;re not going to like the outcome. I&#x27;m very concerned.&#x27;&quot; Not many people had enough data to see the problem. Outrageous.<p>Next, emotions, fear, mob behavior, and the news looking for headlines all can affect what people do and the economy.<p>So, to do economics as a science is tough.<p>Well, people in high-end academia are supposed to do research. In the case of economics, they are supposed to try to make a good science out of it. So, they try.<p>In particular, the most respected work in science <i>mathematizes</i> the field.<p>Recently on the news was a remark about something else, but we can use it here: &quot;Ask a Navy SEAL how to eat an elephant, and he will say &#x27;One bite at a time&#x27;&quot;.<p>Well, in trying to make economics a science, in particular, to mathematize the field, about all the high-end research academic economists can do is try one bite at a time. So, they do. So, the work has not yet eaten all the elephant, moved all of the mountain, cleaned up all the mess, finished building the castle, etc. So, it&#x27;s a work in progress. In particular, for the castle, they are still working on the foundation, and so far there is nothing like a roof. So, when it rains, there&#x27;s no roof and anyone in the castle gets wet. Or, the kitchen just isn&#x27;t ready to serve good food yet; the land is not cleared, and we are not ready to grow a crop yet.<p>Some of the mathematics I studied for my Ph.D. and some of the math research I did and published is close to some of the math some of the economists have tried to use. So, that math gives me a view of math in economics.<p>For that math, some of it really is okay for some questions about economic things. 
E.g., optimization, mathematical programming, e.g., linear programming, linear integer programming, actually can and sometimes do save money in some economic, business situations. The academic economists usually call such situations <i>micro economics</i>. So, some decades ago, as such math was developed for operations research or whatever, the math got used by academic research economists. IIRC linear programming was the core math of several Nobel prizes in economics.<p>My view is that linear programming, and optimization more generally, will have next to nothing to do, anytime soon, with predicting, say, the growth in GDP over five years, but one could call that use of math in the research one bite of the elephant, one brick for the castle, one tree stump on the way to clearing the land ready for row crops, etc.<p>E.g., there was a famous paper in mathematical economics by Arrow, Hurwicz, and Uzawa. Arrow won his Nobel prize long ago, and Hurwicz won a few years ago. The paper makes use of optimization, in particular, the Kuhn-Tucker conditions (KTC) of non-linear optimization. So, in grad school, when I was studying the KTC (not much like KFC), I saw a tricky problem, didn&#x27;t see a solution in the library, did some research, and got a solution. Later, when I went to publish, I saw that my work also answered a question stated but not answered in the Arrow, <i>et al.</i> paper. Okay.<p>So, alright, in some sense I knew the KTC as well as or better than Arrow, etc. My opinion of the KTC is that they have next to nothing to do with predicting, say, the growth in GDP over five years, but maybe that use of the KTC is a little progress. Maybe that work got some Nobel prizes not because it was especially good work, say, doing for economics what Newton did for physics, but because it was the best work in economics the Nobel committee could find that year!<p>So, net, economics is not yet a science; some researchers are trying to make it a science, in particular, to mathematize the field.<p>There is a description of how this research goes: The researcher looks at some math and the economy, makes some simplifying assumptions about the economy (so, has a <i>model</i> of the economy), sees where they can apply that math to that model, makes the application, reads off the consequences for that model economy, publishes the stuff, and hopes for a Nobel prize, or at least for tenure, and hopes that their students will do more along such lines. Yes, it&#x27;s like in freshman physics where we have a frictionless ball bearing and no air resistance and, then, calculate how fast the ball would roll; except, e.g., with the KTC, much worse.<p>Gee, if the research is really bad, then there should be a big opportunity to jump into the field, get a relatively good salary, maybe get a Nobel, get famous, get high consulting fees, maybe get some adoring coeds, etc.<p>For me? I&#x27;ve got a startup. I&#x27;ve already done the original applied math research and written the software and am eager to go live. If the startup works, I&#x27;ll be nicely wealthy. So, I&#x27;ll stay with my startup!<p>Besides, when I tried to study economics, I thought that the subject was so badly done I should just close the book and f&#x27;get about it.<p>But once it was suggested that I take a course in economics. So, I did. I showed up for the class, took notes, and said nothing. That was the first day, with lots of freehand supply and demand curves.
After the class, with just the prof and me, I asked him, nicely I thought, just what he was assuming about his curves: continuous, differentiable, continuously differentiable, infinitely differentiable, continuous and differentiable almost everywhere with respect to Lebesgue measure, convex, pseudo-convex, quasi-convex, etc. (or some such). Soon I got a call and was told &quot;You are out of the economics class.&quot; Gee, I was just asking!<p>Gee, those questions were not so bad! For that thing I did for the KTC and the Arrow, <i>et al.</i> paper, part of the work was to show that for the real numbers R, a positive integer n, and a set C a subset of R^n and closed in R^n with the usual topology, there exists an infinitely differentiable function f: R^n --&gt; R so that f(x) = 0 for all x in C and f(x) &gt; 0 otherwise. E.g., the Mandelbrot set is closed. So, there&#x27;s an infinitely differentiable f that is 0 on the Mandelbrot set and strictly positive otherwise. Instead of the Mandelbrot set, use a sample path of Brownian motion, a Cantor set of positive measure, etc. -- amazing. So, gee, for the Arrow, <i>et al.</i> paper, I had considered infinitely differentiable!
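[Editorial note: for readers who want the closed-set claim above in standard notation, here is a restatement. The result is a classical fact, often credited to Whitney; the formatting below is an editorial addition, not the commenter's.]

    \textbf{Claim.} Let $C \subseteq \mathbb{R}^n$ be closed in the usual topology.
    Then there exists a function
    \[
        f \in C^{\infty}(\mathbb{R}^n, \mathbb{R}),
        \qquad f \ge 0,
        \qquad f^{-1}(0) = C,
    \]
    that is, $f(x) = 0$ for every $x \in C$ and $f(x) > 0$ for every $x \notin C$.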
Ask HN: What are your favorite physics sites, documentaries, books?
I broke up my list into cultural (that is, about the people, history, etc.), popular (that is, not aimed at a student or an expert), and texts.<p>CULTURAL<p>Einstein - Essays in Humanism<p>Frayn - Copenhagen<p>Feynman - Surely You&#x27;re Joking, Mr. Feynman!<p>Feynman - What Do You Care What Other People Think?<p>de Grasse Tyson - Death by Black Hole<p>Hoffman - The Man Who Loved Only Numbers<p>Kaiser - Drawing Theories Apart<p>Kaiser - How the Hippies Saved Physics<p>Macaulay - The Way Things Work<p>Paulos - Innumeracy<p>Sagan - Cosmos<p>Sagan - Broca&#x27;s Brain<p>Sagan - The Demon-Haunted World<p>Salam - Science in the Third World<p>Seife - Zero<p>Weisskopf - The Joy of Insight<p>POPULAR<p>Deutsch - The Beginning of Infinity (especially his explanation about fungibility in quantum mechanics)<p>Feynman - The Meaning of It All<p>Feynman - Lectures on Physics<p>Feynman and Weinberg - Elementary Particles and the Laws of Physics<p>Galison - Einstein&#x27;s Clocks, Poincaré&#x27;s Maps<p>Gamow - One, Two, Three... Infinity<p>Hadamard - Psychology of Invention in the Mathematical Field<p>Hawking - A Brief History of Time<p>Hofstadter - Gödel, Escher, Bach<p>Heisenberg - Physics and Philosophy: The Revolution in Modern Science<p>Polya - How to Solve It<p>Schrödinger - What is Life?<p>Susskind - The Theoretical Minimum<p>Susskind - Quantum Mechanics<p>Wallace - Everything And More<p>Weinberg - The First Three Minutes<p>Wiener - God &amp; Golem, Inc.<p>TEXTS<p>Aaronson - Quantum Computing since Democritus (but I don&#x27;t have a version with me in the acknowledgements <a href="https:&#x2F;&#x2F;books.google.com&#x2F;books?id=jRGfhSoFx0oC&amp;lpg=PR31&amp;ots=PCRKMZ9sg_&amp;dq=evan+berkowitz+democritus+aaronson&amp;pg=PR31&amp;hl=en#v=onepage&amp;q=evan berkowitz democritus aaronson&amp;f=false" rel="nofollow">https:&#x2F;&#x2F;books.google.com&#x2F;books?id=jRGfhSoFx0oC&amp;lpg=PR31&amp;ots=...</a> )<p>Abelson and Sussman - SICP<p>Abrikosov, Gorkov, and Dzyaloshinski - Methods of Quantum Field Theory in Statistical Physics<p>Cohen-Tannoudji - Quantum Mechanics (1+2)<p>Dirac - Lectures on Quantum Mechanics<p>Eddington - Space, Time, and Gravitation<p>Feynman - Feynman&#x27;s Thesis<p>Feynman and Hibbs - Quantum Mechanics and Path Integrals<p>Fermi - Thermodynamics<p>Gattringer &amp; Lang - Quantum Chromodynamics on the Lattice<p>Goldstein - Classical Mechanics (the old version, NOT with Poole and Safko)<p>Griffiths - Introduction to Electrodynamics<p>Griffiths - Introduction to Quantum Mechanics<p>Jackson - Classical Electrodynamics (2nd edition---the last one entirely in CGS---is preferable)<p>Kleppner and Kolenkow - An Introduction to Mechanics<p>Landau and Lifshitz - any book in this series<p>Nielsen and Chuang - Quantum Computation and Quantum Information<p>Pauli - Selected Topics in Field Quantization<p>Peskin &amp; Schroeder - An Introduction to Quantum Field Theory<p>Purcell - Electricity and Magnetism<p>Ryden - Introduction to Cosmology<p>Sakurai - Modern Quantum Mechanics (up to chapter 5, after which Sakurai dies and the editors put his notes together)<p>Sussman and Wisdom - Structure and Interpretation of Classical Mechanics<p>Sipser - Introduction to the Theory of Computation<p>Thorne - Black Holes &amp; Time Warps<p>Thouless - The Quantum Mechanics of Many-Body Systems<p>Weinberg - The Quantum Theory of Fields I, II, and III<p>Zee - Quantum Field Theory in a Nutshell
Encryptica Foundation | Free One Month VISP Account Giveaway
[Part I]<p>We are a vISP, originally from the Dark Net, opening up trial accounts on the Internet&#x2F;ClearNet to test the waters.<p>We are a non-profit NGO (social enterprise?) that is still in the process of acquiring 501(c)(3) status in the United States.<p>We are testing out a new quasi-anonymous network design based on OpenVPN and Tor (more connection methods to our access points will follow in the future, such as IKEv2 and SSTP).<p>Technical Details: Initially, one connects to the Tor network via the Tor Browser Bundle. Once the connection to the Tor network is established, the user instructs the OpenVPN client to connect to our OpenVPN server (using our configuration file), which is sitting behind a .onion Hidden Service; the traffic is then forced to exit through one of our Tor Exit Nodes instead of a possibly malicious Tor Exit Node that tracks users, performs MITM on HTTP traffic, or simply tcpdump.exe&#x27;s your traffic for fun and profit. We provide DNS resolution through our service to avoid leaks and preserve privacy. (A rough sketch of what the client side of this could look like appears at the end of this post.)<p>We do not know your IP address, you do not know our server&#x27;s IP address, and nobody can prove that activity exiting through our Tor Exit Nodes is our users&#x27; activity and not that of a Tor user not belonging to our vISP (to rephrase: the Tor Exit Nodes are run by us to save the user the trouble of avoiding malicious Exit Nodes; to increase anonymity and privacy they are also shared with the rest of the Tor network, so a Tor user may exit through our exit nodes even though they are not one of our users). Please note that while connected to our network, you will also be able to resolve .onion addresses.<p>We don&#x27;t keep logs. We are a decentralized and distributed team of netsec, cipherpunks, activists, journalists, privacy &#x2F; security &#x2F; encryption &#x2F; anonymity &#x2F; paranoid schizophrenic fanatics spread around the globe; nevertheless, our &quot;Foundation&quot; is headquartered in the USA as: a. We&#x27;re not required to keep logs by law (and we are not able to) b. We&#x27;re &quot;protected&quot; by some of the best LE agencies in the world. When you incorporate abroad, these same agencies are in your threat model. In our case, their threat, while present, is diminished.<p>Once we obtain non-profit status from the IRS, your monthly donations ($29.95 {note: you are under no obligation to sign up for a paid account after the free month account expires}) will be tax deductible. We are also thinking of starting the &quot;Church of the Free Bits&quot; with a mission statement of protecting users&#x27; rights on the Internet, keeping bits colorless, encouraging anonymous free speech and freedom of expression on the Internet, and enhancing the user&#x27;s Internet experience via the use of cryptography, encryption, mix-networks, True(tm) Net Neutrality, and privacy enhancing technologies.<p>We used to not have a website, not even on the dark net (word of mouth); now we must, apparently. We are sticking with the informal no-website policy for now (or WordPress&#x27;ing it at some point). Don&#x27;t judge us, front-end is not our domain nor focus. We are not very Social. We should have a blog soon, however.<p>Don&#x27;t confuse us with &quot;VPN providers&quot; or &quot;Residential proxies&quot;. We are a Virtual ISP, and while we may use some similar technologies, we are in the RiseUp &#x2F; Telecomix league, not your fly-by-night VPN&#x2F;VPS provider.
Don&#x27;t confuse us with a solution you can roll yourself either; we have X users, and as long as you stick to the Tor Browser Bundle rules you can remain anonymous. If you roll your own, you&#x27;re the only one originating from that IP address.<p>10% of our monthly profits go to the EFF, the ACLU, and The Pirate Party. The rest is our salaries and reinvestment in the Foundation. We are audited once a year technically (certified to not be keeping logs), are legally insured, and welcome volunteers.<p>We are still at the MVP stage and our setup reflects it. Our SLA is far from 99.999%; try 85%-90%. That said, we are experimental as f*ck and we break things often and break things fast, yet aspire to be digital Bodhisattvas who do no evil.<p>If you have any feedback or comments, drop us a line: sysop@encryptica.org. We answer email (for now).<p>[Gentle Reminder: We&#x27;re giving away free accounts: email sysop@encryptica.org with the subject line &quot;REGISTER HN&quot; and you will receive your credentials in the next 72 hours, as well as the instructions. We are offering a one-month giveaway to interested HN users to test out our network and iron out any remaining issues.]<p>PS. We provide perma-free accounts (aside from this offer) for .edu &#x2F; .ac.uk &#x2F; academia &#x2F; military personnel stationed abroad &#x2F; veterans, and Australians (no, really).<p>Ask away if you have any questions, though keep in mind our answers are polymorphic, we are still building our infrastructure, and our policies are still taking shape and form according to user demand, local laws, and our own paranoia. Our service is not completely secure yet and may be hackable (it&#x27;s an MVP before we move to big iron); we only ask for an email address at the moment, for authentication purposes; your password is automatically generated, nothing more.
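[Editorial note: to make the connection flow described above more concrete, here is a minimal, hypothetical sketch of what an OpenVPN client configuration tunneled through Tor could look like. The .onion hostname, port, and CA file are placeholders invented for illustration; they are not Encryptica's actual values, and the real service would add its own ciphers, keys, and authentication.]

    # Hypothetical client config; hostname, port, and CA file are placeholders.
    client
    dev tun
    proto tcp-client              # Tor carries only TCP streams
    remote exampleonionservice.onion 443
    socks-proxy 127.0.0.1 9150    # route via the Tor Browser's local SOCKS port
    ca ca.crt                     # provider-supplied CA certificate
    nobind
    persist-key
    persist-tun

With the Tor Browser running, the OpenVPN client would hand its TCP connection to the local SOCKS proxy, which carries it to the hidden service, matching the layering the post describes.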
Fear is America’s top-selling consumer product
Hey all, I&#x27;m interested in this topic and created several summaries for my own use (below). Since this article is somewhat long and isn&#x27;t as easy to understand as it could be, I figured other HNers might find these summaries useful:<p>-----------------<p>Short plain-English summary of the major things he says in the article:<p>People in the US are generally much safer than in the past, but they also seem to be more afraid than in the past, and it seems to be because there are powerful groups that benefit (or believe they benefit) from this state of affairs: those associated with or members of the news media, the military and its private-sector suppliers, politicians, the very rich, and the police.<p>This shift to having the public generally fearful seems to have started in 1949 when we in the US learned that the Soviet Union had nuclear weapons. Consensus in Washington became that the Soviet Union was a more immediate and serious threat than it probably really was, and the news media sold papers by stirring up fear of WW3. In the 1960s the news media made people afraid of the possibility of an actual armed revolution within the US by leftists. With the fall of the Berlin Wall the news media and politicians shifted to fear of drugs, and since 9&#x2F;11 it has been terrorism.<p>-----------------<p>The main ideas &#x2F; questions discussed, in his words:<p>[Motivating problem:] In no country anywhere in the history of the world has the majority of a population lived in circumstances as benign and well-lighted as those currently at home and at large within the borders of the United States of America. And yet, despite the bulk of reassuring evidence, a divided but democratically inclined body politic finds itself herded into the unifying lockdown imposed by the networked sum of its fears—sexual and racial, cultural, social, and economic, nuanced and naked, founded and unfounded.<p>[Main questions:] How does it happen that American society at the moment stands on constant terror alert? Why and wherefrom the trigger warnings, and whose innocence or interest are they meant to comfort, defend, and preserve? Who is afraid of whom or of what, and why do the trumpetings of doom keep rising in frequency and pitch?<p>-----------------<p>Paragraph-by-paragraph-ish main ideas (as far as I could tell), in his words:<p>Fear [is] the oldest and strongest of the human emotions.<p>[There is] real fear and neurotic fear, the former a rational and comprehensible response to the perception of clear and present danger, the latter “free-floating,” anxious expectation attachable to any something or nothing that catches the eye or the ear.<p>I’m old enough to remember when Americans weren’t as easily persuaded to confuse the one with the other. I was taught that looking fear straight in the face was the root meaning of courage.<p>[After] August 1949, when the Soviet Union successfully tested a [nuclear] bomb, my further acquaintance with fear was for the most part to take the form of the neurotic.<p>The Cold War with the Russians produced the doctrine of mutual assured destruction. For the everybodies whose lives were the stake on the gaming table, [this] didn’t leave much room for Teddy Roosevelt’s looking real fear straight in the face.<p>Expectant anxiety maybe weakens the resolve of individual persons, but it strengthens the powers of church and state.<p>Fear is the most wonder-working of all the world’s marketing tools. 
Used wisely, innovatively, and well, it sells everything in the store—the word of God and the wages of sin, the divorce papers and the marriage certificate, the face cream and the assault rifle, the grim headline news in the morning and the late-night laugh track.<p>[He tells a story of working as a reporter in NYC in 1962, receiving a press release from the Russians about new weapons tech, and having the editor of the paper mold it into a front-page fear-soaked story, presumably motivated by the desire to sell more papers.]<p>Expectant anxiety sells newspapers.<p>The Cold War was born in the cradle of expectant anxiety; so were the wars in Vietnam and Iraq.<p>The innovative and entrepreneurial consensus in Washington resurrected from the ruins [of Russia post-WW2] the evil Soviet Empire—stupendous enemy, world-class and operatic, menace for all seasons, dread destroyer of American wealth and well-being.<p>Fattened on the seed of openhanded military spending (upward of $15 trillion since 1950) the confederation of vested interest that President Eisenhower identified as the military-industrial complex brought forth an armed colossus the likes of which the world had never seen.<p>The turbulent decade in the 1960s raised the force levels of the public alarm. The always fearmongering news media projected armed revolution; the violent fantasy sold papers, boosted ratings, stimulated the demand for repressive surveillance and heavy law enforcement that blossomed into one of the country’s richest and most innovative growth industries.<p>The tearing down of the Berlin Wall in 1989 undermined the threat presented by the evil Soviet Empire, and without the Cold War against the Russians, how then defend, honor, and protect the cash flow of the nation’s military-industrial complex? The custodians of America’s conscience and bank balance found the solution in the war on drugs.<p>The stockpiling of domestic fear for all seasons is the political alchemist’s trick of changing lead into gold, the work undertaken in the 1990s by the presidential campaigns pitching their tents and slogans on the frontiers of race and class.<p>Like the war on drugs, the war on terror is unwinnable because [it is] waged against an unknown enemy and an abstract noun.<p>[The War on Terror] is a war that returns a handsome profit to the manufacturers of cruise missiles and a reassuring increase of dictatorial power for a stupefied plutocracy that associates the phrase <i>national security</i> not with the health and well-being of the American people but with the protection of their private wealth and privilege.<p>Unable to erect a secure perimeter around the life and landscape of a <i>free</i> society, the government departments of public safety solve the technical problem by seeing to it that society becomes <i>less</i> free.<p>The war on terror brought up to combat strength the nation’s ample reserves of xenophobic paranoia, the American people told to live in fear.<p>Given enough time and trouble over the last sixteen years, their collective fear and loathing collected into the cesspool from which Donald J. Trump became the president of the United States.
Social Media Is the New Smoking
Addictions aren&#x27;t necessarily bad. Some need drugs to help get by. Others junk food. TV shows. Validation. I indulge in a number of things when I just want my brain to turn off. I let my desires run on autopilot and give myself what I need.<p>Sometimes, these can reveal important things about our personalities. One of my addictions is particular kinds of information: complex and novel ideas, long-form essays, &quot;insight porn.&quot; The type of thing I gravitate towards reading tells a lot about me and what I find interesting in this world. I&#x27;d like to make some kind of dent in the universe in these spaces I find interesting before I die. Sure, it might suck when I end up reading too much at the expense of something else, but what can I say, that&#x27;s what I enjoy.<p>For those whose lives revolve around social media like Facebook or Instagram, their attention might be largely allocated towards validation, a sense of belonging, identity. Things that are not necessarily new to a post-Facebook world. In an alternative universe, maybe they&#x27;re doing some other kind of social climbing or image-crafting. I&#x27;d never hold that against somebody if that&#x27;s what they&#x27;re wired towards. I say, own it and use it as an asset in your life.<p>An addiction is a habit, and a habit is merely a stable pattern of human behavior. All of our non-novel behaviors are habits. The things our mind gravitates towards are habits. I think it crosses into a &quot;negative addiction&quot; once a habit starts to become detrimental to something we want out of our lives. That&#x27;s when we should be concerned.<p>We are flawed, suboptimal creatures. It&#x27;s ok to give in to our addictions, good and bad. We should incorporate addictions as constraints in our models of living, working, interacting. We&#x27;ll always have them. In my experience, trying to get rid of certain habits hurt me because they served a function for my well-being or personality.<p>You need to understand your own addictions. If a sense of belonging is important to your personality, you will feel a void in your life without it. There&#x27;s a reason why poor men in third-world countries might spend money on cockfighting culture with other men rather than on food.<p>That being said, I don&#x27;t think the negatives of social media are all that bad. I would argue that grade school is way more toxic than social media. Social media usually makes a convenient target for those in the &quot;good old days before technology,&quot; &quot;humans need authentic human connection&quot; camp.<p>I am willing to extend a platform+API metaphor and grant that social media (the platform) allows us to build an external layer of ourselves (the API) that becomes the means by which others interact with us while we hide our internals, but you&#x27;ll have to convince me that that&#x27;s mostly a bad thing. I don&#x27;t think appealing to an &quot;authentic human connection&quot; value works. Sounds like a win for technology if you ask me!<p>Here&#x27;s the thing. Social media is still a choice. Contrast this to the toxic and unnatural environment that is high school, which is forced upon us.<p>Social media feeds our insecurities? Well, high school <i>created</i> them in the first place.<p>Technology almost always affords you the freedom to engage or disengage. Take online dating for example.
In the &quot;good old days,&quot; men would have to impress and court a whole family just to date one person -- you&#x27;re essentially dating a whole family. Now you can just find someone online within minutes, and start connecting with the one person you actually want to. Is some dude harassing you online? You&#x27;re free to block him with the click of a button or just turn off the app. Sure there are tradeoffs, but the key here is choice.<p>Think freedom to respond to a text whenever you want vs being &quot;on-call&quot; all the time.<p>I hate to be the bearer of bad news, but if you are hanging out with people who are constantly on their phones, then the truth is, they are making a personal choice to disengage from that situation, and they value whatever&#x27;s going on in the &quot;real internet world&quot; more than the &quot;tiny bubble&quot; that is the room you guys are sitting in. Maybe the latest meme or what&#x27;s happening in North Korea is more interesting. The actual problem is a mismatch between you and your company. Technology simply affords them choices in dealing with social expectations they might not actually want.<p>Facebook is not the end game, and technology can do better. But I think pointing fingers at social media for ruining us all is a bit silly.
I’ve Had a Cyberstalker Since I Was 12 (2016)
I have personal experience defending against a stalker. Questions were asked down-thread about the effectiveness of restraining orders, so I&#x27;m posting in response. I got away, and I find the slight chance that this helps someone else compelling. Also, I can&#x27;t really talk about this in real life, so I wish other people could know that it can happen, and how it works.<p>TLDR answer is: To defend against a stalker you will likely need legal advice and representation.<p>This is distasteful and expensive, and judgment is required to decide when to take that step. I am not a litigious person, and generally prefer to avoid conflict, or negotiate reasonable solutions. A stalker will take advantage of this. Stalkers (at least mine) operate by asserting control gradually, and retaliating against your attempts at self-defense in a tit-for-tat fashion.<p>I think it likely that if you have even considered seeking legal redress, it&#x27;s probably already time to hire an attorney. The cost and risk of civil cases or lawsuits is generally much smaller than the cost&#x2F;risk of potentially getting involved in a criminal case later, so if you can solve the problem in civil court, it is highly desirable to do so.<p>The police may arrest and charge one (or both) of you if they respond to an in-progress assault. However, they will not want to evaluate contradictory factual claims made by you and the stalker about things that happened while they weren&#x27;t there. When you ask for help, they will probably encourage you to obtain a restraining order, which makes the problem someone else&#x27;s job for now, and sets at least a low bar for complainants before the police have to get involved.<p>Your legal position in the future will be constrained by early decisions and statements that you make. The article describes this question from a police officer: “&#x27;Were you ever afraid for your life?&#x27; he asked, still apparently on my side.&quot;<p>In my state, fear for one&#x27;s life or safety was a legal requirement for obtaining an ex parte domestic violence civil restraining order. Think very carefully in advance about how to answer questions like this, and don&#x27;t ever lie to anyone or change your story. Lawyers are required to tell a judge if you do, and you will also need to protect your reputation with people you know against claims made by the stalker. Your only advantage over the stalker is truthfulness and consistency.<p>A civil restraining (or &quot;protective&quot;) order is issued by a civil court which orders one party to stay away from another, possibly along with other provisions. Ex parte means &quot;without the other party&#x27;s presence.&quot; Some states (including mine at the time) allow such an order to be issued without an adversarial hearing. You obtain this by going to the court clerk&#x27;s office and submitting the paperwork they give you. Soon (because this is presumed to be an emergency), you are given a short hearing in which you must explain why you are afraid for your life, and what&#x27;s happened so far. If the judge grants the order, the respondent will be served a paper copy of the order by an officer, who will explain to the respondent that it&#x27;s a crime to approach or bother you while the order is in force. These orders are short (mine was 21 days) because the respondent is not allowed an advance adversarial hearing.<p>After being served, my stalker retaliated by obtaining an ex parte restraining order against me.
I have come to understand that this is not uncommon in states where reciprocal orders are allowed. Eventually, after some stalling, I was granted a hearing and that order was dismissed at my request. Ultimately, after many further hearings over the course of about a year, I was granted a long-term civil restraining order. During the litigation I dropped my college courses, resigned from my internship, and finally transferred to a university in another town. I did this partly because it was advised by my attorney, but mainly because I wanted to move on with my life.<p>It was also my experience that other people tended to trivialize the problem. It was hard for some people to understand that simply ignoring the stalker would not make it possible for me to attend work, or class, or use public spaces. My stalker would wait for me outside of my school and workplace, hold the doors shut, and threaten to report an assault if I tried to get in. They would also follow me in public or into businesses and create disruptions by yelling, or by making false reports to police, security, or other authority figures. The goal seemed to be to deny access to a space and&#x2F;or provoke a physical altercation so I could be charged with assault.<p>I got out of this for a few thousand in attorney fees (plus a year of my life), and was never injured or charged with a crime. If I had not hired an attorney, I think there is a chance I would have been injured, killed, or imprisoned. There were some peculiar features of my case that probably make it exceptional, so I&#x27;m not sure how well my advice generalizes, but this is it: Don&#x27;t engage with the legal system without a lawyer, and don&#x27;t wait to start defending yourself.
What are potential disadvantages of functional programming?
I am currently studying the excellent <a href="https:&#x2F;&#x2F;www.amazon.com&#x2F;Concepts-Techniques-Models-Computer-Programming&#x2F;dp&#x2F;0262220695" rel="nofollow">https:&#x2F;&#x2F;www.amazon.com&#x2F;Concepts-Techniques-Models-Computer-P...</a>, and I believe that this book answers the questions in the OP very clearly, and although you may find it too theoretical, it does in fact provide loads of practical advice, and is very readable; not for the faint of heart though ;)<p>Anyway, just to practice what I&#x27;ve learned so far, I will try to answer some of your questions off the top of my head; apologies in advance for my verbosity:<p>First of all, let&#x27;s define functional (in fact, to be strict, declarative; more on this below):<p>An operation (i.e. a code fragment with a clearly defined input and output) is functional if for a given input it always gives the same output, regardless of all other execution state. It behaves just like a mathematical function, hence the name.<p>This gives a declarative operation the following properties:<p>1) Independence: nothing going on in the rest of the world will ever affect it.<p>2) Statelessness (same as immutability): there is no observable internal state; the output is the same every single time it is invoked with the same input.<p>3) Determinism: the output depends exclusively on the input and is always the same for a given input.<p>So what is the difference between functional and declarative? Functional is just a subset of declarative: declarative minus dataflow variables.<p>These properties give a functional program the following key benefits:<p>1) It is easier to design, implement, and test. This is because of the above properties. For instance, because the output will never vary between different invocations, each input only needs to be tested once.<p>2) Easier to reason about (to prove correct).
Algebraic reasoning (applying referential transparency, for instance: if f(a)=a^2 then all occurrences of f(a) can be replaced with a^2) and logical reasoning can be applied.<p>To further explore the practical implications of all this: given that all functional programs consist of a hierarchy of components (clearly defined program fragments connected exclusively to other components through their inputs and outputs), to understand a functional program it suffices to understand each of its components in isolation.<p>Basically, despite other programming models having more mindshare (but, as far as I can tell, not really being better known, and this includes me ;), because of the above properties functional programming is fundamentally simpler than more expressive models, like OO and other models with explicit state.<p>Another very important point is that it is perfectly acceptable and feasible to write functional programs in non-strictly-functional languages like Java or C++ (although not in C; I won&#x27;t explain why, it&#x27;s complicated, but basically the core reason has to do with how memory management is done in C).<p>This is because functional programming is not restricted to functional languages (where the program will be functional by definition no matter how much you mess up).<p>A program is functional if it is observably functional, if it behaves in the way specified above.<p>This can be achieved in, say, Java, with some discipline and if you know what you are doing; the Interpreter and Visitor design patterns are exactly for this, and one of the key operations to implement higher-order programming, procedural abstraction, can easily be done using objects (see the excellent MIT OCW course <a href="https:&#x2F;&#x2F;ocw.mit.edu&#x2F;courses&#x2F;electrical-engineering-and-computer-science&#x2F;6-005-elements-of-software-construction-fall-2008&#x2F;index.htm" rel="nofollow">https:&#x2F;&#x2F;ocw.mit.edu&#x2F;courses&#x2F;electrical-engineering-and-compu...</a> for more on this).<p>Because of its limitations, it is often impossible to write a purely functional program. This is because the real world is stateful and concurrent. For instance, it is impossible to write a purely functional client-server application. How about IO or a GUI? Nope. I don&#x27;t know Haskell yet; it seems they somehow pull it off with monads, but this approach, although impressive, is certainly not natural.<p>Garbage collection is a good thing. Its main benefit to functional languages is that it totally avoids dangling references by design. This is key to making determinism possible.
Of course, automatically managing inactive memory to avoid most leaks is nice too (but not all leaks, like, say, references to unused variables inside a data structure, or any external resources).<p>However, functional programs can indeed result in higher memory consumption (bytes allocated per second, as opposed to memory usage, which is the minimum amount of memory for the program to run), which can be an issue in simulators, in which case a good garbage collector is required.<p>Certain specialised domains, like hard real time where lives are at stake, require specialised hardware and software anyway, never mind whether the language is functional or not.<p>So, for me, for the reasons above, the take-home lesson so far is:<p>Program in the functional style wherever possible; it is in fact easier to get right due to its higher simplicity. And restrict and encapsulate statefulness (and concurrency) in an abstraction wherever possible (this common technique is called impedance matching).<p>Each programming problem, or component, etc., involves some degree of design first, or modelling, or a description, whichever word you prefer; it is all the same. There are some decisions you must make before coding, TDD or no TDD.<p>What paradigm you choose should depend first on the nature of the problem, not on the language. Certain problems are more easily (same as naturally) described in a functional way, as recursive functions on data structures. That part of the program should be implemented in a functional way if your language of choice allows that.<p>Other programs are more easily modelled as an object graph, or as a state diagram (awesome for IO among other things), and this is the way they should be designed and implemented if possible. But even in this case, some components can be designed in a functional way, and they should be wherever possible.<p>There is no one superior way, no silver bullet; it all depends on the context. It is better to know multiple programming paradigms without preferring one over the other, and apply the right one to the right problem.
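[Editorial note: to make the referential-transparency and encapsulated-state points above concrete, here is a minimal sketch in Python, chosen only for brevity; the same discipline works in Java or C++ as the comment describes. The function names are invented for illustration.]

    # A pure (functional) operation: the output depends only on the input.
    def square(a):
        return a * a

    # Referential transparency: square(3) can be replaced by 9 anywhere
    # without changing the program, so each input needs testing only once.
    assert square(3) == 9

    # Encapsulating state behind a functional interface: callers see a
    # pure function from (state, event) to a new state; nothing mutates.
    def step(state, event):
        return {**state, "count": state["count"] + event}

    s0 = {"count": 0}
    s1 = step(s0, 5)
    print(s0["count"], s1["count"])  # 0 5: the old state is untouched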
Cargo cult data science
This is all very old stuff.<p>One earlier version was for AI expert systems.<p>Then there was object request broker architecture.<p>Such considerations were ubiquitous for the biggie operations research (OR) with optimization, simulation, etc. OR was so big that it was required in B-school programs.<p>Similarly for management science.<p>The lessons for how to make applications, as in this OP, were all there in the past. Indeed, operations research (OR) and management science (MS) merged to become OR&#x2F;MS, with a journal, <i>Interfaces</i>, that talked a lot about the points in the OP.<p>I went through a lot of that history and discovered lessons much like those in the OP.<p>&gt; Fundamentally, to be a data driven company, data needs to be part of the internal dialogue spoken by all members.<p>Okay, let&#x27;s stop right there! Who the heck, why, where, when did anyone ever say, argue, justify that any company should be &quot;a data driven company&quot;? Maybe a &quot;market driven company&quot;, but data driven?<p>Really, on what kind of company one should have, there is very wide agreement, from a home-based business to Wall Street, and that is a money-making company!<p>What turns on the CEO and the BoD is making money!<p>But not nearly all projects (data science, ..., Taylor&#x27;s time and motion studies) are directly connected with making money. E.g., when I wrote software to schedule the fleet at FedEx, the main goal was just a schedule, printed out, on paper, with departure times, flight times, arrival times, etc., that would pass expert review as &quot;flyable&quot;. Actually, saving money, i.e., <i>optimization</i>, was of much less interest.<p>&gt; So, to avoid a cargo cult of data, organizations should stop chasing technology and start working with experienced technologists who can apply technology to solve organizational problems.<p>Yup.<p>&gt; Executives, to understand how their project relates to company goals, and how success would be reported.<p>Really, reasonably well experienced problem-sponsor executives will ask &quot;Why should I do that?&quot; and need a good answer or won&#x27;t do it. Sure, one reason to do the project may be just to be playing with the latest buzz words, but most organizations have highly sensitive BS detectors that will be triggered by buzz words.<p>&gt; With their bosses demanding analytical results, managers will demand analytical results from their peers, and so on, down throughout the subgroup.<p>Why would bosses be &quot;demanding analytical results&quot;? How many bosses understand good analytical results versus a lot of BS, have an accurate view of the potential of analytical results, could explain why it might be good for results to be analytical, know how to do projects that yield solid analytical results, or see how analytical results could help their careers or the goals of the company? Answer: Only a small fraction. E.g., only recently has Wall Street taken analytical results seriously for trading instead of intuitive, judgment <i>stock picking</i>.<p>&gt; My reasoning was simple: anyone with data science on their side would be able to prove that their efforts worked better than their peers.<p>Then? What if the peers feel threatened and mount a gossip and sabotage campaign against the data scientist and their work?
The management chain can also feel threatened.<p>&gt; Basically, I had assumed a data-driven culture exists, when in reality businesses are struggling to create that culture in the first place.<p>They are not even &quot;struggling to create that culture&quot;. It takes a fertile, gullible imagination to believe that many organizations want &quot;a data-driven culture&quot; at all.<p>&gt; Data science is best viewed as a form of company culture, rather than a set of technologies.<p>No. Data science is best viewed as a technique, a box of tools, that sometimes can, likely with work with other tools and techniques, yield some valuable results.<p>&gt; I argue that it’s best to spread a data-driven culture from the top of an organization down, by requiring that reports be analytical.<p>Neither the spreading nor the requiring will work. Only a tiny fraction of the people in the organizations have significant ability with data science, and they will NOT make any such spreading or requiring of something they don&#x27;t understand possible in the organization.<p>&gt; Solutions that help measure and improve the performance of a part of the company (“we’ll help you measure marketing ROI”, or “we will introduce predictive maintenance), will spread and become enduring organizational strengths.<p>Not really. For &quot;enduring organizational strengths&quot; look to, say, high-quality reasoning, writing, and presentations, powerful innovation, high determination, careful attention to the markets and the customers.<p>For &quot;Solutions that help measure and improve the performance of a part of the company&quot;, that will be down somewhere near a good company Web site, good telephone courtesy, keeping lunch breaks under an hour, stopping pilfering, having good computer network management, having good computer security.<p>Sometimes <i>data science</i>, or just call it applied mathematics, and the rest of math, can mean super big bucks for a company:<p>Supposedly a big example is the trading software of James Simons&#x27;s Renaissance Technologies.<p>IIRC once the CEO of American Airlines said that their subsidiary Sabre, for reservations and scheduling, was so important he&#x27;d sell off all the planes and just keep Sabre.<p>Likely the old linear programming application of the diet problem is still used effectively (i.e., to save big bucks) in feed mixing for livestock, cat food, dog food, etc.<p>Linear and non-linear programming are likely still pillars of, and worth big bucks for, operating an oil refinery.<p>There may be some big bucks from applying math to ad targeting on Web sites.<p>For large projects, there is the old application of the &quot;program (or project) evaluation and review technique, commonly abbreviated PERT, .... PERT was developed primarily to simplify the planning and scheduling of large and complex projects. It was developed for the U.S. Navy ...&quot; Closely related is the &quot;critical path method (CPM)&quot;.<p><a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Program_evaluation_and_review_technique" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Program_evaluation_and_review_...</a>
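[Editorial note: since PERT/CPM comes up above, here is a toy sketch of the core computation, the longest (critical) path through a task-dependency DAG. The task names and durations are invented for illustration, and this uses plain recursion rather than any LP formulation.]

    # Toy critical-path computation; tasks and durations are invented.
    from functools import lru_cache

    duration = {"design": 3, "build": 5, "test": 2, "docs": 1}
    prereqs = {"design": [], "build": ["design"],
               "test": ["build"], "docs": ["design"]}

    @lru_cache(maxsize=None)
    def earliest_finish(task):
        # A task can finish only after all of its prerequisites finish.
        return duration[task] + max(
            (earliest_finish(p) for p in prereqs[task]), default=0)

    # Project length = the longest (critical) chain:
    # design -> build -> test = 3 + 5 + 2 = 10.
    print(max(earliest_finish(t) for t in duration))  # 10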
Inside Patreon, the economic engine of internet culture
I&#x27;m <a href="https:&#x2F;&#x2F;www.patreon.com&#x2F;airwindows" rel="nofollow">https:&#x2F;&#x2F;www.patreon.com&#x2F;airwindows</a> and I&#x27;m writing audio DSP plugins in AU and VST form, for a living. Here are my observations over the past year of relative success on Patreon.<p>I&#x27;m in the top 3.2% of all Patreon, sitewide. That amounts to only a little over $700 a month (I&#x27;m using it to replace a for-pay business model that wildly oscillated from $400 to $3000 a month). It&#x27;s growing.<p>I&#x27;m having to put out two or three times the work, but I&#x27;m happier with a &#x27;free&#x2F;patronage&#x27; model because what was happening to me under the for-pay model was, I got locked into a &#x27;hype cycle&#x27; versus other developers and companies. The sense I had was, my industry sector is dying. The way we treat customers is worsening, and it&#x27;s a race to DRM-based, extremely invasive monthly software rental and a degree of dishonesty that didn&#x27;t sit well with me. I feel that I bailed &#x27;in time&#x27; to turn my ten years of reputation and experience into just-barely a subsistence using Patreon, and that if I hadn&#x27;t done so, I would have been run out of business by competitors using every sort of deceptive and customer-abusing practice, and the epitaph would&#x27;ve been &#x27;A shame, he was one of the good ones. Tough business&#x27;.<p>As such I feel I have a real-world view of what Patreon actually is. It&#x27;s a form of payment processor that lets you bill for, basically, &#x27;goodwill&#x27;: the strong point is that it renders your income more predictable, at the cost of not being able to exploit individual creations which might be more valuable.<p>Never, NEVER get sucked into the &#x27;just 0.1% of all living humans donating one cent a month will make you rich!&#x27; argument. If you have a hundred thousand known fans, MAYBE you can get a hundredth of them to give to you. You&#x27;ve got no control over what &#x27;the crowd&#x27; will do. I don&#x27;t know how many times I&#x27;ve revealed on HN that I&#x27;m creating mass quantities of code with an open-source (planned MIT license) future, on Patreon, and of the 347 patrons I&#x27;ve got, ALL of them are from my existing connections who already use my software. I&#x27;m looking to do an experiment with Facebook ads where I literally link to my entire library as a free zip to download and say &#x27;I&#x27;m paying Facebook to tell you that I made this for you&#x27;. Haven&#x27;t done it yet; don&#x27;t have high hopes for it.<p>ALL your traction on Patreon comes organically from what you&#x27;re already doing. In no way does it find you patrons: it&#x27;s your shopping cart software. That does have one unusual consequence: since they aggregate patronage together and bill people in a lump sum, I&#x27;ve never seen anything more effective at enabling content that is routinely censored by credit card companies. Anyone who knows anyone who&#x27;s tried to run an internet content business with NSFW material as part of the mix (I know a bunch of cartoonists) knows the dangers of getting banned by Visa and Mastercard (IIRC, Visa in particular won&#x27;t touch you if you&#x27;re dirty-minded). Patreon is a layer of abstraction that has enabled a startling opening up of opportunity for censored content, and that&#x27;s shown in the NSFW side of Patreon.
It&#x27;s still not a &#x27;free ticket to money&#x27;, as you still have to generate your own attention, but obviously if you&#x27;re good at NSFW content and distributing it free then the internet will beat a path to your door, and Patreon is accepted (in fact, the paywall model seems popular among NSFW creators, with few objections to the idea. Premium content may not last long before being &#x27;liberated&#x27;, but I rarely see objection to the basic concept of a paywall around the freshest source of the creator&#x27;s output).<p>I&#x27;ve been keeping records of what constitutes the top 1% of all Patreon, because I was tracking where I stood (I started out at top 10% almost immediately because I had ten years of existing relationships with customers). About a year ago, the 1% mark sat at around $2350 a month, with total creators between 41,000 and 45,000. It&#x27;s been dropping, and as Patreon approaches 78,000 creators the 1% mark is dropping below $1890. This is while key Patreon accounts are hitting new records for monthly income. It&#x27;s definitely the internet power-law thing in action: the number of participants doubles, but most people are doing worse; the distribution is NOT staying the same, it&#x27;s getting more skewed towards the outliers. I&#x27;m guessing this is partly caused by a flood of people who think it&#x27;s an internet lottery ticket and not a way to bill masses of existing customers…<p>Summary: Patreon is probably even less prone to &#x27;discovery of worthwhile projects&#x27; than Kickstarter, because the mode of engagement is different: rather than seeking out &#x27;discoveries&#x27;, it&#x27;s a method of inserting benevolent digital leeches onto people&#x27;s credit cards, very much like DRM-based rental schemes but less coercive. Because it can be used in a &#x27;strictly voluntary&#x27; way, the revenue you&#x27;ll get seems to be a quarter to a tenth of what you&#x27;d get on a &#x27;direct sales&#x27; model, but the consistency of a massed small-donation model combined with billing people&#x27;s credit cards gives you a steadiness of income that is a LOT easier to live with than boom-and-bust product development (which I did for a decade, pre-Patreon).<p>If you can budget for growth month-over-month that&#x27;s a little better than, say, the growth of index funds, and you&#x27;ve got created product with a decent number of people already aware of what you do, it&#x27;s great. I have no regrets about going Patreon. I passed up an opportunity to do my whole &#x27;for-pay&#x27; model over again to a market at least twice the size of my original (my whole decade of for-pay work was Mac only, and I relaunched targeting PC VST), but I&#x27;m glad I did. It let me double down on my positioning as a product maker, and completely avoid spending any time on being an internet cop. I just give everything away now, and the patronage gradually gets closer to minimum wage ;)<p>For now, I am your audio DSP waitress, on roller-skates. I always figured that was what ten years of creative work was worth ;)
GoboLinux: A distro that redefines the entire filesystem hierarchy
Many readers of HN won&#x27;t need this explanation of the Unix&#x2F;Linux&#x2F;MacOS filesystem hierarchy, but for those that find it confusing here is a very brief summary. See Wikipedia for a more in-depth discussion [1].<p>When Unix was invented, the concept of hierarchical filesystems wasn&#x27;t new (notably, Multics had a hierarchical filesystem), but there were conflicting visions. At the time, most other operating systems divided their filesystems among a number of top-level containers containing files, but with no nested containers. IBM&#x27;s contemporary time-sharing system, VM&#x2F;370, supplied users with a set of top-level &quot;virtual&quot; drives, each containing any number of files. There were no recursively nested directories.<p>The designers of Unix wanted a simple operating system suitable for software development, so they tended to implement an idea and then use it as much as practical in the OS design. Rather than have top-level containers like drives that contain folders that contain files, Unix just had directories and files (both implemented with the same on-device structures called inodes). This combination of simple ideas, fully generalized, can be seen throughout Unix: multiple pathnames can point (link) to the same inode, providing a form of aliasing for filenames (a short demonstration follows at the end of this comment); the API for files is generalized to include access to (possibly raw) devices, so devices show up in the filesystem hierarchy; etc.<p>[&#x2F;]<p>The top-level directory has the name &#x2F;; unlike on other systems, all filesystem pathnames start at this single point.<p>[&#x2F;etc]<p>System configuration is found here. Most of these files are simple text files that may be edited by system administrators. In the past administrators might directly edit &#x2F;etc&#x2F;passwd to remove a user. Now, things are more complex, but there is still a backwards-compatible &#x2F;etc&#x2F;passwd file (containing encrypted passwords, etc.).<p>[&#x2F;bin and &#x2F;sbin]<p>Unix, from the beginning, was, like its inspiration Multics, a multi-user system. Reconfiguring the system, for example to add a printer or drive, normally required running in single-user mode at a privileged level. For this reason the programs needed by administrators running single-user were segregated and placed in &#x2F;bin. The rest of the system could be left offline, useful when working on the rest of the system.<p>As Unix grew, more and more utilities were added to &#x2F;bin until it made sense to segregate them into those utilities needed when even a normal user might find themselves in single-user mode vs. a system administrator doing something dangerous. The super-user type programs now go in &#x2F;sbin while the essential, but far less dangerous, utilities go in &#x2F;bin. Ordinary file copy is &#x2F;bin&#x2F;cp while the reboot command is &#x2F;sbin&#x2F;reboot. Some of the divisions look arbitrary to me, like &#x2F;sbin&#x2F;ping instead of &#x2F;bin&#x2F;ping, but there is probably logic behind them.<p>[&#x2F;usr and &#x2F;var]<p>Once booted up normally in multi-user mode, &#x2F;usr contains system read-only content (for example, programs and libraries) and &#x2F;var contains system content that is variable (like log files).<p>The &#x2F;usr directory is large and subsequently divided into a number of second-level directories.<p>[&#x2F;usr&#x2F;bin]<p>This is where the rest of the Unix &quot;binaries&quot; (i.e. programs) reside.
So while ls (the list directory command) is &#x2F;bin&#x2F;ls, the C compiler resides in &#x2F;usr&#x2F;bin and is &#x2F;usr&#x2F;bin&#x2F;gcc.<p>[&#x2F;usr&#x2F;sbin]<p>Like the division between &#x2F;bin and &#x2F;sbin, more administrative commands not necessary for single-user mode (see &#x2F;sbin) are placed in &#x2F;usr&#x2F;sbin. For example, the command &#x2F;usr&#x2F;sbin&#x2F;setquota, used to set disk quotas on users, is in &#x2F;usr&#x2F;sbin and not in &#x2F;usr&#x2F;bin.<p>[&#x2F;usr&#x2F;lib]<p>This is where Unix-supplied software development libraries go.<p>[&#x2F;usr&#x2F;include]<p>This is where Unix-supplied (predominantly C and C++) include files are placed.<p>[&#x2F;usr&#x2F;share]<p>System documentation, the man pages, and spelling dictionaries are examples of the files found in &#x2F;usr&#x2F;share. They are read-only files used by the system under normal (multi-user) operation.<p>By now &#x2F;usr&#x2F;share is full of further subdivisions, since many programs and systems need a place to put their read-only information. For example there is a &#x2F;usr&#x2F;share&#x2F;cups for the Common Unix Printing System to store information about printers, etc.<p>[&#x2F;usr&#x2F;local]<p>System administrators may add additional local data and programs to their system. It makes sense that these would be segregated from the rest of the files under &#x2F;usr that come with Unix. The contents of &#x2F;usr&#x2F;local are local to the machine and are further subdivided into programs in &#x2F;usr&#x2F;local&#x2F;bin and read-only data for these programs in &#x2F;usr&#x2F;local&#x2F;share.<p>[&#x2F;usr&#x2F;X11...]<p>Once, one of the biggest subsystems in Unix was its support for the graphical user interface known as the X window system. There were development libraries and include files, commands, man pages, and other executables. Rather than swamping the neat file hierarchy with these new files, they were placed under a single subdirectory of &#x2F;usr. You will probably not need to look at this very often.<p>[&#x2F;home]<p>Users&#x27; home directories are placed here; mine is &#x2F;home&#x2F;todd. Often &#x2F;home is on a different physical partition so that it can be unmounted and a new version of the operating system can be installed without touching &#x2F;home.<p>[&#x2F;tmp]<p>Temporary files, for example files that don&#x27;t need to be backed up, are placed here.<p>[&#x2F;mnt]<p>I&#x27;ve mentioned mounting and unmounting. Unix systems support a number of different filesystem formats, and devices can contain multiple filesystems (for example on distinct partitions). These filesystems become accessible once they are mounted into the hierarchy. Users need to temporarily (and now automatically) mount USB drives somewhere so that their files can be accessed via a pathname. In the past, users would just pick a directory to mount over. Now, it&#x27;s traditional for temporary mounts to be placed in &#x2F;mnt.<p>[&#x2F;dev]<p>In Unix, low-level access to devices is done through device drivers that support a number of file interfaces. Because they appear as special files, they can be found under this directory. This is a convenience for programs because it makes naming and access to devices very similar to naming for ordinary files.<p>[Finding out more information about Unix and its hierarchy]<p>The Unix manual pages are a great resource. Originally, the manual was a physical book (I&#x27;ve still got a couple) divided into sections. Section 1 covered the commands one might type on the command line.
Now it is much easier to simply use the man command to access the exact same content. Use a terminal and enter &quot;man ls&quot;, for example, to get all the information on the ls command to list directory contents. (You will be surprised at the number of options.)<p>When you aren&#x27;t sure of the command&#x27;s name, try the command &quot;apropos&quot; followed by a word you&#x27;d like to find the man page for. Passwords are an important subject, so &quot;apropos password&quot; on my system lists 47 different manual pages for information on passwords. I see one listed as &quot;passwd(1)&quot;. The (1) means it&#x27;s in section one, so it&#x27;s about a command named &quot;passwd&quot;, the command for changing your password.<p>Here are the sections:<p>(1) Commands<p>(2) System calls<p>(3) Library functions<p>(4) Special files like device files<p>(5) File formats<p>(6) Recreations like games<p>(7) Misc info<p>(8) System admin and daemons<p>If I were looking for the format of the password file, I would give the section number 5 as the first argument to the man command: &quot;man 5 passwd&quot;. This would show me the documentation for the password file, &#x2F;etc&#x2F;passwd.<p>Filesystem hierarchy manual page: &quot;man 7 hier&quot; or, since it is in only one section: &quot;man hier&quot;. To find the name, &quot;hier&quot;, of this man page one could have used the command &quot;apropos filesystem&quot;.<p>I hope this helps; it&#x27;s probably too long.<p>[1] <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Filesystem_Hierarchy_Standard" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Filesystem_Hierarchy_Standard</a>
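If you want to poke at the &#x2F;etc&#x2F;passwd format programmatically, here is a minimal Python sketch (my own illustration, not from the manual; the seven-field layout is the one documented by &quot;man 5 passwd&quot;, and the helper names are made up):<p><pre><code># Parse &#x2F;etc&#x2F;passwd, whose format is documented by &quot;man 5 passwd&quot;.
# Each line has seven colon-separated fields.
FIELDS = [&quot;name&quot;, &quot;password&quot;, &quot;uid&quot;, &quot;gid&quot;, &quot;gecos&quot;, &quot;home&quot;, &quot;shell&quot;]

def read_passwd(path=&quot;&#x2F;etc&#x2F;passwd&quot;):
    users = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(&quot;#&quot;):
                continue
            users.append(dict(zip(FIELDS, line.split(&quot;:&quot;))))
    return users

for user in read_passwd():
    print(user[&quot;name&quot;], user[&quot;uid&quot;], user[&quot;shell&quot;])
</code></pre> On most modern systems the password field will just be &quot;x&quot;: as noted above, the real hashes live in &#x2F;etc&#x2F;shadow, readable only by root.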
About This Googler's Manifesto
The amount of emotionally charged black and white thinking that permeates this debate is mind-boggling, as is the ability of the outraged side to create a straw man out of thin air through cherry-picking and blatant misinterpretation. Truly one of the most impressive displays of mental gymnastics I have seen and I&#x27;m not sure which is more terrifying - the idea that this is done unconsciously or consciously.<p>From the original manifesto:<p>&gt; [...] I value diversity and inclusion, am not denying that sexism exists, and don’t endorse using stereotypes. [...]<p>&gt; [...] Differences in distributions of traits between men and women may in part explain why we don’t have 50% representation of women in tech and leadership. [...]<p>&gt; [...] If we can’t have an honest discussion about this, then we can never truly solve the problem. [...]<p>&gt; [...] Of course, men and women experience bias, tech, and the workplace differently and we should be cognizant of this, but it’s far from the whole story. [...]<p>&gt; [...] Note, I’m not saying that all men differ from women in the following ways or that these differences are “just.” I’m simply stating that the distribution of preferences and abilities of men and women differ in part due to biological causes and that these differences may explain why we don’t see equal representation of women in tech and leadership. Many of these differences are small and there’s significant overlap between men and women, so you can’t say anything about an individual given these population level distributions. [...]<p>&gt; [...] I hope it’s clear that I’m not saying that diversity is bad, that Google or society is 100% fair, that we shouldn’t try to correct for existing biases, or that minorities have the same experience of those in the majority. [...]<p>&gt; [...] I’m also not saying that we should restrict people to certain gender roles; I’m advocating for quite the opposite: treat people as individuals, not as just another member of their group (tribalism). [...]<p>Gizmodo&#x27;s summary:<p>&gt; In the memo, which is the personal opinion of a male Google employee and is titled “Google’s Ideological Echo Chamber,” the author argues that women are underrepresented in tech not because they face bias and discrimination in the workplace, but because of inherent psychological differences between men and women.<p>From Zunger&#x27;s post:<p>&gt; You have probably heard about the manifesto a Googler (not someone senior) published internally about, essentially, how women and men are intrinsically different and we should stop trying to make it possible for women to be engineers, it’s just not worth it.<p>How any person of average intelligence possessing a modest grasp of the English language can draw these conclusions is beyond me.<p>Granted there is an argument to be made for the manifesto&#x27;s author not doing the best possible job of expressing himself in such a way as to increase the likelihood of sparking a civil debate on what is clearly a very sensitive issue. 
I am also not the biggest fan of his preoccupation with the one-dimensional and, in my opinion, overly simplistic left&#x2F;right political spectrum.<p>I will not argue for or against the specific conclusions he draws in regard to the role of genetics in the different distribution of traits among men and women, as I have neither the time nor the interest to look into the research (nor the opportunity, really, as a number of links have apparently been omitted from the publication which, one might assume, led to sources the author was basing his arguments on).<p>What I do find fascinating, however, and the reason I&#x27;m writing this, is that I see someone basically going &quot;guys, have we actually stopped to consider that maybe <i>that</i> could be one of the reasons why we&#x27;re observing <i>this</i> and if not why not - here&#x27;s what I think&quot; while doing everything humanly possible to emphasize that they are in no way denying that there is a problem and in no way suggesting that <i>that</i> is the only reason. And instead of getting &quot;yes we did and here&#x27;s what we concluded and why&quot; or &quot;no we didn&#x27;t, let&#x27;s talk about it&quot;, they&#x27;re viciously shamed and attacked for entertaining the very thought.<p>A comment here referred to this as thoughtcrime punishment and I couldn&#x27;t agree more. This is plain and simple dogma at work, and the way I see it, it has no place among intelligent people engaged in science and&#x2F;or engineering. And yet here we are.<p>We have no problem accepting that differing trait distributions between different genetic groups can be a significant factor in a disproportionate representation of some groups in certain fields - soldiers in combat roles, construction workers, athletes, etc. And yet when it comes to the brain, suggesting a similar difference is suddenly taboo.<p>Even before doing any research, the idea that the brain is somehow exempt from all of this seems highly questionable and would merit the most rigorous examination to confirm or reject.<p>Here&#x27;s a thought though - it doesn&#x27;t matter to this debate. We could talk about if&#x2F;why men are on average better&#x2F;worse than women at whatever and throw around studies until we are blue in the face, but in the end when someone wants to do a job, the only thing that should matter is - can that person deliver. We need to be focused on making sure that this is indeed the only thing that matters and let natural tendencies and capabilities produce whatever representation of genetic groups they produce; if it&#x27;s similar to the general population that&#x27;s fine, and if it&#x27;s not, that&#x27;s fine too.<p>Let me emphasize again that <i>we&#x27;re already doing this</i> in many fields. Everyone is not born equal. We know that&#x27;s true, we know enough about genetics to recognize that differing distributions of traits among genetic groups are a thing, and yet we&#x27;re so terrified of being seen as racist or sexist that we&#x27;ll keep a few precious blind spots no matter what and defend them to the death whenever someone dares suggest that we might want to shine a light on them.<p>Another thought - there are children growing up right now that don&#x27;t understand race. I guess some might even be lucky enough that they don&#x27;t understand gender. They see different people with different skin, hair, features, body shapes, genitals, skills, manners, likes and dislikes. 
They&#x27;d do perfectly fine going through life with the simple understanding that &#x27;yes, people are different&#x27;, but then we get to them and explain how, you see, this group of people is oppressing that group of people, these people are like this and those are like that, and instead of seeing individuals we teach them to see the emotionally charged, baggage-laden labels that we insist on slapping on everyone - black people and white people, men and women, gay and straight, African American, Hispanic American, Chinese American, Native American and so on.<p>Significant, lasting cultural change doesn&#x27;t happen overnight; it takes decades, and it takes children looking at the world with fresh eyes and adults capable of recognising their biases and making sure to die without passing them on, in order to make room for someone better.<p>Of course, discrimination is a problem that needs some solution now rather than in decades. It&#x27;s likely too late for the adults among us to erase our biases. We can recognize we have them, we can minimize them and we can put measures in place to make sure they can&#x27;t do too much damage - blind paper reviews&#x2F;auditions&#x2F;tests as well as bias awareness training strike me as solutions that can only do good.<p>Among other things, we&#x27;re most definitely missing out on brilliant female engineers in CS due to sexism and an often toxic environment, which is clearly a lose-lose situation for everyone. I think it&#x27;s worth considering that we might just be missing out on brilliant male engineers as well, due to affirmative action, which is also sexism.<p>Forcefully engineering and moulding society into whatever shape someone decided it&#x27;s supposed to have through positive discrimination isn&#x27;t change. It&#x27;s the appearance of change, while fueling social conflict, hurting economic, technological and scientific progress and drawing the lines that divide us, thus reinforcing the very foundation of racism, sexism, religious intolerance and any of the countless other stupid reasons we come up with to fight each other - seeing people as members of a group rather than individuals.<p>I&#x27;d like to suggest we take a step back and reevaluate whether we&#x27;re more interested in pragmatic solutions that genuinely lead to a stronger, happier and more harmonious society or in playing make-believe and indulging in some justice fantasy with a very questionable basis in reality.
Ask HN: How did you find your great side project idea?
I was a moody teenager (maybe 16 years old) who had just been forced to move from her home in the US to Australia. We moved in summer and had nothing to do for a few months except explore Fremantle, and none of our luggage had been transferred yet - we were staying for the summer in a company-provided apartment and my parents decided there was no point in getting everything shipped to this temp apartment. I was mad at my parents for making me move, bored, and missed the horse farm I worked at in Alabama. Usually I&#x27;d always be tinkering on the PC making weird web projects and playing games, but now I had no computer. Basically I had a lot of time to think about random crap. Thinking back on it now, it was my favorite time to be in Australia. Fremantle is a gorgeous, quirky city and I had all the time in the world to walk around the cool little hipster artsy stores and the &quot;psychic&quot; shops selling crystals and stuff. My imagination ran wild!<p>Anyway, during this time I thought a lot about all the places we&#x27;d lived and was feeling a bit nostalgic both for Alabama and my original home in Ukraine. I thought back on my favorite childhood memories, which were all at my grandparents&#x27; summerhouse in Kherson. One day when I was maybe six or seven years old, it was raining really hard and a bunch of snails were crawling around everywhere. I captured some and had them race on the pavement. I &quot;trained&quot; them to crawl in a straight line (I swear this actually happened - or at least that is how I remember it). When I was done I put them all in my orange fishing bucket with leaves, water, and berries and put them aside, figuring they&#x27;d be gone by the evening. When I came back in the evening they _were_ gone, but I spotted them all around the bucket (crawling away). The next morning, though, they were all back! This went on for a few days - the snails would leave around the evening and be back in the bucket by the next day. I thought it was really cool!<p>A few days later we planned to go fishing the next morning with my grandfather, so I knew I&#x27;d need my orange bucket back. That night before going to bed I put all the snails out into our garden patch and cleaned out the bucket to be ready by morning. But in the morning, the snails were back again. So I couldn&#x27;t go fishing. This went on for another couple of days, and each time I got more and more annoyed at the snails coming back. Even though I tried to &quot;hide&quot; the bucket from the snails by moving it around, they would always find it. One time I put the snails out into the patch again in the morning and went to get ready for fishing, thinking they wouldn&#x27;t be able to crawl back that fast, but when I got back most of them were just back again. I&#x27;m not sure why my kid-mind at the time didn&#x27;t just put the snails away again right before leaving and take the bucket, but I didn&#x27;t.<p>Finally one morning after a few days of this I was angry. I was really excited about going to fish and there were a bunch of snails in my bucket again. I grabbed the bucket and started <i>throwing</i> the snails out one by one into the patch. I was so annoyed and didn&#x27;t care about taking them out of their home anymore. The snails landed out of sight and in my mind I wasn&#x27;t hurting them, since I was throwing them where they&#x27;d land on vegetation or soft earth. 
Except I misjudged a throw and accidentally threw one of the snails right in front of me - it hit a rock or branch or something and its shell cracked in a really bad way. I could see the body spilling out of the shell, and it was still alive and moving but I knew it was dying. That&#x27;s when I realized I&#x27;d been hurting them, and now I&#x27;d killed at <i>least</i> this one. I was horrified, started crying - the thought of putting the snail out of its misery didn&#x27;t even cross my mind. I felt awful and decided the snails could have my bucket and live there for as long as they wanted, so I tried to find some of the other snails I&#x27;d thrown away but it was too late - I couldn&#x27;t find them anywhere. I ended up leaving to go fishing with the bucket.<p>As a kid I got over the incident and forgot it by probably the next day, but in Fremantle, when I thought about it again, I just felt guilty all over again. And then I remembered how cool it was that the snails would crawl in a straight line when I raced them, and how it was even cooler that they kept coming back &quot;home&quot; even though I wasn&#x27;t trapping them in the bucket! So I got the idea for a snail racing website where people could find virtual snails, take care of them, race them against each other, and breed them. My favorite games to play at the time were PHP browser games, so I envisioned it being written in PHP.<p>I had a few false starts over the years; when I first had the idea I only knew a bit of HTML and CSS and had no skills to build this thing. I didn&#x27;t seriously start working on it until later, but that is my side project - a snail racing and snail management simulation - and I have a feeling I won&#x27;t move on to anything else for a very long time.
Why does Sattolo's algorithm produce a permutation with exactly one cycle?
This is a nice post and I hadn&#x27;t heard of Sattolo&#x27;s algorithm before. The proof is a bit long though. The reference linked from Wikipedia: <a href="http:&#x2F;&#x2F;algo.inria.fr&#x2F;seminars&#x2F;summary&#x2F;Wilson2004b.pdf" rel="nofollow">http:&#x2F;&#x2F;algo.inria.fr&#x2F;seminars&#x2F;summary&#x2F;Wilson2004b.pdf</a> proves the correctness of Sattolo&#x27;s algorithm in three sentences. I found it fairly easy to understand, while I didn&#x27;t manage to read through the linked post in detail to get the same level of understanding. Let me try to explain the proof I understood, without assuming mathematical background, but instead introducing it. (I&#x27;ll use 0-based indices as in the post, instead of 1-based indices that mathematicians would use.)<p># What is a permutation?<p>There are (at least) two ways to think of (or define) a permutation:<p>1. A list (an ordering): a specific order for writing out elements. For example, a permutation of the 10 elements [0, 1, 2, 3, …, 9] means those 10 elements written out in a particular order: one particular permutation of 0123456789 is 7851209463. In a computer, we can represent it by an array:<p><pre><code>i    0 1 2 3 4 5 6 7 8 9
a[i] 7 8 5 1 2 0 9 4 6 3
</code></pre> 2. A reordering. For example, the above permutation can be viewed as &quot;sending&quot; 0 to 7, 1 to 8, and in general i to a[i] for each i.<p>Instead of describing this reordering by writing down 10 pairs 0→7, 1→8, …, 7→4, 8→6, 9→3, we can save some space by &quot;following&quot; each element until we come back to the beginning: the above becomes a bunch of &quot;cycles&quot;:<p>- 0→7→4→2→5⟲ (as 5→0 again) (note we could also write this as 4→2→5→0→7⟲ etc., only the cyclic order matters)<p>- 1→8→6→9→3⟲ (as 3→1 again)<p>You can think of cycles the way you think of circular linked lists. This particular permutation we picked happened to have two cycles.<p># What is a cyclic permutation?<p>A cyclic permutation is a permutation that has only one cycle (rather than two cycles as in the above, or even more cycles). For example, consider the permutation 8302741956:<p><pre><code>i    0 1 2 3 4 5 6 7 8 9
a[i] 8 3 0 2 7 4 1 9 5 6
</code></pre> If we follow each element as we did above, we get 0→8→5→4→7→9→6→1→3→2⟲ where all 10 elements are in a single cycle. This is a cyclic permutation.<p>Our goal is to generate a random cyclic permutation (and in fact uniformly at random from among all cyclic permutations).<p># Sattolo&#x27;s algorithm<p>Note that in a cyclic permutation of [0, ..., n-1] (in our example above, n=10), for the highest index n-1, there will be some smaller j such that a[j]=n-1 (in the example above, a[7]=9). Now if we swap the elements at positions n-1 and j (which in the example above is:<p><pre><code>Before                      After
i    0 1 2 3 4 5 6 7 8 9   i    0 1 2 3 4 5 6 7 8 9
a[i] 8 3 0 2 7 4 1 9 5 6   a[i] 8 3 0 2 7 4 1 6 5 9
</code></pre> where we swapped a[7]=9 and a[9]=6 to make a[7]=6 and a[9]=9), then in general we get a[n-1]=n-1, and a[0]…a[n-2] form a cyclic permutation of [0…n-2]. In the above example, in the &quot;after&quot; case, if we ignore i=9 and consider only positions 0 to 8, we have the cycle 0→8→5→4→7→6→1→3→2⟲. 
(This is our original cycle 0→8→5→4→7→9→6→1→3→2⟲ with 9 &quot;removed&quot;, as we&#x27;d do when deleting an item from a linked list.)<p>This holds in reverse too: if we had started with the cyclic permutation of [0, …, 8] that is in the &quot;after&quot; column above, added a[9]=9, and swapped a[9]=9 with a &quot;random&quot; element a[7]=6, we&#x27;d get the cyclic permutation of [0, … 9] that is the &quot;before&quot; column.<p>In general, you can convince yourself that there is a unique way of getting any cyclic permutation on [0, …, n-1] by starting with a cyclic permutation on [0, …, n-2], considering a[n-1]=n-1, picking a particular index j in 0 ≤ j ≤ n-2, and swapping a[n-1] and a[j].<p>This gives the following algorithm, which we&#x27;ve already proved is correct (or derived, rather):<p><pre><code>import random

def random_cycle(n):
    a = [i for i in range(n)]      # for all i from 0 to n-1 (inclusive), set a[i] = i
    for i in range(1, n):          # for each i in 1 to n-1 (inclusive),
        j = random.randrange(0, i) # pick j to be a random index in the range 0 to i-1, inclusive,
        a[i], a[j] = a[j], a[i]    # and swap a[i] and a[j]
    return a
</code></pre> In the post linked above, you swap with a random element that is &quot;ahead&quot;, instead of one that is &quot;behind&quot;; also you start with a list of length n and shuffle it according to the randomly generated cyclic permutation of [0…(n-1)] instead of simply generating the permutation. From the post:<p><pre><code>def sattolo(a):
    n = len(a)
    for i in range(n - 1):
        j = random.randrange(i+1, n) # i+1 instead of i
        a[i], a[j] = a[j], a[i]
</code></pre> This is slightly different, but the proof is similar: in fact this is the algorithm (except going downwards) that is proved correct in the linked paper. (And even if it is not obvious to you that the two algorithms are equivalent, you have an algorithm that generates a random cycle and is just as easy to code!)
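If you want to check the single-cycle claim empirically, here is a small test harness (my own sketch, not from the post or the paper) that counts a permutation&#x27;s cycles and asserts that sattolo always produces exactly one:<p><pre><code>import random

def cycle_count(a):
    # Count the cycles of the permutation a, following i -&gt; a[i].
    seen = [False] * len(a)
    cycles = 0
    for start in range(len(a)):
        if not seen[start]:
            cycles += 1
            i = start
            while not seen[i]:
                seen[i] = True
                i = a[i]
    return cycles

def sattolo(a):
    n = len(a)
    for i in range(n - 1):
        j = random.randrange(i+1, n)
        a[i], a[j] = a[j], a[i]

for _ in range(1000):
    a = list(range(10))
    sattolo(a)
    assert cycle_count(a) == 1 # every run should be a single 10-cycle
</code></pre> The same harness, pointed at an ordinary Fisher-Yates shuffle instead, will quickly find permutations with more than one cycle, which is a nice sanity check that the test itself works.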
A Review of Perl 6
<p><pre><code>「Man is amazing, but he is not a masterpiece」
</code></pre> is different from the other two; it can be thought of as short for<p><pre><code>Q 「Man is amazing, but he is not a masterpiece」
</code></pre> While the others are short for<p><pre><code>qq “Man is amazing, but he is not a masterpiece”
</code></pre> Which is short for<p><pre><code>Q :qq “Man is amazing, but he is not a masterpiece”
Q :double “Man is amazing, but he is not a masterpiece”
</code></pre> Which is also short for<p><pre><code>Q :s :a :h :f :b :c “Man is amazing, but he is not a masterpiece”
Q :scalar :array :hash :function :backslash :closure “Man is amazing, but he is not a masterpiece”
</code></pre> For more information see the [Quoting Constructs](<a href="https:&#x2F;&#x2F;docs.perl6.org&#x2F;language&#x2F;quoting" rel="nofollow">https:&#x2F;&#x2F;docs.perl6.org&#x2F;language&#x2F;quoting</a>) documentation.<p>---<p>There are modules for [debugging and tracing](<a href="https:&#x2F;&#x2F;github.com&#x2F;jnthn&#x2F;grammar-debugger" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;jnthn&#x2F;grammar-debugger</a>) Grammars.<p>A regex is just a special kind of method, by the way.<p><pre><code>say Regex.^mro;
# ((Regex) (Method) (Routine) (Block) (Code) (Any) (Mu))
</code></pre> Which is part of the reason it doesn&#x27;t have some of the niceties of parser generators built in yet. The main reason is that it got to a working state first, and other features needed more design work at that point.<p>---<p><pre><code># Perl 5
&#x2F;(Male|Female) (?:[Cc][Aa][Tt]|[Dd][Oo][Gg])&#x2F;

# is better written as
&#x2F;(Male|Female) (?i:cat|dog)&#x2F;
</code></pre> In Perl 6 you can turn sigspace mode on and off<p><pre><code>&#x2F;:s (:!s Male | Female ) [:!s:i cat | dog ]&#x2F;

# or fully spelled out
&#x2F;:sigspace (:!sigspace Male | Female ) [:!sigspace :ignorecase cat | dog ]&#x2F;

# or just use spaces more sparingly
&#x2F;:s ( Male| Female) [:i cat| dog]&#x2F;
</code></pre> Note that in this case it is more like<p><pre><code>&#x2F;( Male | Female ) \s+ [:i cat | dog ]&#x2F;
</code></pre> In other contexts it could be slightly different. 
Basically it ignores insignificant whitespace.<p>Note that `( a || b )` is more like the Perl 5 behaviour, but `( a | b )` tries both in parallel with longest literal match.<p>Regular expressions are also the reason `:35minutes` is in the language, by the way<p><pre><code>say &#x27;a ab abc&#x27; ~~ m:3rd&#x2F; \S+ &#x2F;;
# 「abc」
</code></pre> Rather than make it a special syntax, it was generalized so it can be used everywhere.<p>---<p>The asterisk in a Term position turns into a Whatever; when that is part of an expression, it turns into a positional parameter of a WhateverCode.<p><pre><code>$deck.pick(*);               # randomized deck of cards
$deck.pick(Whatever.new);    # ditto

$dice.roll(*);               # infinite list of random rolls of a die
$dice.roll(Whatever.new);    # ditto

%a.sort( *.value );          # sort the Pairs by value (rather than key then value)
%a.sort( -&gt; $_ { .value } ); # ditto
</code></pre> Note that the last asterisk was part of an expression, while the others weren&#x27;t.<p><pre><code>my &amp;shuffle = *.pick(*); # only the first * represents a parameter to the lambda
                         # the other is an argument to the pick method
</code></pre> The main reason for its addition to the language, I think, is indexing into an array<p><pre><code>@a[ * - 1 ];
</code></pre> Rather than make it a special syntax exclusively for index operations, it was made a general lambda-creation syntax.<p>I will agree that it takes some getting used to, but it is not intractable. WhateverCode lambdas should also only be used for very short code, as they can get difficult to understand in a hurry.<p>---<p>A `$_` inside of `{ }` creates a bare block lambda; basically this removes the specialness of Perl 5&#x27;s `grep` and `map` keywords.<p>There is a similar feature of placeholder parameters `{ $^a &lt;=&gt; $^b }` to remove the specialness of Perl 5&#x27;s `sort` keyword.<p>Another feature is pointy block, which removes the specialness of the `for` loop&#x27;s iterator value syntax.<p><pre><code># Perl 5 (this is the only place where this is valid)
for my $line (&lt;&gt;) { say $line }

# Perl 6
for lines() -&gt; $line { say $line }

# not special
lines().map( -&gt; $line { say $line } );

# really not special
if $a.method-call -&gt; $result { say $result }
</code></pre> ---<p>There is more to NativeCall than you have discovered yet.<p>For example, you can directly declare an external C function as a method in a class, and expose it with a different name (if the first parameter is the instance).<p>Also it doesn&#x27;t matter what you put in the code block for a NativeCall sub, as long as it parses. That is why it doesn&#x27;t matter if you put a Whatever star (asterisk) there or a stub-code ellipsis `...` in it. (You can also leave it empty.)<p><pre><code>use NativeCall;

sub double ( --&gt; size_t ) is native() is symbol&lt;fork&gt; { say &#x27;Hello World&#x27; }

say double;
say &#x27;$*PID == &#x27;, $*PID;

# 1555
# 0
# $*PID == 1552
# $*PID == 1555
</code></pre> ---<p>Supplies <i>can</i> be pattern matched; just use `grep` on them as if they were a list. It in turn returns a Supply. You can also call `map`, `first`, `head`, and `tail` on them. Basically every List method is also on a Supply, along with special methods like `delayed`.<p>---<p>A lot of what you talked about with lists and itemization is something that does take some time to get used to. It does get easier, but it is always something you have to be cognizant of. Sort of like how returning lists from subroutines is in Perl 5. 
It allows control over flattening that isn&#x27;t available in Perl 5.
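---<p>For readers who don&#x27;t know Perl 6 but do know Python: the NativeCall idea above - binding a symbol from a C library under whatever local name you like - has a rough analogue in Python&#x27;s ctypes. This is just an illustrative sketch of the concept, not Perl 6, and it assumes a Unix-like system:<p><pre><code>import ctypes
import ctypes.util

# Load the C runtime; find_library may return None on unusual platforms,
# so treat this as a sketch rather than portable code.
libc = ctypes.CDLL(ctypes.util.find_library(&quot;c&quot;))

# Bind the C symbol getpid under a different local name, the same idea as
# NativeCall&#x27;s &quot;is symbol&lt;...&gt;&quot; trait in the fork example above.
double = libc.getpid
double.restype = ctypes.c_int

print(double()) # prints the current process id
</code></pre>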
It's Never Too Late to Learn Guitar
I started guitar in my mid forties.<p>Four years later, I&#x27;ve reached the point where (some) non-musicians are envious, and actual musicians categorize my sounds as “music” more often than not.<p>There&#x27;s a lot of great advice on this thread. (Especially about deliberate practice; and about avoiding instruments so cheap they&#x27;ll just increase the difficulty while making you sound worse than you are.) I&#x27;ll fill in a few notes from my own journey. YMMV.<p>Choosing a guitar:<p>* Steel string is hell on the fingers for a beginner. It is effective altitude training for nylon and electric. I love the tone and I&#x27;m really glad I started with steel string. However, had I known how hard it would be, I don&#x27;t know that I&#x27;d have advised myself to start with it.<p>* Electric is easy-peasy (callus- and finger-strength-wise). I think a decent electric is cheaper than a decent acoustic too. Con: (1) you&#x27;re less likely to pick up the instrument for two minutes of spontaneous practice when you walk past it on its stand. (I&#x27;ve probably got a lot of practice in from two minutes that turned into fifteen that turned into fifty.) (2) If you&#x27;re an unmusical nerd like me, it&#x27;s easy to get distracted from the hard work of making clean notes by the comfort zone of playing with electronics.<p>* Nylon (classical) is easier on the fingers than steel but more difficult than electric. It might represent a happy medium between beginner fingers and practice activation energy, even if you don&#x27;t play classical. Beware: new nylon needs to be tuned again every five minutes. After a month, it finishes stretching out, and holds its tuning about as well as steel.<p>* You can play any style of music on any guitar, <i>except</i>: (1) you can&#x27;t really bend (a blues staple technique) a nylon string; (2) electric guitars have much greater sustain than any acoustic guitar: if the music you want to play sounds more like a violin or voice (sustained, tonal) than like a piano (percussive), you need an electric; (3) let&#x27;s not talk about slide.<p>* What everyone else says about too cheap a guitar (or, “guitar-shaped instruments”, such as acoustics available for &lt;$150-200). Specifically, you&#x27;re looking for (1) “low action” (you don&#x27;t have to press a string too far for it to reach the fret), (2) good intonation (fifth fret low E is the same pitch as the open A, etc.), (3) stays in tune. Depending on your tolerance, you may also require (4) good tone (timbre). Cheap instruments are generally deficient on all of these, and this can really kill your motivation.<p>Decide whether you want to strum (harmony) or play individual notes (melody, or “lead”; also most of classical). These are almost different instruments. (For the beginner. A lot of melody is playing off of chord-shape hand positions; and, coming from the other direction, embellished chords get to sound a lot like lead.) If accompanying (you sing or play harmonica, or can play in a band), strumming is a valuable contribution. If you&#x27;re playing solo, you probably need to play lead in order to have music that can stand on its own.<p>Decide whether you want to play with a pick or finger style. It&#x27;s easier to be loud with a pick, and it&#x27;s especially easy to be loud strumming with a pick. If you&#x27;re playing acoustic with a band, this may matter to you. Classical, on the other hand, is always finger style. 
Otherwise, don&#x27;t make too much of this decision – there&#x27;s a fair bit of technique to each, but most of what you need to learn as a beginner applies to both, and if you continue with guitar you&#x27;ll probably want both under your belt anyway. Just find some musicians or music that you want to sound like, and do what they do.<p>Choosing a learning style. Try each of the following; see which one(s) seem effective for you.<p>* Learn from an instructor. If this is how you learn best and you have funds and access, find someone you can trust, and then ignore everything else I&#x27;ve written.<p>* Play by ear. Listen to something, try to make your guitar sound like that. Many of today&#x27;s older guitarists wore out vinyl records this way. Man I wish I could learn this way.<p>* Jam with friends. This may work if you&#x27;re musically talented (I&#x27;m not), and&#x2F;or coming from another instrument. Ditto.<p>* Learn by watching. Watch people&#x27;s fingers, do what they do. If this is your learning style, YouTube makes this the golden age. My pseudo-science theory is that if you&#x27;re good at learning from watching (e.g. sports, dance, watching people&#x27;s hand motions when they sew, cook, juggle, drive, etc.), maybe you have a well-functioning mirror neuron system and this will work well for you. Yet another style I aspire to but suck at.<p>* Play from tab. Learn to read tab, and get some pieces under your belt. I kind of feel like this is the least-musician-y thing to do, but it matches my strengths so much better than the others that I went with this. At this point the road forks: (1) Classical: now learn the (newer!) standard staff, and leave tab behind – that was your training wheels. (2) Folk&#x2F;pop&#x2F;rock&#x2F;etc.<p>* Play from standard notation. This is really only used in classical repertoire. Even for classical, tab (which is the older notation) is still easier as a stepping stone, and has the advantage of building in performance notes about fingering, but if you&#x27;re working with an instructor or you&#x27;re not going to be put off by learning several hard things at once, then skipping tab gets you to the endpoint more directly.<p>* Play from chord diagrams or lead sheets. This is good for strumming (harmony). I&#x27;m not a strummer; I don&#x27;t have much to say here.<p>Practice:<p>* “What you do is what you do every day.” 15 minutes &#x2F; day beats two hours once a week.<p>* “The journey is the reward.” Find a way to enjoy the practice itself.<p>* Allocate some time each day for (1) skills, (2) repertoire, (3) noodling around. When you&#x27;re starting, you won&#x27;t have any repertoire (2), and there may not be much distinction between (1) and (3) (“I&#x27;m just trying to get <i>any single note</i> to sound clean.”)<p>* At any particular point, have one or more skills you&#x27;re working on. I&#x27;ve iterated through: play without fret buzz, use each finger without fret buzz, finger adjacent strings, finger non-adjacent strings, chromatic scales, blues scales, various chords, various chord alternations, making a barre, hammer-ons and pull-offs with various fingers, flat picking at tempo, flat picking alternate strings at tempo, finger picking a chord, bending a note, banjo rolls. Spend fifteen minutes a day on each active skill. (Exception: when you&#x27;re starting, you may only have strength or callus for five minutes. 
“It&#x27;s a marathon not a sprint.”) I probably spent fifteen minutes a day on barre for a week or two before hearing anything that even vaguely resembled success; eventually, it locked in.<p>* Learn to play at higher tempo by (1) playing as fast as you can with <i>zero</i> errors. Drop the pace until you can do this. There&#x27;s a theory that you can learn even faster by (2) playing even faster, with errors, so long as you also do (1). This has worked for me.<p>* Some great musicians swear by practicing with metronomes; some advocate doing without them so that you develop your own sense of rhythm. If the latter, record yourself and play it back against a metronome, so you can tell if your tempo is as good as you think it is.<p>* Isolate and repeat difficult sections. Spend a minute, or five, or fifteen, on the smallest problematic section (maybe a single chord, or chord alternation, or four notes from a melody); don&#x27;t play the whole piece and only hit the problem spot every few minutes.
I’m an Ex-Google Woman Tech Leader and I’m Sick of Our Approach to Diversity
Been there. Done that. Got the T-shirt, scars, bruises, etc. Believing that stuff was by far the worst mistake of my life. Strong advice: Don&#x27;t do that. For all the attempts at <i>gender diversity</i> in STEM fields, f&#x27;get about it or, except in very rare cases as in the OP, expect anything from pain, to the agonies of the damned, to serious harm to your life (I did), and worse, maybe death, literally, for the associated female.<p>I&#x27;ll give details below, but, bluntly, bottom line, as essentially all parents with any insight and objectivity and children of both genders learn quickly: right from the crib, with rare exceptions, the girls are interested in people and the boys, in things. Sorry, that&#x27;s just the way it is. They are BORN that way, and the difference does NOT go away with time. There is a small fraction of exceptions in both genders, but otherwise that&#x27;s the fact, Jack. Sorry &#x27;bout that. A really simple argument shows that the difference has held strongly for at least 40,000 years. Gads, from some recent research, the difference even holds for Rhesus monkeys, which shows that it has held for some millions of years.<p>I tried that: As a college sophomore she told me &quot;Women don&#x27;t just have to be cared for. Women can do things, too. I want a career.&quot; Well, since she had been Valedictorian of her high school class, had been, a year earlier in the freshman trigonometry course I was teaching, the best student in the class, with twice as many test points as the next best student, and was well on her way to <i>Summa Cum Laude</i>, PBK, Woodrow Wilson Fellow, and NSF Fellow, all of which she got, I believed her. Wrong. Dumb.<p>Later I was on the team that did IBM&#x27;s artificial intelligence language KnowledgeTool. I understood the language well, and on our team we had some very bright and aggressive guys, young men, who had written some early sample programs. KnowledgeTool was a pre-processor to IBM&#x27;s PL&#x2F;I, a huge language.<p>From one of the world&#x27;s best research universities, she got her Ph.D. in mathematical sociology, with lots of multi-variate statistics, with matrix theory, analysis of variance and experimental design, hypothesis testing, SPSS usage, etc. All of that was easy for her.<p>So, I showed her how to use our home PC to log on to my office VM&#x2F;CMS account, use the editor XEDIT, and use the scripting language Rexx, and right away she wrote a nice, useful Rexx program to report on disk space usage. Then I gave her a one hour tutorial in KnowledgeTool. A week later she had a nice, first sample program running. It did what she wanted. I gave her a 30 minute lecture explaining the intended role of rules as <i>knowledge representation</i>, and two weeks later she had, fully in line with the idea of rules and knowledge representation, by far the best early KnowledgeTool program I ever saw.<p>She was genuinely brilliant. She beat me like a rented mule in Scrabble. I kept asking her to play so that I could get better, and I did, but she got better faster than I did until the difference was absurd. The OP mentions GRE scores of 800 -- that&#x27;s exactly the score I got on the Math GRE. So, I was bright enough, but she was brilliant, plenty good in math, and much better than me in verbal and essentially every other non-STEM subject.<p>She was a brilliant, super fast student of KnowledgeTool with essentially no instruction at all, no text, no notes; she just did it.<p>A STEM field diversity success? Heck no. 
She hated the STEM fields, including computing. Her view of the STEM fields was &quot;I&#x27;m not that kind of person.&quot; What she wanted was &quot;a career that helps people&quot;, and that mostly meant volunteer work. The idea of working for money was anathema to her -- so she had no future in business.<p>To get a job, she kept trying the STEM fields, e.g., with IBM. She was miserable, desperately miserable. In a training class, she made the highest score in the class, but she was miserable. She went into a clinical depression, was trying to recover with her mother at her family farm, and soon her body was found floating in a lake.<p>Diversity? A grand failure.<p>As for her belief that &quot;women can do things, too&quot; in anything like the world of work, business, technology, computing, or the STEM fields: she could do it, best in class, for a while, but she HATED it, and too soon it proved fatal. Instead she desperately wanted a career that &quot;helped people&quot;.<p>Bluntly, she was interested in people and not in things, technology, applied math, etc.<p>Are we learning yet?<p>Can some small fraction of women, as in the OP, do well in the STEM fields? Yup. Did I mention a &quot;small fraction&quot;?<p>Generally for &quot;diversity&quot;, do everyone, men, women, universities, companies, society, a huge favor -- f&#x27;get about it. Certainly don&#x27;t push it, encourage it, or urge women to get into fields with <i>things</i> instead of <i>people</i>. Don&#x27;t do that. To do that is dumb de dumb dumb, dumb, harmful, and sometimes fatal.<p>Instead? Let women pick their own directions. Stop pushing women to be like a dog that walks on only two legs -- usually they can do it, but they nearly never do it well.<p>For this diversity stuff, to borrow from the Indiana Jones movie character Marcus Brody: “You are meddling with forces you cannot possibly comprehend.”<p>I tried, HARD. Biggest mistake of my life. Diversity good? Don&#x27;t believe that stuff, not for your sisters, girlfriends, wife, or daughters. DON&#x27;T do that. The author of the OP? She&#x27;s one of the rare exceptions. Leave it at that.
Ask a Female Engineer: Thoughts on the Google Memo
As a person of color, I feel the need to give my perspective here, since I am part of an underrepresented group in tech, just like women. So I feel like I have some insight on the issue, even though my reaction may still not be the same as what a woman would say about this.<p>When you are trying to make an effective argument, you have to anticipate what the responses would be. The biggest problem with Damore is that apparently he didn&#x27;t take enough into account about what women felt about the issue, as mentioned in the article. People in the majority are naturally the ones with the louder voice, but that can be misleading if you are speaking about a minority group. You probably won&#x27;t have the same experiences, and one or two studies are not enough to explain something as complicated as biology, psychology, or sociology. This is especially true if the science is easily refuted. I really wonder why Damore chose to write about the lack of women in tech specifically. It seems to me it was so that he could better support his proposal to not focus on diversity efforts, or to face less backlash for not speaking on race. If this is the case, strengthening confirmation bias is not an effective solution, because there may be a lot more than meets the eye if you&#x27;re not an expert on the subject.<p>So from this, the two biggest questions are: was he right? And was Google right?<p>Was he right? Somewhat. I can&#x27;t say yes or no 100%. He tried to explain his view as best as he could, but he supported it terribly. It deserved the backlash. But he made some good points about it being unsafe to express his opinion. If he had been smarter, he wouldn&#x27;t have been fired. This is the bigger problem with the situation. A good engineer, in my opinion, should be more flexible in thought, think more about surrounding outcomes, and be better at interacting with people.<p>Was Google right? Absolutely. The bigger problem is that many people think Google fired him for having a dissenting opinion, which I think is not true and unfortunate, because it polarized America more between the left and the right. Had Damore been smarter, we would not be having this discussion. I&#x27;m almost certain that Google wouldn&#x27;t be where it is today without diversity of thought. You can have a differing opinion and express it without pissing everyone off. Google would have been damned if they fired him and damned if they didn&#x27;t, but more damned if they didn&#x27;t. It got leaked and drew media attention, complete with hostile arguments from both sides, for a reason, and it harmed Google&#x27;s image and its female employees. We can all mostly agree that Damore had some decent points that could have been made if he were a better writer and empathizer. Would you still say that he was fired because Google is a left-leaning organization?<p>You can&#x27;t say women are not biologically suited for an engineering position at Google, face harsh backlash including termination from work, and then say that your views weren&#x27;t respected. Come on. That&#x27;s what sexism is. What if you said this about Hispanics, or Native Americans?<p>If you want to say that the diversity efforts at Google are misguided, then make a better argument than saying &quot;we don&#x27;t need diversity programs at Google&quot;. I would say not to strive for 50% women, because the pool of engineers is not 50% women. Strive for something closer to their actual representation. 
Google can&#x27;t have 50% women engineers because some of the women engineers out there work for different companies. Google shouldn&#x27;t use immoral or illegal hiring practices to achieve this number. But Google should still have diversity programs so that more women get into tech, which would bring the number of women in the workforce in general closer to 50%, and that could benefit everyone. Also keep in mind that a minority group may, proportionally, contain more or fewer qualified people overall. Hiring more of one minority group does not necessarily lower the bar.<p>Biases do exist, but that doesn&#x27;t always mean they&#x27;re bad. I told you I was a person of color at the beginning of this to make you form a bias about me. I want my voice to be heard in the hundreds of these comments, when my probability of being read is lower because there is likely a lower percentage of women and other minorities posting. I don&#x27;t think that I face tough obstacles, which turns people off of arguments like this, but I want to say that I do, however minor. Oftentimes I act a certain way BECAUSE I don&#x27;t want to be seen as &quot;the black guy&quot;, and I&#x27;ve done this for enough of my life that people say that I&#x27;m not the same as many other black guys. They don&#x27;t say it in a negative way, because I still act &quot;somewhat&quot; black, if that makes sense to you.
Ask a Female Engineer: Thoughts on the Google Memo
&gt; &quot;I disagree completely and utterly that the (yes, real) average differences between men and women map to being better or worse at certain jobs.&quot;<p>Where did the memo say this? This is the most common strawman used to attempt to discredit the memo. The memo&#x27;s author never states that men are better at software engineering, just that these biological differences may help explain why women are underrepresented, choosing to pursue computer science and careers in tech less often than men.<p>&gt; &quot;the takeaway from the memo is literally that the onus is on me to prove to men in tech that I’m not an “average” woman&quot;<p>&gt; &quot;This is literally a discussion of whether half the human race is innately unsuited for a certain kind of work&quot;<p>See above.<p>&gt; &quot;I disagree that it’s possible to write what he did about general populations, then walk it back to say “but of course it doesn’t apply at an individual level.”<p>Essentially what you&#x27;re saying is, &quot;we&#x27;re not allowed to talk about biological differences that make underrepresented minorities look bad&quot;. I disagree. No fact should be barred from mention, and it&#x27;s not the author&#x27;s responsibility to ensure that you don&#x27;t misconstrue his facts to advance your own agenda of claiming oppression.<p>&gt; &quot;He did not address any counter arguments or research that opposes his views, or the validity of the studies he did cite and their reproducibility.&quot;<p>Did you address any counter-arguments in your research report? This is some dude&#x27;s memo in an opt-in internet forum, not a comprehensive&#x2F;rigorous discipline-defining research report seeking publication in an academic journal.<p>&gt; &quot;He claimed that Google’s diversity efforts represent a lowering of the bar.&quot;<p>I agree that he should have elaborated on this bold claim. Though it&#x27;s not outrageous to suspect that this could be the case, given that affirmative action policies in higher education do lower the bar.<p>&gt; &quot;Some people at Google reacted by saying “well if he’s so wrong, then why not refute him,” but that requires spending a significant amount of time building an argument against the claims in his document. On the other hand, if I remain silent, that silence could be mistaken for agreement. I should not be forced into that kind of debate at work.&quot;<p>The memo was posted in an opt-in forum; you didn&#x27;t have to debate it. Silence does not imply that you agree with him.<p>&gt; &quot;I’m also disappointed that the men I know, including most of my male colleagues, remained silent on the topic. And the ones that did participate, either seemed to support Damore or demonstrated a fundamental lack of understanding for the issues women engineers are faced with and care about.&quot;<p>Why do you think they remained silent? You said it yourself - anyone who disagrees with you is wrong. Damore got fired for stating a well-articulated opinion; why would any of your male colleagues jeopardize their jobs as well by speaking honestly on the matter?<p>&gt; I wish more successful men in tech thought deeply about the advantages they’ve had – the situations in which they were more likely to be trusted, deemed competent, promoted, given raises, etc. as men than they would be as women. 
This exercise isn’t intended to place blame, but to inspire empathy toward those who feel the weight of their gender each day at work.<p>I wish more women in tech vilifying Damore would think about the advantages they have received (affirmative action), think about it from the perspective of a male in a field where we get no hand-holding or &quot;women in tech&quot; scholarships and are constantly accused of being sexist oppressors, and stop pretending like discrimination is the only or main reason women are underrepresented in tech. I firmly believe that women choose to pursue tech less than men and that this is the biggest driver of the underrepresentation (posted a little about that here <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=15012364" rel="nofollow">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=15012364</a>). If it is indeed the case that women are choosing to pursue the study of computer science less than men, then stop placing the burden on us males to increase your participation.<p>Before you automatically dismiss me and others with views differing from yours as misogynists, consider that most of us in the first world do not regard women as being any less capable than men at software engineering, let alone ANY discipline. Women used to dominate the field, and no reasonable man believes that women aren&#x27;t fit for the job or shouldn&#x27;t pursue this field.<p>&gt; By remaining silent on this topic or tweeting support for Damore, they are sending a message that philosophical arguments and principles take precedence over the lived experiences of many smart, talented female engineers and technical founders<p>What does this even mean? It&#x27;s just another way of saying &quot;by not agreeing with me, you&#x27;re wrong&quot;.<p>&gt; I think he could have written it differently, so that people who chose not to read the whole 10-pages could have read the tl;dr and not immediately concluded he was sexist.<p>So the onus is on the author to ensure that people don&#x27;t flippantly conclude that he&#x27;s a sexist?<p>&gt; This is an emotional topic<p>That&#x27;s the problem; it shouldn&#x27;t be.<p>&gt; He was not fired for speaking truth to power, he was fired for mishandling a complex subject in a way that caused harm to his employer (and many of his colleagues).<p>He wouldn&#x27;t have been fired if the memo had argued the opposite viewpoint.
What Is the “Social Justice” Endgame?
Hey a game!<p>Disclaimer: not a native English speaker; I might lack vocabulary and make some grammar mistakes.<p>1&#x2F; I have no idea what a protected group is. A discriminated group, however, depends on where you live. Eritrean Jews in Israel are discriminated against by the “dominant” group (white conservative Israelis). Chinese suffer discrimination in some parts of India and the Philippines, while people from the Philippines suffer some in China (sorry, no idea what English calls them).<p>Past oppression is only relevant if some discrimination is done because of it (hello Rwanda). Let’s take South Africa. Past oppression was relevant even after Nelson Mandela because the culture was still tied to apartheid. This is less true today, so it is not as relevant now (my question: how did you not figure this out on your own? Do you like asking leading questions that much?)<p>2&#x2F; Leading question, again, nice. None. But people in discriminated groups should have an easier time joining professions that bring power or visibility, so related to law (even if it is as a paralegal), politics or television, or company management. Again with the S.A. example, the culture changed because there were more and more black people in courthouses, in politics and at the head of national companies. The fact that there are some racial inequalities there comes from class inequalities (not the subject here) and not racism.<p>3&#x2F; What agenda?<p>4&#x2F; None. I don’t see your point there; it might be lost in translation.<p>5&#x2F; What? Why would you punish statements made at work? Except if there is a client (let’s say the state wants to buy your product, and you say “public workers are paid to do nothing” to his face - get ready to be fired or put in a closet). Is this a leading question?<p>6&#x2F; Victims are defined by law in my country; I don’t know about the US though. I’d call “offensive” a statement that I would not say out loud to all my friends or family. Like “I find that any theism is dumb as f…”, which I would never say to my grandmother, despite it being my real feeling, is offensive. The 3rd part is a leading question, but my response is that I do not care about you and you can tell whatever you want to anyone, but if you say the previous sentence to anyone and get called out, I would not help you.<p>7&#x2F; I’m not advocating for anyone, but I can see (in my country anyway) that some people (obese people) have it worse than anybody in some places (hospitals mainly). I defend them orally with my friends and on the internet when I feel like it (not often in English though). Well, since I was 20 I haven&#x27;t really hung out with anybody who would judge people for their looks (because it’s tiring&#x2F;time-consuming to defend people I’m not related to for no reason but my personal ethics), but I did that.<p>8&#x2F; Do you know that apostasy is punishable by death in Pakistan, but not in other countries (Tunisia comes to mind), and in some, nobody cares? So it depends on WHERE you live, obviously. Do you feel that in the USA&#x2F;UK, the way most people treated Cat Stevens&#x2F;Yusuf Islam when he chose, after nearly dying twice, to change his religion, was justified&#x2F;subjectively fair&#x2F;fair isn’t the question? Where I lived, no one cared about this. 
So probably MORE protected if you live surrounded by dumb a-holes, about the same elsewhere.<p>8bis: Sorry about the passive-aggressive subtitle there; I might have no personality and just mimic the way of expression of the people I’m talking to.<p>9&#x2F; Well, you can see my response in 6&#x2F;. Yes, there are studies that draw correlations between IQ and how religious people feel; still, making any remark about that out loud (or on a forum where religious people gather, or on a video about Gregorian chant) is just rude.<p>10&#x2F; Depending on where you live, the discriminated people are not the same, so I don’t really care. I spent entire vacations with my racist cousin; we had no problems. He is quite honest with his hate, and while I disagree, I still quite like him. Honestly, most of those are leading questions, and I decided early to stay away from people I perceive as manipulative, so I would never hang out with you anyway, but we could coexist just fine.<p>11&#x2F; No, I don’t care about the majority; I’m old enough to think on my own.<p>12&#x2F; Sorry, how does work safety come with guilt and social pressure? I think this is manipulation, but w&#x2F;e. Guilt and social pressure are sadly two educational levers that are used and abused with children in school quite often (and in dog training too). Using them against adults is less effective, but it still works for some reason. I do hate this way of doing things, but when I was a youth camp counselor, I had to use social pressure to make activities run smoothly. I’m still feeling guilty about it. I disliked doing it because I felt it was dishonest, even if it was the easiest way to get things started. These questions being a bit dishonest, are you feeling a bit guilty about them, or do you feel it’s fine being dishonest to try to convince people they are mistaken (or do you think those questions were objectively honest and I’m mistaken)?<p>It was an interesting game, thank you.
Internet turns on white supremacists and neo-Nazis with doxing, phishing
Two quotes:<p>&quot;But you are, perhaps, ready to ask, &quot;What has this to do with the perpetuation of our political institutions?&quot; I answer, it has much to do with it. Its direct consequences are, comparatively speaking, but a small evil; and much of its danger consists, in the proneness of our minds, to regard its direct, as its only consequences. Abstractly considered, the hanging of the gamblers at Vicksburg, was of but little consequence. They constitute a portion of population, that is worse than useless in any community; and their death, if no pernicious example be set by it, is never matter of reasonable regret with any one. If they were annually swept, from the stage of existence, by the plague or small pox, honest men would, perhaps, be much profited, by the operation.--Similar too, is the correct reasoning, in regard to the burning of the negro at St. Louis. He had forfeited his life, by the perpetration of an outrageous murder, upon one of the most worthy and respectable citizens of the city; and had not he died as he did, he must have died by the sentence of the law, in a very short time afterwards. As to him alone, it was as well the way it was, as it could otherwise have been.--But the example in either case, was fearful.--When men take it in their heads to day, to hang gamblers, or burn murderers, they should recollect, that, in the confusion usually attending such transactions, they will be as likely to hang or burn some one who is neither a gambler nor a murderer as one who is; and that, acting upon the example they set, the mob of to-morrow, may, and probably will, hang or burn some of them by the very same mistake. And not only so; the innocent, those who have ever set their faces against violations of law in every shape, alike with the guilty, fall victims to the ravages of mob law; and thus it goes on, step by step, till all the walls erected for the defense of the persons and property of individuals, are trodden down, and disregarded. But all this even, is not the full extent of the evil.--By such examples, by instances of the perpetrators of such acts going unpunished, the lawless in spirit, are encouraged to become lawless in practice; and having been used to no restraint, but dread of punishment, they thus become, absolutely unrestrained.--Having ever regarded Government as their deadliest bane, they make a jubilee of the suspension of its operations; and pray for nothing so much, as its total annihilation. While, on the other hand, good men, men who love tranquility, who desire to abide by the laws, and enjoy their benefits, who would gladly spill their blood in the defense of their country; seeing their property destroyed; their families insulted, and their lives endangered; their persons injured; and seeing nothing in prospect that forebodes a change for the better; become tired of, and disgusted with, a Government that offers them no protection; and are not much averse to a change in which they imagine they have nothing to lose. Thus, then, by the operation of this mobocractic spirit, which all must admit, is now abroad in the land, the strongest bulwark of any Government, and particularly of those constituted like ours, may effectually be broken down and destroyed--I mean the attachment of the People. 
Whenever this effect shall be produced among us; whenever the vicious portion of population shall be permitted to gather in bands of hundreds and thousands, and burn churches, ravage and rob provision-stores, throw printing presses into rivers, shoot editors, and hang and burn obnoxious persons at pleasure, and with impunity; depend on it, this Government cannot last. By such things, the feelings of the best citizens will become more or less alienated from it; and thus it will be left without friends, or with too few, and those few too weak, to make their friendship effectual. At such a time and under such circumstances, men of sufficient talent and ambition will not be wanting to seize the opportunity, strike the blow, and overturn that fair fabric, which for the last half century, has been the fondest hope, of the lovers of freedom, throughout the world.&quot;<p>Abraham Lincoln, Lyceum address<p>The next quote is from A Man for All Seasons.<p>William Roper: So, now you give the Devil the benefit of law!<p>Sir Thomas More: Yes! What would you do? Cut a great road through the law to get after the Devil?<p>William Roper: Yes, I&#x27;d cut down every law in England to do that!<p>Sir Thomas More: Oh? And when the last law was down, and the Devil turned &#x27;round on you, where would you hide, Roper, the laws all being flat? This country is planted thick with laws, from coast to coast, Man&#x27;s laws, not God&#x27;s! And if you cut them down, and you&#x27;re just the man to do it, do you really think you could stand upright in the winds that would blow then? Yes, I&#x27;d give the Devil benefit of law, for my own safety&#x27;s sake!<p>If we have learned anything from Trump&#x27;s election, it is that people whom we may think unlikely to come to power can come to power. If we tear up our principles of free speech and the rule of law to go after the Nazis, what will we hide behind when would-be authoritarians come to power? The principle of free speech is what allows people to protest Trump and newspapers to criticize him. The rule of law makes the police protect from violence even those protestors they disagree with. Be careful what you wish for; you just might get it.
Ask HN: What's your spectacular burnout story?
Never told this story to anyone but here goes...<p>I was working at a consulting agency as a linux sysadmin pulling crazy hours for two years. I ran support for a client that had an app that in-house devs had &#x27;modified&#x27; and a mission critical file transfer service. I was on a team of two with 24&#x2F;7 on call support. Thing was, no one ever called the other guy, so I was always the one getting 5am phone calls on Saturday mornings. Weekly late night (8pm - 3am) deployments were common and considered successful in the eyes of the company.<p>After about a year of this my lifelong struggle with depression started to reemerge. Feelings of loneliness and doubt began to crop up and I would cry uncontrollably on my commute back home from work. It was around this time that the daily suicidal thoughts took a turn for the worse. It was all I could think about, every minute of the day.<p>One day I was chatting with a co-worker and my boss when they complimented me on some recent weight loss. I was in a mood that day and told them the truth: I was having trouble eating. I wasn&#x27;t eating breakfast or lunch and most nights would trade dinner for whiskey. After my weight loss was noticed, I decided to hide the fact I couldn&#x27;t eat by telling everyone I was on a new diet. Side note: I had gained a considerable amount of weight over the time I spent at that company. I recently celebrated my 100 lbs weight loss.<p>I continued to lose weight, though not entirely by choice. The suicidal thoughts were deafening, blocking out any hope or joy in my life. I had become my job and saw no way out.<p>Eventually the client I was working for no longer needed my services and I was removed from the contract. I tried to celebrate but was so numb inside I didn&#x27;t feel any happiness at all. I took a week off but still had the same feelings of dread and depression. I did a lot of reading on burnout and realized I was on that slippery slope.<p>After returning from my sole week off, I was placed &#x27;on the bench&#x27;. For those who have never worked at a consulting agency, this means you still get a paycheck but have no work to do. It also means you are in a constant state of fear for your job until the agency finds you a new billable position. That didn&#x27;t help much to lighten my mood.<p>I made the switch from sysadmin to webdev during this &#x27;bench&#x27; period. I was able to secure a position as an internal React.js dev and for a few weeks started to climb out of burnout. I thought I could start being happy again with my new role but my company had different plans for me.<p>As I was still &#x27;on the bench&#x27; and not billable, the company decided to move me to a new contract doing dev work for M$ sharepoint. The project was in shambles, had no tech lead, and the only other dev had decided to format the site with tables (!) as he didn&#x27;t know any other way. I expressed how displeased I was but my complaint fell on deaf ears. I decided I couldn&#x27;t take it anymore.<p>After convincing the manager to make me &#x27;lead sharepoint dev&#x27;, I put my two weeks in. I had set up a job at a boat rental I had worked at in summers past. I now work the same hours but get paid for every hour, which is great.<p>I took a full month off after my two weeks. Spent the time laying around the house and playing video games. One of the best months of my life. I thought a lot about where I had been and where I was headed.
I started hanging out with friends &amp; family again and realized I was on the right track.<p>I can now say I&#x27;ve never felt better in my life. I lost a bunch of weight, met a girl, and genuinely enjoy every hour of every day. The choking thoughts of dread and suicide are gone, replaced by the joy and happiness I thought I would never have again. I recently started my own consulting company and have vowed to never let myself dip back into burnout again. Every day is a new journey; you just have to find a way to make it work while not wanting to die every day.<p>My advice is to recognize the signs of burnout early. It is far too easy to attempt to &#x27;push through&#x27; and stress yourself out more. Many companies are willing to sacrifice your well-being only to turn around and ask for more. Don&#x27;t be afraid to run far, far away from any place that prioritizes their bottom line over your mental health.<p>Apologies for the formatting; wrote this on mobile.
Paradox of Tolerance
Since Charlottesville, I have seen many references to this by people looking to justify, or cast in a positive light, violence by AntiFa groups and their affiliates. It would be useful for people to read the entire &quot;The Open Society and Its Enemies&quot;[1]. The &quot;Paradox of Tolerance&quot; appears in Note 4 to Chapter 7:<p><pre><code> Less well known is the paradox of tolerance: unlimited tolerance must lead to the disappearance of tolerance. If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them.—In this formulation, **I do not imply, for instance, that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be most unwise.** But we should claim the right to suppress them if necessary even by force; for it may easily turn out that they are not prepared to meet us on the level of rational argument, but begin by denouncing all argument; they may forbid their followers to listen to rational argument, because it is deceptive, and teach them to answer arguments by the use of their fists or pistols. We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant. We should claim that any movement preaching intolerance places itself outside the law, and we should consider incitement to intolerance and persecution as criminal, in the same way as we should consider incitement to murder, or to kidnapping, or to the revival of the slave trade, as criminal. (emphasis mine) </code></pre> which is less than a full-throated defense of violence against people whose speech we find disgusting and reprehensible.<p>The context in which the note is referenced is this:<p><pre><code> One particular form of this logical argument is directed against a too naïve version of liberalism, of democracy, and of the principle that the majority should rule; and it is somewhat similar to the well-known ‘paradox of freedom’ which has been used first, and with success, by Plato. In his criticism of democracy, and in his story of the rise of the tyrant, Plato raises implicitly the following question: What if it is the will of the people that they should not rule, but a tyrant instead? The free man, Plato suggests, may exercise his absolute freedom, first by defying the laws and ultimately by defying freedom itself and by clamouring for a tyrant[4]. </code></pre> That is, Popper sees paradoxes of freedom and tolerance as related. Later, he resolves this like Kant before him did:<p><pre><code> I believe that the injustice and inhumanity of the unrestrained ‘capitalist system’ described by Marx cannot be questioned; but it can be interpreted in terms of what we called, in a previous chapter[20], the paradox of freedom. Freedom, we have seen, defeats itself, if it is unlimited. Unlimited freedom means that a strong man is free to bully one who is weak and to rob him of his freedom. This is why we demand that the state should limit freedom to a certain extent, so that everyone’s freedom is protected by law. Nobody should be at the **mercy** of others, but all should have a **right** to be protected by the state. </code></pre> I have always been particularly fond of the conclusion:<p><pre><code> Instead of posing as prophets we must become the makers of our fate. 
We must learn to do things as well as we can, and to look out for our mistakes. And when we have dropped the idea that the history of power will be our judge, when we have given up worrying whether or not history will justify us, then one day perhaps we may succeed in getting power under control. In this way we may even justify history, in our turn. It badly needs a justification. </code></pre> [1]: <a href="https:&#x2F;&#x2F;archive.org&#x2F;details&#x2F;TheOpenSocietyAndItsEnemiesPopperKarlSir" rel="nofollow">https:&#x2F;&#x2F;archive.org&#x2F;details&#x2F;TheOpenSocietyAndItsEnemiesPoppe...</a>
Ideal OS: Rebooting the Desktop Operating System
&gt;<i>In fact, in some cases it&#x27;s worse. It took tremendous effort to get 3D accelerated Doom to work inside of X windows in the mid 2000s, something that was trivial with mid-1990s Microsoft Windows. Below is a screenshot of Processing running for the first time on a Raspberry Pi with hardware acceleration, just a couple of years ago. And it was possible only thanks to a completely custom X windows video driver. This driver is still experimental and unreleased, five years after the Raspberry Pi shipped.</i><p>That&#x27;s because of Open Source OSes though, which vendors don&#x27;t care about, and volunteers aren&#x27;t numerous enough or able to match the work needed for everything to work out of the box. Nothing about this particular example has anything to do with OS research or modern OSes being behind.<p>&gt;<i>Here&#x27;s another example. Atom is one of the most popular editors today. Developers love it because it has oodles of plugins, but let us consider how it&#x27;s written. Atom uses Electron, which is essentially an entire webbrowser married to a NodeJS runtime. That&#x27;s two Javascript engines bundled up into a single app. Electron apps use browser drawing apis which delegate to native drawing apis, which then delegate to the GPU (if you&#x27;re luck) for the actual drawing. So many layers.</i><p>Again, nothing related to modern OSes being inadequate. One could use e.g. Cocoa and get 10x what Electron offers, for 10x the speed, but it would be limited in portability.<p>&gt;<i>Even fairly simple apps are pretty complex these days. An email app, like the one above is conceptually simple. It should just be a few database queries, a text editor, and a module that knows how to communicate with IMAP and SMTP servers. Yet writing a new email client is very difficult and consumes many megabytes on disk, so few people do it.</i><p>First, I doubt one of the reasons &quot;few people do it&quot; is that it &quot;consumes many megabytes on disk&quot; (what? whatever).<p>Second, the author vastly underestimates how hard it is to handle protocols like IMAP, or to write a &quot;text editor&quot; that can handle all the subtleties of email (which include almost a full-blown HTML renderer). Now, if he means &#x27;people should be able to write an emailer easily iff all constituent parts were available as libraries and widgets&#x27;, then yeah, duh!<p>&gt;<i>Mac OS X was once a shining beacon of new features, with every release showing profound progress and invention. Quartz 2D! Expose! System wide device syncing! Widgets! Today, however Apple puts little effort into their desktop operating system besides changing the theme every now and then and increasing hooks to their mobile devices.</i><p>Yeah, and writing a whole new FS, a whole new 3D graphics stack, memory compression, seamless cloud file storage, handoff, a move to 64-bit everything, bitcode, and tons of other things besides. Just because they are not shiny doesn&#x27;t mean there are no new features there.<p>&gt;<i>A new filesystem and a new video encoding format. Really, that&#x27;s it?</i><p>Yeah, because a new FS is so trivial -- they should also rewrite the whole kernel at the same time, for extra fun.<p>&gt;<i>Why can I dock and undock tabs in my web browser or in my file manager, but I can&#x27;t dock a tab between the two apps? There is no technical reason why this shouldn&#x27;t be possible.
Application windows are just bitmaps at the end of the day, but the OS guys haven&#x27;t built it because it&#x27;s not a priority.</i><p>There&#x27;s also no real reason this should be offered. Or that it should be a priority. If every possible feature someone might think was &quot;a priority&quot; were implemented, OSes would be horrible messes.<p>&gt;<i>Why can&#x27;t I have a file in two places at once on my filesystem? Why is it fundamentally hierarchical? Why can I sort by tags and metadata?</i><p>Note how you can do all those things in OS X (you can have aliases and symlinks and hard links, can add tags and metadata, and can sort by them; see the sketch at the end of this comment). And in Windows I&#x27;d presume.<p>And it&#x27;s &quot;fundamentally hierarchical&quot; because that&#x27;s how we think about stuff. But it also offers all kinds of non-hierarchical views, Spotlight- and Tags-based views for one.<p>&gt;<i>Any web app can be zoomed. I can just hit command + and the text grows bigger. Everything inside the window automatically rescales to adapt. Why don&#x27;t my native apps do that? Why can&#x27;t I have one window big and another small? Or even scale them automatically as I move between the windows? All of these things are trivial to do with a compositing window manager, which has been commonplace for well over a decade.</i><p>Because of bitmap assets. Suddenly all those things are not so &quot;trivial&quot;.<p>There are good arguments to be made about our OSes being held back by legacy cruft (POSIX for one) and new avenues to explore, old stuff that worked better than what we have now, etc.<p>But TFA is not making them.
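To make the &quot;file in two places at once&quot; point concrete, here is a minimal Python sketch (the file names are invented for illustration) showing a hard link and a symlink on a POSIX filesystem:<p><pre><code>
import os

# Create a file, then give it a second directory entry via a hard link:
# both names point at the same inode, i.e. the same file in two places.
with open(&quot;notes.txt&quot;, &quot;w&quot;) as f:
    f.write(&quot;hello\n&quot;)

os.link(&quot;notes.txt&quot;, &quot;notes-alias.txt&quot;)       # hard link: same inode
os.symlink(&quot;notes.txt&quot;, &quot;notes-pointer.txt&quot;)  # symlink: a named reference

# Editing through either hard-linked name changes &quot;both&quot; files;
# stat() confirms the two names share an inode.
print(os.stat(&quot;notes.txt&quot;).st_ino == os.stat(&quot;notes-alias.txt&quot;).st_ino)  # True
</code></pre>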
Right to Privacy a Fundamental Right, Says Indian Supreme Court
Privacy: You have a right to try to keep things as private as you want. You should not be prosecuted for merely trying to keep things private.<p>Your responsibility:<p>1. Don&#x27;t share things that you want to keep private.<p>2. Carefully weigh the trade-offs when you agree to share things about you. There is no retroactive privacy on things that you yourself shared.<p>3. You can attempt to retract what was shared about you, but you can&#x27;t hold society responsible for successful retraction of that piece of information, from media or minds. You can add an addendum, e.g. an apology from someone, and you can claim damages, but we can&#x27;t rewind time.<p>Government responsibility:<p>1. Don&#x27;t criminalize people trying to keep things private. This would be similar to the USA&#x27;s Fifth Amendment: do not force people to share what they don&#x27;t want to share. Government can ask &quot;What crimes have you committed in the privacy of your home?&quot;, but it can&#x27;t force people to answer that question or punish them for not answering it.<p>2. You can&#x27;t plead the Fifth and deny proving your identity when you want to take food stamps from the government, or when you get an unearned tax credit. Just like in any transaction, Government can ask you to prove who you are and may demand increasing levels of proof depending on the transaction. Your choice would be to not participate in such transactions; in certain situations you implicitly give permission to Government to demand proof of identity from you, e.g. if you request a loan to dig a well, a subsidy to buy fertilizer, or an unemployment benefit. Securing the exchange of money from government to people is the Government&#x27;s responsibility, and it may demand increasing levels of identification depending on the nature of the transaction, as deemed appropriate given observed abuse or the potential for abuse. In places with high corruption rates, strong identification would be required and would be appropriate. I don&#x27;t think people would be OK if someone collected their pension using just a name, address and birth date, with the government throwing its hands in the air and accusing you of not protecting your name, address and birth date.<p>What you can&#x27;t do:<p>1. Make the world forget what it already knows. You can&#x27;t ask Google to delete a piece of information about you from the entire internet once you yourself post it on Blogger. You can delete the post from Google, you can delete your account, but you must realize that once something is not private, you have no control over who has seen it and how many formats&#x2F;copies of that information got created.<p>2. Get into a contract to drop certain privacy and then refuse to fulfill the contract because of privacy rights. E.g. a model can&#x27;t say that she won&#x27;t show her face on a fashion ramp because of privacy after taking payment. A storybook author can&#x27;t say that she won&#x27;t share her book with the publisher because of privacy after taking payment.<p>3. Demand that a private entity, on its private premises, can&#x27;t have monitoring equipment. A store may decide to have cameras at the self checkout lanes, and it may deny self checkout to folks with full face covering. Your choice would be to not shop at such places; you can&#x27;t use the law to shut down the business&#x27;s ability to monitor its private premises as it wishes. An employer may require an alcohol breath analyzer test, e.g. for a surgeon before surgery, or for a pilot, an air traffic controller at the start of duty, or a long distance train driver.
The employees in this case can&#x27;t claim privacy rights to deny such tests.<p>4. When you are in a public place, e.g. a sidewalk, you are participating in a public endeavor, and that comes with dropping some of the privacy protection you would get in your bedroom. The rays of light that bounce off of you or your belongings are fair game to be captured. Photographers do not need to ask your permission to capture rays of light that are travelling in their direction when they stand in a public place, a private place they own, or a private place where the owner has given them permission to capture the rays. Those photographs can only be used for personal consumption or for non-profit activities, e.g. an investigation or news reporting. Any commercial use of the photo, e.g. in a product advertisement, would require a release agreement from the person in the photo.<p>I think Strong Privacy and Strong Identification are both required; for some things they are mutually exclusive, and in some parts you trade one for another. Authentication&#x2F;Authorization&#x2F;Encryption&#x2F;Non-Repudiation is needed to deliver these rights.<p>Consider this: if privacy laws are absolute in every aspect of life, then you can&#x27;t have antitrust laws that stop competitors from fixing prices or agreeing to anti-competitive behavior. If privacy laws are absolute, then smartphone apps that capture photo&#x2F;video of a crime unfolding won&#x27;t be allowed due to privacy concerns of the criminal. If you can keep something private (lock the door to your room, your safe deposit box), no one will force you to expose it, but one can&#x27;t demand privacy in situations that naturally expose information to others, unless you explicitly set the expectation of privacy (attorney-client, doctor-patient, a service provider) as part of a contract. Government may make laws to cover the most common situations, e.g. your real estate agent sharing your budget with the seller of the property, your medical records, etc.<p>Privacy law is natural. What I draw and erase on a doodle board in the privacy of my home is my business; you can&#x27;t force me to divulge it. What I say in my head to myself is my business; there is no thought crime. What I sing on a trail is my business; no one can force me to say which song I sang. When government or a corporation tries to invade this natural privacy, it should be stopped. In that regard, privacy is a fundamental right. But privacy can&#x27;t be claimed to hide a criminal record from your neighbors or employers.<p>More of me trying to sort it out in my own head.
Some wealthy people are injecting blood from teenagers to gain ‘immortality’
The aim is to see whether or not this can usefully change the balance of signaling molecules to, say, spur greater stem cell activity. There has been a trial in Alzheimer&#x27;s patients, but there are some signs in animal studies that transfusions from young to old don&#x27;t do much. It seems useful to speed up the process of determining whether or not transfusions are an interesting line of research, or something that only looked promising. That means more patients and larger trial populations, which Ambrosia is working on.<p>These transfusion initiatives are one of a number of outgrowths of parabiosis research in mice. Heterochronic parabiosis is the name given to connecting the circulatory systems of an old and a young individual. The older mouse shows a modest rejuvenation in a number of measures of aging, and the younger mouse shows some greater signs of aging - though most of the focus here has been on the old mouse. In recent years this technique has been used to search for potentially actionable differences in levels of specific signal molecules circulating in the bloodstream. For example, stem cell activity declines with aging, and this is likely governed by signaling processes. If levels of the most relevant molecules could be adjusted in old individuals, it might be possible to produce benefits that look quite similar to those of stem cell therapies: increased regeneration and tissue maintenance. This class of approach puts damaged, aged cells back to work, and does little to address causes of aging based on accumulation of metabolic waste, such as cross-links that stiffen blood vessels, but to the degree that it can improve health it is probably worthy of further investigation in the same way as stem cell therapy was back in the day.<p>One potential shortcut to the production of therapies is to perform transfusions: deliver young blood or young plasma to old individuals. I call this a potential shortcut because it really is still very uncertain as to (a) whether or not the whole process works in humans anywhere near as well as it works in mice, and (b) whether or not transfusions will recapture the effects of parabiosis to a useful degree. The evidence in mice suggests so far that it may not. It is possible to paint all sorts of scenarios in which the fact that old and young cells are in contact, feeding signals to one another in a feedback loop, is necessary to produce beneficial changes in the old individual. It is also possible to imagine signals with a short half-life, that won&#x27;t be recaptured in transfusions, or changes in the old environment that are based on an increased level of specific signal molecules. That increased level won&#x27;t be changed in the slightest by the arrival of some amount of young blood plasma. Only reduced levels are likely to be impacted that way.<p>In any case, testing and perhaps ruling out the fast path of transfusions seems like a fair plan. If it works, it will draw in more funding to build the better option of manipulating signal molecule levels directly. If it doesn&#x27;t work, that result will direct scientists to focus on more productive lines of research and development. There is some grumbling from the expected quarters over the structuring of this present initiative by Ambrosia, but getting it done is better than not getting it done. The data will be useful in the sense that only sizable effects are interesting, and thus before and after data for participants will be convincing.
Marginal effects, of the sort in which it would have been useful to have a control group to establish whether or not any benefits actually resulted, would mean that this probably isn&#x27;t worth further exploration. Still, this well demonstrates the fact that many scientists who work within the heavily regulated, slow, and repressive system of medical development really don&#x27;t like it when people try to get things done more rapidly and more inventively. To the extent that it closes down productive avenues, this is a dangerous attitude.<p>Recent commentary suggests that none of the results so far are either large enough or extensive enough to definitively be something other than the placebo effect, chance, or other items such as a patient making lifestyle changes. I think there is some skepticism regarding the potential effectiveness of transfusions of young blood in any case; the data is somewhat mixed, and the underlying theory on what is going on is still in flux. Recent research suggests that the effects observed in parabiosis studies of mice with joined circulatory systems are due to a dilution of harmful factors in old blood rather than a delivery of helpful factors from young blood, for example. If that is the case, it would mean that transfusions should produce very limited results at best. Still, obtaining data is the important thing, and that is what is being done here. Those complaining the loudest should put in the work to raise funds and run a study the way they would prefer.
Python-mysql-pool(PyMysqlPool)
You can use <a href="https:&#x2F;&#x2F;github.com&#x2F;LuciferJack&#x2F;python-mysql-pool" rel="nofollow">python-mysql-pool</a>.<p>Configuration and use are easy; it supports multiple databases, and both dynamic and fixed pools.<p><pre><code>
# step 1
# file: mysql_config.py -- change to your db config
db_config = {
    &#x27;local&#x27;: {
        &#x27;host&#x27;: &quot;10.95.130.118&quot;,
        &#x27;port&#x27;: 8899,
        &#x27;user&#x27;: &quot;root&quot;,
        &#x27;passwd&#x27;: &quot;123456&quot;,
        &#x27;db&#x27;: &quot;marry&quot;,
        &#x27;charset&#x27;: &quot;utf8&quot;,
        &#x27;pool&#x27;: {
            # use = 0: no pool; otherwise use a pool
            &quot;use&quot;: 1,
            # size &gt;= 0; 0 means a dynamic pool
            &quot;size&quot;: 0,
            # pool name
            &quot;name&quot;: &quot;local&quot;,
        }
    },
    &#x27;poi&#x27;: {
        &#x27;host&#x27;: &quot;10.95.130.***&quot;,
        &#x27;port&#x27;: 8787,
        &#x27;user&#x27;: &quot;lujunxu&quot;,
        &#x27;passwd&#x27;: &quot;****&quot;,
        &#x27;db&#x27;: &quot;poi_relation&quot;,
        &#x27;charset&#x27;: &quot;utf8&quot;,
        &#x27;pool&#x27;: {
            # use = 0: no pool; otherwise use a pool
            &quot;use&quot;: 0,
            # size &gt;= 0; 0 means a dynamic pool
            &quot;size&quot;: 0,
            # pool name
            &quot;name&quot;: &quot;poi&quot;,
        }
    },
}

# step 2
# Note: create your own table

# step 3 (examples below)

# pooled query
def query_pool_size():
    job_status = 2
    _sql = &quot;select * from master_job_list j where j.job_status in (%s)&quot;
    _args = (job_status,)
    task = query(db_config[&#x27;local&#x27;], _sql, _args)
    logging.info(&quot;query_pool_size result is %s, input _args is %s&quot;, task, _args)

# single query (no pool)
def query_npool():
    job_status = 2
    _sql = &quot;select * from master_job_list j where j.job_status != %s&quot;
    _args = (job_status,)
    task = query_single(db_config[&#x27;local&#x27;], _sql, _args)
    logging.info(&quot;query_npool result is %s, input _args is %s&quot;, task, _args)

# insert
def insert(nlp_rank_id, hit_query_word):  # add more args as needed
    _args = (nlp_rank_id, hit_query_word)
    _sql = &quot;&quot;&quot;INSERT INTO nlp_rank_poi_online
        (nlp_rank_id, hit_query_word, rank_type, poi_list, poi_raw_list,
         article_id, city_id, status, create_time, version, source_from)
        VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)&quot;&quot;&quot;
    affect = insertOrUpdate(db_config[&#x27;local&#x27;], _sql, _args)
    logging.info(&quot;insert result is %s, input _args is %s&quot;, affect, _args)

# update
def update(query_word, query_id):
    _args = (query_word, query_id)
    _sql = &quot;update nlp_rank set query_word = %s WHERE id = %s&quot;
    affect = insertOrUpdate(db_config[&#x27;local&#x27;], _sql, _args)
    logging.info(&quot;update result is %s, input _args is %s&quot;, affect, _args)
</code></pre>
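For readers new to the pattern, here is a minimal, library-agnostic sketch of what a fixed-size connection pool does; the names (FixedPool, make_conn) are illustrative and not PyMysqlPool&#x27;s actual API:<p><pre><code>
import queue

class FixedPool:
    # Minimal fixed-size pool: pre-create N connections, hand them out
    # on acquire(), and require callers to return them via release().
    def __init__(self, make_conn, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(make_conn())  # pre-create all connections

    def acquire(self, timeout=None):
        # Blocks while all connections are checked out.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)
</code></pre> A &quot;dynamic&quot; pool, by contrast, creates connections on demand up to some cap and may discard idle ones; that is what the size = 0 setting above selects.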
The Accidental Elitist: Academia is too important to be left to academics
&gt;There’s a huge difference, for instance, between defending academic jargon as such and defending academic jargon as the typical academic so often uses it. There’s likewise a huge difference between justifying jargon when it is absolutely necessary (when all other available terms simply do not account for the depth or specificity of the thing you’re addressing) and pretending that jargon is always justified when academics use it. And there’s a huge difference between jargon as a necessarily difficult tool required for the academic work of tackling difficult concepts, and jargon as something used by tools simply to prove they’re academics.<p>What I find so amusing about this paragraph is that the word &quot;jargon&quot; is, in this context, jargon. It is not explained when jargon is necessary, or when it isn&#x27;t, or what kinds of discussions would make it necessary. In fact, it is probably ideal to be familiar with the use of some sort of jargon, and with contexts in which it is necessary and unnecessary, in order to really make sense of what the author means here.<p>And something important got left out, which is: jargon is often <i>easy</i>. Easy like reaching into the bag of chocolate-peanut clusters rather than making some actual food. Jargon has two primary functions: it increases the number of things you can talk about <i>specifically</i> and it removes the emotional impact of talking about those things. The cognitive-somatic empathy impact of &quot;blunt trauma to the genital region&quot; is less wince-inducing than &quot;kick in the nuts&quot;. Using jargon puts a Latinate wall between you and any difficult emotions that you might have about the subject that you&#x27;re discussing.<p>But there is another purpose of jargon, too, in which it comes closer to that other famous tool of the academic, the formalism. Language is fluid and the meaning of words changes, but in order for the academic canon to remain useful for centuries, it has to stay somewhat the same. Jargon is supposed to fulfill the need for language that doesn&#x27;t change. That&#x27;s why, when academic jargon like &quot;microaggression&quot; -- a term which has appeared in papers since 1989 ( <a href="https:&#x2F;&#x2F;s3.amazonaws.com&#x2F;academia.edu.documents&#x2F;3424662&#x2F;Microaggression.pdf?AWSAccessKeyId=AKIAIWOWYYGZ2Y53UL3A&amp;Expires=1504168265&amp;Signature=cCqcSv+M9KHuuTxw274CCUdMMYQ=&amp;response-content-disposition=inline; filename=Law_as_microaggression.pdf" rel="nofollow">https:&#x2F;&#x2F;s3.amazonaws.com&#x2F;academia.edu.documents&#x2F;3424662&#x2F;Micr...</a> ) -- becomes common in social parlance, it also stops <i>being jargon</i>, because it is no longer tied to its original meaning. To be brief (and somewhat wrong) jargon tries to be prescriptive, whereas real language is descriptive. And because language which is no longer jargon cannot be <i>correctly</i> used as though it were, having fluency in an academic field does not imply being able to communicate with ordinary people about it. In order to have conversations in colloquial language you have to <i>learn</i> to use colloquial language, which is a separate skill-set from what the academic is expected by their job title to accomplish.<p>So if you try to fix this communication problem by turning the jargon switch on and off:<p>&gt;It goes without saying we are implicitly celebrating a kind of technocratic anti-politics, though, when we contribute to making the discussion of politics intelligible only to a select few. 
If Trump’s election didn’t teach us that this kind of thing is a death wish, nothing will.<p>well, maybe you should be beaten over the head with a copy of <i>On Certainty</i>.<p>Is the discussion intelligible only to a select few? No, if it were, it would be physics, and there are very few popular conspiracy theories about physics. Except for that one funded by billionaires, but that&#x27;s a different problem. There are lots of popular conspiracy theories about sociology, and that&#x27;s because it isn&#x27;t that they hear you talking and don&#x27;t get any ideas out of it, it&#x27;s that they hear you talking and they think you&#x27;re some kind of monster because of it. That doesn&#x27;t happen when we&#x27;re discussing wavefunctions.<p>And that&#x27;s because this:<p>&gt;The perpetual conceit of academics in the humanities is that translating their work into a more accessible vernacular will “dumb down” what are necessarily complex subjects. Important stuff will be lost. Behind this conceit, though, is an implicit presumption from just about every academic that they could perform this kind of translation if pressed to. It has been one of the great sources of my disillusionment with academia to realize that a staggering majority of jargonauts, when pressed, actually can’t.<p>doesn&#x27;t happen because academics don&#x27;t really understand the subject they&#x27;re talking about, it happens because they don&#x27;t understand the language they&#x27;re trying to translate it into. So I would certainly agree that<p>&gt;It requires that one goes and does the painstaking work of learning languages that express and condition the worldviews of their speakers, that are encoded with specific logical systems and empowered by cultural conventions, which must also be learned and practiced. It requires daily efforts to understand how this cultural material works for smaller and larger publics as well as repeated attempts to construct workable critical stances out of that very material.<p>but ordinary language is not by its naturally fluid nature an appropriate foundation for critical theory. Rather the task of communicating in ordinary language must be approached in and of itself. I think there&#x27;s another question of whether the researchers themselves should be tasked with this, or a separate group of people, who may not exist.<p>Yet it is probably not at all comforting to academics, a generally self-assured culture, to suggest that they can&#x27;t fix politics alone. In a more harmonious culture, perhaps journalists would bridge this chasm, but today, they seem more interested in dredging it.<p>PS:<p>&gt;[the right was] shifting people’s sentiments in such a way that they’ll even support things that are fundamentally bad for them.<p>The words &quot;fundamentally bad&quot; link to a book about tax cuts in Kansas. Problem: while the whole rest of the essay focused on sociology, this book is about economics. Problem 2: Kansas&#x27;s tax-reform experiment did not actually pass because of wide popular support. In fact, libertarian economics doesn&#x27;t actually poll very well on its own (look at all the love for free trade agreements in 2016), it just happens to be attached to the banner of the Republican Party. If you want to be <i>convincing</i>, it helps not to be wrong in the ways that even laypeople can recognize.<p>Does my postscripted objection make sense? 
The context is a sentence about the right convincing people to vote for things that are bad for them in an essay about sociology; the example is something people <i>didn&#x27;t vote for</i> which was <i>implemented</i> by the right and which was bad <i>economics</i>. I don&#x27;t think that&#x27;s a big deal in isolation, but consider the choice the author made, by using this example over any other example, in the context of my first paragraph. There are many things the right <i>really</i> convinced people to vote for that could be bad for them, such as building a goddamn wall on the Mexican border or repealing a law that everyone was suddenly in favor of when you call it by its original name, and yet the author -- <i>while decrying jargon</i> -- chose an example which is unfamiliar and uncontroversial. Easy. Chocolate-peanut clusters.<p>&gt;What makes a subject difficult to understand — if it is significant, important — is not that some special instruction about abstruse things is necessary to understand it. Rather it is the contrast between the understanding of the subject and what most people want to see. Because of this the very things that are most obvious can become the most difficult to understand. What has to be overcome is not difficulty of the intellect but of the will.
Comparative Macrology (2014)
&gt; Macros in CL are just functions that run at compile time<p>Macros work fine in a Common Lisp interpreter at runtime.<p>Here we use an interpreter:<p>Define a macro that prints a message when it runs.<p><pre><code> CL-USER 13 &gt; (defmacro swap (x y) (print &#x27;(&gt; running the swap macro function)) (let ((tmp-sym (gensym))) `(let ((,tmp-sym ,x)) (setf ,x ,y) (setf ,y ,tmp-sym)))) SWAP </code></pre> Use it:<p><pre><code> CL-USER 14 &gt; (defun foo (a b) (let ((a1 a) (b1 b)) (print (list a1 b1)) (swap a1 b1) (print (list a1 b1)) (values))) FOO </code></pre> The macro function didn&#x27;t run here; otherwise it would have printed something.<p>Run the code. The macro runs, too.<p><pre><code> CL-USER 15 &gt; (foo 20 30) (20 30) (&gt; RUNNING THE SWAP MACRO FUNCTION) (30 20) </code></pre> Run the code again. The macro runs, too.<p><pre><code> CL-USER 16 &gt; (foo 10 40) (10 40) (&gt; RUNNING THE SWAP MACRO FUNCTION) (40 10) </code></pre> Compile the code; the macro runs now, too.<p><pre><code> CL-USER 17 &gt; (compile &#x27;foo) (&gt; RUNNING THE SWAP MACRO FUNCTION) FOO NIL NIL </code></pre> In compiled code the macro no longer needs to run:<p><pre><code> CL-USER 18 &gt; (foo 10 40) (10 40) (40 10) </code></pre> Note also, as pkhuong said, that the SWAP macro will evaluate the place forms twice.<p>Additionally it&#x27;s possible to observe the change:<p><pre><code> CL-USER 25 &gt; (let ((a (vector 10)) (b (vector 20))) (swap (aref a (progn (print (list :a a)) 0)) (aref b (progn (print (list :b a)) 0))) (values a b)) (:A #(10)) (:A #(10)) (:B #(10)) (:B #(20)) ; &lt;- here a is already changed when we get the place value for b #(20) #(10) </code></pre> Common Lisp has a built-in ROTATEF, which takes care of that and which we can use as a replacement for SWAP:<p><pre><code> CL-USER 27 &gt; (let ((a (vector 10)) (b (vector 20))) (rotatef (aref a (progn (print (list :a a)) 0)) (aref b (progn (print (list :b a)) 0))) (values a b)) (:A #(10)) ; &lt;- only once computed in the first subform to ROTATEF (:B #(10)) ; &lt;- only once computed in the second subform to ROTATEF ; also the computation of the new value for B does not observe ; the new value of A #(20) #(10) </code></pre> Thus a &#x27;real&#x27; SWAP macro will need to look different from what the article presents. Otherwise it will not play nicely with the rest of the language: multiple evaluations and side effects are the problems to address.
Drones Are Speeding Hurricane Harvey Response
My SO helped with the crowdsourced civilian effort to dispatch water rescues of stranded people. The effort is wound down now; it will likely be hailed as a coup of &quot;social media&quot;, but that&#x27;s overhyping the social media aspect. Social media got the word out about where to go online to enter the coordination, but had little to do with the actual coordination itself. The group my SO worked with coordinated mostly through the Zello mobile app, the Glympse mobile app, houstonharveyrescue.com (going dark soon, due to PII concerns), and a Google Sheet to track water rescue requests.<p>TFA mainly discusses the use of drones in the recovery phase, and only tangentially touches upon drones for rescues. Drones were not used much for long-range water rescues, because they were a hazard for the many volunteer helicopters that responded.<p>This pointed out the need for a solution (preferably as automated as possible) that allocates helicopter and drone flight paths in a disaster area.<p>The whole experience was very eye-opening for me. There isn&#x27;t a good solution for coordinating disaster response by civilians, but even just the <i>ad hoc</i> quick-and-dirty collection of apps used by the various civilian groups that responded showed how much leverage Internet-enabled coordination delivered. The latency of civilian response is much lower than government response, but once the government landed resources, the government response had much greater volume. Mix both groups at the right times, and you&#x27;d have an admirable disaster response, pretty much what happened in Houston.<p>Observations from listening in on my SO during meal times (the only times I could break from work):<p>* Misinformation is rife. This is a difficult problem to address. Example: a rumor starts that a rescuer was slashed with a machete. The story morphs into shot and slashed, then slashed-got-sepsis. Turns out a guy stepped on glass and got a nasty gash.<p>* No good solution to map rapidly-changing road conditions. Piles of rescuers with valuable boats in the first critical hours of response were diverted to drive around to find a way into the right areas of Houston to deploy. Need a way to effectively intake reports from people with just trucks (lots of citizens responding with no boats wanted to help in some way), snapping pictures at a specific location, giving location and time, and reporting road closures due to a specified height of water, electrical lines, <i>etc.</i> Bonus for AR-enabled measurement of water depth, based upon a baseline measurement of the vehicle. Extra bonus for measuring water speed by tossing a recognized object into the water and tracking it. Then people who pull up the heat map of closures will flood-fill (pardon the pun) out possible routes, avoiding lots of redundant checks of possible routes. A lot of valuable time was wasted on this; the first few hours were filled with civilians an hour from arriving at the area (as instructed over social media) calling in and asking how they could reach where they could drop their boats, because the main routes were all closed.<p>* No good solution to map flooded areas, how deep, and forecasted levels. People pieced it together by hand, passing it along the grapevine. Depth matters: below a certain level, outboards were getting stuck. Below a different level, all boats had to watch for fences they could get snagged on (had a few that capsized on such obstacles).
Ideal: remote-reporting gauges scattered in a grid pattern throughout the area, or gauges that can be dropped down during the initial rescue efforts, and reclaimed later.<p>* We reached out to Uber and Lyft. IMHO, this was a PR coup sitting around for the taking. You have a system that optimizes for efficiently tracking and queuing requests, matching requests to vehicle capacity, directing the closest vehicle to the request, and showing requesters the live status. This was precisely what the water rescue coordination needed. Uber gave a canned &quot;we&#x27;re standing down for the safety of our drivers, for those who are outside of the areas of Houston that kicked us out that we can still operate in&quot;. Lyft said great idea, but the conversation black-holed after that.<p>* Any app-based solution will have to be very sensitive to energy usage. Rescue requesters ran out of power on their phones distressingly often. Zello was established early on as a bad way to communicate with requesters; it drained batteries very quickly. Instead, requesters reached out to relatives&#x2F;friends they knew who were safe, instructed them how to get Zello and get on the rescue channels, then put in a request, and then those relatives&#x2F;friends would periodically query for a status update on the request. Use the strongest WiFi if available, fall back to cell data (lowest-tech with strongest signal available), then SMS, then voice.<p>* These status update requests (see previous entry) took up a lot of bandwidth at the height of the rescues, and added to the stress on the rescuers. A queuing system that operated over WiFi if available, then over cell data if not, telling requesters they are number N in line for the nearest rescuer, would have made the coordination a lot easier (see the sketch after this list).<p>* A unique ID was eventually established for assigning each request. A voice recognition system could easily listen in on a group and automatically assemble in timeline form all conversations that mention a particular ID, so anyone looking at a particular rescue request could see all historical discussion about that request.<p>* The Zello conversations quickly got unwieldy when there were too many people vying for &quot;the microphone&quot;. Fortunately, people figured out how to manage this somewhat, splitting into Port Arthur and Houston-specific channels, for example. An app to auto-split by role (dispatcher, rescuer, requests, <i>etc.</i>) and density-based geography (bounded by neighborhood boundaries, perhaps) would have helped reduce some of the confusion.<p>* A voice recognition system to simply assign people to the right channel based upon their initial request would be helpful. There was an opportunity here for someone like Twilio to set up a single phone number that did this. In the first few hours, people did this through their personal lines: &quot;Call me at xxx-xxx-xxxx when you are an hour out on I-10 from Conroe to get the current rally point.&quot; Then you hear later: &quot;I&#x27;m an hour out, called the xxx-xxx-xxxx number as instructed, and it&#x27;s been busy for the last 15 minutes, what else can I do to find the current rally point?&quot;<p>* Most useful feature of Zello: historical recording of every single transmission while you were listening. This let people go through them and follow up on water rescue requests, then mark them safe if they were rescued.
This was a big problem at first: rescuers were pulling up a map on the web app, rushing to a request, then getting disappointed when they found out the rescue request had long since been taken care of by another rescuer. Zello could improve on this: the historical recording was only for the duration you were listening; a feature (even paid) that pulled the last N minutes&#x2F;hours from their servers would be even better. Even better is a solution that tracks a rescuer to a rescue request, then presents a simple confirmation screen (# of adults, elderly, children, disabled, babies, pets rescued, any variation from the pre-arranged drop-off point, any voice notes required), and auto-marks a request.<p>* Need a solution that maps water conditions at a specific location, ideally with tagged input of submitter, time, audio&#x2F;picture&#x2F;video, and NMEA data. At one point, the flat-bottomed boats were having a lot of trouble navigating choppy waters as Harvey came back in and churned up the flooded areas. Fold in weather data, and predictively age out the conditions if possible, displaying that the computer model <i>thinks</i> conditions might be so-and-so but be careful because it could still be the reported condition, until another submission confirms calmer conditions.<p>* People <i>REALLY love to help</i>. But if their efforts go unappreciated, or go to waste (about the same as unappreciated), they will get dejected <i>very</i> quickly. This is why precise, comprehensive coordination is so critical to manage.<p>* There was initial concern about fake rescuers. This concern should not be dismissed, but as far as we could tell, this didn&#x27;t happen.<p>* Need a solution that maps shelter facilities &#x2F; government resources as they come online, capacity, and current utilization, so rescuers can efficiently forward rescuees, most of whom are somewhat in a state of shock, to the best available facilities. Many shelter facilities early on were just school gyms, churches&#x2F;temples&#x2F;mosques, warehouses, and retail stores.<p>* A large number of private helicopters volunteered early. Knowing the most urgently medical-critical requests to prioritize was all manually performed.<p>* I suspect that the <i>ad hoc</i>, thrown-together approach of apps to coordinate the rescues is close to its scalability limit. About 10K rescues were logged, in round numbers. I don&#x27;t think the same approach will work beyond 30-50K rescues, because bottlenecks were becoming apparent to me even with what we had.<p>* Best part of Internet-enabled rescue coordination: anyone now has a choice to actively participate in the rescue no matter where in the world they are. That&#x27;s incredibly powerful and a game-changer.<p>That&#x27;s all I have off the top of my head.<p>All in all, I&#x27;m quite impressed how well this went, despite the difficulties and setbacks I saw, and it shows some of the best humanity has to offer. There are some really interesting, deep CS and software engineering problems to solve in disaster response management and coordination.<p>Special shame to Joel Osteen: after his weak response compared to the area churches and mosques with far less funding who threw their doors open within the first hours after Harvey hit, he should be ostracized, as if &quot;prosperity ministry&quot; wasn&#x27;t bad enough on its own.
Didn&#x27;t know who this bloke was before Harvey, other than &quot;some guy who runs a megachurch&quot;, but after reading the news stories, I can&#x27;t believe he convinces so many parishioners to follow him.
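Picking up the queuing idea flagged in the list above: purely as a thought experiment (none of the groups actually ran this; the names, coordinates, and distance model are invented for illustration), the &quot;you are number N in line for the nearest rescuer&quot; behavior fits in a few lines of Python:<p><pre><code>
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in km.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class RescueQueue:
    # Toy dispatcher: one FIFO queue per rescuer, chosen by distance.
    def __init__(self, rescuers):
        self.rescuers = rescuers                  # name -&gt; (lat, lon)
        self.queues = {n: [] for n in rescuers}

    def submit(self, request_id, lat, lon):
        # Pick the rescuer whose last known position is nearest.
        nearest = min(self.rescuers,
                      key=lambda n: haversine_km(lat, lon, *self.rescuers[n]))
        self.queues[nearest].append(request_id)
        return nearest, len(self.queues[nearest])  # &quot;number N in line&quot;

rq = RescueQueue({&quot;boat-1&quot;: (29.76, -95.37), &quot;boat-2&quot;: (29.70, -95.50)})
print(rq.submit(&quot;R-1001&quot;, 29.74, -95.40))  # -&gt; (&#x27;boat-1&#x27;, 1)
</code></pre> A real system would also have to handle rescuer movement, cancellations, and boat capacity, which is exactly where the thrown-together tooling described above started to hit its limits.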
Global Technology Advancements
Octavio Paz Lozano, a Mexican poet and diplomat, describes technology as &quot;not an image of the world but a way of operating on reality.&quot;<p>It’s amazing to see how technology has wrapped itself around “man’s little finger.” With multitasking fast becoming the mantra of the day, it’s tough to imagine being away from man’s newest friend and companion: the cell phone. Carol Connell, a poet from poetrysoup.com, captures the irony of technological integration into our daily lives in the poem &quot;Cell Phone Phenomenon.&quot;<p>It reads thus:<p>The husband and wife go out to dinner to have quality time alone; as they sit across from each other, both of them on their phone.<p>And what shall we do with our cell phone next? Should we start a photo stream or send out a text? Our day is just not complete, it seems without sending or receiving the latest memes.<p>The technological evolution of the past decade has been so hectic that it’s tough to pin it down and to analyse how the cornerstones of technology became so deeply embedded in our daily lives. Experts believe that such integration was possible because of chain reactions fuelled by the innate need of humanity to remain connected, irrespective of communication barriers. Like all other eras, the digital revolution was observed on both the software and the hardware sides. As authorities viewed and documented human history, they distinguished the Information Age as an economy based on information computerization. This information era&#x27;s foundation was laid with the invention of early computing devices such as the abacus in Babylonia over 5000 years ago.<p>As time ticked away, a relatively advanced computing device was designed by the &quot;Father of Computers&quot;, Charles Babbage, an English mechanical engineer and polymath of the 19th century. But as the British government decided that the project was not as viable as previously conceived, it was dissolved thereafter. However, this advancement sparked the need to develop a more sophisticated computer system – analog computers followed by the mightier digital computers.<p>The two dreadful world wars, which claimed the lives of about 11 million military personnel and about 7 million civilians, relied heavily on machines and computers. In order to achieve a favourable outcome and establish technological dominance, the government of the United States heavily funded its navy to develop an electromechanical analog computer small enough to be used aboard a submarine, successfully shrinking its size, equipping the machine with precision, and teaching it to solve firing problems using trigonometry.<p>It is undeniable that man&#x27;s need to win has often yielded technological advancement. As the world recuperated from the disasters of war, a new battlefield was prepped for the show of dominance. The entire world witnessed the launch of the Soviet Union&#x27;s artificial satellite, Sputnik, on October 4, 1957. Some viewed this news with suspicion, some with envy, and some with caution, as the foundations were laid to usher in a new era. Another noteworthy war-born invention is the Internet (and the WWW, the world wide web).<p>With the outbreak of war, communication and information sharing within a small window of time became crucial. The military realised the efficiency of packet networking for communication. The US Department of Defense sanctioned funds for the ARPANET project. This project helped in developing an ecosystem of connected networks.
Simultaneously, the UK government funded experts like Donald Davies to design and develop packet networks between the 1960s and early 1970s.<p>Soon, man realised the application of computers and communications to non-defence sectors like business, commerce, pharmaceuticals, etc. Because private entities operated differently from the armed forces, principal designers J. Presper Eckert and John Mauchly, of the United States, invented UNIVAC – the first commercial computer. With the foundation laid for PCs and laptops, the market saw many entrants. Facing cut-throat competition, IBM, another American multinational technology company, introduced the IBM 702 to go toe-to-toe with UNIVAC.<p>As humans progressed, they envisioned a computer for each individual, terming them personal computers; laptops followed within a few years. The first personal computers were assembled in 1975 and came as kits: the MITS Altair 8800, followed by the IMSAI 8080, an Altair clone. It seemed that a new universe of possibility lay at our feet. Among the first to try their luck were college dropout Bill Gates and his childhood friend Paul Allen, establishing Microsoft. Leading ahead on the path of progress, Microsoft added many feathers to its hat - the Microsoft Disk Operating System (MS-DOS), Windows, the Internet Explorer web browser, the Xbox gaming console, and much more. Soon the technology sector saw fierce competition as Steve Jobs and Steve Wozniak began their Apple journey; Intel was founded by Gordon Moore and Robert Noyce; Larry Ellison, Bob Miner, and Ed Oates started the software company Oracle; and so on.<p>On April 3, 1973, Martin Cooper made the first call via a mobile phone. The initial handset, though not made with consumers in mind, was sold for $4,000. Standing true to the meaning of mobile – on-the-go communication – the sector stands witness to staggering facts. With a legacy of 150 years, Nokia entered mobile communications in 1968 and soon won over 250 million consumers, making its Nokia 1100 the best-selling electrical gadget in history.<p>Not long after, packet networking combined with satellite communication, leading to the development of the present Internet. At the first International Conference on the World Wide Web in Geneva, attendee Mark Pesce addressed the audience: &quot;if the web can be said to have had a starting gun, it fired on that Wednesday morning at CERN,&quot; marking the arrival of the WWW in the year 1994. The Internet opened such a world of technology that man is only at the hem of it. As human civilization grows, so does its imagination to do the unthinkable. Technology has helped man to achieve that I-m-Possible.<p>With social networking, RFID, QR codes, graphical user interfaces, bar codes and scanners, SRAM and flash memory, augmented and virtual reality, etc., technology is progressing at a very fast pace. Combine technology with man’s imagination and we are sure to find GOD soon. Although it has progressed so much, technology has a few shades tainted red. Critics are often heard blaming technological advancement for impersonal communication, growing distances, a globalising community, privacy and security frauds, etc. Another worthwhile debate is pinpointing the master vs. the slave. But given the democratic setting, it’s safe to say for now that it’s the user’s call how to employ technology to enhance the quality of life.
The real prerequisite for machine learning isn’t math, it’s data analysis (2016)
(I&#x27;m assuming that we are talking about the original post <a href="http:&#x2F;&#x2F;sharpsightlabs.com&#x2F;blog&#x2F;machine-learning-prerequisite-isnt-math&#x2F;" rel="nofollow">http:&#x2F;&#x2F;sharpsightlabs.com&#x2F;blog&#x2F;machine-learning-prerequisite...</a> reposted on r-bloggers, focusing on business&#x2F;company use of data science, not necessarily on the many other uses).<p>Lots of folks like to slag these articles and talk about how &quot;If it&#x27;s not real Maths, it&#x27;s crap&quot; (<a href="http:&#x2F;&#x2F;www.dailymotion.com&#x2F;video&#x2F;xgzfxs" rel="nofollow">http:&#x2F;&#x2F;www.dailymotion.com&#x2F;video&#x2F;xgzfxs</a>), but if you&#x27;ve worked in larger orgs, you recognize that there is a need for more than just advanced math. There really are myriad needs, and the failure of AI&#x2F;ML (yes, I&#x27;m combining them for simplicity here) in an org usually comes down to a lack of understanding of these needs. Here are some I&#x27;ve seen:<p>1) What is the problem that AI&#x2F;ML is being applied to? What is it optimizing, deciding, predicting, forecasting, categorizing? How will this decision be used in a process or flow? This requires a business analytic approach, understanding available data, _what it means_, how it&#x27;s generated, and how the business might evaluate the impact of the ML&#x2F;AI. These folks need to interact with the business folks as well, so some communication skills are helpful... though that&#x27;s true for every role these days.<p>2) How will said decision be implemented both in tech build, test&#x2F;QA&#x2F;FUT&#x2F;UAT&#x2F;etc, and prod? This technical architecture and approach is data engineering, but some folks in the data science world are amazing at this. BTW, a model that works in dev may not scale in prod. The fact that so many folks keep &quot;re-discovering&quot; this is scary to me. There is a whole class of amazing folks who can re-implement models to scale them, and if you know them, reward them well.<p>3) How will models&#x2F;algos&#x2F;systems be built? This workflow is often pretty sloppy, just a bunch of jupyter notebooks or a tonload of scripts (but they&#x27;re in Git, so it&#x27;s ok), and so replication and scaling become painful... esp. in regulated industries. Again, data engineering approach, but needs a more nuanced understanding of the vagaries of ML&#x2F;AI. I find that solving business problems may not always fit a traditional software workflow (call it &quot;agile&quot; all you want, it doesn&#x27;t always fit) but ymmv. So, you may need to create new workflows for your org&#x27;s needs, and this tooling may not be off the shelf, but like automated testing, this tech debt will need to be paid sooner or later.<p>4) What ML&#x2F;AI approach should I use, esp. if I&#x27;m using pre-existing approaches? This becomes more of the data science analytic approach, mixing the understanding of how ML&#x2F;AI works with the data landscape (how was my internal data generated? What does it mean? Is it stable? What&#x27;s available at score time, and what&#x27;s its latency?) and how to build various working predictive models. Note that this is traditionally where data scientists&#x2F;model builders&#x2F;analysts spend their time, from data cleansing to preliminary analysis to generating&#x2F;training various models to crying when none of them predict well to stumbling onto a fascinating and amazing combination of models and approaches at 3am.
Yes, math is defn helpful here, but _understanding_ the underlying math is often helpful enough, vs. _mastery_. A good understanding of the levers affecting each model&#x2F;algo&#x2F;DL architecture&#x2F;etc. and how to diagnose them can get you pretty far, though you may violate assumptions or overfit if you aren&#x27;t careful.<p>5) Real Algo design: using all that math that underlies the models not just to diagnose a pre-designed package but to make your own optimizer, or your own minimizers, or your own way of computing the Hessian, or new weighting approaches, or whatever new approach your expertise is in. From writing your own primitives to making your own deep learning architecture, this is often the real wizardry, but it&#x27;s not needed in EVERY case. But the effort can really pay off, in optimized prediction, improved use of resources, and unique IP which can be a competitive advantage. Remember, custom work is great, but maintenance of the resulting product can be painful, esp. if done in a language few folks in the org understand and if few resources are available outside.<p>Finding skills to meet all of these needs in one person is certainly possible, but somewhat unicornish. And you may not need all of these either. But the best orgs that have to scale have at least a &quot;Data Prep&#x2F;Data Engineering&quot; role, a &quot;Data Science&quot; role, and if needed, a &quot;Data Analyst&quot; role who can help translate specific business needs into analytic problems, and also carve off the easy stuff (ad-hoc pulls and simpler analyses).<p>So, to the original post: yeah, sort of. You need to be able to analyze, but you also need to be able to do some of the math, and understand the impact of the rest. If you are awesome at the math, go up to making your own magic. But if you aren&#x27;t, at least try to understand as much as you can, while trying to also be familiar with the other areas. I do think that the emphasis on visualization somewhat glossily ignores real analysis skills, but it&#x27;s one of many helpful approaches to understanding the problem. (Pet peeve: just drawing graphs&#x2F;charts is usually not analysis. Sorry, but true.)<p>If you are hiring for a data scientist, at least be clear with yourself on which of these needs you are looking for, and whether you need them all. Also be clear about where you hope to grow, and whether you are hiring for now, the near future, or the year 3000.<p>(Ok, one more: do yourself a favor and learn experimental design. I am sort of shocked at how many data scientists I chat with who haven&#x27;t really tested champion&#x2F;challenger models, or compared 2 or more processes or approaches in a randomly assigned test, or gotten as close as they can with quasi-experimental or matched-groups approaches. The simple ones are indeed really Excel simple, but if it&#x27;s so easy, why haven&#x27;t you at least tried it? And the causal-modeling stuff is a bayesian&#x27;s dream come true, so it&#x27;s worth learning.)
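To make the champion&#x2F;challenger point concrete, here is a minimal sketch of the &quot;Excel simple&quot; version: randomly assign cases to the incumbent model and the challenger, then run a two-proportion z-test on the outcome rates. This is an illustration, not the OP&#x27;s method; every number and name in it is invented.<p><pre><code>import math
import random

random.seed(0)

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    # Pooled two-proportion z statistic for comparing success rates.
    p_a, p_b = hits_a &#x2F; n_a, hits_b &#x2F; n_b
    pooled = (hits_a + hits_b) &#x2F; (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 &#x2F; n_a + 1 &#x2F; n_b))
    return (p_b - p_a) &#x2F; se

# Simulated randomized test: champion converts ~10%, challenger ~12%.
n = 5000
champion = sum(random.random() &lt; 0.10 for _ in range(n))
challenger = sum(random.random() &lt; 0.12 for _ in range(n))

z = two_proportion_z(champion, n, challenger, n)
print(f&quot;champion {champion &#x2F; n:.3f} vs challenger {challenger &#x2F; n:.3f}, z = {z:.2f}&quot;)
# |z| &gt; 1.96 means the difference is significant at the 5% level (two-sided).
</code></pre>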
Digital Disruptors
During the 1800s, investors in the stock market traded with paper securities, yelling and screaming at the top of their voices in order to ‘transact’ and own stacks of those papers. Little did they know that those physical papers would fade into oblivion in the future and that securities would be held virtually in digital formats.<p>Although the transition from the physical world to today’s all-encompassing virtual lives has been slow, such changes have been important catalysts for technology companies to begin innovating and redefining how traditional processes work. Paper securities converted to virtual demat holdings are just one of the numerous examples of how technology made its way into traditional processes.<p>Ones and zeros are eating the world. The creation, storage, communication, and consumption of information is being digitized, turned into the universal language of computers, at a breathtaking pace. All forms and types of enterprises – from small businesses to large corporations, from non-profits to government agencies – are going through “digital transformation”, taking the help of digitisation to create new processes, activities, and modes of transactions.<p>Digitization is changing the way we work, shop, bank, travel, educate, govern, manage our health, and enjoy life. The technologies of digitization enable the conversion of traditional forms of information storage such as paper and photographs into the binary codes (ones and zeros) of computer storage. The process of converting analog signals into digital signals is a subset of such a workflow. It is worth noting, though, that the digital transformation of economic transactions and human interactions is more pronounced than the conversion of various forms of media into bits and bytes. In the last decade or so, the rate at which technology evolves has become overwhelmingly steep. From driverless cars to artificial intelligence, high-tech companies are competing to achieve the unthinkable: a life where WORK IS FOR MACHINES AND LIFE IS FOR HUMANS.<p>Recent developments in the technology industry have been quite fascinating. Artificial intelligence (AI), which refers to the use of computer systems to perform tasks that normally require human understanding, has been around for nearly 60 years. But it is only recently that AI has appeared on the brink of revolutionizing industries as diverse as health care, law, journalism, aerospace, and manufacturing, with the potential to profoundly affect how people live, work, and play.<p>Within 3 to 5 years, commercial uses of AI are expected to increase exponentially. The idea is that AI embedded in product applications will benefit end customers. AI is also being used in process applications within corporations to automate processes and enhance customer satisfaction. Automated voice response systems have been used for a few years now to replace human customer service agents for customer support. For example, the Hong Kong subway system has employed AI to automate and optimize the planning of workers&#x27; engineering activities, building on the learning of experts. Insight applications harness advanced analytical capabilities such as machine learning to uncover insights that can inform operational and strategic decisions across an organization.
Chipmaker Intel employs predictive algorithms to segment customers into groups with similar needs and buying patterns, effectively using this information to prioritize its sales efforts and tailor promotions (Intel expects that its AI-based approach will generate an additional $20 million in revenue once it is rolled out globally).<p>The next big disruptor is blockchain technology. Blockchain is a secure and reliable technology used to store and share data or record transactions on an infinitely large, democratized, networked system of ledgers. It enables transactions to be performed in a relatively fast and cost-effective manner and removes brokers from the system, leading to direct peer-to-peer interactions.<p>Blockchain data storage is predicted to be a massive disruptor in the next 3-5 years. Because the existing cloud storage services are centralized, users have to trust a single storage provider. With blockchain, this can become decentralized. For example, Filecoin is testing cloud storage using a blockchain-powered network to improve security and decrease dependency. Additionally, users can rent out their excess storage capacity, Airbnb-style, creating new marketplaces.<p>Blockchain technologies make tracking and managing digital identities both secure and efficient, resulting in seamless sign-on and reduced fraud. Be it banking, healthcare, national security, citizenship documentation or online retailing, identity authentication and authorization is a process intricately woven into commerce and culture worldwide. Blockchain offers a unique facility called smart contracts. These are legally binding, programmable, digitized contracts entered on the blockchain. What developers do is implement legal contracts as variables and statements that can release funds, using the bitcoin network as a ‘3rd party executor’ rather than trusting a single central authority. For example, if two people would like to exchange $100 at a specific time in the future when a set of preconditions is met, the conditions, payout, and parties’ details would be programmed into a smart contract; once the defined conditions are met, funds would be released and sent to the appropriate party as per the terms. (A toy sketch of this escrow logic appears at the end of this piece.)<p>Virtual reality is all set to take over the way we experience and view media. Some experts call it the ultimate empathy machine, because VR lets us live vicariously in others’ shoes. Currently, the gaming industry has the hottest VR applications, because gaming is fun and people can easily understand why VR adds value to their experience.<p>The education industry has started to embrace VR for teaching students. For example, schools invest in Google Cardboard (a VR headset), hand it to students in class, and run the Google Expeditions app on Android&#x2F;iOS. This app allows students to go on field trips from the Galapagos to the International Space Station. It makes learning more interesting, because students are more likely to remember what they have experienced than what they have merely read in a book.<p>As the advertising space becomes more and more congested with competition, VR has opened up a new space for brands to communicate with their customers. An example is Audi&#x27;s Sandbox experience, which uses real-time trackers, an Oculus Rift, a driving seat, and a sandbox to create a unique experience: you shape a real track in the sandbox, and that track is recreated in VR, where you drive an Audi.<p>The real estate sector is also set to benefit from VR technology.
Companies are giving virtual tours on VR headsets to prospective buyers, with just a camera and software that is available in the cloud. The whole process of making a virtual tour of a mansion takes around 2–3 days. In addition, the camera costs only $3,600 and the software runs around $49–$149 a month. The speed, price, and ease of doing this make it a real disruptor in the industry.<p>Disruptive technologies usually start out performing worse than their current-generation counterparts, the much-used, so-called incumbents. This leads to a tendency to ignore disruptors until it’s too late or &#x27;in your face&#x27;. History is testament to the fact that transformational ideas emerge victorious in the end. So it could be with AI, blockchain, AR, VR, drones, and a few others that are hidden but lurking around the corner.
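Since the smart-contract passage above describes concrete escrow logic ($100 released when agreed preconditions are met), a toy model may help. The following Python sketch only illustrates the flow described; it is not a real blockchain API, and all names in it are hypothetical.<p><pre><code>from dataclasses import dataclass, field

@dataclass
class SmartContract:
    # Toy escrow: funds stay locked until every agreed condition holds.
    payer: str
    payee: str
    amount: float
    conditions: list = field(default_factory=list)  # callables returning bool
    settled: bool = False

    def try_settle(self):
        if not self.settled and all(check() for check in self.conditions):
            self.settled = True
            return f&quot;release ${self.amount:.2f} from {self.payer} to {self.payee}&quot;
        return &quot;conditions not met; funds stay locked&quot;

# Stand-ins for the oracle checks a real contract would consult.
deadline_reached = lambda: True
goods_delivered = lambda: True

contract = SmartContract(&quot;alice&quot;, &quot;bob&quot;, 100.0,
                         [deadline_reached, goods_delivered])
print(contract.try_settle())  # release $100.00 from alice to bob
</code></pre>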
To Understand Rising Inequality, Consider Janitors
The roots of modern wealth inequality run deep and vast, and are way more complicated than companies just outsourcing unskilled labor because they are optimizing for profit over being charitable.<p>* Automation advances in every industry. It is like AI - there is no sudden on switch when everything is automatic and post-scarcity is suddenly achieved. It is a slow march of small improvements and optimizations over time. Productivity per human labor hour has increased a hundredfold in the last century on the back of this automation and innovation.<p>* A combination of social organization, peer pressure, the variable range in the quality of a person&#x27;s parents, the variability in the wealth of a family, the trends towards and away from high median wealth, the prevalence of anti-intellectualism, the laws at the time, the laws in the past, the general availability of materials, the competitive market internationally, and one&#x27;s own biology contributes to the prevalence or absence of an educated populace. No society has figured out how to take <i>every</i> human born and turn them into a scholar, however. We <i>all</i> have the disenfranchised who are not educated (in fields the market deems valuable) but still need avenues to survive. This creates our unskilled labor market.<p>* The laws of your country influence how wealth moves throughout it. The laws of other countries also influence the behavior of private actors in your economy when interacting with foreign ones. The market is global - business decisions are not made based on arbitrary lines on a map, they are made based on the planetary market forces and trends of all seven billion+ people. Thus, you cannot set local policy (that influences wealth inequality) in a vacuum.<p>* Aside from fiscal policy, you also have, relatively independent of other variables, how open your society is to innovation and entrepreneurship. This is another cultural marker, but if your country supports and incentivizes startups you can offset the problem of a waning classical labor market. It also has the converse effect of generating jobs through its successes.<p>* Finally, and least significantly, is fiscal policy. In some countries the lack of a trustable market or rule of law can make this much more meaningful, but if your country has a functioning, internationally connected economy, nowadays all your fiscal policy is doing is pushing the boat rather than building it. Policies like favoring investment over salary <i>favor</i> increasing inequality, but are not the cause of it - they just accelerate it.<p>The TLDR is automation, education, globalization, entrepreneurism &#x2F; innovation, and fiscal policy, but there are more, and these alone are abstract trends sitting just as far above their causes as wealth inequality sits above them, just as the future of humanity itself is an abstraction above that.<p>Together all these effects, and more, influence the total market and control how prosperous or despondent people are. When citing historical wage averages as having stagnated in the US in the seventies, it is not one aspect in isolation - from what this article talks about in regards to corporations only hiring immediately for their core competency and outsourcing other labor - but the combination of all of them. Rising automation reduces the need for labor. Rising populations increase the supply of labor. Fiscal policy favors wealth centralization. Globalization favors economies of scale. Giant companies throwing around more money means more influence - which means more regulatory capture.
Regulatory capture means exploitation. Exploitation is always parasitic - it suffocates growth and prosperity to fill a few pockets. Exploitation and rent seeking go hand in hand. Regulatory policy and capture come back around again, creating rentier markets to pillage and exploit more. Globalization reduces the sovereignty of individual nations, making it near impossible to fight back with just one government run by its people.<p>It feels inevitable. That as we advance in our ability to make so much from so little, the real beneficiaries of it have to be those who were first movers on it, who were sociopathic enough to discard anyone else in the pursuit of power. To condemn billions to suffering and to suffocate markets and drain pocketbooks to make fractions of a cent more, to centralize resources just slightly faster into your control and domain of influence. Like global warming, like space colonization, like scientific innovation, none of these are the purview of one slice of Earth&#x27;s surface area. None are limited in scope to a few people. They matter to everyone, but we have no functional system of making everyone matter in regards to them. These market forces are operating beyond the bounds of one country - beyond the walls of Kodak and Apple. They are operating on the entirety of humanity, and all their macroeconomic behaviors are dictated by the entirety of the accessible market capital can reach. But what is supposed to keep them in line, to constrain capitalism to the benefit of both workers and capitalists and not just the latter, is still isolated to thin strips of land subdivided by thousand-year-old traditions.<p>Focusing on the individual pieces of the puzzle remains valuable, but something as pervasive as rising inequality - of understanding the movements of markets, of a global economy that is beyond the complete knowledge of any one person anymore (it is simply moving too fast) - is the product of a billion causal relationships, not just whether companies want to invest in their employees now to prosper their local economies in the medium term.
EleVR leaving Y Combinator Research
Looks like you have an impressive number of technical accomplishments.<p>I know what you mean about irrelevance. Maybe I can be less so.<p>No surprise that most of your new technology is not within reach of attracting commercial interest, since that was not the idea to begin with.<p>&gt;Unfortunately, a combination of forces in the world make nonprofit long-term research a tough sell right now. It doesn’t matter how good we are at what we do. Everyone is overextended trying to solve all the world’s problems at once, and we’re in the unpopular space of being neither for-profit nor directly and immediately philanthropic.<p>So true, but I actually feel like it was a much rarer combination of events which made it possible to do what you have done. It&#x27;s almost never going to be an easy sell. You were so fortunate and wise to have jumped at the opportunity to research in a way that few will ever experience. Even if you did not have very much chance of making it your life&#x27;s work without appropriate funding over such a term in advance, you seem to have immediately utilized what you did have by devoting the maximum amount toward as much technical progress as possible. From experience I say that carrying on as if you had funding for longer-term, open-ended projects is the best way to make technical progress without distraction. It can still take many years to get good enough for even an exponential increase in the breakthrough rate to become tangible or useful, though.<p>I&#x27;m an extreme alternative researcher whose life&#x27;s work has been to independently out-research some of the most well-funded petrochemical giants, without a PhD myself, using the same equipment on my own analytical benches. So I guess that is ambitious too. Took a while to get good here, and people didn&#x27;t think it could be done, but experimentation &amp; discovery always was one of my strengths. I pay for it as I go by operating at an insignificant fraction of their cost and, when the opportunity is there, prioritizing commercial projects where money can be made relative to the rate at which it would cost them to do it themselves. Having a commercial component in service to such high rollers in their regular operation was the path of least resistance for the young me to gamble on the likelihood of my ships continuing to come in.<p>I like clean environments, would prefer less toxicity, and have always been an extreme energy saver, so otherwise I don&#x27;t need more tankers on my own behalf, but it&#x27;s our local industry, and the most promising thing for survival when I was young was to get into alternative fuels and additives, so it is what it is. Even though I&#x27;ve been a small-time operator, the environment is a hell of a lot better off than it would be with anyone who would have otherwise replaced me. Battery research seems more promising than ever now, and I feel so bad for having done almost nothing in that field, but it would probably take a couple years to get up to speed.
Not having actual prosperity, I could never start that without giving up my current life&#x27;s work, but it does seem like an area where butt could be kicked to widespread advantage.<p>As long as you need to devote excessive effort toward survival activities, you never get to do what you prefer to do, or are best at, for enough of the time to accomplish more than a fraction of your technical potential.<p>Anyway, in a situation where a good year still yields only 1% of breakthroughs that could be made profitable over the near term on the commercial side, it was essential to keep the nose to the grindstone, maximizing the amount of experimentation. So you end up finding an abundance of excess stuff which would be good for other kinds of businesses, or could become the foundation for entirely new businesses, most of which would require capital, so that would be out of the question. Without capital having been available to get rolling doing this, there has never been anything like a network in place. When you&#x27;re making unprecedented progress on technical breakthroughs that can be exploited for survival using the resources you already have, one of the least rewarding gambles you can make is to divert attention to the pursuit of elusive new sources of backing, rife with dead ends and unfavorable terms to boot when there is interest.<p>Any way you look at it there&#x27;s an incredible balance where you can&#x27;t depend too much on continued good fortune, and you can&#x27;t justify dramatically slowing technical progress by diverting the amount of resources it takes to avoid the ravages of all possible bad fortune with absolute certainty.<p>You&#x27;ll get better at this.<p>You are going to do extremely well, already experienced at getting up on the tightrope without a net, not knowing what lies at the other end, tripping up, falling off, badly injured and now very near death in this incarnation.<p>Even if the Grim Reaper completes the call, you are still willing to try again in the same type of situation, where a single mistake or miscalculation can be devastating.<p>Ambitious people you are.<p>If you want to continue to try it the same way, all you are going to need is a better network. You&#x27;ve accomplished a huge milestone with only a single obstacle remaining; it&#x27;s not like when you were first getting started any more.<p>And I&#x27;m here to remind you that there are unexplored alternatives, however unlikely, with the best option probably not thought of yet.<p>I would get to work heavily researching both of these possibilities thoroughly. You all need to talk to the maximum number of people every day anyway, in various network directions, and during the hard sell maybe you already have a product or service that could be offered for a fee when you run into someone who could not provide you with financial help otherwise. Salvage from what you have accomplished if possible. Whenever someone doesn&#x27;t respond positively, get two names &amp; numbers from them, and you will eventually never be able to call them all.<p>Seems like the best opportunity would be expected when you find someone who is benevolent and directly has a close relationship with a highly suitable potential partner, and you have their trust to the degree that they will actually make the introduction for you. You would be surprised, too, when a contact does the opposite and gives you the number of someone they dislike whom they want you to bug instead of them. If you expect the unexpected, this may also have some potential itself.
Benevolence seems to be what you need for mere survival now, rather than the overall strength which could give a bigger impact in the long run.<p>YCR sounded like an interesting concept to me, since nonprofits are one of the alternatives I have always considered experimenting with. Extreme money-making under that umbrella can be done, where it&#x27;s perfectly legitimate to optimize for producing new, or providing low-cost already-baked, technology and licensing it, or providing a service around it for much more money, since you&#x27;re just going to use the income no differently than donations for continued operations anyway, with no greedy shareholders to get in the way. With the impression I get of the YC network, it just seems like butt could be kicked through YCR somehow unforeseen.<p>It&#x27;s almost always going to be impossible for most to survive financially as a byproduct of what you do without at least occasionally diverting extreme effort away from what you actually do.<p>You wouldn&#x27;t have done this if you weren&#x27;t going to someday be comfortable enabling other companies to bring in more income or solve more problems, leveraging and commercializing your breakthroughs, than you would ever expect for yourself to begin with. That&#x27;s the business model that exists, which you cannot help finding yourself in without trying.<p>Not too dissimilar to me, who has had no choice but to operate in a capitalist market when I have not been a capitalist, merely an entrepreneur focusing on research overwhelmingly more than development, according to my resources.
If You’ve Never Lived in Poverty, Don’t Tell Poor People What They Should Do
Some of the issues discussed in this thread evolve quickly, or may be highly situational. As a consequence, problems that appear to result from negligence may be difficult even for reasonably conscientious people to avoid.<p>For example, overdraft of a bank account may have been more difficult to predict in some periods than others. When I was younger, I was paid by check (direct deposit was not available from some of my employers), I paid my rent with a check, had automatic bill pay for a small utility bill, and used a debit&#x2F;check-card to buy food and sundry necessities. The timing to post these transactions was inconsistent; the time to clear the checks varied, and the auto bill pay occasionally failed. One month my rent check cleared quickly, but my paycheck took a few days. I bought a small amount of groceries with my check card, which authorized, but overdrew my account by a few dollars. It was not my intent to overdraw my account, and I thought that “authorized” meant that sufficient funds were available. Even one or two NSF or overdraft fees would not have been a huge problem, but because the bank reordered transactions from largest to smallest, I had several hundred dollars in bank fees, which consumed my income for the rest of the month. (A worked example of this reordering effect follows at the end of this comment.)<p>The account was free, and as far as I can understand and recall, the transaction reordering was mandatory, and “Overdraft Protection” was offered as a value-added opt-in service for higher-cost account types. I made at least some effort to understand the account documentation I was given when I signed the contract, but I failed to understand that transactions were reordered, let alone recognize the risk implications of that policy. When I told my dad about this story, he thought it was just a mistake, and that the bank might fix it if I called them. He hadn’t found out about transaction reordering because he had had ample savings since before it was invented.<p>Another example: prepaid cellphones were extremely expensive at that time and place, pay phones were disappearing, and I was not able to have a home phone for practical reasons (moved too much, installation delays, not on the lease, etc). I needed a contact phone number for employment reasons, and I accepted a contract cell phone plan because it was so much less expensive than prepaid.<p>The contract plans available changed occasionally, and to save money I changed my plan a couple of times; eventually, I was erroneously told by an agent of the provider that the unlimited “off peak” hours of my plan began at 7pm, but in fact they started at 8pm. My efforts to cut my expenses resulted in a surprise bill for several hundred dollars. I couldn’t even take my business elsewhere, because the contract had a deposit and a $500 cancellation fee. (In general, I have learned to suspect that when services are predicated on a mandatory line of credit, it is in order to drive up consumption using price obfuscation.)<p>I’m offering these as examples of cases where consumers must make a poorly understood cost&#x2F;risk trade-off in order to access basic financial or utility services. It seems to me that this disproportionately affects people with less money, because they are more price sensitive and less resilient to unexpected costs.<p>The rules and conventions tend to change over time, so depending on our circumstances we might have no idea of the details of these kinds of problems as they exist now if we aren’t having them.
Since I can now easily keep several thousand dollars of cash in a bank account, I wouldn’t know first hand about transaction reordering if it had been introduced more recently. Similarly, the pricing structure for phones has changed completely since I had “bill shock”, and I can now easily afford more service than I ever use in any case.<p>A few members of my extended family grew up poor enough to have serious problems securing food and housing; sometimes they lived in a barn and survived on wild game. These options were totally unavailable to me when I needed them. They are now middle-class retirees, and they have no more current experience than I do with the kinds of financial issues discussed here.<p>A couple of years ago, an acquaintance of mine who lost his job was advised by his grandmother to try selling oranges by the freeway. He was an adult man with bills to pay and an estranged family – his grandma offered a simple, well-meaning suggestion that demonstrated her total misapprehension of the nature of the problem. If my experiences with financial distress are not in the same place and time as another person’s, I assume that I am probably somewhere on the knowledge spectrum between that person and my friend’s grandma.<p>The point I mean to make is that if people seem to have problems that could be trivially avoided, it could be due to negligence, or it could be that we don’t fully understand the problems.
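The worked example promised above: largest-first posting is easy to reproduce with a few lines of arithmetic. Every amount here is invented, and the $35 fee is a typical figure rather than the one actually charged.<p><pre><code>FEE = 35  # charged on each item that posts against a negative balance

def overdraft_fees(balance, debits):
    fees = 0
    for amount in debits:
        balance -= amount
        if balance &lt; 0:
            fees += FEE
    return fees

# One rent check plus a handful of small card purchases, $650 on hand.
debits = [600, 40, 25, 15, 10, 5]

print(&quot;smallest first:&quot;, overdraft_fees(650, sorted(debits)))
# 35: only the last item overdraws
print(&quot;largest first:&quot;, overdraft_fees(650, sorted(debits, reverse=True)))
# 140: four items overdraw
</code></pre><p>Same purchases, same balance; the posting order alone quadruples the fees.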
The Cryptocurrency Singularity
Software is eating the world.<p>Those who think a digital currency of some kind _won&#x27;t_ displace cash are going to be made fools of.<p>About the article ... there are a lot of questions here.<p>I guess the main thrust of the article is that Bitcoin&#x27;s volatility is declining, and thus it is becoming more attractive for use as a tool for buying lunch (where lunch is a stand-in for common day-to-day transactions). That hinges on the idea that Bitcoin wasn&#x27;t attractive for that purpose before, because its value was too volatile.<p>1) The graph the article uses to demonstrate that Bitcoin is becoming less volatile seems to indicate, to me, that Bitcoin is just as volatile as it ever was. If I&#x27;m reading the graph correctly, the average of volatility is the same, but the std deviation of volatility has been decreasing. In other words, Bitcoin is just as volatile, but it&#x27;s more consistently volatile. That&#x27;s ... a weird metric to measure. Either I&#x27;m reading the graph incorrectly, or OP is.<p>2) The OP says &quot;I wanted to keep them because Bitcoin has, since its inception, on average increased in value at about 150% per year&quot;. So why bring up volatility? Volatility isn&#x27;t relevant to whether Bitcoin goes up in value over time or not. It&#x27;s clear that, as long as Bitcoin continues to be useful, it will continue to deflate long term. So it&#x27;s clear that Bitcoin will always have this &quot;issue&quot;.<p>3) But that presumes that deflation is an issue to begin with. Is it? I&#x27;m naive on the subject. For the majority of human history we used deflationary currencies: precious metals. The world didn&#x27;t stop turning then. But then the question is, are inflationary currencies better? Is our modern use of them an evolution, then?<p>On the one hand, we can think of it as horrible that the majority of people are storing their value in a currency that is decreasing in value over time. Their work, their labor, earns them wealth that decreases over time. That&#x27;s disturbing.<p>But maybe it _should_ be that way? Having people store their wealth in inflationary stores of value implies that work is only valuable in the immediate time frame. And that kind of makes sense. A burger I flip today is valuable today, but not so much years from now, let alone decades from now. Paying me a deflationary currency today for that burger flip is weird, then, because you&#x27;ve traded something that increases in value over time for something that decreases in value over time.<p>So you could argue that in today&#x27;s economy employers trade cash, something that decreases in value over time, for work that also decreases in value over time. And ... doesn&#x27;t that make sense?<p>And yet, if given the opportunity and knowledge, wouldn&#x27;t everyone want to store their value in deflationary vehicles? And if that&#x27;s the case, wouldn&#x27;t everyone, as the article implies, only _have_ deflationary vehicles to trade with ... so we&#x27;d just re-evolve to using deflationary currencies again.<p>Does the average person even _know_ that their currency is inflationary? I doubt it. Maybe the choice of deflationary&#x2F;inflationary doesn&#x27;t even matter.<p>I dunno, it&#x27;s just a complicated question. I don&#x27;t think it&#x27;s clear cut that a deflationary currency is better or worse.<p>My point is, the deflationary property of Bitcoin doesn&#x27;t necessarily preclude its use as a daily driver.
Volatility sure might, but deflation I&#x27;m not so sure.<p>It&#x27;s probably irrelevant to the average person. The average person will see value in replacing HSBC, who would normally freeze their bank account randomly and destroy their business.<p>4) It&#x27;s important to mention that, at this point, we have reason to believe that Bitcoin and clones based on its model can never be used to buy lunch (and other such small transactions). The cost of decentralization is too high, and we have no way to decrease those costs by the orders of magnitude needed to handle the transactional loads of things like buying lunch. We are working to decrease them, and have recently succeeded in a modest improvement on the Bitcoin network, but orders of magnitude is ... out of reach without some massive innovation.<p>It&#x27;s more likely that on-top-of networks like Lightning Network and its evolutions built on top of Bitcoin will be the thing people interact with on a daily basis.<p>The average person will get their paycheck in Bitcoin, but do their daily transactions using IOU networks like Lightning that settle behind the scenes on a less frequent basis. This allows the average person to use Bitcoin as their store of value, giving them by default the advantages that traditionally only a small fraction of the population have had, but still allowing cheap daily transactions for buying lunch.<p>That doesn&#x27;t change the meaning of the article. But it&#x27;s important to mention how Bitcoin is evolving to fulfill the future the article proposes.<p>So ... maybe that&#x27;s the future. Or maybe a side chain will evolve with inflationary properties and we just use that to buy lunch and get paid. Maybe every country will have their own cryptocurrency, pseudo-centrally controlled, with atomic swaps for global trade.<p>But one thing I know for sure. Software is eating the world. You either choose to ride that wave, or you get eaten by it.
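On point 1, a common way to put a number on this is the annualized rolling standard deviation of daily log returns; whether the <i>level</i> of that series is falling, or only its spread, is exactly the distinction drawn above. A minimal sketch, with an invented price series:<p><pre><code>import math

def log_returns(prices):
    return [math.log(b &#x2F; a) for a, b in zip(prices, prices[1:])]

def rolling_volatility(prices, window=30):
    # Annualized stdev of daily log returns over a sliding window.
    rets = log_returns(prices)
    vols = []
    for i in range(window, len(rets) + 1):
        chunk = rets[i - window:i]
        mean = sum(chunk) &#x2F; window
        var = sum((r - mean) ** 2 for r in chunk) &#x2F; (window - 1)
        vols.append(math.sqrt(var) * math.sqrt(365))
    return vols

# Invented prices: steady drift plus a weekly wobble.
prices = [1000 + 2 * d + 15 * (d % 7) for d in range(120)]
vols = rolling_volatility(prices)
print(f&quot;first window {vols[0]:.1%}, last window {vols[-1]:.1%}&quot;)
</code></pre><p>If the mean of that series holds steady while only its dispersion shrinks, the asset is, as the comment puts it, just as volatile but more consistently volatile.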
If Carpenters Were Hired Like Programmers (2004)
I&#x27;m sorry: In my career, I&#x27;ve been through a lot of what&#x27;s in the OP and more, all really wasteful nonsense. As a result, I&#x27;ve concluded:<p>In the US, for significant, long term financial security, it is close to essential to start, own, and run a business and make it successful.<p>As a young person in a hot field, being an employee can look good, but usually too soon, well before retirement, being an employee is awful.<p>For computing: If computing is valuable, and of course it can be very valuable, then somehow a person in computing needs not to be an employee but to own their own business and make it successful.<p>Broadly, a key to being successful in business is to sell directly, or nearly so, to the end user. So, in computing, for the business, develop a product or service that sells to, and gets revenue (or eyeballs for ad revenue, here and below) from, the end user. For this, pick a problem the end users want solved and that you can solve with computing, have a barrier to entry, ..., and get the revenue. Try to have the product or service such that the number of end users times the net revenue from each makes a good business, e.g., gets you some good financial security.<p>There is a huge, but easily overlooked, difference: (A) As an employee, you are getting all your money from your employer, only from your employer, and they know exactly how much money you are getting. If the employer concludes that they are paying you more than necessary, then they will work to pay you less. It&#x27;s super tough to get good financial security as an employee from an employer. (B) As a business owner, you are getting money from usually at least a few and often many users, and usually the users have no idea what your profit margins and net revenue are, and, even if your revenue is high and they know it, there&#x27;s next to nothing they can do about it. So, if you can find a way to get a lot of net revenue, then you have a good chance of some good financial security; the users have little or no way to keep you from the revenue.<p>In simple terms, if your work is valuable, then you should get a lot of money for it. Being an employee is a poor way to get that money; instead, your employer gets the big bucks. So, own your own business and keep that money, the big bucks, plus what would have been your salary as an employee, for yourself.<p>In the terms of the OP, there are some huge advantages in owning your own business. To illustrate this point, I will draw from the best example I have, my startup:<p>I saw the problem, did some applied math technical work for a solution, designed a Web site as the means of delivering the solution, designed the rest of the software, learned a little HTML and CSS (but no JavaScript -- didn&#x27;t need it) and Microsoft&#x27;s .NET Framework (for writing applications software), ASP.NET (for Web pages), and ADO.NET (for programming use of relational databases, e.g., Microsoft&#x27;s SQL Server), typed in 24,000 programming language statements in 100,000 lines of typing, and am going live on the Internet ASAP.<p>Well, all I had to learn was just what I needed, and more than that I didn&#x27;t learn. So, e.g., compared with C#, I prefer Visual Basic .NET as easier to learn, teach, read, write, and debug, and otherwise, for access to .NET, etc., essentially equivalent to C#.
E.g., C# has much of the deliberately idiosyncratic syntax of C, a language designed to be really sparse and low level and to compile and run on a 5 KB DEC minicomputer; and C++ was originally just a pre-processor for C (in those days, language pre-processors, e.g., RATFOR, were popular). So, for me, to heck with the C# syntax; I have not studied C# and have yet to write a single line of it.<p>Yes, apparently Python is quite useful and very popular. As I understand it, it is interpreted, which means it can be easier to use but slow. As I understand it, Microsoft&#x27;s IronPython is compiled and provides good access to .NET. As I understand it, Python has some really nice software &quot;packages&quot;. Maybe in time I will need or be able to make good use of Python; then I will learn and use it. But so far I&#x27;ve had no use for Python and have not used, learned, or even installed it.<p>Python is just an example: I just learn and get needed experience with the software tools I need for my startup, and that&#x27;s much, much less than would be needed for the scenarios, in my experience accurate as parody, in the OP.<p>Now what I concentrate on learning is not software tools (so far I have what I need, and from early in my career much more) but what I need for my business, e.g., now, getting good initial data, publicity, and products (processors, motherboards, main memory, mass storage, etc.), and server construction, monitoring, maintenance, and administration.<p>E.g., for my servers, can I use AMD Ryzen processors with ECC (error correcting code) main memory and Windows Server 2016 (which wants ECC main memory)?<p>So, I&#x27;m 100% owner of my startup. Thus, if there is good value in my software, then I will get all that value instead of only some small fraction of it, with some employer getting the rest.<p>So, to respond to the situation in the OP, I suggest that people who can write valuable software should start their own business, own 100% of it, pick a good problem, write valuable software for a valuable solution, and collect the big bucks themselves.<p>E.g., the OP has the employer asking really dumb questions. Why should a good software developer put up with such nonsense? They shouldn&#x27;t; instead, they should have their own business. Such nonsense is not a good path for either the employer or the employee to make much money.
Delta Goes Big, Then Goes Home
There is much to touch on from the comments so far, but I&#x27;ll try my best to keep it on point. (Note that I didn&#x27;t say &quot;short&quot; &lt;wink&gt;.)<p>Bona fides: FAA Licensed Aircraft Dispatcher; 11 years industry experience. (Left the industry in 2000.) Staff title: Chief Dispatcher.<p>Preface: I&#x27;ll not attempt to address the many meteorological or airframe engineering aspects of this mission other than to note that Delta staffs its own meteorology department (or did last I was privy to their operations). The capabilities that department lends the carrier are a net positive, as should be obvious.<p>On to some specific questions raised ...<p>(0) &gt;&gt;&gt;[qume] The captain makes the same call on every flight. The plane and passengers are her responsibility regardless of the situation. Edit - also the call is made continuously. They can back out any time.<p>Response: Only half true: the pilot-in-command (PIC) and the aircraft dispatcher share responsibility for the &quot;initiation, operation and termination of the flight.&quot; (Yes, I think the regs use &#x27;termination&#x27;; that always made me wince.)<p>So, it&#x27;s a quorum of two: if either one chooses to terminate (or &#x27;not initiate&#x27;) a given flight, it cannot be operated. That doesn&#x27;t mean that a dispatcher who disagrees won&#x27;t spend some effort presenting evidence for his position (e.g., ten-minute phone calls), but at the end of the day, if those two don&#x27;t agree, the flight cannot operate.<p>(1) &gt;&gt;&gt; [phkahler] They may or may not have volunteered for the flight, but they do get the final decision to go in or turn back once the airline OKs it.<p>Response: This statement begs the question: who is &quot;the airline?&quot; The relevant US regulations (14 CFR Part 121) refer throughout to this entity as the &#x27;certificate holder&#x27; -- because an air carrier holds an operator&#x27;s license -- a business license, of sorts -- granted by &#x27;the administrator,&#x27; the regulatory term for the FAA. I&#x27;d gamble that, since the FAA descended from the CAA, the policy authors thought it wise to anonymise the parties wherever possible in the event of future language changes. Smart move.<p>So for any given flight, &quot;the airline&quot; would be the dispatcher, who, by proxy, exercises the right of &#x27;the certificate holder&#x27; to &quot;operate a particular flight over a specific route under specific conditions&quot; (going from memory, mind you).<p>This authorization is formally granted by way of a legal document prepared by &#x27;the certificate holder&#x27; (read: dispatcher) known as the &#x27;dispatch release,&#x27; which includes a minimum set of specific information (flight plan, equipment type and number, flight crew, fuel min&#x2F;max&#x2F;burn, alternate airports, and so forth) but typically has an abundance of supplementary information to better brief the flight crew on the expected conditions and the details of alternatives that are likely to be available if the proposed plan cannot be followed.<p>Bonus fact: If you were ever waiting after boarding and the crew came over the PA to say they are &quot;waiting on paperwork,&quot; most of the time it is a bag&#x2F;weight count, but sometimes it&#x27;s the dispatch release.
If you know your flight is going across, or into, some crappy weather, the chances of the latter are greater than average.<p>(2) &gt;&gt;&gt;[passivepinetree] Does anybody who works in the industry have any idea of what the risk management is like for these types of events?<p>Response: The airline operates hundreds of flights a day; all have some risk, and all decisions must be made in real time. In cases of long-running events such as a hurricane, there is likely to be some general tone taken by the carrier at the highest operational levels (chief pilots, chief dispatchers, VP of Operations, etc.). For my carrier, these strategic positions would be discussed in the morning meeting, which was a recurring conference call between all those parties and department supervisors -- kind of like a stand-up, except we were all sitting around a conference table.<p>Aside from that, as the day wears on, the specific handling of a given route, weather event, etc., is handled in real time by the assigned dispatcher and support team (meteorologists, mechanics, etc.)<p>Of note: some larger carriers maintain a &#x27;Trouble Desk&#x27; staffed by dispatchers who are assigned a lighter workload than the regular line folks. This is a great system (one that my carrier didn&#x27;t have) because, let me tell you, <i>just one</i> fubar flight can monopolize all your capability and time for quite a while. If&#x2F;when one of those flights pops off the queue it can wreck your throughput for the remainder of the shift. For my money, the trouble desk is an excellent mitigation tactic, capable of keeping the workload in the dispatch office sharded appropriately.<p>(3) &gt;&gt;&gt;[joemi] Does anyone know more about this type of pre-hurricane flight? Is that a common thing airlines do, to try to squeeze one last flight in? Are there rushes before other types of bad weather?<p>Response: I only had a handful of duty experiences with hurricane landings in our region of operation, but I do have a wealth of experience with other severe weather systems here in the US -- aka, tornado season.<p>There were many occasions when a strong cold front would be bearing down on cities we served, with solid lines of thunderstorms sweeping through the region. The kinds of weather that serve up severe or extreme turbulence, large hail, and tornadic activity. And there aren&#x27;t any &quot;holes&quot; to &quot;slip through,&quot; either.<p>Frequently, it would come down to trying to get one more flight in and out of a city. (The pressures of &#x27;completion factor&#x27; at an airline are a whole discussion in itself.) In these cases, I&#x27;d be measuring the relative velocity of the line against the distance to the airfield, estimating the time of impact, so to speak, and cross-referencing that against my computed time en route, the turn-time at the station, etc., etc. (This reduces to simple arithmetic; a toy version appears at the end of this comment.)<p>Assuming you judge that it can be operated safely, and you can provide a suitable alternate plan (and &quot;turn back to base&quot; is certainly a common choice), you have to get on the phone (or radio) and brief the PIC on some or all of the details (all of the details are in the dispatch release, but people like to hear a human voice when facing stressful situations; think 911 operators), and if they concur (or accede, in some cases) then, from (1), by the necessary joint agreement, the flight is initiated.<p>And as you might expect, sometimes the flight got in and out, and sometimes it diverted or came back to base.
In either case, occasionally the crew might call or radio back in with reports on the conditions (or vociferous complaints about the ride quality -- hey, it happens).<p>So then my unqualified answer to this question from joemi is &quot;Yes.&quot;<p>--<p>Finally, I&#x27;d like to call out the commenters who made mention of the ground (and other station) crews and their exposure to risk in these &quot;irregular operations&quot;; I&#x27;d say they exhibited a measure of aplomb no less than the flight crews&#x27;, and those employees are too often overlooked as essential parts of the carrier&#x27;s operations, both day-to-day and in extreme cases such as a hurricane landfall.<p>HTH, &#x2F;Acey
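The toy version of the timing arithmetic from (3), with every number invented for illustration:<p><pre><code># Will the squall line reach the field before the aircraft can get in
# and turn around? All figures are invented.
def eta_hours(distance_nm, speed_kts):
    return distance_nm &#x2F; speed_kts

line_eta = eta_hours(120, 30)      # line 120 nm out, advancing at 30 kts
flight_eta = eta_hours(350, 420)   # flight 350 nm out at 420 kts
turn_time = 0.75                   # 45 minutes on the ground

if flight_eta + turn_time &lt; line_eta:
    print(&quot;attempt it: brief the PIC and list the alternates&quot;)
else:
    print(&quot;hold the flight; the line gets there first&quot;)
</code></pre>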
Ask HN: Anyone here make a 'comeback' from rock bottom? What is your story?
I don&#x27;t know if this qualifies as rock bottom, but here goes.<p>As a baby, I was adopted by the mother of the man who was married to my biological mother and at the time thought to be my father, but wasn&#x27;t my father. Growing up, I knew this, and always had underlying issues relating to the mess which enshrouded the ordeal; my parents being drug addicts, the person who was thought to be my father dying of an overdose, and a lot of other stuff. I knew the whole story quite young. On top of this, the family put me in the company of a handful of siblings who were drug addicts, whose actions were quite clearly visible to me.<p>My adopted mother was in her mid-50&#x27;s when she got me and my sister, at which point her life should have been winding down, but she had been taking care of her own kids from age 17 until her mid-40&#x27;s, and here were two kids again. She was angry and bitter, and while I could see why, she very aggressively took that out on us, even though she still denies and refuses to realize it. It wasn&#x27;t fair to us, but I think it&#x27;s safe to say she was better than the alternative (and she did quite literally save my life; I would have died of pneumonia and meningitis at 3 months while sick and living in a car with my drugged-out biological mother, and I apparently almost died anyway).<p>Anyway, going into our teen years, my sister and I grew farther and farther away from her. I and I alone remained close with my adopted dad, but it was mostly because he kept quiet to prevent her verbally abusing him as well. Plus, he was always teaching me about electronics, woodworking, mechanics, and the general skill of &quot;building shit&quot;.<p>Around 13, my sister and I got in an argument with our mother, and she told us she wished she&#x27;d never adopted us and that she couldn&#x27;t wait for us to be gone. So we left. I was gone for a week, staying with friends and sleeping a night in a park before she picked me up from school. My sister fell in with some bad people, got into some stuff, and ultimately ended up coming home a week after me, telling the cops she wanted to kill herself, and got placed in a mental hospital. She was there for 8 months or so, then moved to a group home. After 4 or so months in the group home, my mom realized she could do the same to me, so, come summer before high school, she did.<p>I was an hour away from anything I knew, my sister, my only blood, being 3 hours in the other direction. It hit quite hard emotionally; I was suicidal and whatnot. I was only there a year, and saw far more tragic stories of kids left behind by the world, and honestly it made me resent my family - mostly my mother - even more. These kids had terrible, terrible situations, and I was being thrown in there because I wasn&#x27;t wanted. Not out of necessity, as they were. I remember, one time, the staff made my mom take me to a doctor&#x27;s appointment because they were busy with the kids who belonged there. The whole ride she bitched and moaned, and I remember her once saying &quot;this is their job, not mine.&quot;<p>After the homes, we moved back with my mother, but she picked up, left California, and moved to Tennessee with us. My dad, the parent I was close to, did not join us. A whole mess of shit happened after that, too, but suffice to say the whole ordeal and the rebelliousness before it led to grim situations for us. As a kid, I was always good in school, top of my class, but not since about a year before the home, and not after, either.
I absorbed myself in my computer at night, and when I wasn&#x27;t asleep at school, I was taking out my rage and angst on the teachers. I wasn&#x27;t violent or anything, just an angry, seething, confused jackass of a kid out to prove the world wrong; it didn&#x27;t help that this school was full of unqualified teachers, which just made me worse. From 8th grade to graduation, I had 6 expulsions and at least 200 days of suspension (most expulsions were a result of excessive suspension). I failed almost every class I had. At one point, my mom had the brilliant idea that this might be due to some form of retardation, so she had me tested for special education (I obviously failed that test, just like all of my classes). Throw in some teen pregnancies, abortions, and miscarriages for some extra emotional issues (in retrospect, it was the best personal outcome that none of those went to term, but that doesn&#x27;t change the emotional damage they did).<p>By the end of it, I &quot;graduated&quot; with a 0.89 GPA. I still don&#x27;t know how it was possible, but there were a small handful of amazing teachers that I&#x27;m sure played some role. It was rare, but the competent teachers, the passionate ones, I respected and they respected me, and I think they realized there were a ton of issues, and thus pulled the strings on making sure I made it out. Maybe not, I&#x27;m not sure.<p>Graduation came about a month after my 18th birthday, on which my mom presented me with a lease agreement and asked me to sign it before having any cake. She wanted $800 a month for rent and utilities. I happened to know her rent on that four-bedroom, in which I had the smallest room with no climate control, was $800. I paid the first month with all the money I&#x27;d saved over the years, but after that, I had nothing. We got in a fight, so I moved out.<p>I lived in my car (the car she bought for me, but I took it; no other option, really) for a few weeks. Spent some time living with an ex-girlfriend, also. After about a month, I picked up and moved to Oklahoma to do freelance coding work with a friend; those years absorbed in my online life had one good outcome: I learned to code. That lasted for 4 months, but I was alone there and miserable. Went back to Tennessee, lived with my ex-girlfriend&#x27;s family, and got a job in a factory. The day after I knew I had the job, my online business blew up overnight. I had been making bots for online games and selling them. The income was in the low hundreds per year. But, one day, my competitor closed up shop and I got all of his business; a surge of $2000 or so on the first day and a couple thousand a month following. I worked in the factory for 5 months, working 10 hours a day, 7 days a week, not a single day off. Whenever I wasn&#x27;t there or sleeping, I was improving my bot. I quit in May, interviewed for a programming job in Atlanta in June, and started that job in July. After a year and a half, I got a security engineering role in Silicon Valley, and took that. I had made connections along the way, and was writing a Game Hacking book at the time, which is now finished. I worked there for 3 years, then moved to another security company. I&#x27;m currently working there from the comfort of my downtown condo in San Jose. Along the way, I wrote my book, spoke at almost a dozen conferences, and started working on some online classes for Pluralsight (currently in progress!). I&#x27;m 24 now.
The side business with my bot has been going the whole time; it&#x27;s shutting down sometime this year due to the game changing their client, but I&#x27;m okay with that. It&#x27;s made about half a million gross by now, and I&#x27;m extremely proud of and humbled by the experience.<p>Multiple times throughout this climb, and even now, I find myself confused emotionally. It&#x27;s extremely hard to be happy, or to smile. It&#x27;s hard to have any negative emotion besides anger. I can laugh and have fun, but I don&#x27;t just smile, I&#x27;m quick to anger like my mom was, and I&#x27;m never just in a ground state of happiness. I find myself at times seeking pain because it&#x27;s what I knew, and this success is still new to me. When I was first in Georgia, I realized I was trying to develop problems. I drank more than I cared to because I wanted something to be wrong with me, etc. It&#x27;s fucking weird. I hate my personality and attitude now, but I shouldn&#x27;t. I should be happy. I&#x27;m proud and excited about the future, but for some reason not content.<p>Man, I know there are people who had it way worse than me. I lived with some of them, and I know there are others in much worse situations (war-torn countries and such), so I feel really selfish to call this rock-bottom, but it was mine. I feel like it&#x27;s wrong of me to think I came from some astronomically shitty odds, knowing what the real odds are for a lot of people, but I do. I don&#x27;t know, I&#x27;ve never gotten to really share this story (and there&#x27;s a lot I&#x27;m leaving out for obvious reasons), but it feels nice to, and I don&#x27;t know why. I&#x27;m looking forward to now reading other people&#x27;s stories.<p>P.S. don&#x27;t tell me to see a therapist or something please, I&#x27;m not here seeking advice, just wanted to share.
Mal – Make a Lisp, in 68 languages
I tried finding the lines of code in each language with cloc. C&#x2F;C++ Header contains a sum of C and C++ header files. Python has been used across different language dirs as a wrapper script and such; otherwise, its own implementation is probably 1350 lines at max. These counts probably also include the effects of bugs in cloc.<p><pre><code> github.com&#x2F;AlDanial&#x2F;cloc v 1.70 T=2.86 s (433.6 files&#x2F;s, 67570.0 lines&#x2F;s)
--------------------------------------------------------------------------------
Language                       files     blank   comment      code
--------------------------------------------------------------------------------
Visual Basic                      36      1569       132      8036
Swift                             36      1199      1606      7951
SQL                               36       956       649      7594
Pascal                            19       704       951      6253
Ada                               28      1975       380      5856
Elm                               19      1619       226      5647
awk                               16       290        15      5169
Lisp                              37       825       256      4562
VHDL                              17       434        32      4228
C#                                20       658       346      4175
Python                            37       574       394      4018
make                              89      1244       873      4010
C                                 19       466       392      3499
Perl                              35       413       171      3450
Go                                17       281       148      3397
Rust                              18       304       130      3331
JavaScript                        43       449       247      3235
Forth                             18       516       158      3228
C++                               18       568        71      2986
Java                              17       325       135      2938
D                                 17       363        10      2938
Racket                            35       480       385      2740
TypeScript                        17       284        56      2740
Markdown                           8       714         0      2730
Rexx                              17       283        47      2713
Bourne Shell                      30       390       315      2692
Tcl&#x2F;Tk                            17       284        43      2416
Dart                              16       255        41      2303
F#                                20       337        13      2298
Objective C                       17       285       150      2189
Erlang                            17       277       191      2185
MATLAB                            26       184       128      2165
Haxe                              19       254        79      2161
Crystal                           18       417        58      2146
vim script                        17       242        50      2076
Haskell                           17       331        99      2000
PHP                               18       249       120      1955
Lua                               18       248        87      1874
Scala                             16       221       113      1816
Elixir                            19       366        35      1809
JSON                              28       246         0      1765
R                                 17       201       100      1612
Groovy                            17       170        98      1582
Kotlin                            17       312         0      1554
Julia                             17       189       126      1405
Nim                               16       320        36      1397
lex                               18       209         0      1355
Ruby                              17       167        87      1264
OCaml                             15       109        98      1211
PowerShell                        11       128        71      1146
ClojureC                          15       257       129      1053
CoffeeScript                      17       195       126      1037
CSS                                6       150       132       792
C&#x2F;C++ Header                      22       266        44       752
HTML                               2        36        35       486
Bourne Again Shell                47         1         4       135
YAML                               2         6        10        86
Maven                              1         2         7        85
Clojure                            2         8        12        65
ClojureScript                      1         1         0         2
--------------------------------------------------------------------------------
SUM:                            1242     24806     10447    158293
--------------------------------------------------------------------------------</code></pre>
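In case anyone wants to reproduce a table like the above, here is a rough sketch of driving cloc from Python and sorting the per-language totals. It&#x27;s hedged: the checkout path is a placeholder, not the exact invocation used above, and it assumes cloc is installed and on PATH.<p><pre><code> # Hypothetical reproduction of the per-language counts above.
# Assumes a local checkout of the mal repo at a made-up path.
import json
import subprocess

result = subprocess.run(
    [&quot;cloc&quot;, &quot;--json&quot;, &quot;path&#x2F;to&#x2F;mal&quot;],  # placeholder checkout location
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)

# Drop cloc&#x27;s bookkeeping entries, then sort languages by lines of code.
langs = {k: v for k, v in report.items() if k not in (&quot;header&quot;, &quot;SUM&quot;)}
for name, stats in sorted(langs.items(), key=lambda kv: -kv[1][&quot;code&quot;]):
    print(f&quot;{name:20s} {stats[&#x27;code&#x27;]:6d}&quot;)</code></pre>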
Daniel Kahneman “I placed too much faith in underpowered studies”
I really wanted to start this comment with the HN standard zero-content dick remark of &quot;Is anyone really surprised ...&quot;, because I&#x27;m really not surprised. Except the fair comment would be: is anyone who actually bothered to look up the references and citations of some of his more bold (but oh so juicy) claims really surprised, really?<p>It&#x27;s a typical situation of that telephone-whisper game; researchers do a test they came up with on a group of 21 students. In social psychology this is considered a reasonable sample size. From where I&#x27;m standing, this is a non-starter for doing research and calling it science. Just don&#x27;t even bother. It&#x27;s better to <i>not</i> know anything than to do it anyway and catch a bias (yes, like the disease it is).<p>Refusing to attribute <i>meaning</i> to what is essentially random data is what got us into this whole modern era of technology in the first place. It seems pretty clear by now which fields of science took that seriously and which ones preferred to instead read (meta meta) meta studies and &quot;sit in their office, bouncing ideas of one another&quot;[0].<p>But hey, I&#x27;m from computational science and we can always generate more data, which starts at sample sizes of about, oh, 10K or so? (at least)<p>I always get the idea that these guys just wanted to sell books, filled with slightly-counterintuitive yet somehow plausible factoids. Add in the veneer of scientific credibility and you&#x27;ve got a <i>very</i> juicy best-selling combo. For people who like to feel they are scientific &#x2F; rationally intelligent, it becomes a part of their ego--something which I can totally relate to, btw, but I <i>try</i> to be better.<p>A few things stood out from Kahneman&#x27;s comment that reflect his ego (even though he uses words to sound humble, he can&#x27;t quite find the courage; if he could, it wouldn&#x27;t have gotten this far):<p>I had to look up &quot;file-drawer problem&quot;; it turns out it is a cutesy euphemism for &quot;publication bias&quot;. Care to guess why he doesn&#x27;t use that word? He does it twice, even, so it&#x27;s definitely not to add variation or flavour to his writing style. Especially when replacing the two usages of the term in context would yield &quot;severe publication bias&quot; and &quot;substantial publication bias undermines the two main tools that psychologists use to accumulate evidence&quot;, which sounds really pretty damning, much more so than calling it a &quot;file-drawer problem&quot;.<p>&gt; first paper that Amos Tversky and I published was about the belief in the “law of small numbers,” which allows researchers to trust the results of underpowered studies with unreasonably small samples.<p>He likes to coin terms a lot. I know the &quot;Law of Small Numbers&quot; in a mathematical context where it means something entirely different. So I looked it up and it turns out to be a euphemism for a &quot;hasty generalization fallacy&quot;, kind of the exact opposite of what he suggests here.<p>Anyone care to check this citation? What is this magical law that allows researchers to trust the results of underpowered studies with unreasonably small samples? It sounds <i>beyond</i> implausible to me. 
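Since n = 21 keeps coming up, here&#x27;s a rough, hedged back-of-the-envelope check (my illustration, using a normal approximation rather than the exact t distribution): for a two-sample comparison with a &quot;medium&quot; effect size of d = 0.5 at alpha = 0.05, 21 subjects per group gives you well under 50% power.<p><pre><code> # Approximate power of a two-sample test via the normal approximation.
# d = 0.5 (&quot;medium&quot; effect) and alpha = 0.05 are assumptions for illustration.
from statistics import NormalDist

def approx_power(n_per_group, d=0.5, alpha=0.05):
    z_crit = NormalDist().inv_cdf(1 - alpha &#x2F; 2)
    noncentrality = d * (n_per_group &#x2F; 2) ** 0.5  # expected z of the effect
    return 1 - NormalDist().cdf(z_crit - noncentrality)

for n in (21, 50, 100, 200):
    print(f&quot;n = {n:3d} per group: power is about {approx_power(n):.2f}&quot;)
# n = 21 comes out around 0.37 -- a coin flip would be more generous.</code></pre>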
You can statistics your way around this in circles, but really you can also just, like, <i>dismiss</i> underpowered studies with unreasonably small samples.<p>&gt; We also cited Overall (1969) for showing “that the prevalence of studies deficient in statistical power is not only wasteful but actually pernicious: it results in a large proportion of invalid rejections of the null hypothesis among published results.” Our article was written in 1969 and published in 1971, but I failed to internalize its message.<p>Yeah, right. The real answer is &quot;because I had books to sell, research grants to obtain and an ego to maintain&quot;. You don&#x27;t <i>need</i> someone to write a paper about this to come to this conclusion. Of course it&#x27;s harmful; how can this not be obvious? It&#x27;s also, like, a MAJOR part of social psychology and similar studies, because sample sizes are always stupid small. And Kahneman is an expert in this field. So apart from the fact that he already KNEW this because he has common sense, he MUST have already internalized it because he&#x27;s an expert in this field and you come across this particular bit of common sense all the time. Therefore, no, you CHOSE to ignore that reality (not the message, because you were perfectly aware of this before the paper you cited).<p>&gt; if a large body of evidence published in reputable journals supports an initially implausible conclusion, then scientific norms require us to believe that conclusion. Implausibility is not sufficient to justify disbelief, and belief in well-supported scientific conclusions is not optional.<p>If you come from the point of view of the exact, hard sciences, like physics, math or computational science, it&#x27;s kind of hard to see anything wrong with this statement, on the surface.<p>But if you know a thing or two about the social sciences and the like, the bullshit that goes on there, you know that almost <i>everything</i> is wrong about the above statement.<p>Reputable journals often aren&#x27;t. And there is nothing, absolutely nothing, in this world that can <i>make</i> someone believe. You cannot require it. And to say that &quot;belief&quot; is not optional is almost an oxymoron. Unless you use brainwashing. Except I&#x27;m not sure if &quot;brainwashing&quot; even really works because, you know, guess what fields conducted the unethical studies into it.<p>&gt; This position still seems reasonable to me – it is why I think people should believe in climate change.<p>Please don&#x27;t drag climate science through the same mud as your clusterfuck of research. 
The hard numbers and sample sizes they have access to are so large you&#x27;d soil your pants.<p>&gt; But the argument only holds when all relevant results are published.<p>Which you KNEW was not the case, so that&#x27;s not really an excuse, is it?<p>&gt; I knew, of course, that the results of priming studies were based on small samples, that the effect sizes were perhaps implausibly large, and that no single study was conclusive on its own.<p>But I had books to sell, research grants to obtain, etc. etc.<p>&gt; However, I now understand that my reasoning was flawed and that I should have known better.<p>But I had books to sell, research grants to obtain, etc. etc.<p>&gt; I knew all I needed to know to moderate my enthusiasm for the surprising and elegant findings that I cited, but I did not think it through.<p>But I had books to sell, research grants to obtain, etc. etc.<p>&gt; I still believe that actions can be primed, sometimes even by stimuli of which the person is unaware.<p>If he&#x27;s so convinced that &quot;belief in well-supported scientific conclusions is not optional&quot;, then that also holds when the scientific conclusions say the opposite, and he should DROP this belief right this instant until proper evidence is obtained. This is not belief, it&#x27;s stubbornness.<p>I mean, sure, I would personally say don&#x27;t throw out the baby with the bath-water (because I also don&#x27;t quite agree with the other position; science can be flawed like any human endeavour). But he only <i>just</i> went hard-line science on this idea, in the very same comment, and you should at least be consistent about it.<p>I really don&#x27;t think he&#x27;s learned any lesson. He let his scientific beliefs be guided by ego, clouded by the idea that research proved what he wanted to be true, and he&#x27;s published books filled with untruths that are out there right now. Is he going to issue retractions? Errata? Because, you know, lay people are going to read this years into the future, and believe this crap.<p>[0] In a book by the Kahneman&#x2F;Tversky&#x2F;Taleb trio of juicy pop-psych writers, they described their research methodology this way. Proudly so, because what could be better science than such incredibly smart people having the freedom to &quot;bounce ideas of one another&quot; ... Sorry, this post is not proper science; I can&#x27;t recall and properly cite which book it was :-&#x2F; I think it was Taleb talking <i>about</i> his buddies Kahneman and Tversky.
Patching is hard; so what?
&quot;I don’t dispute this point. It’s absolutely valid.&quot;<p>I do dispute it. Patching is <i>not hard</i>. It&#x27;s incredibly easy, to the point that some places and projects have the process automated from notification to application to testing for breakage to deployment. For reducing downtime, one can use clusters with rolling deployments of patches. Patching is only hard for things such as web applications when the company&#x27;s application or processes are done in a way that makes patching hard. As in, they have to be incompetent or just not care about IT. As far as competence goes, your claim about fragile systems is a good example of what might have happened.<p>&quot; then you’ve implicitly made the decision that you’re never ever going to allow those vulnerabilities to fester.&quot;<p>You&#x27;re thinking like an engineer that cares about quality. You instead should think like an Equifax CIO or something. To start with, this is a company that collects PII against people&#x27;s will to sell to third parties for their main goal of huge profits. Politics plays more of a role than engineering talent in people getting the senior, management positions in such companies. They also tend to chase whatever is popular among the Fortune 500, esp. with cheap labor or ecosystems available. Java was one of the fads that the financial sector was all over. Combine all this to get a company whose fad-chasing CIO keeps costs down and profits up, applying the thing he or she read in a computer magazine, with the cheapest talent available on a tight budget. The result of their work is a pile of garbage they have trouble patching. If you doubt this, just look at the security of the web site they deployed for credit monitoring, which apparently helped hackers get at people interested in credit monitoring. Or they just made mistakes so easily avoided that they&#x27;re either inexperienced beginners or don&#x27;t care at all.<p>&quot;So what would those systems look like?&quot;<p>Well, they would have built it some time ago. So, let&#x27;s work our way from old, high-assurance security toward something commercial and affordable from at least the 2000-2005 era. The original work in securing data involved security kernels:<p><a href="https:&#x2F;&#x2F;www.acsac.org&#x2F;secshelf&#x2F;book001&#x2F;19.pdf" rel="nofollow">https:&#x2F;&#x2F;www.acsac.org&#x2F;secshelf&#x2F;book001&#x2F;19.pdf</a><p>Several of those are still available but expensive: both security kernels and databases built for them, such as Trusted Rubix. Today, those look more like the next link, with companies such as Sirrix and Green Hills selling them commercially:<p><a href="https:&#x2F;&#x2F;os.inf.tu-dresden.de&#x2F;papers_ps&#x2F;nizza.pdf" rel="nofollow">https:&#x2F;&#x2F;os.inf.tu-dresden.de&#x2F;papers_ps&#x2F;nizza.pdf</a><p>In any case, we&#x27;d need a robust combination of OS, database, and application code. Nothing hits the database without going through the app server first. So, we embed our security policy into the app server. How to implement it? Ever since Dijkstra&#x27;s THE OS (1960&#x27;s), we&#x27;ve known to specify correct behavior with preconditions, invariants, and postconditions. Then use anything from formal analysis to testing to runtime checks to ensure that behavior is enforced. The security kernels did the former, where Design-by-Contract in Eiffel used tests and runtime checks. Got to pick the method with the best bang for the buck. 
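To make the Design-by-Contract point concrete, here&#x27;s a minimal sketch in Python (my toy example, not anything from the links above): preconditions, postconditions, and an invariant written as executable checks that fail loudly instead of corrupting state.<p><pre><code> # Toy Design-by-Contract example: contracts as runtime assertions.
class Account:
    def __init__(self, balance):
        self.balance = balance
        self._invariant()

    def _invariant(self):
        # Invariant: the balance never goes negative.
        assert self.balance &gt;= 0, &quot;invariant violated: negative balance&quot;

    def withdraw(self, amount):
        # Preconditions: a positive amount the account can cover.
        assert amount &gt; 0, &quot;precondition: amount must be positive&quot;
        assert amount &lt;= self.balance, &quot;precondition: insufficient funds&quot;
        old = self.balance
        self.balance -= amount
        # Postcondition: exactly the requested amount was deducted.
        assert self.balance == old - amount, &quot;postcondition violated&quot;
        self._invariant()
        return self.balance</code></pre><p>Eiffel checks contracts like these natively; in other languages you approximate them with assertions, decorators, or heavy testing, which is the bang-for-the-buck trade-off mentioned above.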
Two links to illustrate safer languages of the past, with Ada 2005, since we&#x27;re looking at earlier stuff:<p><a href="http:&#x2F;&#x2F;www.adacore.com&#x2F;knowledge&#x2F;technical-papers&#x2F;safe-secure&#x2F;" rel="nofollow">http:&#x2F;&#x2F;www.adacore.com&#x2F;knowledge&#x2F;technical-papers&#x2F;safe-secur...</a><p><a href="https:&#x2F;&#x2F;www.eiffel.com&#x2F;values&#x2F;design-by-contract&#x2F;introduction&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.eiffel.com&#x2F;values&#x2F;design-by-contract&#x2F;introductio...</a><p>Given the labor situation, we use DbC in a safe language. All prior work showed simplicity was necessary for security. So, it would be an app server done in Eiffel, Ada, or (if absolutely necessary) Java&#x2F;C++ using the Design-by-Contract methodology plus lots of testing. The web component would be a middleware done similarly that basically translates web actions into the simple protocol the server uses for requests&#x2F;responses. Any new types of problems with web apps are mitigated in the simple framework a la Airship CMS, caught with some kind of monitoring a la Spectre, or both.<p>This would have to run on something robust. You&#x27;d have the database&#x2F;storage, app server, web component, management, and monitoring, the last behind a one-way link (data diode) w&#x2F; logs copied to write-only media. A company from a likely mainframe background, now run by wise technologists, might first make the software target IBM but portable. Balancing reliability, security, and cost, the best targets would be AS&#x2F;400 or AIX. The former is a capability-oriented architecture with few 0-days whereas the other is a rock-solid UNIX. Portability means they&#x27;d migrate part or all of this to new OS&#x27;s as they showed up. Eventually they&#x27;d notice OpenBSD&#x27;s security advantages or buy one of those certified-secure kernels w&#x2F; a POSIX layer. Depends on their viewpoint and budget. Probably throw some clustering software with rolling releases in there, since OpenVMS and NonStop had high availability in the 1980&#x27;s. It was a well-known strategy.<p>So, there was a straightforward way to make a robust stack with a web front end which gets more robust over time. This could be done with high availability or just easy patches. They instead ended up with fad-driven crud, possibly running on other crud, that&#x27;s easy to hack but hard to patch. Just bad engineering that&#x27;s typical in the market versus methods such as I described. Also note that there were (are) large institutions using some of what I recommended for those benefits. The tools and people just cost a bit more, which doesn&#x27;t align with the incentives of companies as greedy and IT-hating as Equifax. Sure turned out extra profitable, didn&#x27;t it? ;)
Alan Kay is still waiting for his dream to come true
For me, the flip side of all of Alan&#x27;s (and others&#x27;) inspiring work on Smalltalk is that, having started programming with Smalltalk in the late 1980s (especially with VisualWorks and OTI&#x27;s Envy), my entire professional career since then has seemed like a long succession of dealing with less pleasant systems around people who just don&#x27;t get it. That has been frustrating and painful (even if the money is good) -- in some ways, it might be better not to know what is possible because then you don&#x27;t miss it.<p>It&#x27;s sad that ParcPlace Systems made such a mess of commercializing ObjectWorks&#x2F;VisualWorks Smalltalk (including not licensing it to Sun affordably for set-top boxes, where Sun then reacted by creating Oak&#x2F;Green&#x2F;Java) and then sold Smalltalk for a song to Cincom instead of open sourcing it and becoming a services company. I remember talking with one ParcPlace salesperson who kept going on about how Digitalk was their competitor, with my trying to point out that Visual Basic was really their competitor (and ParcPlace buying Digitalk was another part of the disaster for Smalltalk commercially).<p>It was also sad to see IBM abandon <i>two</i> reliable, fast Smalltalks it owned in favor of a buggy, slow, early Java for marketing reasons. And even working within IBM at IBM Research around 1999, I could not use Smalltalk because IBM&#x27;s OTI subsidiary (which IBM had bought) wanted around US$250K a year internally just to let the embedded speech research group I was in try their embedded Smalltalk for the product we were making. They claimed they&#x27;d have to devote an entire support person to it -- and I tried to explain I was going out on a limb there as it was to suggest using Smalltalk instead of C and VxWorks (which my supervisor had already licensed). So instead I got IBM Research to approve Python for internal use (which took weeks of dealing with IBM&#x27;s lawyers) and Python, not Smalltalk, ended up on Lou Gerstner&#x27;s desk when he asked for one of our &quot;Personal Speech Assistant&quot; devices for his office.<p>I really learned to dislike proprietary software from that experience of seeing something I loved like Smalltalk being killed by a business model focusing on runtime fees and such. I remember how when an IBM Research group licensed their Jikes Java compiler as FOSS, their biggest surprise was not how many external users they had -- but how many internal IBM users they suddenly had who no longer had to jump through complex hoops to gain access to it.<p>Even now, Java, JavaScript, and Python are not quite where Smalltalk was back then in many ways. Of course, it&#x27;s decades later, so there are other good things like faster networks and CPUs and mobility and DVCS and above all a widespread culture of FOSS -- and when I squint, I can kind of see all the browsers out there running JavaScript as a sort of big multi-core Smalltalk system (just badly and partially implemented).<p>For a while I had hopes for Squeak, but it got bogged down early on by not having a clear free and open source license. Same as when I recently pointed out the concern to Automattic when it used React with a PATENTS clause: some lawyer chimed in with how everything was fine -- and only years later did the growing damage to the community become obvious to all. 
<a href="http:&#x2F;&#x2F;wiki.squeak.org&#x2F;squeak&#x2F;501" rel="nofollow">http:&#x2F;&#x2F;wiki.squeak.org&#x2F;squeak&#x2F;501</a> <a href="https:&#x2F;&#x2F;github.com&#x2F;Automattic&#x2F;wp-calypso&#x2F;issues&#x2F;650" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;Automattic&#x2F;wp-calypso&#x2F;issues&#x2F;650</a><p>I really learned to dislike unclear licensing on a community project from that early Squeak experience.<p>Squeak continues to improve and has fixed the licensing issue, but it lost a lot of momentum. Also, the general Squeak community remains more focused on making a better Smalltalk -- not on making something better than Smalltalk. That is something Alan Kay has pointed out -- saying he wanted people to move beyond Smalltalk, with Squeak as a stepping stone. Yet most people seem to not pay attention to that.<p>But if even Dan Ingalls could be willing to build new ideas like the Lively Kernel on JavaScript, I decided I could too, and I shifted my career in that JavaScript direction, inspired by his example.<p>As luck would have it, my day job is now programming in the overly-complex monstrosity that is Angular, when I know the more elegant Mithril.js is so much better from previous experience using it for personal FOSS projects... I guess every technology has its hipster phase where decision making is more about fads than any sort of technical merit or ergonomics? I can hope that sorts itself out eventually. But once bad standards get entrenched, either in a group or the world at large, it can be hard to move forward due to sunk costs, retraining costs, retesting costs, and so on. But, thick-headed as I am, I keep trying anyway. :-)<p>And as in another comment I made, there are important social&#x2F;political reasons to keep trying to create better systems to support democracy: <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=15311950" rel="nofollow">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=15311950</a>
Google Memo and the Greater Male Variability Hypothesis
The article is pretty in-depth and nuanced. Here&#x27;s the conclusion for convenience:<p>&gt; Our Conclusions about the Greater Male Variability Hypothesis:<p>&gt; On average, male variability is greater than female variability on a variety of measures of cognitive ability, personality traits, and interests. This means men are more likely to be found at both the low and high end of these distributions (see Halpern et al., 2007; Machin &amp; Pekkarinen, 2008 and, especially, the supplementary materials; for an ungated summary click here). This finding is consistent across decades.<p>&gt; The gender difference in variability has reduced substantially over time within the United States and is variable across cultures. It is clearly responsive to social and cultural factors (see Hyde &amp; Mertz, 2009; Wai et al., 2010); educational programs can be effective. It is also clear that there are cultural&#x2F;societal influences, as the male:female variability ratios can vary considerably across cultures (e.g., Machin &amp; Pekkarinen, 2008).<p>&gt; While the gender difference in the male:female ratio for the upper tail of the distribution of math test scores (SAT, ACT) narrowed considerably in the United States in the 1980s, it appears to have remained steady since the early 1990s. This can be seen visually in Figure 1 from Wai et al. (2010).<p>&gt; Therefore at the top end of any distribution of test scores where men have higher variability, we’d expect men to make up more than 50% of the upper end of the tail. Thus, any company drawing from the top 5% is likely to find a pool that contains more males. As one goes further out into the tail (i.e. becomes even more selective) the gender tilt becomes larger. Further compounding the gender tilt: the women in this elite group generally have much better verbal skills than the men in that elite group (see Reilly, 2012). This means that these women may be better employees than men who match them on quantitative skills, but because they have such superior verbal skills they have more choices available to them when selecting a profession.<p>&gt; Our Revised Conclusions About the Damore Memo<p>&gt; We maintain that the research findings are complicated. This is evident in both this post and our original one. There are many abstracts containing both red and green text, and some of the top researchers in psychology are represented on both sides of the debate. Furthermore, many of the experts have concluded that:<p>&gt;&gt; … early experience, biological factors, educational policy, and cultural context affect the number of women and men who pursue advanced study in science and math and that these effects add and interact in complex ways. There are no single or simple answers to the complex questions about sex differences in science and mathematics (Halpern et al., 2009).<p>&gt; In light of the research on the Greater Male Variability Hypothesis, however, we have revised our original conclusions: Gender differences in math&#x2F;science ability, achievement, and performance are small or nil. (See especially the studies by Hyde; see also this review paper by Spelke, 2005). There are two exceptions to this statement:<p>&gt; Men (on average) score higher than women on most tests of spatial abilities, but the size of this advantage depends on the task and varies from small to large (e.g., Lindberg et al., 2010). There is at least one spatial task that favors females (spatial location memory; see e.g., Galea &amp; Kimura, 1993; Kimura, 1996; Vandenberg &amp; Kuse, 1978). 
Men also (on average) score higher on mechanical reasoning and tests of mathematical ability, although this latter advantage is small. Women get better grades at all levels of schooling and score higher on a few abilities that are relevant to success in any job (e.g., reading comprehension, writing, social skills). Thus, we assume that this one area of male superiority is not likely to outweigh areas of male inferiority to become a major source of differential outcomes.<p>&gt; There is good evidence that men are more variable on a variety of traits, meaning that they are over-represented at both tails of the distribution (i.e., more men at the very bottom, and at the very top), even though there is no gender difference on average. Thus, the pool of potentially qualified applicants for a company like Google is likely to contain more males than females. To be clear, this does not mean that males are more “suited” for STEM jobs. Anyone located in the upper tail of the distributions valued in the hiring process possesses the requisite skills. Although there may be fewer women in that upper tail, the ones who are found there are likely to have several advantages over the men, particularly because they likely have better verbal skills.<p>&gt; Gender differences in interest and enjoyment of math, coding, and highly “systemizing” activities are large. The difference on traits related to preferences for “people vs. things” is found consistently and is very large, with some effect sizes exceeding 1.0. (See especially the meta-analyses by Su and her colleagues, and also see this review paper by Ceci &amp; Williams, 2015).<p>&gt; Culture and context matter, in complicated ways. Some gender differences have decreased over time as women have achieved greater equality, showing that these differences are responsive to changes in culture and environment. But the cross-national findings sometimes show “paradoxical” effects: progress toward gender equality in rights and opportunities sometimes leads to larger gender differences in some traits and career choices. Nonetheless, it seems that actions taken today by parents, teachers, politicians, and designers of tech products may increase the likelihood that girls will grow up to pursue careers in tech, and this is true whether or not biology plays a role in producing any particular population difference. (See this review paper by Eagly and Wood, 2013).<p>&gt; In conclusion, based on the meta-analyses we reviewed and the research on the Greater Male Variability Hypothesis, Damore is correct that there are “population level differences in distributions” of traits that are likely to be relevant for understanding gender gaps at Google and other tech firms. The differences are much larger and more consistent for traits related to interest and enjoyment, rather than ability. This distinction between interest and ability is important because it may address one of the main fears raised by Damore’s critics: that the memo itself will cause Google employees to assume that women are less qualified, or less “suited” for tech jobs, and will therefore lead to more bias against women in tech jobs. But the empirical evidence we have reviewed should have the opposite effect. Population differences in interest and population differences in variability of abilities may help explain why there are fewer women in the applicant pool, but the women who choose to enter the pool are just as capable as the larger number of men in the pool. 
This conclusion does not deny that various forms of bias, harassment, and discouragement exist and may contribute to outcome disparities, nor does it imply that the differences in interest are biologically fixed and cannot be changed in future generations.<p>&gt; If our conclusions are correct then Damore was drawing attention to empirical findings that seem to have been previously unknown or ignored at Google, and which might be helpful to the company as it tries to improve its diversity policies and outcomes.
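To see the mechanics behind the tail claim quoted above, here&#x27;s a hedged numerical sketch (the 1.15 standard-deviation ratio is a made-up illustration, not a figure from the article): with equal means, a modestly wider distribution becomes increasingly over-represented the further out you cut.<p><pre><code> # Illustration only: equal means, hypothetical SD ratio of 1.15.
from statistics import NormalDist

wider = NormalDist(mu=0, sigma=1.15)    # hypothetically more variable group
narrower = NormalDist(mu=0, sigma=1.00)

for cutoff in (1.645, 2.326, 3.090):    # roughly top 5%, 1%, 0.1%
    w = 1 - wider.cdf(cutoff)
    n = 1 - narrower.cdf(cutoff)
    print(f&quot;cutoff {cutoff:.3f}: over-representation ratio {w &#x2F; n:.1f}&quot;)
# Prints ratios of roughly 1.5, 2.2, and 3.6: the tilt grows in the tail.</code></pre>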
Some Excel users pop F1 off their keyboards (2012)
Whoa, I briefly worked on a project related to this, over in Redmond, like 20 years ago (ya, I&#x27;m an oldie). So weird to see this being talked about in 2017, based on an article from 2012!?!<p>My background is in banking; here is some additional info on the F1 key thing in case anyone is interested in this random odd topic:<p>The article is not exactly wrong, but it&#x27;s misleading (more on that later). Yes, the F1 key opens the Excel Help window. Removing the F1 key is NOT a common practice among bankers.<p>The origin, or one of the origins, of removing the F1 key as seen in this article comes from how banks used to train their new interns and first-year employees to use Excel. The first few weeks on the job at a Wall St bank were spent in a little training course, and the training course was allowed to have a bit more &quot;fun&quot; than the real, more professional side of the job soon to come for these rookies.<p>It was important to teach new banking analysts to be very efficient in Excel. This meant requiring them to memorize how to use the features of Excel without needing to look them up in the Help menu AND training them to use keyboard shortcuts, not the mouse, as keyboard shortcuts are faster. What this led to was that some Wall St banks would have a little fun in their training class and tell new analysts&#x2F;students to unplug the mouse from the computer (so you have to learn the keyboard shortcuts) and to remove the F1 key (so they can&#x27;t just look up how to do something that they should have memorized).<p>Of course, these silly training-aid tactics were only relevant for the first few weeks on the job, as new analysts would very soon start learning all about macros and VBA in the second half of the training course. The F1 key can easily and quickly be disabled in a number of ways in the VBA editor of Excel [1], and the idea that a bunch of bankers are removing the F1 key from the keyboard so they don&#x27;t accidentally press it while reaching for F2, as this article describes, is frankly ridiculous. Everyone uses macros, and if F1 is disabled it is done in code, not by physically removing the key; more than likely the help menu has been remapped to a multi-key shortcut just in case.<p>To those commenters who say this is banking and the computers are locked down to the point you cannot install other software or make changes to current software: this is partly true, but it does not extend to blocking employees from macros and VBA in Excel. Using and writing Excel macros is crucial to the job and it is expected to be utilized. I have never seen a bank where access to macros and VBA is blocked.<p>Now, it is possible to go to a bank and see someone has removed their F1 key from their keyboard. But it is not a common practice. IF you do see an F1 key removed, it usually is because either: (1) the person removed the F1 key during training and then lost it or just never replaced it. (2) They are a brand new employee still in training (though less likely, because training has changed in recent years and fun is not allowed anymore). (3) They were told incorrectly by a frat buddy or other &quot;wall street bro&quot; that removing the F1 key is cool and a sign of being an Excel ninja wizard. Reason 3 is sadly the most likely reason, and if you read the article you will see they only spoke to and interviewed banking <i>analysts</i>, i.e. 
brand new rookie employees, not real investment bankers who have been on the job more than a few months and are out of the analyst phase.<p>If I saw an F1 key removed from a bank keyboard, I would judge that person a little, as it would not reflect very well on an experienced banking employee - unless that person was on one of the tech teams, in which case they can do what they want; many have their own keyboards (which are loud, guys) and I wouldn&#x27;t question their computer knowledge or skills. The main reason not to permanently remove the F1 key after training is that other important finance software (like Bloomberg) needs the F1 key for other functions (in fact, the official Bloomberg terminal keyboard moved the help key to its own separate button, on a different row from the F1-F12 keys).<p>Also, having worked at Microsoft very briefly, I think I can say Microsoft is aware of the issue of the F1 key Help menu bothering some users, where it is located, and how slow the Help menu can be. Back in the 90s they used to talk to their customers and power users all the time, and it&#x27;s likely they still do today, in addition to the telemetry they snatch from everyone these days. They know; they aren&#x27;t changing it.<p>Lastly, while this was a mild case of an outdated and misleading article with a clickbait headline, Business Insider is usually much worse, and I would just like to recommend that Business Insider be considered a fake news site and spam and blocked from Hacker News. Just a suggestion. Thank you.
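For the curious, the standard in-code remap alluded to above is VBA&#x27;s Application.OnKey; here&#x27;s a hedged sketch of the same thing driven from Python over COM (assumes Windows, a local Excel install, and the pywin32 package):<p><pre><code> # Remap F1 so the Help pane stops opening; the key stays on the keyboard.
import win32com.client

excel = win32com.client.Dispatch(&quot;Excel.Application&quot;)
excel.Visible = True

excel.OnKey(&quot;{F1}&quot;, &quot;&quot;)  # an empty macro name disables the key

# Calling OnKey with no second argument restores the default behavior:
# excel.OnKey(&quot;{F1}&quot;)</code></pre>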
It’s time to kill the web app
You know what would replace the web app if it was replaced today?<p>Some corporate locked down solution subtly or unsubtly controlled by a single conglomerate or interest group.<p>Recently certain corporations have been whispering about replacing the web standards with something &quot;better&quot;. At the same time as they have been pushing free our-platform-only &quot;internet connectivity&quot; in developing countries. I don&#x27;t want to name names since multiple corporations are implicated but for the sake of simplicity let&#x27;s call the imaginary placeholder company &quot;Facebook&quot;.<p>At the same time we literally JUST had a major split in the fabric of the internet with the EFF leaving W3C over DRM and now this is the top-rated comment on Y-combinator?<p>Venting frustrations is one thing, but anyone seriously advocating for replacing the web standards at this moment in time is either ignorant, ethically bankrupt or a corporate shill. Yes I know: Your mental internet filter has been finely tuned through years of weathering forum flamewars to stop reading any thread after encountering the word &quot;shill&quot; but please let me explain.<p>This is the first time in the history of the world that humanity has achieved a single standardized application platform supported by all major devices! If that wasn&#x27;t enough we now have amazing code collaboration tools like git(hub&#x2F;lab&#x2F;etc) and `npm publish`, to the point where the hardest part of writing a new web app often comes down to finding the right libraries and sticking them together. This is fucking amazing!<p>Today&#x27;s web is a land of unicorns and rainbows compared to what any sufficiently pessimistic human being would have predicted when the internet began. The technology used by the world for most of its communications is largely based on globally accepted standards and open source software!(!!).<p>Keep in mind that this is despite a global economy that has been trending toward increased corporate control by a decreasing shortlist of major players. In short: Despite the fact that the rest of the world currently appears to be mostly made of burning garbage, web developers should be dancing in the fucking streets!<p>If there are problems with the web then please remember: It&#x27;s still the early days of the web and we&#x27;ve only recently begun writing very complex applications for this platform. We&#x27;ll keep improving what we have and every year things will be better, but it is also always going to be the case that humans will push technology as far as it will go, so if you feel like web technology always sucks then that just means that you&#x27;re always working at the very edge of what&#x27;s possible with the state of the art. Changing platforms won&#x27;t change this fact and the bleeding edge will always be... 
bloody.<p>If anyone thinks that throwing away the world&#x27;s only common application platform because &quot;development is hard&quot; is a good idea, then maybe they should try writing a UI-heavy app supporting Android, iOS, .NET and *nix with one-click install and high security, without using any web technologies, and then come back and tell me that this is a better way.<p>Now let me predict the future:<p>What&#x27;s going to happen is that Facebook will come out with some new app framework based on React (or React Native) which will compile to current web standards but also to the new &quot;Facebook browser&quot; (they won&#x27;t brand it as a browser but rather as a new part of the internet that has been missing until now). They will get more and more people developing for this framework since it makes development less painful (at least for the younger web developers who are fresh out of their corporate-sponsored bootcamp and have only ever tried this one framework) and when they get enough developer market share they will start adding more and more &quot;facebook-only&quot; features which will enrich the experience for people using their &quot;browser&quot;. Keep in mind that I am still talking about a metaphorical Facebook. Maybe it will be a Facebook&#x2F;Adobe&#x2F;Amazon&#x2F;RIAA&#x2F;MPAA conglomerate &quot;standards&quot; initiative or some such multibeast.<p>Anyway: Because &quot;Facebook&quot; is actively developing this framework in-house at the moment, they&#x27;ve been pushing public opinion against current web technologies in preparation for launch (honestly, given who they are and their available resources, they would be incompetent if they weren&#x27;t).<p>They were planning to launch this cross-industry collaboration and framework after the W3C DRM incorporation failed to pass, using the fires of industry indignation to bootstrap a corporate replacement for web standards. But now that they have actually succeeded in undermining the W3C once, they will simply continue undermining web standards via the W3C while the EFF and the rest of the world are left to attempt to start a new standards organization out of the ashes, and let&#x27;s face it: the web standards were created when few people cared about web standards, and the feat would be very hard to re-create without heavy industry support now that there are so many powerful stakeholders.<p>I know this post will most likely be buried, but at least I&#x27;ll get the bitter satisfaction of linking to it and saying &quot;I told you so&quot;. Or maybe I&#x27;ll learn not to be so fucking pessimistic. Either way it&#x27;s a win.
It’s time to kill the web app
The internet is at risk of becoming the low-level plumbing of the snazzy house that is the proprietary app world. With the advent of app-only companies and products, the internet, as we knew it, is slowly taking a back seat. The app world is fully in control of its masters, and it is a very snobby world. The biggest irony of the sharing economy is that apps don&#x27;t like to be shared, linked to, or looked inside. This world does not have a concept of hyperlinking, a basic premise of the internet. It is surely very un-internet-like. It all seems designed to lock users in to a handful of apps and make them so myopic that they don&#x27;t even realize that there are options.<p>Let me take a step back. The internet, in my view, is the ultimate manifestation of FREEDOM. Everything is&#x2F;was free:<p>* Access to the internet is free after you have paid your ISP. Almost everything that has been digitized is available on the internet for free. You could change ISPs and everything still worked.<p>* There was hardly any government control over the internet. They wished. However, it is designed in such a beautiful way that there are very few central systems. This makes the internet very tough to control (unless of course you are China).<p>* The real estate on the internet was also very cheap. You could buy a domain name for $10, a cheap server for $5, and go online with your site.<p>* There was no limit on the number of sites you could visit. These sites could not steal your data. They could store some of their own data at your end, but not steal much. Once you close the site, they cannot send you any popups or notifications. They cannot run in the background and monitor your activity, track your location, speed, acceleration, etc.<p>* Better still, you could write blog posts which millions could read, and it cost you zilch. There were these things called RSS feeds, which made it unnecessary even to go to sites to read their content. You could just subscribe to RSS feeds.<p>* In fact, you could link to other people&#x27;s property, and it was encouraged. People who visited your site could easily hop to any other site you linked to. You did not have to pay anything for it.<p>* HTML was written in a way that made even sloppy code work. HTML was so dead simple that anybody could make a site in it. No lock-in. Almost all code written for one browser worked in all browsers. There were tonnes of browsers. This sloppy code could render on almost any device and browser. Again, no lock-in. You could look into the HTML, CSS and JavaScript code of any site. It was free for all. The internet was the ultimate open source.<p>Maybe the internet was too open to make money on. So &#x27;they&#x27; invented the app world. The app economy is a dream for big companies. Huge user base, free &amp; rich media push notifications, the ability to steal the ultimate user social graph (call logs, SMS history), and on-device sensors that enable stealing very personal data from users. Let&#x27;s have a look at this world and how it compares to the internet.<p>* Internet fast lanes, internet.org, anti-net-neutrality deals. Enough said.<p>* Apps do share the internet. However, each is itself in the control of one company which makes it and one company which distributes it.<p>* Apps have already made it impossible for a part-time hobby dev to produce and maintain 3-4 different apps. Hardly anybody I know knows both Obj-C and Java very well.<p>* Apps have made it difficult to have more than 20-30 of them on your phone. More than that and your phone would be left with no space. 
Once these select 20 are there, you are locked into them. They steal your data and periodically push you notifications! And we just love them.<p>* We first managed to kill RSS. I remember there was a huge campaign one time which demeaned RSS. Google then killed Reader for no apparent reason. Is the internet world a puppet show?<p>* Apps cannot link to other apps. You cannot link to a particular page&#x2F;screen of a particular app in a generic way unless the other app wants it and allows it. There exists no generic way to do it. The standard way could be that you talk to the other app&#x27;s dev, sign a contract with them, and possibly even pay them. Linking is dead.<p>* Apps are not free. They are locked into a platform. If you want to port your code, you would need to rewrite the whole code base. (Hybrid apps don&#x27;t seem like they are happening.)<p>I think we are witnessing the end of the internet as we knew it. Companies are suddenly trying to kill browsers and the generic internet. They are trying to invent a proprietary walled-garden internet.
YC’s Essential Startup Advice
That seems like a lot of excellent advice, except in cases where &quot;it depends&quot; and the advice is not good or some other advice would be better.<p>&gt; If we invest in you, your group is expected to move to the Bay Area for January--March 2018. You can of course leave afterward if you want, but it&#x27;s a good place for a startup to be.<p>IMHO it&#x27;s a horrible place to be, way too expensive, and anyone not really wealthy should get the heck out ASAP.<p>It&#x27;s the state of Governor Moonbeam and the district of Nasty Nancy, &quot;The San Francisco Treat&quot;.<p>&gt; we expect you to work out of wherever you find to live.<p>I agree with that.<p>IIRC, some places there can be zoning and insurance issues operating a business out of residential housing.<p>&gt; At each dinner we&#x27;ll invite an expert in some aspect of startups to speak.<p>Biggie problem: For the startups that are really wanted, eventually worth $10+ billion, there are not many experts from the past and many fewer for the next dozen such in the future.<p>Indeed, for the next $10+ billion startups, a guess is that they will be different from the last $10+ billion startups; they mostly need to be something quite new in some of the problems solved, technology used, market, and customers served. Then the new stuff might come from <i>field crossing</i> and not from what is in Silicon Valley or computer science now.<p>&gt; Most successful startups change their idea substantially.<p>Not very good news.<p>&gt; The ideal company would have two or three founders. We&#x27;ll consider those with four or five. We&#x27;re reluctant to accept one-person companies, though we have funded many of them now.<p>But notice the advice<p>&gt; It turns out most companies fail fast because founders fall out.<p>Right. And the obvious solution is to be a sole, solo founder, whereas<p>&gt; We&#x27;re reluctant to accept one-person companies,<p>For<p>&gt; Make something people want.<p>Yes, but at first, nearly no one knew they wanted a telephone, a Ford Model-T, a PC, Google, or Facebook.<p>&gt; the guidance below will help most startups find their path to success<p>But &quot;most startups&quot; will be at most a minor success. So far we get another Google only about once each ten years. So, the advice that works for &quot;most startups&quot; doesn&#x27;t have to work for the next Google or even the next 10 startups worth $10+ billion.<p>&gt; The first thing we always tell founders is to launch their product right away;<p>That is good advice for some startups, but for other startups there is the issue of &quot;You only get one chance to make a good first impression.&quot;<p>&gt; ... for the simple reason that this is the only way to fully understand customers’ problems and whether the product meets their needs.<p>Again, that&#x27;s good advice for some startups, but we would hope that the situation was:<p>(A) The startup has picked a problem currently solved at best poorly, where it is totally, 100%, completely, utterly clear that the first good, a much better, or an excellent solution will just thrill enough users&#x2F;customers, with enough revenue per each, to make a really successful business. The great example would be a cheap, safe, effective pill, taken once, to cure any cancer.<p>Believe me, once someone has such a cancer pill, no way will they then be out talking to customers for feedback about, say, the color (white or yellow) or shape (round, oval) of the pill. 
Instead, as soon as that pill is known to exist, desperate customers will literally be banging on the doors to get one.<p>(B) The challenge is not at all having the first good solution thrill the users&#x2F;customers but just being able actually to construct that solution. That is, as for that cancer pill, the reason there is no solution now is that so far no one has been able to find one. So, the solution will need something new, some <i>secret sauce</i>, in some important respects too difficult for others to discover. Then hide the secret sauce, say, in a server farm, with good security.<p>&gt; Surprisingly, launching a mediocre product as soon as possible, and then talking to customers and iterating, is much better than waiting to build the “perfect” product.<p>Sure. And &quot;The perfect can be the enemy of the very good.&quot;<p>But the<p>&gt; talking to customers and iterating<p>is not very good for all startups.<p>&gt; Once launched, we suggest founders do things that don’t scale (Do Things That Don’t Scale by Paul Graham).<p>Okay, but how did that little photography experiment lead to &quot;a vibrant marketplace&quot;?<p>Maybe you are saying that the photography experiment said that pictures are important. Then the scalable, production version was to tell the AirBnB associates that they needed to hire an okay or better photographer to take pictures, e.g., much as in the experiment.<p>&gt; Talking to users usually yields a long, complicated list of features to build.<p>Maybe. But Google has hardly changed their home page in years.<p>&gt; a 100% solution that takes ages to build.<p>Creating unique, powerful, valuable, crucial core &quot;secret sauce&quot; is, say, some applied math research, and that, by the right person, commonly can be done in hours, days, or weeks. Then write the darned code, dirt simple code, and go for it.<p>E.g., I worked out the main parts of my Ph.D. dissertation research in my head on an airplane ride. The resulting dissertation had some nice improvements but was always well within what I first worked out.<p>This &quot;ages to build&quot; stuff suggests finding another problem to solve.<p>&gt; As companies begin to grow there are often tons of potential distractions.<p>By far the worst I found was contacting VCs. I&#x27;ll never do that again. It&#x27;d be better to start a grass mowing service than to try to get equity funding.<p>Besides, now, for an information technology startup based on software, the computing hardware, software infrastructure, and Internet data rates are so cheap that a solo founder, with a good startup selection, can bring his work to the <i>traction</i> equity funders want, traction that will put him into nice profitability and the ability to grow just from retained earnings. The equity funders are asking for too much: By the time a solo founder has what the equity funders want, that founder will no longer need, want, or accept an equity check.<p>&gt; chasing after press coverage<p>That can be one of the most important sources of publicity and users&#x2F;customers. Remember: You want your story told, and the press desperately wants a story to tell.<p>&gt; the most important tasks for an early stage company are to write code and talk to users.<p>That&#x27;s true for some startups, but such a startup is usually going to be in a sad situation.<p>Instead, the founders should already have a good problem to solve. 
By the time the customers see the good alpha test, the company should be in quite good shape, with very little more code to write until, say, much bigger scale is needed.<p>&gt; For any company, software or otherwise, this means that in order to make something people want: You must launch something, talk to your users to see if it serves their needs, and then take their feedback and iterate.<p>This advice can hold for some companies, but not all. Instead, some startups have already selected a good problem and found a good solution.<p>&gt; These tasks should occupy almost all of your time&#x2F;focus. For great companies this cycle never ends.<p>Again, Google has hardly changed their home page in years.<p>&gt; Similarly, as your company evolves there will be many times where founders are forced to choose between multiple directions for their company.<p>Again, Google has hardly changed their home page in years.<p>&gt; When it comes to customers most founders don’t realize that they get to choose customers as much as customers get to choose them. We often say that a small group of customers who love you is better than a large group who kind of like you.<p>Sometimes, yes. E.g., does an auto company go for the Rolls Royce, Mercedes, BMW, Chevy SUV, Ford F-150, etc. market?<p>&gt; YC is sometimes criticized for pushing companies to grow at all costs, but in fact we push companies to talk to their users, build what they want, and iterate quickly.<p>That talking and iterating stuff only works for some startups.<p>&gt; It is very difficult as a new startup founder not to obsess about competition, actual and potential. It turns out that spending any time worrying about your competitors is nearly always a very bad idea. We like to say that startup companies always die of suicide not murder. There will come a time when competitive dynamics are intensely important to the success or failure of your company, but it is highly unlikely to be true in the first year or two.<p>Again, that doesn&#x27;t apply to all startups, but it&#x27;s good news, and I can believe it often applies.<p>&gt; A few words on fund raising (A Guide to Seed Fundraising by Geoff Ralston). The first, best bit of advice is to raise money as quickly as possible and then get back to work.<p>Better advice: (A) Pick a problem and solution so that you, as a solo founder, won&#x27;t need equity funding. (B) If you contact 20 VC firms and don&#x27;t get a check, then give up on VC, at least for a while. (C) Do &quot;get back to work&quot;.<p>&gt; It turns out most companies fail fast because founders fall out.<p>So, as above, be a solo founder. So, pick a problem and a good solution that you, as a solo founder, can bring to nice profitability alone, without equity funding.
Three Paths in the Tech Industry: Founder, Executive, or Employee
For the OP, I see some good news:<p>Generally there is better advice!<p>=== Startup Big Point<p>Here is a big, huge, gigantic point about doing a startup and owning 100% of it: In broad terms, that&#x27;s the American way!<p>Or, it&#x27;s obvious; just look: All across the US, east to west, north to south, from an isolated house in the woods to a crossroads up to the largest cities, sole, solo entrepreneurs start and run successful businesses. No biggie. No tech. No MBA. No venture funding. No team of co-founders. By the millions -- wrong, by the 10s of millions. If it were so difficult to do, then there wouldn&#x27;t be tens of millions of people doing it.<p>=== US Mainline Business<p>What do they do?<p>Mow grass -- the ones on my street show up with $100+ K of capital equipment counting the truck, the trailer, the mowers, etc.<p>Note: Now $100 K will pay for one heck of a powerful Web server farm; if you can keep that farm busy, then just at standard ad rates you have nearly a license to print money. Or, for computing, $100 K in capital equipment is now a LOT.<p>Do auto body repair.<p>Do other auto repair.<p>Sell car tires.<p>Add asphalt to driveways.<p>Do landscaping, from architecture to grass mowing, for good customers, e.g., any company with nice grounds.<p>A dentist.<p>A CPA.<p>A pizza carryout.<p>A Chinese food carryout.<p>An Italian red sauce restaurant.<p>A manufacturer&#x27;s representative.<p>A local wholesale plumbing, electrical, building materials supplier.<p>A wide range of what can be called <i>big truck, little truck</i> businesses -- buy stuff delivered in a big truck and sell it out of a little truck.<p>Run several fast food restaurants, gas stations with convenience stores, etc.<p>And on and on.<p>Generally these businesses have one of the most powerful advantages in all of business -- a strong barrier to entry. That barrier is, and may I have the envelope, please [drum roll], geographical; that is, the businesses are not in competition with anyone more than 100 miles away. In particular they are 100% immune to competition from China. If they are in Tennessee, then they have no competition from anyone in NYS or CA. Etc. So, if they can do comparatively well in a radius of 100 miles, then they can do well for their career.<p>A huge fraction of the people who pay full tuition for their children at private universities did not take venture capital, did not have co-founders, didn&#x27;t get an MBA or a STEM field college degree, were nearly never an employee, and were not close to any of the scenarios in the OP.<p>Okay, that, my friends, is American Business 101 Facts of Life. That&#x27;s the overwhelmingly popular version of US business and, indeed, US careers. Nothing else even comes close. And the OP is far away from that.<p>So, take that US Business 101 and, with computing, try to do more -- the computing should be an advantage.<p>=== Being an Executive<p>Okay, briefly, for being an executive: No security. None. Zip, zilch, zero. No matter what. Instead you are an <i>at will</i> employee who at any time can be fired for any reason or no reason. Biggie reasons for getting fired: (A) As in the OP, office politics. E.g., there&#x27;s gossip (you get accused, tried, convicted, and fired all while knowing nothing about it). (B) The company does poorly. (C) The company does fairly well but gets bought out by another company. (D) The owner has a son and wants to give him your job. Etc. 
The crucial point is, you don&#x27;t own the business.<p>Just as an <i>executive</i>, your skills, alliances, knowledge of your current job, etc., taken anywhere else, plus a dime usually won&#x27;t cover a 10 cent cup of coffee.<p>Is there a way? Okay, be in sales and have a nice list of your accounts that do well. You don&#x27;t really want to be the sales manager, just a good sales guy. If your employer goes out of business, gets bought out, etc., then take your customer list to whatever company in the industry wants to serve your customer list. Be sure the industry will be solid even if your employer might not be.<p>There&#x27;s just no magic to being an executive. And there&#x27;s nearly no power; instead your job is to get along, go along, and hope nothing bad happens. If you sponsor a new project and it fails, then you have a black mark on your record, and everyone else has an excuse to gang up to fire you. If the project is successful, then you have jealous enemies all the way to and including the BoD.<p>Part of this is the fact that currently the US economy has about 94 million people out of the labor force. So, mostly, there&#x27;s no shortage.<p>Currently it&#x27;s too common for Mr. Big CEO to wake up, have a bad day, look at his budget and organization, draw a big X, and say &quot;Off with their heads.&quot; In this way big, famous companies have suddenly fired dozens, hundreds, thousands, tens of thousands of people. E.g., at one time, as computing was roaring ahead, IBM suddenly went from 407,000 employees down to 209,000. They cleaned out rush hour traffic over big parts of NY, CT, and NJ.<p>Another time, at IBM Research, a group of about 100 people got into serious political trouble, and when the dust settled a lot of managers were demoted or fired, and about half of the rest, many of them high-end Ph.D. holders, were fired. It was all about work-place politics, cliques fighting each other, etc.<p>Sure, decades ago lots of people could join a company and work until a nice retirement, and things changed slowly. E.g., one could work for Sears for decades and retire. Sears? It&#x27;s about to go belly up, totally. No more; that&#x27;s rarely the case now. If you really want that, then: (A) Work for a company, e.g., a public utility, where the employees have a strong union, and join the union. E.g., at one time you could do that with the Bell System. Alas, even Ma Bell didn&#x27;t last. A local water company might be better. Be careful about a local electric utility, since that industry might change a lot before you get to retire. (B) Work for government, local, state, or Federal. Remember, though, even working for government you have to be careful about the slot. E.g., don&#x27;t be a civilian Civil Service employee for the US military: Then some military officer is running the place; they likely have their job changed every two years or so; so a new guy comes in and quickly doesn&#x27;t like you or your job, and you are OUT. So, every two years you have to sell yourself to another person you never met before. That person can make a big mistake firing you, but, then, you are still fired. So, for a 40 year career, you have to sell yourself to 20 people you never met before, and you have to be 100% successful in all 20 attempts. That&#x27;s not good job security. (C) Work for, say, some very solid, stable financial institution, maybe The Ford Foundation.<p>Officer in the US military? Every now and then you are up for promotion, and if you get passed over three times or some such, then you are out.
So, as time goes on, the many lower level officers become many fewer upper level officers. It&#x27;s not at all clear which lower level officers will leave; it is totally clear that nearly all of them will.<p>Net, it&#x27;s better to be a non-commissioned officer, e.g., a sergeant, where you can keep your job. Even if you get 4 stars, like General Mattis, you can be out because someone up there didn&#x27;t like you.<p>Some private companies have personnel policies similar to the military for their officers: They hire lots of people in their 20s, and by age 35 they are in management or out the door. It&#x27;s not at all clear who will be out the door, but it&#x27;s totally clear that nearly all of them will be. A lot of those people would be better off in their 20s starting and growing a grass mowing service, quite literally: There&#x27;s a good geographical barrier to entry, and the grass will keep growing. Commonly in the US, a Ph.D. in electronic engineering will have a better long term career as a licensed electrician. No joke.<p>I know a guy, bright enough, who got a Master&#x27;s in environmental engineering. His real career was as a plumber and installer of home heating systems.<p>=== Being a Founder<p>What the OP says about being a founder is quite narrow.<p>I have a high school friend. His father was selling beer for a small brewery. The brewery went out of business, but he knew a lot of people in the beer business. He went to the NW corner of his state, arranged to distribute about six brands of beer (right, the beer came to his warehouse on a railroad siding via railroad freight car) and sold out of little trucks. Somehow people still like beer! He passed the business down to his son. People still like beer! Good business. Darned good business.<p>If you want to go into computing, then be sure you have an even better business than selling beer.<p>The OP wants to say that the business idea in computing is not very important. IMHO, that&#x27;s mostly nonsense.<p>Sure, if you want only routine technology, then the OP comments about employees, teams, etc. are important. But a business with only routine technology tends to have a darned small barrier to entry. Sure, one of the best barriers to entry was virality from a social network with a strong network effect, but that path seems to be about saturated.<p>Again, there are 94 million people out of the labor force. So, if you want a job as founder of a successful business, then, IMHO, you need to do something new and powerful with a good barrier to entry. Basically you have to plow new ground in US business, have to do something those 94 million don&#x27;t know how to do.
Counting raindrops using mobile-phone towers
Really interesting.<p>Didn&#x27;t get the paywall everyone else did.<p>Here&#x27;s the raw text:<p>NO ONE knows exactly how many people died in a series of mudslides that happened in and near Freetown, the capital of Sierra Leone, on August 14th. The upper estimates are more than a thousand. The areas swept away had not been evacuated partly because no one knew how much rain had actually fallen beforehand, laments Modeste Kacou, a rainfall expert at Félix Houphouët-Boigny University in Abidjan, in nearby Ivory Coast. Rain gauges are sparse in Sierra Leone. Satellites detect rainfall in the tropics, but estimates for small areas are often inaccurate. Worse, these numbers are calculated hours after the fact. Many countries therefore use cloud-scanning ground radar to measure precipitation as it is happening, but Sierra Leone has no such radar.<p>Nor do many other poor countries. Ivory Coast has double the GDP per person of Sierra Leone, but like most of west Africa, it also lacks precipitation radar. Indeed, maintenance costs mean that the number of weather stations around the entire world is shrinking, making it harder to forecast flash floods and landslides even in some rich countries. It would be useful, therefore, if some other way of measuring rainfall—preferably a cheap one that employs existing, widespread equipment—could be devised. Fortunately, there is just such a method, and it involves mobile-phone networks.<p>The basic insight is straightforward enough: rain weakens electromagnetic signals. Many mobile-phone towers, especially in remote areas, use microwaves to communicate with other towers on the network. A dip in the strength of those microwaves could therefore reveal the presence of rain. The technique is not as accurate as rooftop rain gauges. But, as Dr Kacou points out, transmission towers are far more numerous, they report their data automatically and they cost meteorologists nothing. He runs the Ivory Coast operations of Rain Cell Africa, an effort paid for by the World Bank, the UN Foundation, a charity, and the Institute for Development Research, which is based in France, to map rainfall in parts of Africa using data donated by Orange, a big telecoms firm, and Telecel Faso of Burkina Faso, a small one. Had the system been running in Sierra Leone, he reckons evacuations could have been carried out in time.<p>Rich countries are interested, too. A pilot project in the Netherlands a few years ago produced promising results, but it has not yet been followed up. This month officials in Gothenburg, Sweden, began to study rainfall maps derived from data collected every ten seconds from 418 mobile-phone towers owned by a firm called Hi3G. The hope is this will provide more accurate estimates of rainwater about to slosh into the municipal waterworks, helping managers to limit flooding and sewage overflows. Until now, the city has relied on 13 rain gauges, backed up by radar sweeps of the sky that are neither sufficiently frequent nor sufficiently precise, says Jafet Andersson, a hydrologist behind the scheme at the Swedish Meteorological and Hydrological Institute. Satellite data on rain in northern latitudes are so poor the agency does not bother using them at all.<p>Though it is useful to know how much rain is falling right now, forecasting is even better. Telecoms data promise to make this easier as well. Some newer networks are sufficiently sensitive that they can detect humidity and fog, both of which are predictors of imminent rain. 
Newer generations of mobile-phone masts use shorter wavelengths in their transmissions, because these can carry more data. Serendipitously, that also permits tinier amounts of water to be detected, for moisture weakens short wavelengths more than long ones. Using data from about 5,000 towers operated by three telecoms firms in Israel, Pinhas Alpert of Tel Aviv University creates moisture maps that, he says, are far more precise than those drawn with data from the Israel Meteorological Service’s humidity gauges, of which there are fewer than 70.<p>The right wavelength<p>Moreover, because transmission towers are so common, predicting where rainclouds are being pushed by winds is easy and accurate, notes Dr Andersson. Several governments in Africa and South-East Asia have asked his team to set up rainfall-measurement networks for them. No deals have yet been signed, though, for there is a stumbling block: money.<p>For the time being, telecoms companies are happy to let forecasters use their data free of charge. As the value of such data becomes clearer, says Frédéric Cazenave of the Institute for Development Research, that is likely to change. Consider ClimaCell, a firm based in Boston, Massachusetts. In April ClimaCell began selling forecasts based on phone-tower data which are so precise, it claims, that its customers will be able to tell “if a plane, crane or game” will get soaked.<p>The firm’s clients include three airlines, several sports leagues, a construction company, a drone operator and a hedge fund (which uses weather forecasts to make trades). It plans to offer its forecasts in India by the end of this year, and to expand into ten more countries in 2018. If ClimaCell pulls that off profitably, telecoms firms in both the rich and poor world are likely to start demanding a slice of the action.
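To make the core mechanism concrete beyond the article text: the standard way to turn a link&#x27;s rain-induced loss into a rain rate is a power law of the form gamma = k * R^alpha (as in ITU-R P.838), inverted for R. Here is a rough sketch in Python; the k and alpha coefficients below are illustrative values for a link somewhere around 23 GHz, not numbers from the article:

    # Rough sketch: invert the power law (specific attenuation
    # gamma = k * R**alpha, in dB/km) to estimate a rain rate R in mm/h
    # from a tower-to-tower link's excess loss vs. its dry baseline.
    # k and alpha are illustrative values for roughly 23 GHz.

    def rain_rate_mm_h(rx_db_dry, rx_db_now, path_km, k=0.124, alpha=1.061):
        attenuation_db = rx_db_dry - rx_db_now   # extra loss vs. dry weather
        if attenuation_db <= 0:
            return 0.0                           # no excess loss, no rain
        gamma = attenuation_db / path_km         # specific attenuation, dB/km
        return (gamma / k) ** (1.0 / alpha)      # R = (gamma/k)^(1/alpha)

    # Example: a 3 km hop losing 4 dB against its dry baseline
    print(rain_rate_mm_h(-40.0, -44.0, 3.0))     # ~9 mm/h, steady rain

Given a dry-weather baseline per link, running something like this on every hop and gridding the results would yield the kind of rain map the projects above describe.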
Ask HN: Making the move from Windows/.NET to Linux/Python/Ruby/etc?
First thing is to make a plan of what you want to learn. It seems like you have a firm goal of Python&#x2F;Ruby. Keep in mind there are lots of &quot;things&quot; you can learn. It&#x27;s a marathon, not a sprint. Once you have your main plan (learn this, learn that, etc.), also set aside some time to look into other concepts to pick up along the way.<p>I would first brush up on actual MVC concepts and truly understand the 3-tier architecture design pattern. The C# MVC framework hides some crucial concepts away from the application developer. I would also brush up on CGI and FCGI programming just to get a bigger picture of how things are&#x2F;were done. Then make sure you have a firm grasp on the HTTP protocol, etc. Understand RESTful web services and SOAP web services, and start looking at RPC (web services not over HTTP).<p>Then... I would find a crash course on Unix&#x2F;Linux: Linux Academy, YouTube lectures, books (&quot;Running Linux&quot; is a good primer), searching .edu sites for online course materials, etc.<p>Third, install a Linux distribution. Spend some time with it, getting comfortable with the command line. Then install a different distribution and keep &quot;distro hopping&quot;. The goal is to find the one you like the best and use that for your projects. My first distro was Slackware back in &#x27;94. Then I landed on Red Hat, then I ran Mandrake, then Debian, then proper Unix (FreeBSD) for a while. I currently run Ubuntu at home.<p>I bring this up because I have clients that run Macs, and other clients on CentOS&#x2F;RHEL. You never know where your code is going to run, especially since Python and Ruby can run on many different platforms.<p>I should also suggest that you read the Linux From Scratch documentation. This will help you get a firm grasp on how a GNU&#x2F;Linux system is put together. Then at some point in your life actually build an LFS system.<p>The reason is so you can understand that not all GNU&#x2F;Linux systems have the same tools. There are some overlaps but also some major differences (like location of logs, install paths, package management): doing things in Debian is different than Fedora&#x2F;Red Hat, which is different than SUSE.<p>Yes, what I said above will take time to learn. Think of this as a side quest towards your ultimate goal. The skills you will learn along the way will help with all future endeavors in Linux&#x2F;Unix development.<p>Once you have a Linux distro installed, go through the process of setting up Apache, then Nginx. Essentially, set up a basic LAMP stack. While your goal is Python&#x2F;Ruby, there is some overlap, and it will give you the basic skill set for administering those environments.<p>Next install Python and Ruby. Start going through tutorials. Build some things.<p>Remember there are many ways to do web development, and there are pros and cons to each way. Just keep learning. Sometimes a new way of doing things comes out of other areas. A good idea is a good idea; being able to take what you know from C#, or something, say, the NodeJS folks are doing, and implement it in Python&#x2F;Ruby could be a very good thing for those languages.<p>Also, here are some other topics you might want to look at:<p>* Learn how to use the man program. Run $ man man * Using different shells (csh, bash, etc.) and their scripting languages. Learn to not be afraid of the command line. * Build tools: make, autotools, cmake, etc. Build some software from source code to understand how different projects are put together.
* Compilers: gcc, clang&#x2F;llvm * Debugging tools: gdb, strace, dtrace, etc. * System monitoring: top, htop, iotop, glance, free, df, du. * System administration tools: kill, ps, ip, ufw, etc. It&#x27;s nice to know how to find and shut down a runaway program; also nice to know how to get your IP and how to open and close ports in the firewall. * Learn how to use curl, so that you can test hitting a URL from the command line. curl with a bash script can automate some of your testing&#x2F;debugging. * Learn how to set up sshd and use ssh to remotely connect to other machines.<p>Some final thoughts: Find your learning resources. Find tutorials on the topics you want to learn. Look at .edu sites, as you can find free course materials and lectures there. Look for lectures on YouTube, and start listening to Linux&#x2F;Unix podcasts for new topics and news. Put together a reading list on the topics you want to learn (and keep in mind that just because a book is old doesn&#x27;t mean it&#x27;s useless; I have many books that discuss the inner workings of protocols that newer books on the same topic leave out).<p>Lastly, have fun while you are storming the castle. Don&#x27;t make this into a job&#x2F;work or you will get burned out.
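To make the HTTP&#x2F;curl advice concrete, here is a minimal sketch in Python (picked only because it&#x27;s one of your target languages); the port and message are arbitrary placeholders, and it uses nothing outside the standard library:

    # A bare-bones HTTP endpoint you can poke with curl from another
    # terminal, to watch the raw request/response exchange go by.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Hello(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"hello from plain Python\n"
            self.send_response(200)                         # status line
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)                          # response body

    HTTPServer(("127.0.0.1", 8000), Hello).serve_forever()

Run it, then in a second terminal run $ curl -v http:&#x2F;&#x2F;127.0.0.1:8000&#x2F; -- the -v flag prints the request headers, status line, and response headers, which is exactly the HTTP plumbing worth understanding before you reach for a framework.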
Say no to Electron: use JavaFX to write a fast, responsive desktop app
Look, I agree that you shouldn&#x27;t make desktop applications in Electron. The thing is, when you have a fresh grad student who was taught PHP as a web framework most of the time, and maybe Node, what do you think they will gravitate towards? I understand veterans feel like Qt, GTK and JavaFX are intuitive in their architecture and syntax, but that is simply not true. Heck, even XML can be jarring.<p>So here&#x27;s what I&#x27;ve found people mostly trip up on with JavaFX.<p>1) FXML. Why do you need so much information just to view &quot;Hello world&quot;? You need to define a scene, then you need to define what&#x27;s inside the scene (you&#x27;ll need to go look up a reference guide on JavaFX to find what objects you can attach just to get started), then you need to describe that a text node is connected to the thing inside the scene. For HTML it&#x27;s always gonna be &lt;html&gt;&lt;body&gt;&lt;&#x2F;body&gt;&lt;&#x2F;html&gt;. Inside the body it doesn&#x27;t matter what structure you create; you&#x27;ll be cutting off &quot;sections&quot; with plain HTML+CSS. HTML5 has its canvas if you need more advanced functionality. Why would people who have been taught to use canvas and the DOM revert to this?<p>2) Custom CSS. Fantastic, more syntactic sugar and another reference guide to search through... people get effects&#x2F;animations with GreenSock or CSS nowadays; it&#x27;s fairly competent stuff and frankly more intuitive. Just a gem from the reference: background: white; -fx-text-fill: ladder(background, white 49%, black 50%);<p>Without reading the reference I&#x27;m thinking it&#x27;s filling the color. What ladder and its arguments mean, I would have no idea. What the reference says is this: &quot;Use the following if you want the text color to be black or white depending upon the brightness of the background.&quot; – right, cool, but there are filters and stuff made for this very thing in plain old CSS.<p>3) JVM hot code reload. The entire section is confusing to people using Node who learned to implement a watcher in Tutorial 1 of setting up package.json. Good luck explaining the intricacies of the JVM to grads who struggled to get a bare-bones Java application running in Eclipse. &quot;I find it hard to believe that anyone would prefer the webstack to working with a sane environment like the JVM.&quot; – I strongly disagree. The fact that you need a virtual machine for your code to execute is a lot for people outside the bubble to comprehend. The fact that you have two types of dependencies, runtime and compile, already confuses people new to the field. Gradle, which is supposed to make lives easier, is still way more confusing than fiddling with package.json. It&#x27;s perhaps not Gradle&#x27;s fault; I think the blame lies more with veteran developers who like to be &#x27;clever&#x27; and manage to obfuscate something as simple as launching an application.<p>4) SceneBuilder. &quot;It can be integrated into all Java IDEs, making it easy to create new views.&quot; What the author has forgotten to mention is that it can be a pain to use and integrate (I haven&#x27;t tried this in IntelliJ, and I&#x27;m sure it&#x27;s better there, but it&#x27;s still more hassle than opening your flavor of browser inspector). You&#x27;ll most likely end up ditching the GUI and doing everything programmatically, at which point you&#x27;ll ask yourself why you are doing CSS and JS in Java. The example in the article is a simple &quot;Hello World&quot;.
Anything more complex and you&#x27;ll find people falling into the habit of doing everything inside of Java.<p>5) ScenicView. &quot;To start it with your application, just download the jar and pass the option -javaagent:&#x2F;path-to&#x2F;scenicView.jar to the JVM.&quot; – that line might as well be written in Chinese if you&#x27;re a person coming from the Node scene.<p>6) JavaFX does not automatically refresh stylesheets. You need to build a whole separate function and implementation just to refresh a stylesheet. &quot;This works in Mac, Windows and Linux Mint. But this was one of the only two problems I had related to differences in OS&#x27;s (the other one was the icon in the system tray on Mac does not work, but there is an ugly workaround for that). JavaFX abstracts that away pretty well, most of the time!&quot; Well, that&#x27;s reassuring: there is an iffy solution to a problem that shouldn&#x27;t be there to begin with.<p>I also feel the author is quick to slap the hipster label on anyone using Electron, and then proceeds to call the webstack a mess while ignoring the mess that Java is. Need I remind you why Node stuff got so popular? Because people required entire days to figure out how to get a simple ToDo Spring application working. And Spring is supposed to be easy. Think about that. You need to spend dev time on something as obscure as &#x27;JVM tuning&#x27; at one point or another. Or fixing some bizarre leak&#x2F;overflow because hurr-durr imperative programming. And the 20 years of patterns and best practices weren&#x27;t good enough. We have shit like JPerf to figure out why all those stern-toned articles still lead to shitty code, and JRebel to sweep the problem under a rug.<p>&quot;We&#x27;ve been writing desktop apps for decades. The web, on the other hand, only really got started less than 20 years ago, and most of that time it was only used for serving documents and animated gifs, not creating full-fledged applications, or even simple ones! To think that the web stack would be used to create desktop applications 10 years ago would be unthinkable.&quot; – And here we are, with Java still remaining the clusterfuck that it is.<p>&quot;If people are preferring to ship a full web browser with their apps just so they can use great tools such as JavaScript (sarcasm) to build them, something must have gone terribly wrong.&quot; – yeah, that something was the JVM. Kinda funny that we now have all this compile-to-JS stuff when there&#x27;s still an uppity aura revolving around JS. Heck, I remember having trouble with just the JRE back when I hadn&#x27;t learned any programming.
Ask HN: Beginner confused on where to start learning programming
TL;DR - Install Linux or use a Mac; learn a compiled language; find a book&#x2F;tutorial for beginners on that language and your operating system, and actually work through all the examples&#x2F;tutorials&#x2F;problems. Take the harder route: learn the basics of programming in general, as all languages share those concepts. If you go that route, all the other languages are just syntax and new tools&#x2F;programs. Stay away from interpreted languages until after you have a firm grasp on how compiled languages work.<p>The state of programming is currently a mess. 20 years ago there were not so many topics and programming techniques. Back then you started with BASIC programming to get your feet wet. You would move on to other high-level languages (C&#x2F;C++, Java, etc.). Developers older than me would have learned C&#x2F;C++, Fortran, Pascal, Lisp, etc. There was a clear learning path without too many branches to go down.<p>Nowadays, you have terms like UI&#x2F;UX designer, front-end developer, back-end developer, full-stack developer, embedded systems developer, etc.<p>These are all new concepts; back in the day it was just &quot;developer&quot; or &quot;software engineer&quot;.<p>Today&#x27;s landscape has too many things to wrap your head around - HTML, Javascript, Nodejs, Go, Rust, Scala, CSS, Ruby on Rails, React, yadda, yadda, on and on and on...<p>So it is easy to understand why it&#x27;s hard to figure out where to start.<p>The truth is: all of these programming languages build upon basic principles. Assignment of data, mathematics (arithmetic, calculus, algebra, etc.), and most importantly Boolean algebra, which gives us true and false, AND, OR, NOT, XOR, XNOR, NOR, NAND, etc. Boolean algebra, sometimes called Boolean logic, lets us form conditional logic&#x2F;statements (if statements, equals, greater than) and loops (do, while, for, foreach, etc.).<p>To recap, the basics of programming are (see the short runnable sketch at the end of this comment):<p>1. Assignment of data to a variable: x = 1. Just like algebra: put your data in a bucket with a label. 2. Operations on that data: mathematics, inputs and outputs (I&#x2F;O), etc. Do something with that data, which may create more data to store. 3. Conditional statements and loops to decide what to do next with the data: if x == true then y = 2 + 2; while y == 4 do y = y - 1 end.<p>So... My advice is learn the basics of programming. Do not get hung up on a buzzword&#x2F;topic&#x2F;language. Pick a language and master the basics.<p>Programming languages could be said to generally come in 3 flavors and 2 sizes. The official terms are paradigms and types. Note that there are more paradigms and types, but to keep it simple these are the most common&#x2F;basic.<p>Flavors (paradigms):<p>1. Procedural - do step 1, then 2, then 3, in a defined order. 2. Object-oriented - a very simple definition: objects are generic, reusable chunks of code that try to describe real-world concepts. Once objects are created, what those objects can do is executed. Define object Boy: the boy can jump, the boy can sit. Create new boy. boy.jumps boy.jumps boy.sits 3. Functional - just like higher mathematics, built on the concept of functions, e.g. f(x) = x + 2 and g(y) = 3 - y, composed as f(g(1)). You define your functions, and execution starts from one top-level expression&#x2F;function.<p>Sizes (types):<p>1. Compiled&#x2F;assembled - languages are read by a program and turned into a file that is executable by your operating system.
To run your program, you ask your operating system to run the code in that file. Your code is converted from the language you wrote it in to machine binary (zeros and ones) that your CPU can understand. That binary is saved as a file for later use and can be rerun quickly.<p>2. Interpreted - you run a program and give it your code as you wrote it, and it runs the steps in your code. It is not saved for later use; every time you want to run your code, the work of converting it to binary is done again. Interpreted languages will sometimes have a runtime; a runtime is the program that executes your code on your machine. Interpreted languages are generally slower than compiled languages.<p>I would suggest picking a compiled programming language, preferably one in the C syntax family (C, C++, or D). Pick up a book on one of these languages written for beginners. Read the book and work through each example&#x2F;tutorial&#x2F;assignment. This is the harder route, because you will have to learn the compiler and associated tools, sometimes referred to as a &quot;toolchain&quot;. You&#x27;ll have to learn how to run the compiler, what program&#x2F;tool you type your code into, etc.<p>I would also suggest installing Linux or using a Mac. Stick to open operating systems that are of the UNIX family. Find open source applications that suit your fancy and read the source code to learn more. This is one of the reasons open source is king: you can use the force by reading the source.<p>Programming is not easy; it is not meant to be. You have to fall in love with computers, and learn not to give up when something is not working. Solving complex problems starts with learning from cryptic compiler warnings and errors. It will force you to find your mistakes and avoid them moving forward. It&#x27;s like negative reinforcement: you just want it to work, and eventually it clicks and you can write tons of code and have it compile on the first go. That does not mean it works as designed, but it works as you wrote it.<p>After you have a firm grasp on the concepts, the rest is just syntax. What do I mean by that? Every language has assignment of data, operational statements, and conditional statements. It&#x27;s just how you type the code that differs. Learning a new language is then just re-learning how to express the basic concepts in code and how to run that language&#x27;s toolchain.<p>You should not concentrate on web, mobile, or embedded at first. Learn to crawl, then walk. Then again, find open source projects in those areas and see what you like. If you dabble with them all you will be well rounded and won&#x27;t sweat any job that comes your way. Ultimately you might like embedded, or systems programming.<p>In reality, though, you&#x27;ll do whatever work you can find a job doing. Knowing more languages, and being able to answer tough interview questions, will really determine what you end up doing. My advice: once you are adept at writing code, it&#x27;s not about what you like doing at work, it&#x27;s about what pays the bills. You can always work on what you want to do on the side.
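Since the recap above lists the three basics, here they are as a short runnable sketch (in Python purely to keep the example small; the same concepts map one-to-one onto C, C++, or D):

    # 1. Assignment: put data in a labeled bucket.
    x = 1

    # 2. Operations: do something with the data, creating more data.
    y = x + 3            # arithmetic on x
    total = 0

    # 3. Conditionals and loops: decide what happens next.
    if y == 4:
        y = y - 1        # runs once, since y really is 4

    while y > 0:         # repeat until the condition becomes false
        total = total + y
        y = y - 1

    print(total)         # prints 6 (3 + 2 + 1)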
Ask HN: Do you use a Chromebook or Pixelbook as your primary machine?
Primary laptop, yes. I have a desktop with a vastly superior mechanical keyboard and multiple monitors, but for a primary laptop, sure.<p>The concept of &quot;primary machine&quot; is weird to me. Since the early 80s, things keep getting separated out. NFS home directories moved bulk file storage to a fileserver in the 80s or early 90s. I could run ALL the self tests and build for ALL possible outputs by hand like the old days on a local machine, but that&#x27;s what the Jenkins vm is for. I could run git without any centralized server, but no one really does that; there&#x27;s a host for that. I could keep track of bugs in email or in my head like the baddest of bad old days, but there&#x27;s a host for that. Logging for debugging is on separate hosts nowadays. I used to run my dev environment locally, using lots of memory and CPU and battery power, but I&#x27;ve had access to a vmware cluster that weighs several tons; I&#x27;m not carrying that kind of power around in a laptop. Since everything has abstracted out into cloudy hosts, there&#x27;s really nothing I can do with local dev resources WRT the proverbial &quot;what do I do in an internet outage?&quot; question. The answer is &quot;same thing I&#x27;d do with my desktop, nothing, because everything is online and cloudy today.&quot; In 1981 I could program productively on one machine air-gap isolated from every other machine on the planet, but that was a very long time ago. There may also be inherent cultural issues: you can, and we did, program in Z80 assembler in &#x27;81 on completely isolated self-hosted machines, but I&#x27;m not sure that even works culturally with modern languages; burned right into the tools are things like automatic dependency resolution that assumes internet connectivity.<p>Having infinite cloud power lying around makes things weird when talking with older-generation devs. Yes, the Scala compiler is not fast if you run it on a low-speed battery-friendly laptop CPU with 2 gigs of ram and slow spinning rust for storage, but the vmware image I compile on has specs better than anything you can buy today as a laptop, so I just don&#x27;t care about speed.<p>You have to get used to some closed source weirdness, but it can be worked around. You &quot;right click&quot; using alt and the touchpad on this weird keyboard. The keyboard feel is awful compared to an original model-M on my desk, but it does work. The cheapest chromebooks have ridiculously low-res, low-dpi screens, and one with a good screen is, as you&#x27;d expect, hundreds of dollars, not $49 or whatever the school kids get today. The vnc client still complains about security every time I use it, as if I&#x27;ll respond on the 35194th complaint. The SSH client UI is like a weird re-interpretation of putty, which takes some getting used to.<p>On the other hand, it&#x27;s never crashed or failed in any way not designed into it, OS security and maintenance patching isn&#x27;t an issue, the boot time is a couple of seconds so I don&#x27;t bother with sleep mode, and the battery really does last ten hours while it weighs practically nothing. It just works as an appliance, which is very unusual for general purpose computer use.<p>Most chromebook solutions are multi-step, which makes them either impossible to use or trivial to use depending on your mindset and past experience. How to program an arduino on a chromebook? Well, that&#x27;s impossible in one step, as there is no one-click arduino IDE installer on the google play store, but absolutely trivial in multiple steps.
Setting up a VNC client on a chromebook is trivial. Setting up a VNC server on a dirt cheap pi and accessing it via the network is trivial. Setting up the arduino ide on a pi is trivial. So it&#x27;s both impossible to program an arduino using a chromebook in one step, and it&#x27;s also four unimaginably trivial steps to accomplish. So using &quot;windows monolithic thinking&quot;, many things are simply impossible on the chromebook and will forever be impossible as long as you demand one-click solutions, while &quot;unix small optimized tools thinking&quot; means everything is possible, even trivial, on a chromebook. Chromebooks have acceptable individual tools that can be strung together unix-like to do anything, but there are very few swiss-army multi-tool giant monolithic one-click solutions. You can&#x27;t one-click install visual studio locally, but you can trivially VNC into a giant overpowered vmware image running emacs, connected to a cloudy fileserver and jenkins and gitlab and things...
A Social Network Doling Out Millions in Ephemeral Money
Hi, my name is @andrarchy, I&#x27;m part of Team Steemit and you can AMA :). This comment was originally intended as a response to aaron-lebo, but as it became quite long, I decided to post it here.<p>Aaron-lebo says, &quot;They&#x27;re interested in their network getting big so they can make money. There&#x27;s not much nuance to it. You can spin it another way but read the thread. It&#x27;s a soulless money grab, and it&#x27;s disappointing because they actually have decent software.&quot;<p>He&#x27;s right in a lot of ways and wrong in a lot of ways, but obviously I&#x27;m biased :). Regardless of whether we agree or disagree, I want to thank anyone who even gives a moment&#x27;s thought to what we&#x27;re trying to do at Steemit. So thank you @aaron-lebo.<p>What&#x27;s right:<p>We have decent software. Well, when it comes to offering a free social network that rewards tens of thousands of ordinary people all over the world for their content in a cryptocurrency that can be traded on exchanges and cashed out for fiat currencies, we&#x27;re the only game in town, so I think we have a pretty strong argument that we have the BEST software. And knowing our engineers, I have no problem saying they are the most intelligent, diligent, thoughtful, nice, and hard working people I have ever had the distinct pleasure of working with. IMO they&#x27;re the best in the world, and I welcome proof to the contrary.<p>One common criticism (not made by this commenter) is that most people &quot;don&#x27;t make a lot.&quot; First off I want to highlight exactly what this criticism is saying: some people on Steemit make a lot. A lot of other people make a bit. A lot of people make not a lot. This is all true. But these types of comments invariably dismiss the difference that making $5 or $10 on a post (which MANY people DO make) can have on a person&#x27;s life, especially in many of the countries our users are from, like those in Africa, Indonesia, and elsewhere. But even with respect to wealthier nations, I often find the critiques of how much our users make to be remarkably tone deaf. It is certainly true that not everyone can make tons of money on Steemit, and you will never see anyone on the team saying anything to that effect. Steemit is definitely NOT a get rich quick scheme. It&#x27;s only true that you CAN make money on Steemit, and people do every single day. That&#x27;s right, the Steem blockchain distributes thousands of dollars worth of STEEM every single day to users. In fact, ALL of the money created by the Steem blockchain goes to users who provide some service to it, whether it&#x27;s posting articles, comments, curating content, or running a node. Like every other major cryptocurrency&#x2F;blockchain protocol, the fact that anyone is free to participate in it is part of what makes it anti-fragile. None of that Steem goes to the team, unless of course we do one of those things. I understand the content that is currently on the site might not appeal to many, but the rewards are real (no one denies that) and who is receiving them changes every day. What I recommend is that people view the lack of diversity of content on the site as an opportunity to contribute something new to the platform. That&#x27;s certainly how I looked at it when I started as a contributor on the platform and worked my way up by creating videos that helped explain the tech to the many lay-people on the platform. Eventually my videos got the attention of the people at Steemit and I was brought on board!
That being said, we are still a small social network, so of course our content is not going to be able to match the bigger players. But if people want to look at the data for themselves, project our growth rate, and estimate when we will reach such scale, they are welcome to. All the data is on the blockchain, which is an open, public and decentralized database.<p>What&#x27;s wrong:<p>We&#x27;re not soulless. I swear we have souls! :) There are more than 30 people on the team at this point, and statistically speaking it&#x27;s just highly unlikely that we could ALL be lacking in what is typically described as a near human-universal. Unless, of course, you don&#x27;t believe in souls, in which case, yes, this statement is technically true. What I would say is that those in the cryptocurrency sector are often critiqued as being TOO ideological. We all openly advocate for decentralization for the purpose of spreading power (and money) among the people. That&#x27;s certainly why I got involved in the field and have been blogging and vlogging on the topic for over a year, all of which is immutably preserved on the blockchain for your investigation ;). At Steemit we do want to make steemit.com as big as possible because we all hold Steem -- but so do all of our users, so, yes, they benefit too. To claim that anything else should be the case is like saying that Facebook is better because ONLY Facebook shareholders benefit when Facebook grows. The idea that we are somehow more soulless than Facebook, Twitter, Instagram, or any other social network that wants to grow because their model depends on eyeballs to sell ads to (notice any ads on steemit.com?) is pretty laughable. Yes, we have figured out a sustainable and scalable way to align the incentives of the creators, developers, and users of an open, public, anonymous, censorship-resistant, and decentralized database protocol so that when it grows everyone benefits. We&#x27;re not sorry :)<p>One of the most frequent errors people make (and this is 100% understandable) is thinking that for us it&#x27;s ALL about growing steemit.com. This couldn&#x27;t be more wrong. We are interested in growing use of the Steem blockchain protocol (just like the creators of Linux wanted to increase use of their software) and the demand for Steem. I&#x27;m not saying it&#x27;s not selfish, I&#x27;m just saying it&#x27;s not accurate ;). But let me be absolutely clear, and I will happily hold myself accountable for this claim for all of time: this is because I believe that the Steem blockchain is a force for good. It is an open and decentralized database which is not controlled by any single person, corporation, or government, that anyone can acquire a stake in, that is free to use, that is anonymous, and that is censorship resistant. We see a lot of problems with existing social media solutions (like all of your personal information being held and profited off of by a private company with tight links to authorities) and we sincerely believe we have solved them with Steem. One can argue whether we are right, but I know our beliefs are sincere.<p>PROOF: Smart Media Tokens<p>That&#x27;s exactly why we launched Smart Media Tokens. It was always our desire to get developers to leverage Steem to create their own applications that would reward their users with money while enabling them to retain control over their personal identities and protecting them from censorship, but through conversations with entrepreneurs and developers we learned that there were additional features they wanted.
While steemit.com is incredibly important to us and the number of engineers we have working on that site is now greater than ever, we decided to also commit significant resources to designing the Smart Media Tokens protocol precisely because we want to encourage other platforms to leverage our technology. If all we cared about was steemit.com, this strategy would make no sense. Once Smart Media Tokens launch, any site (including this one, wink wiiiiiiiiiiiiink) will be able to launch their own Steem-like token with: customized parameters, the ability to allocate founders&#x27; tokens, the ability to raise capital, and more. Shameless plug over ;).<p>Yes, the system is designed to increase demand for Steem, and that will benefit all current Steem holders, including the thousands of people doing the massively valuable work of contributing content to our platform while it is still growing. We are not idealists, but we&#x27;re not soulless either. We are ambitious and hopeful technologists who are leveraging a revolutionary technology to disrupt a massively centralized, exploitative, and corrupt landscape. You&#x27;re welcome ;)<p>If you&#x27;d like to learn more, check out this video of yours truly explaining how it all works: <a href="https:&#x2F;&#x2F;youtu.be&#x2F;z-V6HnfbGUA" rel="nofollow">https:&#x2F;&#x2F;youtu.be&#x2F;z-V6HnfbGUA</a><p>Regardless of your view of Steemit, if you&#x27;ve gotten this far, you&#x27;re amazing. Thanks for reading!
Royalties from Writing a Hit Song with Justin Bieber
The revenues from live performance, when you write or co-write for a globally recognised artist who tours, will always be more lucrative for songwriters than the revenues from recordings. This is because (generally speaking) the live revenues for songwriters come as a split of box office - and live performance is generally going to gross a lot more than recording for an artist of that stature.<p>For example, in the UK, PRS takes 3% of box office, and that is then paid out to songwriters based on duration of performance. So, for example, if someone sells 1000 tickets at £10 each, there is £300 to be paid to PRS. If there are three acts playing and they each play 30 minutes, songwriters get £3.33 per minute. So if two songwriters co-write a song, agree a 60&#x2F;40 split, and that song is 4 minutes 30 seconds, one songwriter would get £9.00 from this show and the other would get £6.00.<p>Bieber played six dates at the O2 in London - capacity 20,000 - and tickets seem to have been around the £45 mark. Let&#x27;s assume he pretty much sold out - so those six dates grossed £5.4m.<p>Assuming he did a 20 song set (so maybe 75 minutes duration for songs, with two support acts each playing 30 minutes - total duration 135 minutes) then the songwriters (for both Bieber and the support acts) are getting £1200 per minute of performance across that segment of the tour. So actually, if you are the support act and you are a singer-songwriter playing 30 minutes support to Bieber, you&#x27;re going to walk away with your live appearance fees plus £36k in songwriter royalties.<p>Streaming revenue splits between labels and songwriters (or sound recording copyright royalties, and publishing mechanical royalties) are based on old record label models, where the label invested a large amount of money to record, manufacture and market product; because of this, they took the lion&#x27;s share of revenues from recorded music sales. It&#x27;s easy to argue that things have changed, but equally, generally speaking a songwriter needs an artist to perform and record the song - and the artist probably needs a label (or someone with some money) to market that song and help it generate the maximum revenues. Songwriters still benefit, because they get MORE revenue than they would have done otherwise. A great song is nothing unless someone records it and performs it.<p>So the problem is not necessarily with Pandora, Spotify, YouTube and all of these other companies - they are following models which by-and-large have been dictated to them by labels - you can&#x27;t operate a music streaming platform without songs to play, and you can&#x27;t play songs without obtaining permission to use the recording, and the recording copyright - and thus that permission - is controlled by record labels rather than songwriters.<p>The scale of the market is completely different. Bieber&#x27;s 2016 world tour apparently grossed $250m in ticket sales - if we stick (for ease) with the UK model for compensating songwriters for live performance, this tour generated $7.5m in revenues for songwriters. Let&#x27;s say that across the whole tour - including Bieber songs, numerous co-writes, and support acts - there were 250 songwriters involved. Assuming everyone&#x27;s song was performed for roughly the same duration, then each songwriter should have walked away with about $30,000.
Not bad.<p>The problem is not streaming - it&#x27;s people crafting a flawed narrative based around &quot;my song got streamed a billion times and all I got was this lousy t-shirt&quot; while ignoring the completely different scales of revenue for live performance vs recording. Bieber has 32 million monthly listeners on Spotify. Assuming that they each listen to an average of 8 tracks a month (for a total of roughly 250m streams), then his label is generating maybe around $1.25m in recording revenues. The songwriters will be getting a fraction of that - and each individual songwriter will get a proportion of that fraction, based on how much their song is listened to. If his most popular song gets 50% of streaming activity, the second most popular 25%, the third most popular 10%... and his tenth most popular gets 1%, then that track is only generating the label $12,500 a month - and maybe $2,500 for the songwriter - but it&#x27;s still getting 2.5m streams a month. If the writer has a 20% share of the song, then they are getting $500 - over a year (all things remaining equal) maybe they get $6,000 for 30m streams.<p>If that song was played on every date of a Bieber tour as part of a 20 song, 75 minute set, on a tour where the headliner is performing 60% of the total duration, then based on the UK model for songwriter performance royalties the writer of that song would be getting $225,000.<p>If someone has a 20% share of that song they are getting $45,000.
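A quick sanity check of the arithmetic above, in Python (the 3% PRS rate and all the figures are this comment&#x27;s illustrative numbers, not official data):

    # UK PRS model used above: 3% of box office, split by minutes performed.
    PRS_RATE = 0.03

    def per_minute(gross, total_minutes):
        # Songwriter pool per minute of performed music
        return gross * PRS_RATE / total_minutes

    # Small show: 1000 tickets at 10 pounds, three acts of 30 minutes each
    rate = per_minute(1000 * 10, 90)             # ~3.33 pounds per minute
    song = 4.5 * rate                            # a 4:30 song
    print(round(0.6 * song, 2), round(0.4 * song, 2))  # 9.0 6.0 (60/40 split)

    # O2 run: 6 sold-out dates x 20,000 seats x 45 pounds, 135 min per night
    rate_o2 = per_minute(6 * 20000 * 45, 135)    # 1200 pounds per minute
    print(30 * rate_o2)                          # 36000.0 for the support act

    # World tour: $250m gross, ~250 writers sharing the 3% pool equally
    print(250e6 * PRS_RATE / 250)                # 30000.0 per writer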
The Depression Thing
What worked for me was a combination of things:<p>Physical exercise. 2 years ago I got scared because my bad back condition got worse. When I went to therapy, I noticed the exercises (with a swiss ball) seemed to help me the most. I&#x27;m supposed to do them daily (they take 40 minutes each time), but by trial and error I figured out 4 times a week is the sweet spot. Anything more and I get overtraining.<p>To answer the article: how do you even start? You simply start small. Performing the required number of repetitions is not important at all, and sometimes even counterproductive, because you&#x27;re tempted to use bad form to achieve your goal. Keeping your schedule is SUPER important. Exercise yields the best results when it&#x27;s regular, and this in turn <i>tadaa!!</i> builds character. No, really. I would argue working out even a little is more about willpower than anything else. Then I started adding other exercises to feed my vanity and make up for my insecurities. I always looked like a nerd. I added push-ups. Then squats (both 3 times per week). Quick tip: wall-facing squats make it impossible to cheat. If you can&#x27;t do them, do some stretching exercises and look up some progression exercises. I don&#x27;t recommend running to beginners, because everyone can do running half-assedly, but you need good technique to avoid permanent injury. I got a permanent injury, and squats compensate for it. I have an exoskeleton of muscles around my spine and knee now. I recommend cycling to beginners. Much harder to get an injury unless you like pedalling downhill (all the cases where I had an accident were while pedalling downhill).<p>But this post already looks depth-first while my escape from the black hole was breadth-first.<p>Music. I always liked music very much. Finding new music you like provides a short mood boost. I found funk. The trouble with music is searching for something you like requires time, and I need novelty constantly. But every little bit helps.<p>Friends. If you don&#x27;t have friends, and it&#x27;s safe to say I didn&#x27;t 2 years ago, go somewhere where people with similar interests gather. I found a place where people play board games. A small monthly fee and you can play one of nearly a hundred board games with other male nerds. Board games are similar to computer games, but thrive on innovation in the game mechanics department. Very good if you like strategy games. Board games are the most refined kind of multiplayer game; they are designed to be finished in one sitting, and the better ones are built around interesting decisions. Unlike with modern multiplayer, where you are shuffled around with complete strangers by a matchmaking system, you get to play with the same group. It&#x27;s like dedicated servers are back! Even better, only one person needs to own a board game. It&#x27;s like StarCraft 1 multiplayer&#x27;s &quot;Spawn&quot; feature is back!! Also, you get to see and meet people, and disconnects are very rare because it&#x27;s more awkward to disconnect from people you&#x27;re seeing face to face. No one would like to play with that jerk again. Automatic self-moderation system! No need to appoint moderators or referees, or implement a voting system that 2 pals can abuse to kick you because you killed them, or that never gets through because it needs 50%+ of people and they&#x27;re happy you&#x27;re stuck with a bad teammate because it will make them win. I still don&#x27;t have many friends, and I only really meet them every Wednesday - but every little bit helps.<p>Get out of your flat.
Simply go for a walk and see a changing view instead of the same 4 walls. No need to talk to anyone. This always helped me for a short while. Every little bit helps.<p>Hypericum pills. Herbal meds that, according to some clinical tests, are as effective as synthetic meds, with far fewer side effects. Notably, they don&#x27;t cloud your thoughts, which is important for a programmer. The one notable downside is phototoxicity. You want to stay away from sunlight, or you&#x27;ll get permanent dark spots on your skin. I accomplish this by covering my skin. I understand this is not an option in Australia, but every little bit helps. Note: for hypericum you want either pills or oil&#x2F;alcohol based extracts, because the key component doesn&#x27;t dissolve in water. Don&#x27;t waste money on &quot;tea&quot;.<p>Reading books, watching a movie. There are times when my anxiety overpowers me and I can&#x27;t even focus on that, but a good movie puts me in a good mood, and I don&#x27;t necessarily mean a movie with a happy ending.<p>Meditation. Supposedly improves concentration, memory and stress resistance by 30-40%ish. Just what I need. I only recently started, so it&#x27;s hard to estimate if it&#x27;s helping me, but it&#x27;s free and harmless and easy. 2 times a day, 15 minutes. You simply go into standby mode, close your eyes, and try to throw out all thoughts from your mind. In particular, don&#x27;t think about the past and don&#x27;t make plans. If you must focus on something, focus on the present, like your breath or sounds (but human speech is very distracting). Every little bit helps.<p>Merely reading about your condition can be helpful. I started to feel better and gradually solve my problems once I read a bit about my behavior and motivation. It&#x27;s like some barriers fade away and you slowly start exploring areas of your life you&#x27;ve never tried, things you were too shy to try, etc. It takes time.<p>Tips on sleep: I have an alarm set to 23:00 every day. When it rings, I go to bed. Actually, recently I started going to bed even earlier. Turn off the TV if you have one, and stay away from computer monitors, smartphones, tablets. There&#x27;s something about the blue light they emit that makes people not realize how tired they are. When I go to bed, I read a book or a newspaper. This doesn&#x27;t interfere with my natural exhaustion sensor, and the extra mental fatigue helps me fall asleep faster. If I have insomnia and really can&#x27;t fall asleep, I turn on the light and read. Then I try going to sleep 1 hour later or so. If you still can&#x27;t sleep, just lie with eyes closed and try to relax. Lying with eyes closed will still make you much fresher in the morning than getting out of bed or watching something, even if it feels like you had no rest.<p>Also, I don&#x27;t remember the source, but a recent study I read about found that sugar intake makes you feel better in the short run but causes anxiety and depression in the long run. Foul stuff!<p>To do all these things (acting is the hardest part of depression) I used observation and logic. Observation, to identify activities that make me feel marginally better. Logic, to tell myself it will be worse if I never do them. See, it&#x27;s like the female zombie in Return of the Living Dead says. You don&#x27;t eat brains to feel good. You do that to make the pain go away. Do tell yourself it will make the pain go away, just a little. Keep repeating that.
Ask HN: What non-work task have you automated?
I automated waking up at 5:30 AM without feeling miserable when I had a remote work job[0].<p>It&#x27;s not terribly sexy and was really simple to get working. I purchased the highest wattage LIFX RGBA bulbs, plus a high wattage (350W incandescent equivalent) bulb, and replaced all of the lighting in my bedroom, where I worked most days[1], with them.<p>I then used IFTTT to do the following:<p>(1) Over a period of 45 minutes ending at 5:30 AM, all of the LIFX bulbs gradually go from 0% to 100% in a cool blue-white hue. At 5:30, the 350W equivalent turns on (using a Belkin WeMo switch). The 350W turns off at 8:00 AM.<p>(2) At sunset, but no later than 7:30 PM (northern summer sunsets can be much later than this), the lights eliminate the blue hues and shift to warmer colors over a period of 45 minutes.<p>(3) Over a period of 45 minutes ending at 11:15 PM, the LIFX bulbs go from 100% to 0%.<p>It took me a few weeks to arrive at the exact times and timings for each of these steps, as well as the colors. I kept a daily log of when I started feeling tired at night and a (very subjective) log of how tired I felt in the morning and what time I actually got out of bed (I had an alarm set, but only as a failsafe, set for 8:00 AM while &quot;testing&quot; the settings). I landed on the 45 minute time frame after trying durations as low as 15 minutes. 45 minutes is the longest duration that&#x27;s allowed or I would have tried longer, but it is <i>just about</i> right. It&#x27;s <i>really</i> difficult to detect the brightening&#x2F;dimming while it&#x27;s happening, and impossible for me to tell the color is changing during the day.<p>Unrelated, but as an aside -- quality of sleep and waking up early is something I struggled with all of my life until I started doing this a few years ago. In my 20s, I was jealous of my friends who regularly went out until 2:00 AM and somehow functioned the following day at 8-9 AM when they had to be at work. I could <i>never</i> do that -- I&#x27;d literally fall asleep every time my eyes closed if I made that mistake. And worse, I could go to bed at 6:00 PM and 7:00 AM would still feel awful. I followed a few other techniques: Whenever possible (most of the time), I&#x27;d follow the rule to &quot;go to bed when I am tired&quot;. If it was the afternoon and I started feeling exhausted, I&#x27;d go to bed -- sometimes just a nap, on rare occasion, a full afternoon-and-night&#x27;s sleep. If I woke up in the middle of the night, I&#x27;d get up and do something until I was tired again, but for the first several months, regardless of the day or how tired I felt, I woke up at 5:30 AM. If I awoke within 45 minutes of that time, I&#x27;d stop trying to sleep and just &quot;get up&quot;[2]. I&#x27;d tried all of these techniques in the past and failed horribly, but after setting my lighting up, I noticed I was waking a <i>lot</i> easier and with a lot less effort. I added back in these techniques and the result was perfect. It&#x27;s now just &quot;habit&quot; - I am up early, I&#x27;m in bed between 9:00 and 11:30 every night, and I rarely wake fully in the middle of the night. I&#x27;m rarely tired. All of that said, if life and job circumstances would allow it, I&#x27;d prefer an 8:00 PM to 5:00 AM schedule over a day-job, but doing the day-job shift doesn&#x27;t bug me a bit anymore.<p>[0] I&#x27;m in an office job, now, so some of what I did doesn&#x27;t affect me like it used to.
I was waking this early, originally, because my team was in the UK and I wanted to maximize the amount of time I spent with them despite the 5-hour time skew.<p>[1] For whatever reason, working from bed turned out to be very relaxing and helped me to put in longer hours without feeling like I was putting in long hours.<p>[2] One observation I made was that if I awoke and there was less than an hour before I <i>had</i> to be up -- even if I was still very tired and could fall back asleep within a few minutes -- falling back asleep resulted in me feeling substantially <i>worse</i> when that alarm sounded (and it would stick with me most of the morning). If I get up when my brain wakes up the first time, that tiredness wears off within a half-hour. I ditched the &quot;snooze&quot; habit. I <i>don&#x27;t</i> get right out of bed, but gradually work myself out of bed every morning (I have the time, after all, since I&#x27;m getting up so early).
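For anyone who wants a ramp longer than the 45 minute cap, the same effect is easy to script outside IFTTT. A minimal sketch in Python -- set_brightness here is a placeholder for whatever your bulbs expose (an IFTTT webhook, the LIFX HTTP API, a LAN library), not a real LIFX call:<p><pre><code>import time
from datetime import datetime, timedelta

def set_brightness(percent):
    # Placeholder: wire this to whatever your bulbs expose
    # (an IFTTT webhook, the LIFX HTTP API, a LAN library). Takes 0-100.
    print(f&#x27;{datetime.now():%H:%M:%S} brightness {percent:.1f}%&#x27;)

def sunrise_ramp(end_hour=5, end_minute=30, minutes=45, steps=90):
    # Linear 0% to 100% ramp, timed so it ends at end_hour:end_minute.
    end = datetime.now().replace(hour=end_hour, minute=end_minute,
                                 second=0, microsecond=0)
    if end &lt; datetime.now():
        end += timedelta(days=1)  # target already passed today; aim for tomorrow
    start = end - timedelta(minutes=minutes)
    time.sleep(max(0.0, (start - datetime.now()).total_seconds()))
    for i in range(steps + 1):
        set_brightness(100.0 * i &#x2F; steps)
        time.sleep(minutes * 60 &#x2F; steps)

if __name__ == &#x27;__main__&#x27;:
    sunrise_ramp()</code></pre>
With 90 steps over 45 minutes, each step is a 1.1% brightness change every 30 seconds -- small enough to be as undetectable as the IFTTT version.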
Sahara Desert – Could you believe? The Sahara was once green
A hypothesis called &quot;the Saharan pump theory&quot; explains how flora and fauna migrated between Eurasia and Africa through a physical link. The hypothesis postulates that long periods of heavy rainfall, lasting thousands of years, alternated with periods of drought in Africa. The rainy periods are associated with a so-called &quot;wet Sahara&quot; phase, during which large lakes and rivers, such as mega-lake Chad, existed; during the dry phases the region reverted to an immense desert, the Sahara. The wet Sahara was probably a mosaic landscape of rivers, lakes, swamps, woodlands, forest islands, wooded savannas and grasslands. Regions of the Earth above 45° of latitude were under a heavy shelf of ice, and the Mediterranean Sea was much lower than today.<p>The Middle Paleolithic was a period of African prehistory that began about 280,000 years BCE and ended approximately between 50,000 and 25,000 years BCE.<p>Even during periods of drought, humans were most often able to follow the Nile to cross the Sahara, as flora and fauna persisted on its banks. Migrations between Eurasia and Africa were, however, interrupted when the Nile at times stopped flowing completely during a desert phase 1.8 - 0.8 million years ago, and at another period because of a geological uplift (an elevation movement) of the Nile region.<p>This resulted in changes in the flora and fauna of the region which made traveling great distances very difficult: evaporation exceeds precipitation, water levels in lakes like Lake Chad fall very low, and rivers become dry wadis. The once widespread flora and fauna had to retreat northward into the Atlas Mountains, south into West and East Africa, or into the Nile Valley and from there southeast to the plateaus of Ethiopia and Kenya, or northeast to Asia via Sinai.<p>This separated the populations of the different species into zones with different climates, requiring them to adapt either by migration, by speciation (evolution toward new species), or by exploiting different resources.<p>The Saharan pump has been invoked to explain three waves of human migration out of Africa, namely: Homo erectus to Southeast Asia, perhaps twice, once as far as China and India, once again to Pakistan; Homo heidelbergensis to the Middle East and Western Europe; and Homo sapiens sapiens to the Middle East and Western Europe, the so-called &quot;out of Africa&quot; migration.<p>Between approximately 133,000 and 122,000 years BCE, the southern parts of the Sahara saw the beginning of the so-called &quot;Abbassia Pluvial&quot;, a very wet period with monsoon precipitation. This allowed Eurasian animals to travel to Africa and vice versa.<p>The Abbassia Pluvial brought humid and fertile conditions to what is today the Sahara desert, which then benefited from lush vegetation fed by lakes, swamps and river systems, many of which disappeared in the drier climate that followed. African wildlife now associated with the savannas, meadows and woods of the southern Sahara penetrated all of North Africa during this period.<p>The Stone Age cultures, especially the Mousterian and Aterian, grew significantly in Africa during the Abbassia Pluvial. The transition to more severe climatic conditions that accompanied the end of this humid period may have encouraged the emigration of Homo sapiens away from Africa.<p>The coastal road around the western Mediterranean was open during the last glacial period and may have promoted exchanges.
Wet periods were limited to only tens or hundreds of years.<p>During the Mousterian Pluvial, the dried-up regions of North Africa again became very humid, as during the rainy Abbassia period 50,000 years earlier. There were lakes and even small inland seas (mega-lake Chad), swamps and hydrographic networks that no longer exist today. Where the Sahara Desert is located today, there was African wildlife typical of meadows and woods: herbivores such as the gazelle, the giraffe or the ostrich; predators from the lion to the jackal; hippopotamuses and crocodiles; as well as species that have since disappeared, such as the Pleistocene camel.<p>The Mousterian Pluvial was caused by large-scale climatic changes during the last glacial period. At about 50,000 BCE the Würm glaciation was well advanced in the northern hemisphere. The ice caps in North America and Europe were increasingly shifting the climatic zones favorable to human life toward the southern hemisphere. Temperate zones in Europe and North America were transformed into arctic tundra, and the rainfall bands typical of temperate zones declined sharply at North African latitudes.<p>Curiously, the same influences that created the Mousterian Pluvial also seem to have made it disappear. At its maximum development, between 30,000 and 18,000 BCE, the Laurentide ice sheet covered not only an enormous geographical area but reached an altitude of 1,750 meters. This created a specific meteorological system affecting the jet stream on the North American continent. The jet stream split into two branches, creating a new climate over the northern hemisphere, which set harsher conditions in several regions of Central Asia and the Middle East, ended the Mousterian Pluvial, and returned North Africa to a more arid climate.<p>Human settlements then moved northwards. The late Paleolithic began in Egypt about 30,000 BCE, as evidenced by the skeleton of Nazlet Khater. The excavation of the Nile exposed the first stone tools of that period.<p>Another example of climate change introduced by the Saharan pump occurred after the last glacial maximum, around 22,500 - 17,000 BCE. During the last glacial maximum the Sahara desert was more extensive than it is today, and the extent of the tropical forests was considerably smaller. During this period, lower temperatures reduced the strength of the Hadley atmospheric cell, whereby rising tropical air brings rain to the tropics while dry air, descending at about 20 degrees north latitude, flows back toward the equator and brings desert conditions to that region. This phase is associated with high levels of wind-borne mineral dust found in marine cores from the northern tropical Atlantic.<p>Around 12,500 BCE a period of much more humid conditions began, bringing a savannah climate to the Sahara.<p>Analysis of Nile sediment deposited in the delta also shows that this period had a higher proportion of sediments from the Blue Nile, suggesting higher precipitation on the highlands of Ethiopia. This was mainly due to stronger monsoon activity in all tropical regions, affecting India, Arabia and the Sahara.<p>The African wet period that took place between 12,800 and 3,500 years BCE was the last occurrence of a &quot;Green Sahara&quot;. Climatic conditions in the Sahara during the African wet period were dominated by a strong monsoon with heavy rainfall. With the considerable increase in rainfall, the vegetation of North Africa was transformed into vast grasslands.
The Sahel region to the south of the Sahara became a savanna.<p>The African wet period was also characterized by a network of vast rivers in the Sahara, with large lakes, rivers and deltas. The four largest lakes were Megachad Lake, Megafezzan Lake, Ahnet-Mouydir Lake and Lake Chotts. There were large rivers in the area such as the Senegal River, the Nile River, the Sahabi River and the Kufra River. These river and lake systems provided corridors that enabled many animal species, including humans, to extend their geographic range to the north, migrating across the Sahara.<p>An abrupt climatic event occurred around 6,000 BCE, characterized by a sudden drop in global temperatures that lasted several millennia. This sudden cooling event may have been caused by the collapse of the Laurentide ice sheet in northeastern North America, probably when the glacial lakes Ojibway and Agassiz suddenly emptied into the North Atlantic Ocean. The meltwater pulse may have altered the thermohaline circulation of the North Atlantic, reducing the transport of heat from the tropics to the northern Atlantic.<p>The sudden southward movement of the Hadley atmospheric cell caused abrupt cooling followed by slower warming, linked to changes in the El Niño cycle, which led to a rapid drying up of the Saharan and Arabian regions.
Satoshi was wrong
&gt; 1. A cap at 21M. It made the currency deflationary by nature. Ask an economist, no one thinks it’s a good idea long term <a href="http:&#x2F;&#x2F;www.investopedia.com&#x2F;articles&#x2F;personal-finance&#x2F;030915&#x2F;why-deflation-bad-economy.asp" rel="nofollow">http:&#x2F;&#x2F;www.investopedia.com&#x2F;articles&#x2F;personal-finance&#x2F;030915...</a><p>Most economists thought that &quot;Chancellor on brink of second bailout for banks&quot; (The Times, 03&#x2F;Jan&#x2F;2009) was a good thing, but many, including Satoshi, disagree.<p>As for Bitcoin being deflationary, that&#x27;s a more subtle point. Nobody (to a reasonable approximation) thinks that there is a direct link between monetary base and inflation&#x2F;deflation. There is a relationship, but total money supply is generally considered more important, and that includes liquid monetary instruments (like demand deposits, etc.) that are not reflected in the monetary base.<p>The notion that Bitcoin is worthless <i>because</i> it is deflationary is self-contradictory -- if it is increasing in value it cannot also be losing its value. The nuance here is that the price of Bitcoin will fluctuate with market forces just like everything else, but if supply chains begin to materialize denominated in Bitcoin, then by necessity the output and intermediate products of that supply chain will be stable in value relative to Bitcoin itself.<p>&gt; 2. He offered a system where the inputs&#x2F;outputs blockchain is the state itself, and in order to prune your chain you need to store UTXO set.<p>Agreed here; this was definitely a design flaw. I&#x27;m not totally sold on the ETH solution, which depends strongly on transactions being ordered properly, and has some odd race conditions that are a pain to deal with (that is, a transaction which is not currently valid may &quot;become&quot; valid, either by advancing the nonce or increasing the momentary balance at an address). One problem with the UTXO solution is that offline (air-gapped) signing solutions are very complex, because you need to locate UTXOs, rather than just signing &quot;send X bitcoins from this address to this address&quot; (see the sketch at the end of this comment).<p>&gt; 3. He didn’t oversee the idea of Root of Trust<p>Eh; this seems like an independent problem to me. If you want to trust someone, by all means trust them, and use whatever web of trust primitives you want to expand that trust into trusting their software. But putting it in the blockchain means that the chain of custody becomes suspect; what if an individual (or their key) is compromised? That extends the surface area of any attack significantly. That&#x27;s not to say that it wouldn&#x27;t be nice to have some notion of trust, but so far I haven&#x27;t seen any ideas that seem even a little attractive for solving this problem. Satoshi didn&#x27;t cure cancer either.<p>&gt; 4. He offered “new payment — new address” as a rule<p>This rule still makes sense for a lot of situations just because there&#x27;s no &quot;receipt&quot; mechanism. I sell widgets, and a customer buys a widget -- did they send the money or not? If I have a single payment address, all I see are hundreds of payments for .01 BTC.<p>On an individual level, using a new address for the change from a transaction doesn&#x27;t seem to add a lot of value, but since Bitcoin tracks UTXOs, not address balances, there&#x27;s relatively little bloat associated with this.<p>&gt; 5. He never defined clearly threat model of Bitcoin. Who’s the attacker? ... The one and only real attacker ... is ... 
governments.<p>The threat model, on the contrary, was extremely well defined. The only threat that Satoshi took seriously is the threat of double-spend attacks.<p>The nation-state threat model is more subtle -- currency controls and police actions against users and miners. That&#x27;s a little out of scope for a software project.<p>If a nation-state chooses to take over mining, then all they do is increase the security of the network by adding hash power. If they try to use their hash power to facilitate double-spending (for some reason?) then that&#x27;s easily detectable by humans. But mainly it just doesn&#x27;t make sense as an attack, especially for a nation-state with a police force and a justice system, which can be used to enforce arbitrary rules much more cheaply than building a huge mining farm can.<p>&gt; 6. Complete lack of governance was sold to us as a good thing. What we got now? There’s nothing Bitcoin can do that Ethereum can’t, there’s no clear strategy or way of resolution of conflicts when one side whats 2x of blocks and another wants to keep it 1 Mb.<p>Better to have the debate, and have the forking and contention, than just sticking with a bad policy set by a bad policy maker. This is just inefficient democracy vs. efficient totalitarianism -- your mileage may vary, but I side pretty hard with the inefficient here.<p>As for Ethereum, I love a lot of what it brings to the table, but in the end, it brings too much, at the cost of a huge amount of complexity that makes trust decisions, already complex, exponentially worse. Also, I don&#x27;t understand what Ethereum has to do with a governance failure of Bitcoin. Ethereum has not yet taken over from Bitcoin in the space that Bitcoin was designed for, so I don&#x27;t think we can count it as proof of Satoshi&#x27;s mistakes.<p>&gt; 7. Lack of vision what Bitcoin would do when transactions reach maximum of capacity. In fact Satoshi himself envisioned it as a constantly growing onchain without any second layers. It’s only now when people realized that increasing onchain further is ridiculously stupid idea since it’s already way too hard to be a full node. And utopian hot-patches with completely broken incentives models like Lightning Network are another proof Satoshi had no idea how to fit all people onchain.<p>The whitepaper says nothing about block sizes or off-chain settlement. Satoshi went away before block sizes became a contentious issue, so we&#x27;ll never know whether what he thought was right or wrong. And frankly, although I&#x27;m a &quot;big-blocker&quot; myself, I don&#x27;t think the answer here is settled. Whether on-chain can scale as storage costs drop and light-weight payment verification becomes more common and full nodes rarer, or whether we need a formal off-chain solution like LN or an informal off-chain solution like &quot;some company maintains a ledger&quot;, the jury is still out.
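To make the UTXO point above concrete, here&#x27;s a toy sketch -- plain Python, every name invented for illustration, none of this is real Bitcoin serialization -- of why an air-gapped signer needs more context under the UTXO model than under an account model:<p><pre><code>from dataclasses import dataclass
from typing import List, Tuple

# Account model: the signer needs nothing beyond the message itself.
@dataclass
class AccountTx:
    sender: str
    recipient: str
    amount: int
    nonce: int                      # replay protection; must match chain state

# UTXO model: the signer must reference specific unspent outputs,
# which means something online has to locate them first.
@dataclass
class Outpoint:
    txid: str                       # hash of the transaction that created the coin
    index: int                      # which of its outputs is being spent

@dataclass
class UtxoTx:
    inputs: List[Outpoint]          # chosen by scanning the chain&#x2F;wallet
    outputs: List[Tuple[str, int]]  # (address, amount) pairs, incl. change

def build_utxo_tx(utxos, to_addr, amount, change_addr, fee):
    # Naive coin selection: the online half of an air-gapped workflow.
    picked, total = [], 0
    for outpoint, value in utxos:
        picked.append(outpoint)
        total += value
        if total &gt;= amount + fee:
            break
    assert total &gt;= amount + fee, &#x27;insufficient funds&#x27;
    outs = [(to_addr, amount)]
    change = total - amount - fee
    if change &gt; 0:
        outs.append((change_addr, change))
    return UtxoTx(inputs=picked, outputs=outs)</code></pre>
An AccountTx can be signed cold from the request alone; a UtxoTx needs an online pass first to pick inputs and compute change, which is exactly the extra complexity described above.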
African American Vernacular English Is Not Standard English with Mistakes (1999) [pdf]
This is eerily reminiscent of &quot;Authority and American Usage&quot;, a 2001 essay by David Foster Wallace. The samizdat copy is here: <a href="http:&#x2F;&#x2F;wilson.med.harvard.edu&#x2F;nb204&#x2F;AuthorityAndAmericanUsage.pdf" rel="nofollow">http:&#x2F;&#x2F;wilson.med.harvard.edu&#x2F;nb204&#x2F;AuthorityAndAmericanUsag...</a><p>Quoting from a section discussing Standard Written English vs. Standard Black English (footnotes omitted):<p><i>I&#x27;m not trying to suggest here that an effective SWE pedagogy would require teachers to wear sunglasses and call students Dude. What I am suggesting is that the rhetorical situation of a US English class---a class composed wholly of young people whose Group identity is rooted in defiance of Adult Establishment values, plus also composed partly of minorities whose primary dialects are different from SWE---requires the teacher to come up with overt, honest, and compelling arguments for why SWE is a dialect worth learning.<p>These arguments are hard to make. Hard not intellectually but emotionally, politically. Because they are baldly elitist.[^60] The real truth, of course, is that SWE is the dialect of the American elite. That it was invented, codified, and promulgated by Privileged WASP Males and is perpetuated as &quot;Standard&quot; by same. That it is the shibboleth of the Establishment, and that it is an instrument of political power and class division and racial discrimination and all manner of social inequity. These are shall we say rather delicate subjects to bring up in an English class, especially in the service of a pro-SWE argument, and extra-especially if you yourself are both a Privileged WASP Male and the teacher and thus pretty much a walking symbol of the Adult Establishment. This reviewer&#x27;s opinion, though, is that both students and SWE are way better served if the teacher makes his premises explicit and his argument overt---plus it obviously helps his rhetorical credibility if the teacher presents himself as an advocate of SWE&#x27;s utility rather than as some sort of prophet of its innate superiority.<p>Because the argument for SWE is both most delicate and (I believe) most important with respect to students of color, here is a condensed version of the spiel I&#x27;ve given in private conferences[^61] with certain black students who were (a) bright and inquisitive as hell and (b) deficient in what US higher education considers written English facility:<p>&quot;I don&#x27;t know whether anybody&#x27;s told you this or not, but when you&#x27;re in a college English class you&#x27;re basically studying a foreign dialect. This dialect is called Standard Written English. [Brief overview of major US dialects a la page 98.] From talking with you and reading your first couple essays, I&#x27;ve concluded that your own primary dialect is [one of three variants of SBE common to our region]. Now, let me spell something out in my official teacher-voice: the SBE you&#x27;re fluent in is different from SWE in all kinds of important ways. Some of these differences are grammatical---for example, double negatives are OK in Standard Black English but not in SWE, and SBE and SWE conjugate certain verbs in totally different ways. 
Other differences have more to do with style---for instance, Standard Written English tends to use a lot more subordinate clauses in the early parts of sentences, and it sets off most of these early subordinates with commas, and under SWE rules, writing that doesn&#x27;t do this tends to look &quot;choppy.&quot; There are tons of differences like that. How much of this stuff do you already know? [STANDARD RESPONSE = some variation on &quot;I know from the grades and comments on my papers that the English profs here don&#x27;t think I&#x27;m a good writer.&quot;] Well, I&#x27;ve got good news and bad news. There are some otherwise smart English profs who aren&#x27;t very aware that there are real dialects of English other than SWE, so when they&#x27;re marking up your papers they&#x27;ll put, like, &quot;Incorrect conjugation&quot; or &quot;Comma needed&quot; instead of &quot;SWE conjugates this verb differently&quot; or &quot;SWE calls for a comma here.&quot;That&#x27;s the good news---it&#x27;s not that you&#x27;re a bad writer, it&#x27;s that you haven&#x27;t learned the special rules of the dialect they want you to write in. Maybe that&#x27;s not such good news, that they&#x27;ve been grading you down for mistakes in a foreign language you didn&#x27;t even know was a foreign language. That they won&#x27;t let you write in SBE. Maybe it seems unfair. If it does, you&#x27;re probably not going to like this other news: I&#x27;m not going to let you write in SBE either. In my class, you have to learn and write in SWE. If you want to study your own primary dialect and its rules and history and how it&#x27;s different from SWE, fine---there are some great books by scholars of Black English, and I&#x27;ll help you find some and talk about them with you if you want. But that will be outside class. In class---in my English class---you will have to master and write in Standard Written English, which we might just as well call &quot;Standard White English&quot; because it was developed by white people and is used by white people, especially educated, powerful white people. [RESPONSES at this point vary too widely to standardize.] I&#x27;m respecting you enough here to give you what I believe is the straight truth. In this country, SWE is perceived as the dialect of education and intelligence and power and prestige, and anybody of any race, ethnicity, religion, or gender who wants to succeed in American culture has got to be able to use SWE. This is just How It Is. You can be glad about it or sad about it or deeply pissed off. You can believe it&#x27;s racist and unfair and decide right here and now to spend every waking minute of your adult life arguing against it, and maybe you should, but I&#x27;ll tell you something---if you ever want those arguments to get listened to and taken seriously, you&#x27;re going to have to communicate them in SWE, because SWE is the dialect our nation uses to talk to itself. African-Americans who&#x27;ve become successful and important in US culture know this; that&#x27;s why King&#x27;s and X&#x27;s and Jackson&#x27;s speeches are in SWE, and why Morrison&#x27;s and Angelou&#x27;s and Baldwin&#x27;s and Wideman&#x27;s and Gates&#x27;s and West&#x27;s books are full of totally ass-kicking SWE, and why black judges and politicians and journalists and doctors and teachers communicate professionally in SWE. 
Some of these people grew up in homes and communities where SWE was the native dialect, and these black people had it much easier in school, but the ones who didn&#x27;t grow up with SWE realized at some point that they had to learn it and become able to write fluently in it, and so they did. And [STUDENT&#x27;S NAME], you&#x27;re going to learn to use it, too, because I am going to make you.&quot;<p>I should note here that a couple of the students I&#x27;ve said this stuff to were offended---one lodged an Official Complaint---and that I have had more than one colleague profess to find my spiel &quot;racially insensitive.&quot; Perhaps you do, too. This reviewer&#x27;s own humble opinion is that some of the cultural and political realities of American life are themselves racially insensitive and elitist and offensive and unfair, and that pussyfooting around these realities with euphemistic doublespeak is not only hypocritical but toxic to the project of ever really changing them.</i>
TerrariaClone – An incomprehensible hellscape of spaghetti code
<p><pre><code>if (left) {
    if (right) {
        if (up) {
            if (down) {
                blockds[y][x] = 0;
            } else {
                if (upleft) {
                    if (upright) {
                        blockds[y][x] = 1;
                    } else {
                        blockds[y][x] = 2;
                    }
                } else {
                    if (upright) {
                        blockds[y][x] = 3;
                    } else {
                        blockds[y][x] = 4;
                    }
                }
            }
        } else {
            if (down) {
                if (downright) {
                    if (downleft) {
                        blockds[y][x] = 5;
                    } else {
                        blockds[y][x] = 6;
                    }
                } else {
                    if (downleft) {
                        blockds[y][x] = 7;
                    } else {
                        blockds[y][x] = 8;
                    }
                }
            } else {
                blockds[y][x] = 9;
            }
        }
    } else {
        if (up) {
            if (down) {
                if (downleft) {
                    if (upleft) {
                        blockds[y][x] = 10;
                    } else {
                        blockds[y][x] = 11;
                    }
                } else {
                    if (upleft) {
                        blockds[y][x] = 12;
                    } else {
                        blockds[y][x] = 13;
                    }
                }
            } else {
                if (upleft) {
                    blockds[y][x] = 14;
                } else {
                    blockds[y][x] = 15;
                }
            }
        } else {
            if (down) {
                if (downleft) {
                    blockds[y][x] = 16;
                } else {
                    blockds[y][x] = 17;
                }
            } else {
                blockds[y][x] = 18;
            }
        }
    }
} else {
    if (right) {
        if (up) {
            if (down) {
                if (upright) {
                    if (downright) {
                        blockds[y][x] = 19;
                    } else {
                        blockds[y][x] = 20;
                    }
                } else {
                    if (downright) {
                        blockds[y][x] = 21;
                    } else {
                        blockds[y][x] = 22;
                    }
                }
            } else {
                if (upright) {
                    blockds[y][x] = 23;
                } else {
                    blockds[y][x] = 24;
                }
            }
        } else {
            if (down) {
                if (downright) {
                    blockds[y][x] = 25;
                } else {
                    blockds[y][x] = 26;
                }
            } else {
                blockds[y][x] = 27;
            }
        }
    } else {
        if (up) {
            if (down) {
                blockds[y][x] = 28;
            } else {
                blockds[y][x] = 29;
            }
        } else {
            if (down) {
                blockds[y][x] = 30;
            } else {
                blockds[y][x] = 31;
            }
        }
    }
}</code></pre>
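For contrast, here is a sketch of the usual data-driven fix (Python for brevity; classify and TILE are invented names): transliterate the tree once, precompute all 256 neighbour combinations, and the per-cell work collapses to a single table lookup.<p><pre><code>from itertools import product

def classify(l, r, u, d, ul, ur, dl, dr):
    # Direct transliteration of the decision tree above.
    if l and r:
        if u and d: return 0
        if u: return (1 if ur else 2) if ul else (3 if ur else 4)
        if d: return (5 if dl else 6) if dr else (7 if dl else 8)
        return 9
    if l:
        if u and d: return (10 if ul else 11) if dl else (12 if ul else 13)
        if u: return 14 if ul else 15
        if d: return 16 if dl else 17
        return 18
    if r:
        if u and d: return (19 if dr else 20) if ur else (21 if dr else 22)
        if u: return 23 if ur else 24
        if d: return 25 if dr else 26
        return 27
    if u: return 28 if d else 29
    return 30 if d else 31

# Build a 256-entry table once at startup...
TILE = {flags: classify(*flags) for flags in product((False, True), repeat=8)}

# ...so the hot path per cell becomes a single lookup:
# blockds[y][x] = TILE[(left, right, up, down, upleft, upright, downleft, downright)]</code></pre>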
Future of Internet Marketing
With the power of progressive technologies like AR, Artificial Intelligence, Machine Learning, IoT, and of course the unconquerable amount of data, marketers can do anything from immersive targeting to desirable selling.<p>It was an amazing shift for me, from only writing and publishing content to working on marketing it online. Though it seemed like a radical turnaround, I pretty much managed to scrupulously lay the path.<p>#BoringHardshipStory<p>Let&#x27;s agree on the fact: back in the day, writers were expected to ship loads of content out, only for it to be left unread.<p>Well, none of us knew that writers had the power to write as well as convert! Did we?<p>It came as a shocker when, with no fiscal investment (we didn&#x27;t have AdWords or any other fancy marketing tool back then), with shaky sales calls made and dull websites up, businesses were able to convince and convert their clients!<p>How? Content led the conversion process. I still wonder whether content was crafted like poetry in those slightly primitive times?!<p>Breaking the endless enigma: What is marketing? Where you promote your service&#x2F;product. What are sales? Where you do the selling for money.<p>We aren&#x27;t living in a barter economy. Yes, we don&#x27;t swap stuff and make a fortune out of it.<p>Peek Into The Future Of Online Marketing:<p>Contextual marketing to immersive marketing. Online marketing is on the verge of imploding and giving rise to an entirely different business model. With the number of emerging technologies, online marketing is making a quick shift from contextual marketing to immersive marketing (yeah).<p>For instance, the inclusion of social media in our interaction stream has changed the way we stay connected with each other in daily life. Plus the fact that it can all be accessed through just the phone. And as the whole world has come online, it seems there is nothing like the digital medium: through it, our life shall change as we wish!<p>Take a look at the new, innovative technologies we will soon be exposed to. They certainly are going to take the burden off our shoulders by revolutionizing the way online marketing works right now.<p>Introducing… AR (Augmented Reality)<p>Augmented Reality within e-commerce: As huge as it may sound, Augmented Reality is starting to change how marketers use gamification for their products.<p>Remember how Pokemon Go roped in AR tech and had us caught in its fad rage?<p>It pretty much gave us an idea about the future of human-software interaction. Meaning, the point where the physical and digital environments will meet each other for the better. And some of the recent launches from futuristic companies like Google and Apple — Google&#x27;s ARCore for Android, and Apple&#x27;s iPhone 8&#x2F;X specially purposed to support AR — give us a clear picture.<p>What will the intrusion of AR within marketing look like?<p>Remember how, back in 2010, apps were battling for user attention? With AR&#x27;s ability to make things jazzy by just focusing the rear lens onto objects, the app war, or WOP (War Of Products), will resume.<p>Within Marketing…<p>Immersive Experience Before Purchase<p>Check out Modiface. Modiface is a no-brainer (self-explanatory by its name). 
It lets a user try out beauty products in its AR application before making any actual purchase.<p>VR (Virtual Reality) and Online Marketing<p>Virtual Reality gear offers immersive visualisation.<p>Virtual Reality and Internet Marketing shall do great together, enhancing visual&#x2F;video marketing and, in particular, leveling up the delivery of video. Though VR technology requires a slightly heavy budget as of now, its benefits for businesses and users alike are enormous. Facebook has put out several tutorials on VR technology while pushing the Oculus Rift. And with the emergence of the revolutionary VR world, as users come to expect a different product experience, businesses must offer unique experiences to users even before they ask. Marketers would be well off opting in early, before users get overly exposed to it.<p>For example, take the most immersive visual experience from Jaguar, made for the recent Wimbledon Championship. In it, you fly above a maze-like CG reconstruction of the Wimbledon site and get dropped into the centre of the court at the match&#x27;s high point.<p>Who did the voice-over? Andy Murray!<p>A VR-driven campaign by Jaguar featuring Andy Murray killing it at the recent Wimbledon match. High Point: The experience ends as you enter Andy&#x27;s body while slamming that damn match point!<p>CAUTION: Never overdo technology and manners. People hate it.<p>Wearable Technology<p>Wearable technology will help improve our health to a great extent.<p>With the multitude of wearables we use currently, they are either making our daily lives convenient or pushing businesses towards the edge of death-by-data! Well, the power of IoT is huge and intricate. The rise in the usage of wearables such as the Apple Watch will prominently help businesses serve their creative adverts to the target audience on time, based on the wearer&#x27;s earlier shopping experiences&#x2F;online behaviour.<p>Check: The customized adverts have to be more personalized and creative to reap the complete benefit from wearers. Fact: The average person checks his smartwatch 85% more often than he checks his smartphone.<p>Seems like there are way more opportunities for marketers and businesses to get chosen by the multitude of users.
We fired our top talent. Best decision we ever made
There is a whole lot of anger directed at the company and management regarding this post, and while I agree that it is deserved, seeing the developer in this situation as not &quot;at fault&quot; is also the wrong way of looking at this. My apologies in advance for this rather long rant -- I&#x27;m the first to admit that this post, and many of the responses, frustrated me a bit. You can skip to the final paragraph if you just want to understand the source of my frustration.<p>Management failed, yes. They should have let him go a <i>long</i> time ago, and it&#x27;s at least a little compassionate that they put up with this issue for two years. But, honestly, calls for more direct management involvement always concern me[0]. What really needed to happen is that &quot;Rick&quot; needed to better manage himself. The &quot;red flag&quot; of &quot;I build everything myself and don&#x27;t rely on other code&quot; scares the hell out of me and implies that management was not terribly familiar with developing software.<p>Use something proven, documented and solid so you don&#x27;t have to re-invent the wheel. I don&#x27;t buy that this guy was a &quot;Genius Developer&quot;. I won&#x27;t explain all of the obviously good reasons for using good, proven, third-party options except to say that a large benefit in this case is that others on the team might <i>already</i> be familiar with them and can help out more easily.<p>And then there&#x27;s the lack of documentation. I <i>know</i> it&#x27;s a common thing. I often feel like it&#x27;s a battle I&#x27;m <i>personally</i> fighting all the time with other devs at <i>every</i> level. I mean, is there any language out there these days that doesn&#x27;t have a doc-comment sort of mechanism that will be parsed by popular IDEs to make <i>using</i> the things you&#x27;ve written easier?[1] It&#x27;s conceivable that &quot;Rick&quot; fell into the two-year backlog by spending most of his day trying to make sense out of the existing code-base. Maybe he wrote <i>fast algorithms</i> or had an incredible knowledge depth&#x2F;scope in the given languages, but that&#x27;s intelligence masquerading as genius.<p>I also take issue with the idea that this guy went from Dr. Jekyll to Mr. Hyde. From reading this, it sounds like he was always a bit of Mr. Hyde -- he was just delivering when the software was simpler to manage on his own. Yes, there are those corner cases where you get a developer whose talent is such that his ability to work with others becomes less important -- those times exist only when there is a corner case that they fit within -- one where they can have a positive impact without creating a large crater in whatever it is they&#x27;re in charge of. When I&#x27;ve interviewed candidates, though, those are not the people I have ever recommended. I can deal with a non-genius much more easily than I can deal with someone who is untruthful[2] or incapable of getting along with others.<p>It sounds like the company could have had a more active mentoring program between senior and mid&#x2F;junior level developers. Even recognizing that this individual was a senior developer, getting him involved in <i>mentoring</i> those below his skill level could have surfaced some of these issues earlier. Yeah, I get that some developers do not have the personality types to mentor well[3]. But the only issue I take with them firing the individual versus attempting to address the problem after &quot;cooler heads prevailed&quot; is that they waited too long for either. 
And before I get accused of being a heartless animal, I speak from personal experience here. I lost my job in January of this year. It <i>felt</i> like the worst thing imaginable for a little bit. It wasn&#x27;t fun. But I <i>am</i> better off, now. I found new employment (and quickly -- one perk for &quot;Rick&quot; is that hiring is out of control right now) and I ended up at a place I would have <i>never</i> found, otherwise. I&#x27;m actually <i>doing</i> things that I&#x27;ve always <i>wanted</i> to do, and while I <i>loved</i> my last job, I love this one more. Yes, in my case, I wasn&#x27;t &quot;fired&quot; -- my rather unique position was eliminated because the company changed directions -- but that&#x27;s not provable in an interview, so I was on the same footing as Rick ... mostly.<p>It&#x27;s <i>hard to not see it as personal</i>, but at the same time, <i>people</i>, especially in our industry, are the <i>biggest cost</i> a company often has. Keeping a toxic employee too long can <i>literally</i> be the difference between the success and failure of the entire organization, resulting in the loss of <i>everyone&#x27;s</i> job and financial devastation for the founders. It&#x27;s not a company&#x27;s obligation to detox toxic people, but one way that can happen is by being fired and discovering that your assessment of your abilities and contributions was <i>woefully wrong</i>, leading to great personal change. I hope the best for &quot;Rick&quot; as I would for any human being suffering loss, but I&#x27;m hoping he&#x27;s figured out, by now, what he&#x27;s done that has contributed to his failure and is &quot;relaunching&quot; as a better version of himself. That said, by writing this post, they&#x27;ve made that a <i>lot</i> harder for &quot;Rick&quot;.<p>And that&#x27;s my final point -- I&#x27;m a bit <i>disgusted</i> with the post as a whole. While they try to provide &quot;Rick&quot; anonymity, &quot;Rick&quot; and likely &quot;Rick&quot;&#x27;s friends and family will know he&#x27;s been written about. At least in the short term, when &quot;Rick&quot; applies for a job and indicates that this organization was his past employer and that he was let go, they&#x27;re going to hit up Google, find this post, and likely conclude that &quot;Rick&quot; is &quot;The Rick&quot;, causing him to be avoided[4]. Perhaps I&#x27;m being overly dramatic, but I find the whole thing to be unprofessional enough that I have zero interest in discovering just what it is that this organization does, because I&#x27;m not inspired to be a customer or a future employee.<p>Worse, it&#x27;s conceivable that they&#x27;ll have future lay-offs that fall into the category of &quot;reorganization&quot; or &quot;Not The Employee&#x27;s Fault(tm)&quot; and now all of <i>those</i> individuals might be mistaken for being the &quot;Rick&quot; (if they&#x27;re guys, anyway). And like many of you other fine folks have pointed out, the post makes it smell like this company has (potentially) <i>serious</i> management&#x2F;team issues -- the fact that they&#x27;re celebrating firing someone by calling it the &quot;Best decision we ever made.&quot; They &quot;purchased&quot; something at about the cost of a nice 4-bedroom McMansion where I live, lived in it for a year or two, then just abandoned the property for two years and handed the land back to the city, who charged them to bulldoze it (the latter being an analogy to the unemployment penalty they&#x27;re now paying). That&#x27;s the &quot;best decision ever made&quot;? 
What do all of those <i>other</i> decisions look like[5]?<p>Having to fire someone is a <i>huge</i> failure for a company -- it started when you &quot;picked wrong&quot; and hired this person over all of the other choices; you then wasted a large amount of your own&#x2F;investors&#x27; money, you&#x27;ve wasted your&#x2F;your employees&#x27; time and energy, you&#x27;ve added toxicity to the organization and you&#x27;ve hurt the trust of your existing employees who do not know the full picture. Letting that go on for over two years just amplifies how <i>massive</i> a failure your organization just suffered. Don&#x27;t celebrate it. That&#x27;s about as tasteful as dancing on a grave.<p>[0] This often starts with &quot;We need a Project Manager&quot;. When I hear those words, I can feel the productivity being sucked out of the room, and with rare exception (really, at every place other than where I am currently employed), that&#x27;s been the case.<p>[1] I&#x27;m nuts about this, personally. I write a <i>lot</i> of code and I&#x27;m sure there are many among us who fall into the &quot;if I wrote it 6 months ago, it might as well have been written by someone else&quot; camp. I live or <i>die</i> by whatever I put into those doc-comments, so minimally they&#x27;re present for <i>everything</i> that is part of the API -- if I&#x27;ve written a unit test for it, it&#x27;s got a doc comment.<p>[2] &quot;Untruthful&quot; isn&#x27;t a pleasant way of saying &quot;liar&quot; -- sure, someone who is knowingly dishonest, such as not taking ownership of a mistake that was made or intentionally misrepresenting the truth, is a non-starter, but untruth also falls into the category of making promises you can&#x27;t possibly keep. I&#x27;d much rather hear &quot;I am uncertain&#x2F;haven&#x27;t researched X, Y, and Z, but assuming that those only present minor problems, I can have the project done by next Tuesday&quot; over &quot;It&#x27;ll take a week&quot; and having to hear about X, Y, and Z next Tuesday. Everyone slips and, of course, I&#x27;ve made that mistake, but there are those who you simply just assume will let the deadline slip despite what they say, and that&#x27;s grief I can&#x27;t stand.<p>[3] And sadly, for much the same reasons as [2], those are developers I shy away from hiring. We work in a field that is a trade-craft and one of the most <i>complicated</i> trade crafts around. And while an ability to mentor as a Junior Developer isn&#x27;t a requirement, as a Senior&#x2F;Principal it&#x27;s a core requirement, IMO.<p>[4] Some will say &quot;Good, saved that company a lousy hire!&quot; but that ignores two things: People who are bad at one place aren&#x27;t necessarily bad everywhere -- the whole &quot;not a good fit&quot; thing is <i>real</i>, rather often. Sometimes you aren&#x27;t at the right place. In addition, when people lose jobs, they often re-evaluate themselves and make life changes. &quot;Rick&quot; might not be &quot;The Rick&quot; anymore, but he now can&#x27;t escape his past as easily.<p>[5] Yeah, I know, I&#x27;m being a dick about a click-bait-y headline; I know they obviously don&#x27;t mean that, but the flippant nature with which this was presented left a really bad taste in my mouth.
Here Are Twitter's Latest Rules for Fighting Hate and Abuse
Techmeme summary: <i>Erin Griffith &#x2F; Wired: Internal email details Twitter&#x27;s plans to update rules on abuse, including expanding reporting options, hiding hate symbols behind warnings, more</i><p>The email in full from the article:<p><pre><code>Dear Trust &amp; Safety Council members,

I’d like to follow up on Jack’s Friday night Tweetstorm about upcoming policy and enforcement changes. Some of these have already been discussed with you via previous conversations about the Twitter Rules update. Others are the result of internal conversations that we had throughout last week. Here’s some more information about the policies Jack mentioned as well as a few other updates that we’ll be rolling out in the weeks ahead.

Non-consensual nudity

Current approach
*We treat people who are the original, malicious posters of non-consensual nudity the same as we do people who may unknowingly Tweet the content. In both instances, people are required to delete the Tweet(s) in question and are temporarily locked out of their accounts. They are permanently suspended if they post non-consensual nudity again.

Updated approach
*We will immediately and permanently suspend any account we identify as the original poster&#x2F;source of non-consensual nudity and&#x2F;or if a user makes it clear they are intentionally posting said content to harass their target. We will do a full account review whenever we receive a Tweet-level report about non-consensual nudity. If the account appears to be dedicated to posting non-consensual nudity then we will suspend the entire account immediately.
*Our definition of “non-consensual nudity” is expanding to more broadly include content like upskirt imagery, “creep shots,” and hidden camera content. Given that people appearing in this content often do not know the material exists, we will not require a report from a target in order to remove it.
*While we recognize there’s an entire genre of pornography dedicated to this type of content, it’s nearly impossible for us to distinguish when this content may&#x2F;may not have been produced and distributed consensually. We would rather error on the side of protecting victims and removing this type of content when we become aware of it.

Unwanted sexual advances

Current approach
*Pornographic content is generally permitted on Twitter, and it’s challenging to know whether or not sexually charged conversations and&#x2F;or the exchange of sexual media may be wanted. To help infer whether or not a conversation is consensual, we currently rely on and take enforcement action only if&#x2F;when we receive a report from a participant in the conversation.

Updated approach
*We are going to update the Twitter Rules to make it clear that this type of behavior is unacceptable. We will continue taking enforcement action when we receive a report from someone directly involved in the conversation. Once our improvements to bystander reporting go live, we will also leverage past interaction signals (eg things like block, mute, etc) to help determine whether something may be unwanted and action the content accordingly.

Hate symbols and imagery (new)
*We are still defining the exact scope of what will be covered by this policy. At a high level, hateful imagery, hate symbols, etc will now be considered sensitive media (similar to how we handle and enforce adult content and graphic violence). More details to come.

Violent groups (new)
*We are still defining the exact scope of what will be covered by this policy. 
At a high level, we will take enforcement action against organizations that use&#x2F;have historically used violence as a means to advance their cause. More details to come here as well (including insight into the factors we will consider to identify such groups).

Tweets that glorify violence (new)
*We already take enforcement action against direct violent threats (“I’m going to kill you”), vague violent threats (“Someone should kill you”) and wishes&#x2F;hopes of serious physical harm, death, or disease (“I hope someone kills you”). Moving forward, we will also take action against content that glorifies (“Praise be to for shooting up. He’s a hero!”) and&#x2F;or condones (“Murdering makes sense. That way they won’t be a drain on social services”). More details to come.

We realize that a more aggressive policy and enforcement approach will result in the removal of more content from our service. We are comfortable making this decision, assuming that we will only be removing abusive content that violates our Rules. To help ensure this is the case, our product and operational teams will be investing heavily in improving our appeals process and turnaround times for their reviews.

In addition to launching new policies, updating enforcement processes and improving our appeals process, we have to do a better job explaining our policies and setting expectations for acceptable behavior on our service. In the coming weeks, we will be:
updating the Twitter Rules as we previously discussed (+ adding in these new policies)
updating the Twitter media policy to explain what we consider to be adult content, graphic violence, and hate symbols.
launching a standalone Help Center page to explain the factors we consider when making enforcement decisions and describe our range of enforcement options
launching new policy-specific Help Center pages to describe each policy in greater detail, provide examples of what crosses the line, and set expectations for enforcement consequences
Updating outbound language to people who violate our policies (what we say when accounts are locked, suspended, appealed, etc).

We have a lot of work ahead of us and will definitely be turning to you all for guidance in the weeks ahead. We will do our best to keep you looped in on our progress.

All the best,
Head of Safety Policy</code></pre>
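The &quot;past interaction signals&quot; bit is the most mechanical part of the email, so here is a guess at what such a check might look like -- purely illustrative Python, assuming nothing about Twitter&#x27;s actual systems:<p><pre><code>from dataclasses import dataclass, field

@dataclass
class Interactions:
    # Signals the email names; a real system would have many more.
    blocked: set = field(default_factory=set)
    muted: set = field(default_factory=set)
    follows: set = field(default_factory=set)

def advance_looks_unwanted(recipient, sender_id, bystander_report):
    # Treat an advance as unwanted if the recipient has negative history
    # with the sender, or a bystander reported it and there is no
    # positive (consent-like) signal such as a follow to offset it.
    if sender_id in recipient.blocked or sender_id in recipient.muted:
        return True
    if bystander_report and sender_id not in recipient.follows:
        return True
    return False</code></pre>
The interesting design point is that the heuristic only kicks in on a bystander report; a report from a participant still triggers action directly, per the email.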
Ask HN: Have you relocated from the Bay Area to another tech hub?
I was born in Mountain View - my dad was an engineer in the semiconductor biz starting in the late 60s. I&#x27;ve been coding since 5th grade, and after bailing out on college I worked at a couple of valley startups in the mid to late 90s (one of which changed the world), and then I jumped to Denver, CO.<p>Back then, Denver was just around 400K people and had a very laid back vibe. Its downtown was underwhelming, mostly parking lots and old skyscrapers - the Rockies park and the Pepsi Center were new, with some sports bars around them. The food scene was a minefield - you really had to seek out decent places. Some of my worst meals were at places in Denver back then (the Mexican food was a huge letdown compared to CA). The Denver software scene was a lot of corporate IT work, DoD stuff down south (multiple military&#x2F;defense sites, and there are huge downlinks in the area), and a tiny dose of startups. Housing was relatively cheap in the city center neighborhoods, which Californians were swooping in and buying, much to the surprise of the natives (&quot;Really? That area used to be full of drug dealers and rotting old houses!&quot;). There was a ton of new housing being built out on the prairie. Now Denver feels pretty different - faster paced, a lot more people around, and more awareness of things outside Colorado. To me, Denver&#x27;s downtown and city center neighborhoods are shockingly different. The city developed a huge area behind its downtown train depot into a brand new neighborhood with tall buildings. And I think they&#x27;re tying that into the downtown and surrounding neighborhoods pretty well. Close-in neighborhoods like Highlands, Baker, Five Points, Ballpark, River North, etc. are all very hip and relatively expensive. The food, music, and art scene has gotten <i>way</i> better. A buddy of mine in the new home construction biz says most of the housing demand is in the closer-in developed areas, and that brand new housing developments way out on the metro area limits have OK demand, but nothing crushing. Homes in my neighborhood sell before hitting the MLS, with multiple offers that usually go over asking...<p>My first Denver gig was a valley backed startup with the eng group in Denver and the marketing&#x2F;biz dev&#x2F;sales in Mountain View. It was an amazing team and we grew to a decent valuation, but didn&#x27;t exit in time for the dot com crash. After the dot com crash, the tech scene in Denver was bleak. A lot of folks left town, left software, or jumped onto DoD stuff. There just wasn&#x27;t much VC money or many established local software companies. I bounced around at a few places that kept going under and managed to hook on with a small software company downtown. By 2005-ish things had picked up in the Denver tech scene - money was coming in, startups were growing and exiting pretty well. At one point I was at a startup that was just about to go public, S-1 filed and roadshow done, but Lehman Brothers bit the big one and the US economy tanked. However, by that point, the Denver tech scene was strong enough that even with the economy in the toilet, finding good&#x2F;fun&#x2F;well paying software jobs was easy. And finding, much less hiring, good engineers was pretty tough.<p>Then Colorado made weed legal and holy shit did this place blow up. The number of people moving here is crazy. The amount of construction in the downtown area is nuts - cranes all over the place. 
Since I&#x27;ve been here, the Denver area has always been one of the fastest growing in the country, with lots of people moving in, so lots of things change quickly.<p>The current Denver tech scene seems pretty strong to me. There are a ton of fun&#x2F;cool startups all over the place. The city has bet big on software - encouraging tech with partnerships, working to provide workspaces&#x2F;housing, parks, public transportation, etc. There are lots of meetups and, I think, a strong sense of a Denver tech community. Denver definitely has a competitive thing going on with Boulder (about 45 minutes from Denver) - Boulder has the marquee university and highly visible R&amp;D centers for big names like Google, Twitter, etc. Denver is desperate for a big name like that, and every once in a while a rumor flies around that a big name is going to start an eng facility in Denver. Last year it was Facebook looking at property in RiNo; now it&#x27;s Apple looking along the 16th Street Mall. Boulder picked up a big name for itself in the startup world because of Tech Stars and Brad Feld, which Denver doesn&#x27;t seem to have just yet. Personally, I prefer Denver over Boulder because Denver is a big old messy, ugly city with lots of crazy stuff that&#x27;s good and bad, along with scars from its boom and bust history (some of the architecture and city planning from the &#x27;bust&#x27; periods are painful) - but it has all that and is trying to work with it to make things better. Boulder is stunningly beautiful and has quick access to unreal outdoor space, but to me feels too uniform of thought - too small and claustrophobic (I lived in Santa Cruz for a bit and hated it - pretty, but everyone&#x27;s the same).<p>So, Denver&#x27;s tech scene is great, but it&#x27;s nothing compared to the SF Bay Area. Folks don&#x27;t move here to change the world or make millions - people are generally here to put in the time at their job and enjoy life. I had a buddy who pitched a16z, who laughed him out of the room because &quot;they wouldn&#x27;t fund those lazy ski bums in Denver - move to Palo Alto and we&#x27;ll talk.&quot; It&#x27;s kind of a true stereotype - I interviewed at a small Denver software company a few years ago whose CEO told me &quot;Priorities here are: health, family, and then the job.&quot; It&#x27;s generally very family friendly; employers are cool about schedule changes for kids&#x27; stuff or taking care of sick kids, etc. Wintertime sees a lot of last minute &quot;Ski Day&quot; emails come in. But after growing up in the SF Bay Area, I didn&#x27;t want to raise kids there (where $ wins over family), so we moved to Denver to start a family and it&#x27;s been awesome - I wouldn&#x27;t change a thing.
Google's latest iPhone rival off to a rocky start
All the laptops and desktops, and their chipsets, sport 3.5mm jacks with SPDIF optical plus mic and headset support. Dedicated soundcards like the ASUS Xonar and Creative SoundBlaster X RGB ship with gold caps and high grade chips with proper 7.1 Dolby and Surround certification plus THX Audio, and even MSI uses a Nahimic DSP plus ESS Sabre chips in its high end gaming GT series. Even the latest trashbook pros with USB-C have a jack, which is meh, with Apple&#x27;s Cirrus Logic parts used across all their devices. My Samsung Galaxy S, with its Wolfson chip plus a Linux kernel level driver (the Voodoo mod) offering huge control options to the user, trumps most of these devices. So does my iPod 5.5G, with an even better Wolfson chip.<p>Audio is always analog in the end. Digital lives in your computer or is streamed as data, but audio is only audible as analog. Every phone therefore has a DAC chip: whatever digital audio the OS hands it, the DAC converts so that you can listen to it.<p>Going all-digital makes things complicated. For example, in every dongle you have to wire a DAC to the USB circuitry, the chipset and the SoC to make sure the audio dongle works. The 3.5mm is still standard across passenger jets, fighter jets, professional audio, casual users and the audiophile community. Why? Because it just works: the mechanical strength of the 3.5mm port is robust and it offers 360 degrees of freedom, while USB-C using the digital line needs extra circuitry and breaks the standard. Ultimately the 3.5mm port is the de facto standard deployed on ALL systems across the world.<p>Apple did it for 3 reasons.<p>1) Apple has two different lines of wireless devices that sell for premium prices: Beats and AirPods (which sound like literal trash -- the W1 chip merely bridges the steps one takes to pair them, and the accelerometer + gyro aren&#x27;t groundbreaking for the purpose; of course one won&#x27;t buy them for audio but for convenience, and they still work with the iPhones via the 3.5mm adapter).<p>2) Each and every Lightning-to-3.5mm adapter needs MFi certification, which gives them a continuous stream of revenue on all audio gear, including from people who use old 3.5mm headsets, plus the dongle money. Apple went from tech company to dongle company; all the innovation and brave policy of Apple is gone with Steve. Tim is nothing but Ballmer at M$, who ruined both Nokia and M$.<p>3) Strong DRM control. Closing off analog gives them control over the I&#x2F;O. A thoroughly made, 100% business focused decision.<p>3a) Please don&#x27;t bring the &quot;wasted space&quot; B$ here; see the Note 8 PCB design, and as a hint, check the MXM GPUs on the MSI GT83VR and Titan and the Clevo P870DM3, which sport a socketed 1080 GPU with the power to beat the desktop chips. 
PCB design has to adhere to OEM standards; don&#x27;t spout what the Kool-Aid corporate Orwellians demand, showing off that Taptic engine and the missing space...<p>Apple&#x27;s current CEO is a beancounter and a disease to innovation, as they keep raising the walls of their utopia where ignorance is bliss.<p>Next up, thinking about USB-C audio: that standard barely materialized and lacks proper standardization. The Pixel, for example, doesn&#x27;t have video out, while the iPhone&#x27;s old 30-pin connector had a dedicated audio line. The new 3.5mm adapter has only a DAC, while the amp works from the phone. For Apple it&#x27;s easy, but for the people it&#x27;s complicated at multiple stages.<p>a - No guaranteed audio quality. The DAC chips used in the dongles are undocumented -- never mind the phones offering a dedicated DAC&#x2F;amp: the Axon 7&#x27;s HiFi DAC, a unique one that works on LineageOS too; the HTC 10; Vivo, famous for their HiFi. Cheap knockoffs will sound worse. And if you add a DAC inside a headphone, that ruins the purchase for many, as not everyone will like that sound signature, whereas analog 3.5mm ones can have clean audio tuned for that IEM&#x2F;headset.<p>b - Dongles. More wasted space inside your pocket and no standard; perhaps the iPhone ecosystem counts as one, though it can&#x27;t match the 3.5mm spectrum. On USB-C it&#x27;s worse: there is no amp or analog line at all, it&#x27;s 100% digital, so you MUST ship a dongle with a proper circuit and there is no cross compatibility (try using the U11 dongle on the HTC 10; btw the 10 beats the U11 in audio, since the U11&#x27;s amp is not powerful enough vs the 10).<p>c - BT and wireless? They have existed since Nokia and work the same. Perhaps LDAC and aptX are new, but they need licensing, and wireless works the same even on phones with a 3.5mm jack. Waterproofing also exists just the same: the Note 8 is a perfect answer -- it has a full silo for the S-Pen plus an always-on home button like Apple&#x27;s Taptic gimmick, retains IP68, has multiple biometric security options, and keeps the 3.5mm port working under water, with the S-Pen removable too. The LG V30 does it, and the V20 did it with HiFi audio. AND the batteries go dead in these wireless sets; they add more drain and eventually go in the garbage along with your planned-obsolescence device, whereas normal 3.5mm ones last until they die.<p>d - High quality audio already exists. HiFi gear is widely available without any complications, and in the end everyone uses it. Wear and tear resistance is higher on the jack vs USB-C, which often breaks, and the jack&#x27;s 360 degrees of freedom vs USB-C&#x27;s rigid fit beats this greedy **** decision of Apple.<p>So I don&#x27;t see any advantage. Unfortunately, due to the stupid CEOs who are following Apple&#x27;s blind lead, M$ has shifted to Apple-type marketing, with UWP going against .exe Win32 apps. I don&#x27;t know why all these companies are obsessed with Apple while its whole ecosystem is limited and not deployed across multiple kinds of hardware -- Windows, for example, works in government, military and critical mission control systems, and Apple can&#x27;t match that. But unfortunately people just chose form over function and made Apple that huge; look where we are. Soon the eSIM will come to the iPhones (I called it first) and push planned obsolescence further. Draconian controls. How much further will you bend? We said okay to losing dedicated video ports for wireless, to losing IR blasters; now removable batteries are also gone, and next is this?? 
Ask yourself whether to believe what they say or the real facts, which rest on liberty and choice, an essential thing for humans.

https://www.androidauthority.com/was-ditching-the-headphone-jack-a-good-idea-800101/

49% of the US wireless-headphone market share belongs to Apple. See how the money flows.

https://www.theverge.com/circuitbreaker/2016/6/21/11991302/iphone-no-headphone-jack-user-hostile-stupid

https://www.theverge.com/2017/10/5/16426754/pixel-2-headphone-jack-bluetooth-walled-garden
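To make the digital-vs-analog point above concrete, here is a minimal sketch (plain Python; the tone frequency and v_ref value are made up for illustration) of the one job every DAC ultimately performs: scaling discrete PCM samples into a continuous voltage range.

    import math

    def pcm16_to_voltage(samples, v_ref=1.0):
        """Map signed 16-bit PCM samples to analog voltage levels.

        This is the core job of any DAC: each discrete sample is scaled
        into a continuous range (here +/- v_ref volts). Real DACs differ
        in resolution, noise floor, and output stage, which is why the
        chip choice (Wolfson, ESS Sabre, Cirrus Logic) audibly matters,
        but the principle is just this scaling.
        """
        return [s / 32768.0 * v_ref for s in samples]

    # A 1 kHz sine tone sampled at 44.1 kHz, quantized to 16 bits:
    rate, freq = 44100, 1000.0
    pcm = [int(32767 * math.sin(2 * math.pi * freq * n / rate)) for n in range(8)]
    print(pcm16_to_voltage(pcm))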
Cryptography with Cellular Automata (1985) [pdf]
If you like Wolfram's state transition diagrams in figure 2, you'll love "The Global Dynamics of Cellular Automata", a gorgeous coffee table book by Andrew Wuensche and Mike Lesser.

I wrote some stuff in an earlier discussion about how the state transition diagrams in that book illustrate "Garden of Eden" configurations and "Basins of Attraction", to which I'll link and repeat here:

https://news.ycombinator.com/item?id=14468707

There's a thing called a "Garden of Eden" configuration, which has no predecessors: it is impossible to reach from any other possible state.

For a rule like Life, there are many possible configurations that must have been created by God or somebody with a bitmap editor (or somebody who thinks he's God and uses Mathematica as a bitmap editor, like Stephen Wolfram ;), because it would have been impossible for the Life rule to evolve into those states. For example, with the Life rule, no possible configuration of cells could ever evolve into all cells with the value 1.

https://en.wikipedia.org/wiki/Garden_of_Eden_(cellular_automaton)

For a rule that simply sets the cell value to zero, all configurations other than pure zeros are Garden of Eden states, and they all lead directly into a one-step attractor of all zeros, which always evolves back into itself, all zeros again and again (the shortest possible attractor loop, leading directly to itself).

There is a way of graphically visualizing that global rule state space, which gives insight into the behavior of the rule and the texture and complexity of its state space!

Wuensche and Lesser's book plots out the possible "Garden of Eden" states and the "Basins of Attraction" they lead into for many different one-dimensional cellular automata like rule 30.

http://uncomp.uwe.ac.uk/wuensche/gdca.html

The beautiful color plates begin on page 79 of the free PDF:

http://uncomp.uwe.ac.uk/wuensche/downloads/papers/global_dynamics_of_CA.pdf

I've uploaded the money shots to imgur:

http://imgur.com/gallery/s3dhz

Those are not pictures of 1-D cellular automata rule cell states on a grid themselves; they are graphs of the abstract global state space, showing merging and looping trajectories (but not branching, since the rules are deterministic). Time flows from the Garden of Eden leaf tips around the perimeter into, and then around, the basin-of-attraction loops in the center, merging like springs (GOE) into tributaries into rivers into the ocean (BOA).

The rest of the book is an atlas of all possible 1-D rules under a particular rule numbering system (like rule 30, etc.), and the last image is the legend.

He developed a technique of computing and plotting the
topology network of all possible states a CA can get into -- tips are "Garden of Eden" states that no other states can lead to, and loops are "basins of attraction".

Here is the illustration of rule 30 from page 144 (the legend explaining it is the last photo in the album above). [I am presuming it's using the same rule numbering system as Wolfram, but I'm not sure -- EDIT: I visually checked the "space time pattern from a singleton seed" thumbnail against the illustration in the article, and yes, it matches rule 30!]

http://imgur.com/a/lKAbP

"The Global Dynamics of Cellular Automata introduces a powerful new perspective for the study of discrete dynamical systems. After first looking at the unique trajectory of a system's future, an algorithm is presented that directly computes the multiple merging trajectories of the system's past. A given cellular automaton will "crystallize" state space into a set of basins of attraction that typically have a topology of trees rooted on attractor cycles. Portraits of these objects are made accessible through computer generated graphics. The "Atlas" presents a complete class of such objects, and is intended, with the accompanying software, as an aid to navigation into the vast reaches of rule behaviour space. The book will appeal to students and researchers interested in cellular automata, complex systems, computational theory, artificial life, neural networks, and aspects of genetics."

https://en.wikipedia.org/wiki/Attractor

"Basins of attraction in cellular automata", by Andrew Wuensche:

http://onlinelibrary.wiley.com/doi/10.1002/1099-0526(200007/08)5:6<19::AID-CPLX5>3.0.CO;2-J/full

"To achieve the global perspective, I devised a general method for running CA backwards in time to compute a state's predecessors with a direct reverse algorithm. So the predecessors of predecessors, and so on, can be computed, revealing the complete subtree including the "leaves," states without predecessors, the so-called "garden-of-Eden" states.

Trajectories must lead to attractors in a finite CA, so a basin of attraction is composed of merging trajectories, trees, rooted on the states making up the attractor cycle with a period of one or more.
State-space is organized by the "physics" underlying the dynamic behavior into a number of these basins of attraction, making up the basin of attraction field."

If you like the book, you'll love the code!

http://www.ddlab.com/

http://www.ddlab.com/screensave3.png

http://uncomp.uwe.ac.uk/wuensche/2006_ddlab_slides1.pdf

http://uncomp.uwe.ac.uk/wuensche/meta.html

http://uncomp.uwe.ac.uk/wuensche/boa_idea.html

http://uncomp.uwe.ac.uk/wuensche/downloads/papers/2008_dd_overview_preprint.pdf
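If you want to poke at these ideas without installing DDLab, here is a small self-contained sketch (mine, not Wuensche's: it finds Garden of Eden states by brute-force enumeration of the forward map, which only scales to tiny lattices, whereas his direct reverse algorithm is what makes real basins tractable):

    from itertools import product

    def step(cells, rule=30):
        """One synchronous update of an elementary CA on a cyclic lattice.

        Wolfram's numbering: bit k of `rule` gives the new cell value for
        the neighborhood whose three bits (left, self, right) encode k.
        """
        n = len(cells)
        return tuple((rule >> (cells[(i - 1) % n] << 2 |
                               cells[i] << 1 |
                               cells[(i + 1) % n])) & 1
                     for i in range(n))

    def garden_of_eden_states(n, rule=30):
        """Brute force: a state is Garden of Eden iff no state maps to it.

        This enumerates all 2^n states, so it only works for small n.
        """
        reachable = {step(s, rule) for s in product((0, 1), repeat=n)}
        return [s for s in product((0, 1), repeat=n) if s not in reachable]

    # Space-time pattern of rule 30 from a singleton seed, like the
    # thumbnail mentioned above:
    cells = tuple(1 if i == 15 else 0 for i in range(31))
    for _ in range(12):
        print(''.join('#' if c else '.' for c in cells))
        cells = step(cells, 30)

    print(len(garden_of_eden_states(10)), "Garden of Eden states for n=10")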
Ask HN: We have a great team and capital but can't find a good idea
Here is a pitch I wrote for an idea I am working on. I think the ML/NLP space still has a huge run in front of it, even if it now seems crowded.

There is a much longer version, of course, but I had to cut 75% of this before Hacker News would allow me to post it.

-----------------------------

SUMMARY:

For many years I've worked with startups involved in data mining, so I've gotten to know how the current crop of these firms operates. The companies I've worked with rely on Machine Learning (ML) and Natural Language Processing (NLP). All of the startups I've worked with so far, and all of the ones I've read about, fall into two categories:

1.) they got into a business where they thought they could use humans to do data aggregation, and now they are desperately trying to build ML/NLP stacks to automate the work.

2.) they were certain they didn't need humans, because their deep neural network was magically effective, but now they are having to rethink their business model because the network has been less of a magical breakthrough than they expected.

There is a great deal of redundancy in the current efforts to find the right balance of ML/NLP and humans. One company might use Spark to ingest documents about medicine; another analyzes advertising on a Hadoop cluster; another scans the web to pull in millions of articles that it runs through Storm, where it parses the data and then stores the final results in Cassandra. Each company stumbles through a painful process of figuring out where it should use humans to patch the failures of its ML/NLP techniques.

In particular, there are specific patterns of using Mechanical Turk to vet the items where ML algorithms could not reach a high level of confidence. A given item can be sent to 5 different people on Mechanical Turk, and we can accept a vote of 4 or 5 as representing high confidence. But if the vote is 3 or less, we need to escalate that item to an even higher level of review. I've seen multiple companies build similar processes, which is why I think this should be a business of its own.

It's important to be aware of the limits of startups such as Bigml.com. Most of the time, data analysis is just the starting point of a longer process that involves much interpretation on the part of humans. But much of that later human vetting can be standardized, so firms should outsource the work.

LONG VERSION:

Let's start by talking about one particular industry: firms that sell data about privately held companies.
There is a great conversation on Quora that summarizes the strengths and weaknesses of most of the enterprises in this field: "How do CB Insights, PrivCo, DataFox, Owler, Tracxn!, Mattermark, and Venture Scanner compare for private company research?"

https://www.quora.com/How-do-CB-Insights-PrivCo-DataFox-Owler-Tracxn-Mattermark-and-Venture-Scanner-compare-for-private-company-research

Danielle Morrill has written a great blog post about how and why she created Mattermark ("The Deal Intelligence Company"):

https://medium.com/@DanielleMorrill/introducing-mattermark-the-deal-intelligence-company-a9ed7c8a9872

According to Crunchbase, she has so far raised $17 million to build out her company. Samiur Rahman, the lead data engineer at Mattermark, has given a revealing interview about how NLP helps Mattermark pinpoint data about deals that companies may be arranging:

https://www.techemergence.com/how-natural-language-processing-helps-mattermark-find-business-opps-a-conversation-with-samiur-rahman/

Anyone who tries to scour the web for information about companies, publicly held or privately owned, immediately runs into a few problems, including the fact that any given company may have dozens of subsidiaries with similar names. The law, or a regulatory agency, might have forced the company to break up. German law, for instance, has forced Deutsche Bank to set up different companies for loans and investments. As a consequence, one finds the following names on the web:

Deutsche Bank - Corretora de Valores S.A.
Deutsche Bank A S
Deutsche Bank AG
Deutsche Bank GmbH
Deutsche Bank S.A. Banco Alemao
Deutsche Bank Trust Company Americas
Deutsche Bank Trust Company Delaware
Deutsche Bank

Journalists tend to misspell these names, or just use the generic "Deutsche Bank", which makes it difficult to discern which incarnation of Deutsche Bank is the subject of an article.

Samiur Rahman says he is a fan of "word vectors" and "paragraph vectors", and that he uses a nearest-neighbor algorithm to figure out which companies are similar to each other. As the interview makes clear, Mattermark got into a business where they thought they could use humans to do data aggregation; now they are desperately trying to build ML/NLP stacks to automate the work. (In my taxonomy, they're a Category 1 firm.)

...Some customers buy from several of these data-analysis companies: from both CB Insights, which has the best data about deals, and CB-Mark, which has the best estimates of revenue. The customers are often sales teams hoping to find new customers for their own companies, but sometimes the data is used for research (as at universities), and sometimes for VC investments, as Danielle Morrill made clear in her history of Mattermark.
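As an aside, the nearest-neighbor entity linking Rahman describes is easy to sketch. This toy version uses character trigram counts and cosine similarity instead of learned word/paragraph vectors (the canonical list and the mentions are made-up examples, not Mattermark's actual data or stack):

    from collections import Counter
    from math import sqrt

    def ngram_vector(name, n=3):
        """Character trigram counts: a crude stand-in for word/paragraph
        vectors, but enough to show the nearest-neighbor linking idea."""
        s = ' ' + name.lower() + ' '
        return Counter(s[i:i + n] for i in range(len(s) - n + 1))

    def cosine(a, b):
        dot = sum(a[k] * b[k] for k in a if k in b)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    canonical = [
        "Deutsche Bank AG",
        "Deutsche Bank Trust Company Americas",
        "Deutsche Bank S.A. Banco Alemao",
    ]

    def nearest(mention):
        """Link a messy mention from an article to the closest canonical entity."""
        return max(canonical, key=lambda c: cosine(ngram_vector(mention),
                                                   ngram_vector(c)))

    print(nearest("Deutsche Bank Trust Co. Americas"))
    print(nearest("Banco Alemao Deutsche Bank"))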
Right now there are something like 30 companies in this field (selling data about private companies), though I assume it will eventually consolidate to something like 3 or 4.

THE HYBRID APPROACH USES BOTH HUMANS AND ML/NLP

For the foreseeable future, the best approach to data mining is a mix of ML/NLP plus humans. The ML/NLP scripts can be calibrated to produce an estimate of confidence. When confidence is high, the entire ingestion process can be automated. When confidence is low, the item-of-data needs to be flagged for human review.

The process I'm working on needs two levels of human review; I'll also mention a third, if only to dismiss it.

1.) Low-level: for tasks that don't need much context yet still can't be automated, the item-of-data should be sent to 5 different people using Amazon's Mechanical Turk. If at least 4 of them come to the same conclusion, we can assume this was data that low-skilled people could successfully categorize. For instance, if an article had an ambiguous mention of Deutsche Bank, but 4 out of 5 people came to the same conclusion about which Deutsche Bank was under discussion, we can consider the item categorized, and the rest of the ingestion process can proceed automatically.

2.) Medium-level: this level of skill is needed when people with low-level skill failed to agree on how to categorize an item-of-data. If something was sent to Mechanical Turk and 5 people reached 5 different conclusions, the data clearly needs much closer inspection. Medium-level review would also cover large parts of medical and legal data sets, where data can be read from context by a person who lacks specific medical or legal training. "While I worked at Google I downloaded all of Google's documents for self-driving cars and now I'd like Uber to finance my new self-driving car startup" is a sentence that can be understood by an untrained layman, though the realization that it describes illegal activity is still beyond a pure ML/NLP approach. People of medium-level skill would work directly with my startup. They could work remotely, but they would require some training, so I would have a long-term relationship with them.

3.) High-level: this is a different market, so I'm not going to consider it. Data that requires high levels of skill might be items of medical or legal knowledge that a person needs extensive training to understand. Dealing with this kind of data would require a completely different business model, so it is not part of my current project.

WHO ARE MY CUSTOMERS?

My customers would be every group that:

1.) needs the output of NLP scripts...

2.) ...applied to a data source they specify...

3.) ...vetted to a high level of confidence.

The market for ML and AI startups might seem crowded, but I think there are still many opportunities in it, particularly around standardizing the process whereby humans review the output of ML and NLP scripts.
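For concreteness, here is a minimal sketch of the routing logic described above. The thresholds, names, and the `crowd_vote` callable are my own stand-ins for illustration, not a finished design:

    from collections import Counter

    # Thresholds from the pitch: auto-accept at high ML confidence,
    # otherwise crowd-vote with 5 workers, accept 4-of-5 agreement,
    # and escalate the rest to trained (medium-level) reviewers.
    ML_CONFIDENCE_THRESHOLD = 0.95
    NUM_WORKERS = 5
    AGREEMENT_THRESHOLD = 4

    def route_item(item, ml_label, ml_confidence, crowd_vote):
        """Decide how an item-of-data flows through the hybrid pipeline.

        `crowd_vote` stands in for a Mechanical Turk round; it returns
        one label per worker.
        """
        if ml_confidence >= ML_CONFIDENCE_THRESHOLD:
            return ("auto", ml_label)          # fully automated path
        votes = Counter(crowd_vote(item, NUM_WORKERS))
        label, count = votes.most_common(1)[0]
        if count >= AGREEMENT_THRESHOLD:
            return ("crowd", label)            # low-level review agreed
        return ("escalate", None)              # medium-level review needed

    # Toy usage with a fake crowd that always splits 4-1:
    fake_crowd = lambda item, n: ["Deutsche Bank AG"] * 4 + ["Deutsche Bank S.A."]
    print(route_item("ambiguous article", "Deutsche Bank AG", 0.60, fake_crowd))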
The two questions I ask every interviewer
The best interview I had (for a lead games programmer position) was a written test. I can't remember the exact details, but it was perhaps 24 pages, with 45 minutes to complete it. Before starting, I flicked through the entire test to get a feel for what was being asked of me, and then for a little while debated just getting up and leaving the room. It felt a bit crazy: a test on all sorts of disciplines of games programming that I wasn't an expert in, didn't intend to become an expert in, and that the job I was applying for didn't require me to be an expert in. But I calmed down and tackled the test the best way I could, answering the questions in order of confidence: my most confident (and quickest) answers first, then the ones I had a pretty good idea about, then the ones I knew less well, and so on.

Forty minutes in, the CEO of the company came in. It's a ~300-person company, so that was interesting in itself. He wanted to go through the test; I said I hadn't finished yet, had made a note of the start time, and still had 5 minutes remaining. He said it didn't matter, let's have a look. He flicked through the test and, without looking at my answers, found a section I had completely ignored (it was about AI and pathfinding, both topics I had done almost nothing on during my entire education and career). Of course, this was the question he wanted me to answer. I explained why I hadn't answered it, and said that if they wanted me to do things like this, the job really wouldn't be suitable for me. But he persisted: stop worrying about all of that, just answer the question now. Again I had that slight urge to just leave, and again I overcame it.

I started talking him through the things I did know about the question, pointing out areas that could cause problems, then started listing what I could do to limit the question to avoid some of those issues (it had a picture of a top-down level, and the question was inside a box; I said let's initially forget about the box, for example). Then I started describing an initial algorithm that could route AI around the level. I said it would obviously be really bad: basically the AI just walking into things, then working out where to go next, and a bit later adding that I could keep track of where I had been in case I ended up at the same point, and so on. We talked more about it, he asked some questions, and bit by bit I came up with a solution.

He said that was interesting, because during the process I had described parts of various algorithms that someone who had studied AI would know about, but I was coming from a brute-force perspective, without any waypoints in the level or extra level knowledge. I said I hadn't realized I was allowed to do that; if I could, I could come up with some nodes in the level and, rather than bumping into objects, move between nodes, work out the distances between them, and build up a graph so you know the best way to get between points. Again he was happy: I was describing, or partially describing, an actual solution.
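The solution being converged on here is easy to sketch: waypoint nodes, edge distances, shortest path over the graph. A minimal version follows (Dijkstra in Python; a real game would more likely use A* with a spatial heuristic, and the level graph below is a made-up toy):

    import heapq

    def shortest_path(graph, start, goal):
        """Dijkstra over a waypoint graph: {node: [(neighbor, distance), ...]}.

        The graph would be built offline from the level: nodes placed by a
        designer or a preprocessing pass, edges wherever direct movement
        between two nodes is possible.
        """
        frontier = [(0.0, start, [start])]
        visited = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for neighbor, dist in graph.get(node, []):
                if neighbor not in visited:
                    heapq.heappush(frontier, (cost + dist, neighbor, path + [neighbor]))
        return float('inf'), []

    # Hypothetical waypoint graph for a small level:
    level = {
        'spawn':    [('corridor', 4.0)],
        'corridor': [('spawn', 4.0), ('atrium', 6.0), ('stairs', 3.0)],
        'stairs':   [('corridor', 3.0), ('atrium', 8.0)],
        'atrium':   [('corridor', 6.0), ('stairs', 8.0)],
    }
    print(shortest_path(level, 'spawn', 'atrium'))  # (10.0, ['spawn', 'corridor', 'atrium'])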
His next question was how I would work out where to put the waypoint nodes. Again I just started looking at the image, thinking about good positions for the waypoints, looking at the normals from each wall face, and began to see that placing waypoints as far away as possible from all the walls had some advantages; bit by bit I came up with some sort of solution for how to do this.

By the end of it, there was a solution that involved preprocessing the level data offline; using that data at runtime to move around the level; handling paths becoming blocked (or new paths opening up); and strategies for running this on a separate thread or CPU, asynchronous to the main game. It demonstrated knowledge of the full pipeline for making a game, from an artist building levels to the game running at 60Hz on a PC or console, and it also took human resources into account: we could do things this way, which would give us a slight edge, but it would make the designers' lives a lot harder, so I'd probably not do that, take the small performance hit, and probably win it back through the level designers having extra time to optimize anyway.

I was offered the job. Ultimately I had a better offer elsewhere, so I didn't take it, but for me the test made a lot of sense. My day-to-day job (tech director of a 12-person games development studio) is constantly having to solve problems that initially appear to have no answer. The ability to break down an issue, not panic, look at it from various perspectives, build up a solution, and have a good feel for which parts of the solution are weak and need further research makes, to me, a huge difference between an "okay" programmer/developer and an exceptional one. I haven't used the same test approach myself, but I have certainly learned a lot from it when hiring people: never trying to trick interviewees, quite the opposite, trying to give them as many options as possible and just urging them to show me how they would deal with day-to-day issues and come out on top. I do worry that, while I got through it, some people really would have just walked out, and while in some cases you could argue those people wouldn't work well when faced with tough problems and lots of pressure, on the other hand, outside of the testing environment they'd be fine. Anyway, I personally wouldn't want to create quite as much stress and pressure if I were to use this technique in the future.
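As a footnote to the waypoint-placement question at the top of this story: putting nodes as far as possible from the walls amounts to picking local maxima of a clearance field. A toy grid version (multi-source BFS distance-to-wall, then local maxima; the level layout here is invented):

    from collections import deque

    def clearance_field(grid):
        """Multi-source BFS from every wall cell ('#'): each open cell gets
        its distance to the nearest wall."""
        h, w = len(grid), len(grid[0])
        dist = [[None] * w for _ in range(h)]
        queue = deque()
        for y in range(h):
            for x in range(w):
                if grid[y][x] == '#':
                    dist[y][x] = 0
                    queue.append((y, x))
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                    dist[ny][nx] = dist[y][x] + 1
                    queue.append((ny, nx))
        return dist

    def waypoints(grid):
        """Open cells whose clearance is a local maximum, i.e. as far from
        the surrounding walls as the level allows."""
        dist = clearance_field(grid)
        h, w = len(grid), len(grid[0])
        return [(y, x) for y in range(h) for x in range(w)
                if grid[y][x] == '.' and all(
                    dist[y][x] >= dist[y + dy][x + dx]
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= y + dy < h and 0 <= x + dx < w)]

    level = ["##########",
             "#........#",
             "#..####..#",
             "#........#",
             "##########"]
    print(waypoints(level))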