nhaliday : ecosystem   61

Advantages and disadvantages of building a single page web application - Software Engineering Stack Exchange
Advantages
- All data has to be available via some sort of API - this is a big advantage for my use case as I want to have an API to my application anyway. Right now about 60-70% of my calls to get/update data are done through a REST API. Doing a single page application will allow me to better test my REST API since the application itself will use it. It also means that as the application grows, the API itself will grow since that is what the application uses; no need to maintain the API as an add-on to the application.
- More responsive application - since the data loaded after the initial page load is kept to a minimum and transmitted in a compact format (like JSON), data requests should generally be faster, and the server will do slightly less processing.

Disadvantages
- Duplication of code - for example, model code. I am going to have to create models both on the server side (PHP in this case) and the client side in Javascript.
- Business logic in Javascript - I can't give any concrete examples on why this would be bad but it just doesn't feel right to me having business logic in Javascript that anyone can read.
- Javascript memory leaks - since the page never reloads, Javascript memory leaks can happen, and I would not even know where to begin to debug them.
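
A minimal sketch of how such a leak typically arises (hypothetical view and event names, plain DOM APIs): a long-lived SPA keeps detached DOM alive because a listener on a shared object was never removed.

    // Hypothetical SPA view that subscribes to a shared event bus when mounted.
    const bus = new EventTarget();

    function mountOrdersView(root: HTMLElement): () => void {
      const view = document.createElement("div");
      const onUpdate = () => { view.textContent = `updated ${Date.now()}`; }; // closure holds `view`

      bus.addEventListener("orders-changed", onUpdate);
      root.appendChild(view);

      // The leak: if the router swaps views without calling this cleanup, `bus`
      // keeps `onUpdate`, which keeps `view`, so the detached DOM is never
      // garbage-collected -- and it accumulates on every navigation.
      return () => {
        bus.removeEventListener("orders-changed", onUpdate);
        view.remove();
      };
    }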

--

Disadvantages I often see with Single Page Web Applications:
- Inability to link to a specific part of the site; there's often only one entry point (but see the History API sketch after this list).
- Dysfunctional back and forward buttons.
- The use of tabs is limited or non-existent.
(especially on mobile:)
- Take very long to load.
- Don't function at all.
- Can't reload a page; a sudden loss of network takes you back to the start of the site.
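
For what it's worth, the deep-linking and back-button problems are addressable with the History API; a minimal sketch (renderRoute is a hypothetical stand-in for whatever swaps the visible view):

    // Hypothetical view-swapping function for this sketch.
    function renderRoute(path: string): void {
      document.body.dataset.route = path; // stand-in for real rendering
    }

    function navigate(path: string): void {
      history.pushState({}, "", path); // real URL in the address bar, no reload
      renderRoute(path);
    }

    // Back/forward buttons fire popstate; re-render instead of losing the user.
    window.addEventListener("popstate", () => renderRoute(location.pathname));

    // A hard reload or a shared deep link starts from whatever URL was requested.
    renderRoute(location.pathname);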

This answer is outdated; most single-page application frameworks have a way to deal with the issues above. – Luis May 27 '14 at 1:41
@Luis while the technology is there, too often it isn't used. – Pieter B Jun 12 '14 at 6:53

https://softwareengineering.stackexchange.com/questions/201838/building-a-web-application-that-is-almost-completely-rendered-by-javascript-whi

https://softwareengineering.stackexchange.com/questions/143194/what-advantages-are-conferred-by-using-server-side-page-rendering
Server-side HTML rendering:
- Fastest browser rendering
- Page caching is possible as a quick-and-dirty performance boost
- For "standard" apps, many UI features are pre-built
- Sometimes considered more stable because components are usually subject to compile-time validation
- Leans on backend expertise
- Sometimes faster to develop*
*When UI requirements fit the framework well.

Client-side HTML rendering:
- Lower bandwidth usage
- Slower initial page render. May not even be noticeable in modern desktop browsers. If you need to support IE6-7, or many mobile browsers (mobile webkit is not bad), you may encounter bottlenecks.
- Building API-first means the client can just as easily be a proprietary app, thin client, another web service, etc. (sketched below)
- Leans on JS expertise
- Sometimes faster to develop**
**When the UI is largely custom, with more interesting interactions. Also, I find coding in the browser with interpreted code noticeably speedier than waiting for compiles and server restarts.
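
A small sketch of the API-first point above: when the server only serves JSON, the browser client is just one consumer among many (the endpoint and response shape here are hypothetical):

    // Hypothetical endpoint and response shape.
    interface Invoice { id: string; total: number; }

    async function fetchInvoices(baseUrl: string): Promise<Invoice[]> {
      const res = await fetch(`${baseUrl}/api/invoices`, {
        headers: { Accept: "application/json" },
      });
      if (!res.ok) throw new Error(`API error: ${res.status}`);
      return res.json();
    }

    // The same endpoint can back the SPA, a native app, a CLI, or another service.
    fetchInvoices("https://example.com").then((invoices) =>
      console.log(`loaded ${invoices.length} invoices`));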

https://softwareengineering.stackexchange.com/questions/237537/progressive-enhancement-vs-single-page-apps

https://stackoverflow.com/questions/21862054/single-page-application-advantages-and-disadvantages
=== ADVANTAGES ===
1. SPA is extremely good for very responsive sites.
2. With an SPA we don't need extra queries to the server to download pages.
3. Maybe there are other advantages? I haven't heard of any others.

=== DISADVANTAGES ===
1. Client must enable javascript.
2. Only one entry point to the site.
3. Security.

https://softwareengineering.stackexchange.com/questions/287819/should-you-write-your-back-end-as-an-api
focused on .NET

https://softwareengineering.stackexchange.com/questions/337467/is-it-normal-design-to-completely-decouple-backend-and-frontend-web-applications
A SPA comes with a few issues associated with it. Here are just a few that pop into my mind now:
- it's mostly JavaScript. One error in a section of your application might prevent other sections of the application from working because of that JavaScript error.
- CORS.
- SEO.
- separate front-end application means separate projects, deployment pipelines, extra tooling, etc;
- security is harder to do when all the code is on the client;

On the other hand, there are advantages:
- completely interact with the user in the front-end and only load data as needed from the server, so better responsiveness and user experience;
- depending on the application, some processing done on the client means you spare the server those computations;
- better flexibility in evolving the back-end and front-end (you can do it separately);
- if your back-end is essentially an API, you can have other clients in front of it, like native Android/iPhone applications;
- the separation might make it easier for front-end developers to do CSS/HTML without needing to have a server application running on their machine.

Create your own dysfunctional single-page app: https://news.ycombinator.com/item?id=18341993
I think there are three broadly assumed user benefits of single-page apps:
1. Improved user experience.
2. Improved perceived performance.
3. It’s still the web.

5 mistakes to create a dysfunctional single-page app
Mistake 1: Under-estimate long-term development and maintenance costs
Mistake 2: Use the single-page app approach unilaterally
Mistake 3: Under-invest in front end capability
Mistake 4: Use naïve dev practices
Mistake 5: Surf the waves of framework hype

The disadvantages of single page applications: https://news.ycombinator.com/item?id=9879685
You probably don't need a single-page app: https://news.ycombinator.com/item?id=19184496
https://news.ycombinator.com/item?id=20384738
MPA advantages:
- Stateless requests
- The browser knows how to deal with a traditional architecture
- Fewer, more mature tools
- SEO for free

When to go for the single page app:
- Core functionality is real-time (e.g Slack)
- Rich UI interactions are core to the product (e.g Trello)
- Lots of state shared between screens (e.g. Spotify)

Hybrid solutions
...
Github uses this hybrid approach.
...

Ask HN: Is it ok to use traditional server-side rendering these days?: https://news.ycombinator.com/item?id=13212465

https://www.reddit.com/r/webdev/comments/cp9vb8/are_people_still_doing_ssr/
https://www.reddit.com/r/webdev/comments/93n60h/best_javascript_modern_approach_to_multi_page/
https://www.reddit.com/r/webdev/comments/aax4k5/do_you_develop_solely_using_spa_these_days/
The SEO issues with SPAs are a persistent concern you hear about a lot, yet nobody ever quantifies the issues. That is because search engines keep the operation of their crawler bots and indexing secret. I have read into it some, and it seems that problem used to exist, somewhat, but is more or less gone now. Bots can deal with SPAs fine.
--
I try to avoid building a SPA nowadays if possible. Not because of SEO (there are now server-side solutions to help with that), but because a SPA increases the complexity of the code base by an order of magnitude. State management with Redux... Async this and that... URL routing... And don't forget to manage page history.

How about just render pages with templates and be done?

If I need a highly dynamic UI for a particular feature, then I'd probably build an embeddable JS widget for it.
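
A sketch of that widget approach: the page stays server-rendered and only the dynamic island takes over its own element (the element id and data attribute are hypothetical):

    // The server-rendered page contains, say: <div id="price-chart" data-symbol="ACME"></div>
    function mountPriceChart(): void {
      const el = document.getElementById("price-chart");
      if (!el) return; // pages without the widget are untouched
      const symbol = el.dataset.symbol ?? "";
      el.textContent = `loading ${symbol}...`;
      // ...fetch data and render into `el`; the rest of the page stays server-side HTML.
    }

    document.addEventListener("DOMContentLoaded", mountPriceChart);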
q-n-a  stackex  programming  engineering  tradeoffs  system-design  design  web  frontend  javascript  cost-benefit  analysis  security  state  performance  traces  measurement  intricacy  code-organizing  applicability-prereqs  multi  comparison  smoothness  shift  critique  techtariat  chart  ui  coupling-cohesion  interface-compatibility  hn  commentary  best-practices  discussion  trends  client-server  api  composition-decomposition  cycles  frameworks  ecosystem  degrees-of-freedom  dotnet  working-stiff  reddit  social  project-management 
october 2019 by nhaliday
Ask HN: Learning modern web design and CSS | Hacker News
Ask HN: Best way to learn HTML and CSS for web design?: https://news.ycombinator.com/item?id=11048409
Ask HN: How to learn design as a hacker?: https://news.ycombinator.com/item?id=8182084

Ask HN: How to learn front-end beyond the basics?: https://news.ycombinator.com/item?id=19468043
Ask HN: What is the best JavaScript stack for a beginner to learn?: https://news.ycombinator.com/item?id=8780385
Free resources for learning full-stack web development: https://news.ycombinator.com/item?id=13890114

Ask HN: What is essential reading for learning modern web development?: https://news.ycombinator.com/item?id=14888251
Ask HN: A Syllabus for Modern Web Development?: https://news.ycombinator.com/item?id=2184645

Ask HN: Modern day web development for someone who last did it 15 years ago: https://news.ycombinator.com/item?id=20656411
hn  discussion  design  form-design  frontend  web  tutorial  links  recommendations  init  pareto  efficiency  minimum-viable  move-fast-(and-break-things)  advice  roadmap  multi  hacker  games  puzzles  learning  guide  dynamic  retention  DSL  working-stiff  q-n-a  javascript  frameworks  ecosystem  libraries  client-server  hci  ux  books  chart 
october 2019 by nhaliday
Karol Kuczmarski's Blog – A Haskell retrospective
Even in this hypothetical scenario, I posit that the value proposition of Haskell would still be a tough sell.

There is this old quote from Bjarne Stroustrup (creator of C++) where he says that programming languages divide into those everyone complains about, and those that no one uses.
The first group consists of old, established technologies that managed to accrue significant complexity debt through years and decades of evolution. All the while, they’ve been adapting to the constantly shifting perspectives on what are the best industry practices. Traces of those adaptations can still be found today, sticking out like a leftover appendix or residual tail bone — or like the built-in support for XML in Java.

Languages that “no one uses”, on the other hand, haven’t yet passed the industry threshold of sufficient maturity and stability. Their ecosystems are still cutting edge, and their future is uncertain, but they sometimes champion some really compelling paradigm shifts. As long as you can bear with things that are rough around the edges, you can take advantage of their novel ideas.

Unfortunately for Haskell, it manages to combine the worst parts of both of these worlds.

On one hand, it is a surprisingly old language, clocking more than two decades of fruitful research around many innovative concepts. Yet on the other hand, it bears the signs of a fresh new technology, with relatively few production-grade libraries, scarce coverage of some domains (e.g. GUI programming), and not too many stories of commercial successes.

There are many ways to do it
String theory
Errors and how to handle them
Implicit is better than explicit
Leaky modules
Namespaces are apparently a bad idea
Wild records
Purity beats practicality
techtariat  reflection  functional  haskell  programming  pls  realness  facebook  pragmatic  cost-benefit  legacy  libraries  types  intricacy  engineering  tradeoffs  frontier  homo-hetero  duplication  strings  composition-decomposition  nitty-gritty  error  error-handling  coupling-cohesion  critique  ecosystem  c(pp)  aphorism 
august 2019 by nhaliday
The Existential Risk of Math Errors - Gwern.net
How big is this upper bound? Mathematicians have often made errors in proofs. But it’s rarer for ideas to be accepted for a long time and then rejected. We can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)
2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept is a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one noticed before, the theorems were still true, and the gaps were more due to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted, and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications), and doubtless as modern math evolves other fields have sometimes needed to go back and clean up the foundations and will in the future.

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof was totally wrong, and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?
“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”
- Gian-Carlo Rota

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment that, while editing Mathematical Reviews, “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting a mathematician acquaintance guesses at a broad error rate of 1/3 and that he routinely found mistakes in his own proofs and, worse, believed false conjectures.

[more on these "structured proofs":
https://academia.stackexchange.com/questions/52435/does-anyone-actually-publish-structured-proofs
https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs
]

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code) that the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (eg Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages.

[related:
https://mathoverflow.net/questions/11517/computer-algebra-errors
I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define $\operatorname{sinc} x = (\sin x)/x$.

Someone found the following result in an algebra package: $\int_0^\infty dx\,\operatorname{sinc} x = \pi/2$
They then found the following results:

...

So of course when they got:

$\int_0^\infty dx\,\operatorname{sinc} x\,\operatorname{sinc}(x/3)\,\operatorname{sinc}(x/5)\cdots\operatorname{sinc}(x/15) = \frac{467807924713440738696537864469}{935615849440640907310521750000}\,\pi$

hmm:
Which means that nobody knows Fourier analysis nowadays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47
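
[ed.: for context, this is the Borwein integral phenomenon: the pattern holds exactly up through the $\operatorname{sinc}(x/13)$ factor and only fails, by a tiny margin, once $\operatorname{sinc}(x/15)$ is included, which is why it reads like a CAS bug:

$\int_0^\infty dx \prod_{k=0}^{n} \operatorname{sinc}\!\left(\frac{x}{2k+1}\right) = \frac{\pi}{2}$ for $n = 0, 1, \ldots, 6$,

$\int_0^\infty dx \prod_{k=0}^{7} \operatorname{sinc}\!\left(\frac{x}{2k+1}\right) = \frac{\pi}{2} - \varepsilon$, with $\varepsilon \approx 2.3 \times 10^{-11}$.]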

--

Because the most popular systems are all commercial, they tend to guard their bug database rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page. 1582 known issues on Feb.16th 2010 (which includes feature requests, problems with documentation, etc).

That is an order of magnitude less than the commercial systems. And it's not because it is better, it is because it is younger and smaller. It might be better, but until SAGE does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then because they are assumed by so many existing proofs, a far larger math holocaust would ensue - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:
https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/
https://intelligence.org/2014/03/02/bob-constable/

https://softwareengineering.stackexchange.com/questions/375342/what-are-the-barriers-that-prevent-widespread-adoption-of-formal-methods
Update: measured effort
In the October 2018 issue of Communications of the ACM there is an interesting article about Formally verified software in the real world with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high security software this way than with traditionally engineered software if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements, there is definitely a business case for going formal.

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/
You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs.

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.
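
A hedged illustration of the “embedded assertion” end of that spectrum, using the Leftpad example mentioned above -- written here in plain TypeScript with runtime assertions rather than SPARK/Dafny, so the spec is only checked per-run instead of proved for all inputs:

    // Leftpad with its spec written inline as assertions (assumes a single-character pad).
    // Spec: result length is max(n, s.length); result ends with s; the prefix is all pad chars.
    function leftpad(pad: string, n: number, s: string): string {
      const out = pad.repeat(Math.max(n - s.length, 0)) + s;

      console.assert(out.length === Math.max(n, s.length), "length spec violated");
      console.assert(out.endsWith(s), "suffix spec violated");
      console.assert(
        [...out.slice(0, out.length - s.length)].every((ch) => ch === pad),
        "prefix spec violated",
      );
      return out;
    }

    leftpad("0", 5, "42"); // "00042" -- a prover would discharge these checks for all inputs at once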

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast… [more]
ratty  gwern  analysis  essay  realness  truth  correctness  reason  philosophy  math  proofs  formal-methods  cs  programming  engineering  worse-is-better/the-right-thing  intuition  giants  old-anglo  error  street-fighting  heuristic  zooming  risk  threat-modeling  software  lens  logic  inference  physics  differential  geometry  estimate  distribution  robust  speculation  nonlinearity  cost-benefit  convexity-curvature  measure  scale  trivia  cocktail  history  early-modern  europe  math.CA  rigor  news  org:mag  org:sci  miri-cfar  pdf  thesis  comparison  examples  org:junk  q-n-a  stackex  pragmatic  tradeoffs  cracker-prog  techtariat  invariance  DSL  chart  ecosystem  grokkability  heavyweights  CAS  static-dynamic  lower-bounds  complexity  tcs  open-problems  big-surf  ideas  certificates-recognition  proof-systems  PCP  mediterranean  SDP  meta:prediction  epistemic  questions  guessing  distributed  overflow  nibble  soft-question  track-record  big-list  hmm  frontier  state-of-art  move-fast-(and-break-things)  grokkability-clarity  technical-writing  trust 
july 2019 by nhaliday
Which of Haskell and OCaml is more practical? For example, in which aspect will each play a key role? - Quora
- Tikhon Jelvis,

Haskell.

This is a question I'm particularly well-placed to answer because I've spent quite a bit of time with both Haskell and OCaml, seeing both in the real world (including working at Jane Street for a bit). I've also seen the languages in academic settings and know many people at startups using both languages. This gives me a good perspective on both languages, with a fairly similar amount of experience in the two (admittedly biased towards Haskell).

And so, based on my own experience rather than the languages' reputations, I can confidently say it's Haskell.

Parallelism and Concurrency

...

Libraries

...

Typeclasses vs Modules

...

In some sense, OCaml modules are better behaved and founded on a sounder theory than Haskell typeclasses, which have some serious drawbacks. However, the fact that typeclasses can be reliably inferred whereas modules have to be explicitly used all the time more than makes up for this. Moreover, extensions to the typeclass system enable much of the power provided by OCaml modules.

...

Of course, OCaml has some advantages of its own as well. It has a performance profile that's much easier to predict. The module system is awesome and often missed in Haskell. Polymorphic variants can be very useful for neatly representing certain situations, and don't have an obvious Haskell analog.

While both languages have a reasonable C FFI, OCaml's seems a bit simpler. It's hard for me to say this with any certainty because I've only used the OCaml FFI myself, but it was quite easy to use—a hard bar for Haskell's to clear. One really nice use of modules in OCaml is to pass around values directly from C as abstract types, which can help avoid extra marshalling/unmarshalling; that seemed very nice in OCaml.

However, overall, I still think Haskell is the more practical choice. Apart from the reasoning above, I simply have my own observations: my Haskell code tends to be clearer, simpler and shorter than my OCaml code. I'm also more productive in Haskell. Part of this is certainly a matter of having more Haskell experience, but the delta is limited especially as I'm working at my third OCaml company. (Of course, the first two were just internships.)

Both Haskell and OCaml are unequivocally superb options—miles ahead of any other languages I know. While I do prefer Haskell, I'd choose either one in a pinch.

--
I've looked at F# a bit, but it feels like it makes too many tradeoffs to be on .NET. You lose the module system, which is probably OCaml's best feature, in return for an unfortunate, nominally typed OOP layer.

I'm also not invested in .NET at all: if anything, I'd prefer to avoid it in favor of simplicity. I exclusively use Linux and, from the outside, Mono doesn't look as good as it could be. I'm also far more likely to interoperate with a C library than a .NET library.

If I had some additional reason to use .NET, I'd definitely go for F#, but right now I don't.

https://www.reddit.com/r/haskell/comments/3huexy/what_are_haskellers_critiques_of_f_and_ocaml/
https://www.reddit.com/r/haskell/comments/3huexy/what_are_haskellers_critiques_of_f_and_ocaml/cub5mmb/
Thinking about it now, it boils down to a single word: expressiveness. When I'm writing OCaml, I feel more constrained than when I'm writing Haskell. And that's important: unlike so many others, what first attracted me to Haskell was expressiveness, not safety. It's easier for me to write code that looks how I want it to look in Haskell. The upper bound on code quality is higher.

...

Perhaps it all boils down to OCaml and its community feeling more "worse is better" than Haskell, something I highly disfavor.

...

Laziness or, more strictly, non-strictness is big. A controversial start, perhaps, but I stand by it. Unlike some, I do not see non-strictness as a design mistake but as a leap in abstraction. Perhaps a leap before its time, but a leap nonetheless. Haskell lets me program without constantly keeping the code's order in my head. Sure, it's not perfect and sometimes performance issues jar the illusion, but they are the exception not the norm. Coming from imperative languages where order is omnipresent (I can't even imagine not thinking about execution order as I write an imperative program!) it's incredibly liberating, even accounting for the weird issues and jinks I'd never see in a strict language.

This is what I imagine life felt like with the first garbage collectors: they may have been slow and awkward, the abstraction might have leaked here and there, but, for all that, it was an incredible advance. You didn't have to constantly think about memory allocation any more. It took a lot of effort to get where we are now and garbage collectors still aren't perfect and don't fit everywhere, but it's hard to imagine the world without them. Non-strictness feels like it has the same potential, without anywhere near the work garbage collection saw put into it.

...

The other big thing that stands out are typeclasses. OCaml might catch up on this front with implicit modules or it might not (Scala implicits are, by many reports, awkward at best—ask Edward Kmett about it, not me) but, as it stands, not having them is a major shortcoming. Not having inference is a bigger deal than it seems: it makes all sorts of idioms we take for granted in Haskell awkward in OCaml which means that people simply don't use them. Haskell's typeclasses, for all their shortcomings (some of which I find rather annoying), are incredibly expressive.

In Haskell, it's trivial to create your own numeric type and operators work as expected. In OCaml, while you can write code that's polymorphic over numeric types, people simply don't. Why not? Because you'd have to explicitly convert your literals and because you'd have to explicitly open a module with your operators—good luck using multiple numeric types in a single block of code! This means that everyone uses the default types: (63/31-bit) ints and doubles. If that doesn't scream "worse is better", I don't know what does.

...

There's more. Haskell's effect management, brought up elsewhere in this thread, is a big boon. It makes changing things more comfortable and makes informal reasoning much easier. Haskell is the only language where I consistently leave code I visit better than I found it. Even if I hadn't worked on the project in years. My Haskell code has better longevity than my OCaml code, much less other languages.

http://blog.ezyang.com/2011/02/ocaml-gotchas/
One observation about purity and randomness: I think one of the things people frequently find annoying in Haskell is the fact that randomness involves mutation of state, and thus must be wrapped in a monad. This makes building probabilistic data structures a little clunkier, since you can no longer expose pure interfaces. OCaml is not pure, and as such you can query the random number generator whenever you want.

However, I think Haskell may get the last laugh in certain circumstances. In particular, if you are using a random number generator in order to generate random test cases for your code, you need to be able to reproduce a particular set of random tests. Usually, this is done by providing a seed which you can then feed back to the testing script, for deterministic behavior. But because OCaml's random number generator manipulates global state, it's very easy to accidentally break determinism by asking for a random number for something unrelated. You can work around it by manually bracketing the global state, but explicitly handling the randomness state means providing determinism is much more natural.
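
The point above is Haskell/OCaml-specific, but the underlying idea -- thread the generator state explicitly so reproducibility can't be broken by an unrelated random call -- is language-agnostic. A minimal sketch with a hand-rolled linear congruential generator (illustrative only, not the quoted author's code):

    // Explicit generator state: a value you pass around, not a hidden global.
    type Rng = { seed: number };

    function nextInt(rng: Rng, bound: number): number {
      // Numerical Recipes LCG constants; fine for reproducible test data, not for crypto.
      rng.seed = (Math.imul(rng.seed, 1664525) + 1013904223) >>> 0;
      return rng.seed % bound;
    }

    // Re-running with the same seed replays exactly the same cases, even if other
    // code calls Math.random() (a global) somewhere in between.
    function randomTestCases(seed: number, count: number): number[] {
      const rng: Rng = { seed };
      return Array.from({ length: count }, () => nextInt(rng, 1000));
    }

    console.log(randomTestCases(42, 3)); // deterministic for seed 42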
q-n-a  qra  programming  pls  engineering  nitty-gritty  pragmatic  functional  haskell  ocaml-sml  dotnet  types  arrows  cost-benefit  tradeoffs  concurrency  libraries  performance  expert-experience  composition-decomposition  comparison  critique  multi  reddit  social  discussion  techtariat  reflection  review  random  data-structures  numerics  rand-approx  sublinear  syntax  volo-avolo  causation  scala  jvm  ecosystem  metal-to-virtual 
june 2019 by nhaliday
The End of the Editor Wars » Linux Magazine
Moreover, even if you assume a broad margin of error, the polls aren't even close. With all the various text editors available today, Vi and Vim continue to be the choice of over a third of users, while Emacs is well back in the pack, no longer a competitor for the most popular text editor.

https://www.quora.com/Are-there-more-Emacs-or-Vim-users
I believe Vim is actually more popular, but it's hard to find any real data on it. The best source I've seen is the annual StackOverflow developer survey where 15.2% of developers used Vim compared to a mere 3.2% for Emacs.

Oddly enough, the report noted that "Data scientists and machine learning developers are about 3 times more likely to use Emacs than any other type of developer," which is not necessarily what I would have expected.

[ed. NB: Vim still dominates overall.]

https://pinboard.in/u:nhaliday/b:6adc1b1ef4dc

Time To End The vi/Emacs Debate: https://cacm.acm.org/blogs/blog-cacm/226034-time-to-end-the-vi-emacs-debate/fulltext

Vim, Emacs and their forever war. Does it even matter any more?: https://blog.sourcerer.io/vim-emacs-and-their-forever-war-does-it-even-matter-any-more-697b1322d510
Like an episode of “Silicon Valley”, a discussion of Emacs vs. Vim used to have a polarizing effect that would guarantee a stimulating conversation, regardless of an engineer’s actual alignment. But nowadays, diehard Emacs and Vim users are getting much harder to find. Maybe I’m in the wrong orbit, but looking around today, I see that engineers are equally or even more likely to choose any one of a number of great (for any given definition of ‘great’) modern editors or IDEs such as Sublime Text, Visual Studio Code, Atom, IntelliJ (… or one of its siblings), Brackets, Visual Studio or Xcode, to name a few. It’s not surprising really — many top engineers weren’t even born when these editors were at version 1.0, and GUIs (for better or worse) hadn’t been invented.

...

… both forums have high traffic and up-to-the-minute comment and discussion threads. Some of the available statistics paint a reasonably healthy picture — Stackoverflow’s 2016 developer survey ranks Vim 4th out of 24 with 26.1% of respondents in the development environments category claiming to use it. Emacs came 15th with 5.2%. In combination, over 30% is, actually, quite impressive considering they’ve been around for several decades.

What’s odd, however, is that if you ask someone — say a random developer — to express a preference, the likelihood is that they will favor one or the other even if they have used neither in anger. Maybe the meme has spread so widely that all responses are now predominantly ritualistic, and represent something more fundamental than people’s mere preference for an editor? There’s a rather obvious political hypothesis waiting to be made — that Emacs is the leftist, socialist, centralized state, while Vim represents the right and the free market, specialization and capitalism red in tooth and claw.

How is Emacs/Vim used in companies like Google, Facebook, or Quora? Are there any libraries or tools they share in public?: https://www.quora.com/How-is-Emacs-Vim-used-in-companies-like-Google-Facebook-or-Quora-Are-there-any-libraries-or-tools-they-share-in-public
In Google there's a fair amount of vim and emacs. I would say at least every other engineer uses one or another.

Among Software Engineers, emacs seems to be more popular, about 2:1. Among Site Reliability Engineers, vim is more popular, about 9:1.
--
People use both at Facebook, with (in my opinion) slightly better tooling for Emacs than Vim. We share a master.emacs and master.vimrc file, which contains the bare essentials (like syntactic highlighting for the Hack language). We also share a Ctags file that's updated nightly with a cron script.

Beyond the essentials, there's a group for Emacs users at Facebook that provides tips, tricks, and major-modes created by people at Facebook. That's where Adam Hupp first developed his excellent mural-mode (ahupp/mural), which does for Ctags what iDo did for file finding and buffer switching.
--
For emacs, it was very informal at Google. There wasn't a huge community of Emacs users at Google, so there wasn't much more than a wiki and a couple language styles matching Google's style guides.

https://trends.google.com/trends/explore?date=all&geo=US&q=%2Fm%2F07zh7,%2Fm%2F01yp0m

https://www.quora.com/Why-is-interest-in-Emacs-dropping
And it is still that. It’s just that emacs is no longer unique, and neither is Lisp.

Dynamically typed scripting languages with garbage collection are a dime a dozen now. Anybody in their right mind developing an extensible text editor today would just use python, ruby, lua, or JavaScript as the extension language and get all the power of Lisp combined with vibrant user communities and millions of lines of ready-made libraries that Stallman and Steele could only dream of in the 70s.

In fact, in many ways emacs and elisp have fallen behind: 40 years after Lambda, the Ultimate Imperative, elisp is still dynamically scoped, and it still doesn’t support multithreading — when I try to use dired to list the files on a slow NFS mount, the entire editor hangs just as thoroughly as it might have in the 1980s. And when I say “doesn’t support multithreading,” I don’t mean there is some other clever trick for continuing to do work while waiting on a system call, like asynchronous callbacks or something. There’s start-process which forks a whole new process, and that’s about it. It’s a concurrency model straight out of 1980s UNIX land.

But being essentially just a decent text editor has robbed emacs of much of its competitive advantage. In a world where every developer tool is scriptable with languages and libraries an order of magnitude more powerful than cranky old elisp, the reason to use emacs is not that it lets a programmer hit a button and evaluate the current expression interactively (which must have been absolutely amazing at one point in the past).

https://www.reddit.com/r/emacs/comments/bh5kk7/why_do_many_new_users_still_prefer_vim_over_emacs/

more general comparison, not just popularity:
Differences between Emacs and Vim: https://stackoverflow.com/questions/1430164/differences-between-Emacs-and-vim

https://www.reddit.com/r/emacs/comments/9hen7z/what_are_the_benefits_of_emacs_over_vim/

https://unix.stackexchange.com/questions/986/what-are-the-pros-and-cons-of-vim-and-emacs

https://www.quora.com/Why-is-Vim-the-programmers-favorite-editor
- Adrien Lucas Ecoffet,

Because it is hard to use. Really.

However, the second part of this sentence applies to just about every good editor out there: if you really learn Sublime Text, you will become super productive. If you really learn Emacs, you will become super productive. If you really learn Visual Studio… you get the idea.

Here’s the thing though, you never actually need to really learn your text editor… Unless you use vim.

...

For many people new to programming, this is the first time they have been a power user of… well, anything! And because they’ve been told how great Vim is, many of them will keep at it and actually become productive, not because Vim is particularly more productive than any other editor, but because it didn’t provide them with a way to not be productive.

They then go on to tell their friends how great Vim is, and their friends go on to become power users and tell their friends in turn, and so forth. All these people believe they became productive because they changed their text editor. Little do they realize that they became productive because their text editor changed them[1].

This is in no way a criticism of Vim. I myself was a beneficiary of such a phenomenon when I learned to type using the Dvorak layout: at that time, I believed that Dvorak would help you type faster. Now I realize the evidence is mixed and that Dvorak might not be much better than Qwerty. However, learning Dvorak forced me to develop good typing habits because I could no longer rely on looking at my keyboard (since I was still using a Qwerty physical keyboard), and this has made me a much more productive typist.

Technical Interview Performance by Editor/OS/Language: https://triplebyte.com/blog/technical-interview-performance-by-editor-os-language
[ed.: I'm guessing this is confounded to all hell.]

The #1 most common editor we see used in interviews is Sublime Text, with Vim close behind.

Emacs represents a fairly small market share today at just about a quarter the userbase of Vim in our interviews. This nicely matches the 4:1 ratio of Google Search Trends for the two editors.

...

Vim takes the prize here, but PyCharm and Emacs are close behind. We’ve found that users of these editors tend to pass our interview at an above-average rate.

On the other end of the spectrum is Eclipse: it appears that someone using either Vim or Emacs is more than twice as likely to pass our technical interview as an Eclipse user.

...

In this case, we find that the average Ruby, Swift, and C# users tend to be stronger, with Python and Javascript in the middle of the pack.

...

Here’s what happens after we select engineers to work with and send them to onsites:

[Python does best.]

There are no wild outliers here, but let’s look at the C++ segment. While C++ programmers have the most challenging time passing Triplebyte’s technical interview on average, the ones we choose to work with tend to have a relatively easier time getting offers at each onsite.

The Rise of Microsoft Visual Studio Code: https://triplebyte.com/blog/editor-report-the-rise-of-visual-studio-code
This chart shows the rates at which each editor's users pass our interview compared to the mean pass rate for all candidates. First, notice the preeminence of Emacs and Vim! Engineers who use these editors pass our interview at significantly higher rates than other engineers. And the effect size is not small. Emacs users pass our interview at a rate 50… [more]
news  linux  oss  tech  editors  devtools  tools  comparison  ranking  flux-stasis  trends  ubiquity  unix  increase-decrease  multi  q-n-a  qra  data  poll  stackex  sv  facebook  google  integration-extension  org:med  politics  stereotypes  coalitions  decentralized  left-wing  right-wing  chart  scale  time-series  distribution  top-n  list  discussion  ide  parsimony  intricacy  cost-benefit  tradeoffs  confounding  analysis  crosstab  pls  python  c(pp)  jvm  microsoft  golang  hmm  correlation  debate  critique  quora  contrarianism  ecosystem  DSL  techtariat  org:com  org:nat  cs 
june 2019 by nhaliday
Should I go for TensorFlow or PyTorch?
Honestly, most experts that I know love Pytorch and detest TensorFlow. Karpathy and Justin from Stanford for example. You can see Karpathy's thoughts and I've asked Justin personally and the answer was sharp: PYTORCH!!! TF has lots of PR but its API and graph model are horrible and will waste lots of your research time.

--

...

Updated Mar 12
Update after 2019 TF summit:

TL/DR: previously I was in the pytorch camp but with TF 2.0 it’s clear that Google is really going to try to have parity or try to be better than Pytorch in all aspects where people voiced concerns (ease of use/debugging/dynamic graphs). They seem to be allocating more resources on development than Facebook so the longer term currently looks promising for Google. Prior to TF 2.0 I thought that the Pytorch team had more momentum. One area where FB/Pytorch is still stronger is that Google is a bit more closed and doesn’t seem to release reproducible cutting edge models such as AlphaGo whereas FAIR released OpenGo for instance. Generally you will end up running into models that are only implemented in one framework or the other so chances are you might end up learning both.
q-n-a  qra  comparison  software  recommendations  cost-benefit  tradeoffs  python  libraries  machine-learning  deep-learning  data-science  sci-comp  tools  google  facebook  tech  competition  best-practices  trends  debugging  expert-experience  ecosystem  theory-practice  pragmatic  wire-guided  static-dynamic  state  academia  frameworks  open-closed 
may 2019 by nhaliday
Ask HN: What is the state of C++ vs. Rust? | Hacker News
https://www.reddit.com/r/rust/comments/2mwpie/what_are_the_advantages_of_rust_over_modern_c/
https://www.reddit.com/r/rust/comments/bya8k6/programming_with_rust_vs_c_c/

Writing C++ from a Rust developers perspective: https://www.reddit.com/r/cpp/comments/b5wkw7/writing_c_from_a_rust_developers_perspective/

https://www.reddit.com/r/rust/comments/1gs93k/rust_for_game_development/
https://www.reddit.com/r/rust/comments/acjcbp/rust_2019_beat_c/
https://www.reddit.com/r/rust/comments/62sewn/did_you_ever_hear_the_tragedy_of_darth_stroustrup/

Rust from C++ perspective: https://www.reddit.com/r/cpp/comments/611811/have_you_used_rust_do_you_prefer_it_over_modern_c/

mostly discussion of templates:
What can C++ do that Rust cant?: https://www.reddit.com/r/rust/comments/5ti9fc/what_can_c_do_that_rust_cant/
Templates are a big part of C++; it's kind of unfair to exclude them. Type-level integers and variadic templates are not to be underestimated.
...
C++ has the benefit of many competing compilers, each with some of the best compiler architects in the industry (and the backing of extremely large companies). Rust so far has only rustc as a viable compiler.
--
A language specification.
--
Rust has principled overloading, while C++ has wild-wild-west overloading.

Ok rustaceans, here's a question for you. Is there anything that C++ templates can do that you can't do in rust?: https://www.reddit.com/r/rust/comments/7q7nn0/ok_rustaceans_heres_a_question_for_you_is_there/
I think you won't get the best answer about templates in the Rust community. People don't like them here, and there's... not an insignificant amount of FUD going around.

You can do most things with templates, and I think they're an elegant solution to the problem of creating generic code in an un-GC'd language. However, C++'s templates are hard to understand without the context of the design of C++'s types. C++'s class system is about the closest thing you can get to a duck typed ML module system.

I dunno, I'm not sure exactly where I'm going with this. There's a lot about the philosophy of C++'s type system that I think would be good to talk about, but it really requires a full on blog post or a talk or something. I don't think you'll get a good answer on reddit. Learning an ML will get you pretty close to understanding the philosophy behind the C++ type system though - functors are equivalent to templates, modules equivalent to classes.
--
...
You're making a greater distinction than is necessary. Aside from/given const generics, SFINAE and impl where clauses have similar power, and monomorphization is substitution.

Both C++ templates and Rust generics are turing-complete (via associated types), the difference lies in the explicitness of bounds and ad-hoc polymorphism.
In Rust we have implicit bounds out of the "WF" (well-formed) requirements of signatures, so you can imagine C++ as having WF over the entire body of a function (even if I don't think current-generation C++ compilers take advantage of this).

While the template expansion may seem more like a macro-by-example, it's still type/value-driven, just in a more ad-hoc and implicit way.

https://gist.github.com/brendanzab/9220415
discussion  hn  summary  comparison  pls  rust  pragmatic  c(pp)  performance  safety  memory-management  build-packaging  types  community  culture  coupling-cohesion  oop  multi  reddit  social  engineering  games  memes(ew)  troll  scifi-fantasy  plt  chart  paste  computer-memory  code-organizing  ecosystem  q-n-a 
october 2016 by nhaliday
The Next Generation of Software Stacks | StackShare
most interesting part to me:
GECS have a clear bias towards certain types of applications and services as well. These preferences are particularly apparent in the analytics stack. Tools typically aimed primarily at marketing teams—tools like Crazy Egg, Optimizely, and Google Analytics, the most popular tool on Stackshare—are extremely unpopular among GECS. These services are being replaced by tools that are aimed at serving both marketing and analytics teams. Segment, Mixpanel, Heap, and Amplitude, which provide flexible access to raw data, are well-represented among GECS, suggesting that these companies are looking to understand user behavior beyond clicks and page views.
data  analysis  business  startups  tech  planning  techtariat  org:com  ecosystem  software  saas  network-structure  integration-extension  cloud  github  oss  vcs  amazon  communication  trends  pro-rata  crosstab  visualization  sv  programming  pls  web  javascript  frontend  marketing  tech-infrastructure 
april 2016 by nhaliday
Lean
https://lean-forward.github.io
The goal of the Lean Forward project is to collaborate with number theorists to formally prove theorems about research mathematics and to address the main usability issues hampering the adoption of proof assistants in mathematical circles. The theorems will be selected together with our collaborators to guide the development of formal libraries and verified tools.

mostly happening in the Netherlands

https://formalabstracts.github.io

A Review of the Lean Theorem Prover: https://jiggerwit.wordpress.com/2018/09/18/a-review-of-the-lean-theorem-prover/
- Thomas Hales
seems like Coq might be a better starter if I ever try to get into proof assistants/theorem provers

edit: on second thought this actually seems like a wash for beginners

An Argument for Controlled Natural Languages in Mathematics: https://jiggerwit.wordpress.com/2019/06/20/an-argument-for-controlled-natural-languages-in-mathematics/
By controlled natural language for mathematics (CNL), we mean an artificial language for the communication of mathematics that is (1) designed in a deliberate and explicit way with precise computer-readable syntax and semantics, (2) based on a single natural language (such as Chinese, Spanish, or English), and (3) broadly understood at least in an intuitive way by mathematically literate speakers of the natural language.

The definition of controlled natural language is intended to exclude invented languages such as Esperanto and Logjam that are not based on a single natural language. Programming languages are meant to be excluded, but a case might be made for TeX as the first broadly adopted controlled natural language for mathematics.

Perhaps it is best to start with an example. Here is a beautifully crafted CNL text created by Peter Koepke and Steffen Frerix. It reproduces a theorem and proof in Rudin’s Principles of mathematical analysis almost word for word. Their automated proof system is able to read and verify the proof.

https://github.com/Naproche/Naproche-SAD
research  math  formal-methods  msr  multi  homepage  research-program  skunkworks  math.NT  academia  ux  CAS  mathtariat  expert-experience  cost-benefit  nitty-gritty  review  critique  rant  types  learning  intricacy  functional  performance  c(pp)  ocaml-sml  comparison  ecosystem  DSL  tradeoffs  composition-decomposition  interdisciplinary  europe  germanic  grokkability  nlp  language  heavyweights  inference  rigor  automata-languages  repo  software  tools  syntax  frontier  state-of-art  pls  grokkability-clarity  technical-writing  database  lifts-projections 
january 2016 by nhaliday
