
programming-language-development

J concepts in SC | SuperCollider 3.9.0 Help
The J programming language is a successor of APL (http://www.jsoftware.com). Both languages are designed for processing arrays of data and can express complex notions of iteration implicitly.

The following are some concepts borrowed from or inspired by J. Thinking about multidimensional arrays can be both mind-bending and mind-expanding. It may take some effort to grasp what is happening in these examples.
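
To give a rough flavor of that implicit iteration, here is a small sketch using numpy arrays. Numpy is my stand-in here, not something the bookmarked page uses; J and SuperCollider have their own array syntax, and all names below are illustrative.

# Array-style "implicit iteration": operations apply across whole arrays
# without any explicit loop in the program text.
import numpy as np

table = np.arange(12).reshape(3, 4)   # a 3x4 array holding 0..11

# Elementwise operations touch every cell at once.
doubled = table * 2

# Reductions fold along an axis; choosing the axis chooses the implied loop.
row_sums = table.sum(axis=1)          # one sum per row  -> shape (3,)
col_sums = table.sum(axis=0)          # one sum per column -> shape (4,)

# An outer product builds a multiplication table from two vectors,
# again with the iteration left implicit.
times_table = np.multiply.outer(np.arange(1, 4), np.arange(1, 5))

print(doubled, row_sums, col_sums, times_table, sep="\n")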
programming-language-development 
february 2018 by jcrites
Lisp In Less Than 200 Lines Of C
Objective: implement a lambda-calculus-based programming language like Lisp, simply and briefly, in C
programming-language-development 
november 2017 by jcrites
Threaded Code
What is Threaded Code Good for?

Threaded code is a technique for implementing virtual machine interpreters. There are several ways to implement interpreters; some of the more popular are:

Direct string interpretation.
Compilation into a tree (typically an abstract syntax tree) and interpretation of that tree.
Compilation into virtual machine code and interpretation of that code.

If you are interested in performance, the virtual machine approach is the way to go (fetching and decoding are simpler, and therefore faster). If you are not interested in performance (yet), you may still want to consider the virtual machine approach, because it is often as simple as the others.
Threaded code, in its original meaning [bell73], is one of the techniques for implementing virtual machine interpreters. Nowadays, at least the Forth community uses the term threading for almost any technique used for implementing Forth's virtual machine.
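
For a concrete feel, here is a rough call-threading-style sketch in Python. The language choice and all names are mine, purely for illustration; classic direct threading in the [bell73] sense stores code addresses and jumps through them (e.g. via GCC computed gotos), which Python can only approximate with function calls.

# The "compiled" program is a flat list of primitive routines (the thread);
# the inner interpreter just fetches the next entry and calls it.
def lit(vm):
    # Push the literal stored inline in the thread right after this primitive.
    vm["stack"].append(vm["thread"][vm["ip"]])
    vm["ip"] += 1

def add(vm):
    b, a = vm["stack"].pop(), vm["stack"].pop()
    vm["stack"].append(a + b)

def dot(vm):
    print(vm["stack"].pop())

def halt(vm):
    vm["running"] = False

def run(thread):
    vm = {"thread": thread, "ip": 0, "stack": [], "running": True}
    while vm["running"]:
        op = vm["thread"][vm["ip"]]   # fetch the next primitive
        vm["ip"] += 1
        op(vm)                        # "execute" = call it
    return vm["stack"]

# Equivalent of the Forth-ish program: 2 3 + .   (prints 5)
run([lit, 2, lit, 3, add, dot, halt])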
programming-language-development 
november 2017 by jcrites
Fexprs as the basis of Lisp function application or $vau: the ultimate abstraction
Abstract

Abstraction creates custom programming languages that facilitate programming for specific problem domains. It is traditionally partitioned according to a two-phase model of program evaluation, into syntactic abstraction enacted at translation time, and semantic abstraction enacted at run time. Abstractions pigeon-holed into one phase cannot interact freely with those in the other, since they are required to occur at logically distinct times.

Fexprs are a Lisp device that subsumes the capabilities of syntactic abstraction, but is enacted at run-time, thus eliminating the phase barrier between abstractions. Lisps of recent decades have avoided fexprs because of semantic ill-behavedness that accompanied fexprs in the dynamically scoped Lisps of the 1960s and 70s.

This dissertation contends that the severe difficulties attendant on fexprs in the past are not essential, and can be overcome by judicious coordination with other elements of language design. In particular, fexprs can form the basis for a simple, well-behaved Scheme-like language, subsuming traditional abstractions without a multi-phase model of evaluation.

The thesis is supported by a new Scheme-like language called Kernel, created for this work, in which each Scheme-style procedure consists of a wrapper that induces evaluation of operands, around a fexpr that acts on the resulting arguments. This arrangement enables Kernel to use a simple direct style of selectively evaluating subexpressions, in place of most Lisps’ indirect quasiquotation style of selectively suppressing subexpression evaluation. The semantics of Kernel are treated through a new family of formal calculi, introduced here, called vau calculi. Vau calculi use direct subexpression-evaluation style to extend lambda calculus, eliminating a long-standing incompatibility between lambda calculus and fexprs that would otherwise trivialize their equational theories.

The impure vau calculi introduce non-functional binding constructs and unconventional forms of substitution. This strategy avoids a difficulty of Felleisen’s lambda-v-CS calculus, which modeled impure control and state using a partially non-compatible reduction relation, and therefore only approximated the Church–Rosser and Plotkin’s Correspondence Theorems. The strategy here is supported by an abstract class of Regular Substitutive Reduction Systems, generalizing Klop’s Regular Combinatory Reduction Systems.
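
As a loose illustration of the wrapper-around-a-fexpr idea, here is a toy Python sketch, far simpler than Kernel itself; the class names, the tiny expression representation, and the environment are all made up for the example.

class Operative:
    """A fexpr: receives the *unevaluated* operands plus the caller's env."""
    def __init__(self, fn):
        self.fn = fn
    def combine(self, operands, env):
        return self.fn(operands, env)

class Applicative:
    """A wrapper that evaluates the operands, then delegates to an operative."""
    def __init__(self, operative):
        self.operative = operative
    def combine(self, operands, env):
        args = [evaluate(o, env) for o in operands]   # induced evaluation
        return self.operative.combine(args, env)

def evaluate(expr, env):
    if isinstance(expr, str):      # symbol: look it up
        return env[expr]
    if isinstance(expr, list):     # combination: evaluate the operator only
        combiner = evaluate(expr[0], env)
        return combiner.combine(expr[1:], env)
    return expr                    # self-evaluating literal

# $if as an operative: it must not evaluate the branch it discards.
if_op = Operative(lambda ops, env:
                  evaluate(ops[1], env) if evaluate(ops[0], env) else evaluate(ops[2], env))

# + as an applicative: a wrapper around an operative that sums evaluated args.
plus = Applicative(Operative(lambda args, env: sum(args)))

env = {"$if": if_op, "+": plus, "x": 10}
# The unused branch is an unbound symbol, but it is never evaluated.
print(evaluate(["$if", True, ["+", "x", 1], "unbound-would-error"], env))   # -> 11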
programming-language-development  programming-languages  computer-science 
october 2017 by jcrites
graydon2 | "What next?"
"After memory safety, what do you think is the next big step for compiled languages to take?"

Setting aside the fact that "compiled" languages have had various more-or-less credible forms of "memory safety" for quite a long time, I agree (obviously!) that cementing memory safety as table stakes in all niches of language design -- especially systems languages -- continues to be an important goal; but also that there's lots more to do! So I figured I'd take a moment to elaborate on some areas where we're still well short of ideal; maybe some future language engineers can find inspiration in some of these notes.

Before proceeding, I should emphasize: these are personal and subjective beliefs, about which I'm not especially interested in arguing (so will not entertain debate in comments unless you have something actually-constructive to add); people on the internet are Very Passionate about these topics and I am frankly a bit tired of the level of Passion that often accompanies the matter. Furthermore these opinions do not in any way represent the opinions of my employer. This is a personal blog I write in my off-hours. Apple has a nice, solid language that I'm very happy to be working on, and this musing doesn't relate to that. I believe Swift represents significant progress in the mainstream state of the art, as I said back when it was released.

That all said, what might the future hold in other languages?
programming-languages  programming-language-development 
august 2017 by jcrites
The Eta Programming Language
A powerful language for building scalable systems on the JVM
programming-languages  programming-language-development 
january 2017 by jcrites
