mgreenblog

JS2/ES4

After reading Brendan Eich's annotated slides from @media Ajax, I realize I'd been of two minds: on the one hand, I'd started to feel like JavaScript was hopeless -- BASIC with closures and a dynamic object system that precludes efficient compilation; on the other, I'd started to feel the JS FP elitism that Brendan so acutely calls out. The structured type fixture example in Brendan's talk is particularly convincing -- I could use that, definitely.
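
To make this concrete, the sort of structural type fixture I have in mind might look roughly like this (my own sketch, not a slide from the talk; syntax approximate):

    type Point = { x: double, y: double };

    function norm(p: Point): double {
        return Math.sqrt(p.x * p.x + p.y * p.y);
    }

The annotation on p documents the expected shape and, as I understand the proposal, is actually checked rather than being a mere comment.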

Then again, I'm not sure I'll ever get the chance. It's interesting that PL -- like many other fields -- is often defined more by the tools it happens to use (or happened to use at some point in the past) than by its problems of interest. What circumstances determine which features of a programming language actually get used? How can feature use be encouraged (or discouraged)?

Dustin Diaz (dustin@dustindiaz.com)

What is it about closures and JavaScript's native lambda structure that you have problems with? Since I've been called out as somewhat of an elitist, surely you have a good explanation of why a structured type fixture is going to solve all your problems. Nevertheless, don't mind Brendan's banter too much. It was never my intention to start a rivalry. I think ES4 has great intentions, and much thought has been put into it. JavaScript (as it is) and ES4 are simply different languages. Claiming that JS 2.0 is backwards compatible with what JavaScript currently is only sounds to me like several bugs will be introduced -- and if not that, a plethora of "gotchas" we'll have to watch out for.

Michael Greenberg (mike@weaselhat.com)

My comment may not have been clear -- I count myself among the JS FP elitists. I think that closures in JS are great, a huge win, the main reason the language is worthwhile.

I don't think structured type fixture will solve all of my problems, but it would simplify things. Type declarations are a nice form of documentation, and one that I occasionally miss in unannotated languages.

I'm not up-to-date on the backwards compatibility issues, so I can't really speak to that.

Dustin Diaz (dustin@dustindiaz.com)

I’m not up-to-date on the backwards compatibility issues, so I can’t really speak to that.

Neither am I, so it's hard to say what's in store. I'm sure Brendan has very good intentions, so most likely things will need to be clarified on the ES4 working group's end.

leo (lmeyerov@eecs.berkeley.edu)

The doc is @ http://www.ecmascript.org/es4/spec/incompatibilities.pdf.

Backwards compatibility is really a non-concern for me - just throw in a version annotation as part of your script tag and a semantic analyzer to detect incompatibilities.
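
For instance, the version annotation could be something like this on the script tag (the attribute value here is hypothetical, just to illustrate the idea):

    <script type="application/ecmascript;version=4">
        // ES4-only code; engines that don't recognize the type skip the block
        var x: int = 42;
    </script>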

Some of the new type stuff is a surprise. Brendan commented on my blog that there is arrow type support, but I had never noticed that in the spec, and there are a bunch of type annotation rules I have never seen in other languages before, which concerns me a little. And what the heck is up with generic functions? I may be missing more, though the bulk of it I've been using for years via ActionScript.

OTOH, parameterized arrow types (even if they can't be recursive) are a big chunk of what I want for user-level stuff. I never investigated it closely, but something for type states would also be good, considering how much event-oriented stuff occurs. Perhaps my feelings will change as I use more ES4 features and I'll come to consider OCaml the weaker language.

One surprise is the lack of type inference. Jono has some ridiculous code snippets - types are useful to him for performance reasons, but such decoration is way over the top if you argue it is for documentation purposes.

Brendan Eich (brendan@mozilla.org)

Backward compatibility is easy to demonstrate with a reference implementation and commercial implementations coming up. Asserting it will fail without evidence is a form of FUD, or trash talk. Banter and "rivalry" (please!) aside, there are serious issues here, which involve the conduct of the biggest software company of them all.

leo: functions are structural types already, not nominal types. A type expression of the form function (a:int):string is long-hand for int -> string (the latter is not proposed syntax -- should it be? Probably not: (int, [boolean, boolean]) -> string requires a bottom up parse or a top-down cover-grammar parse and semantic feedback). Our "arrow" is spelled f-u-n-c-... ;-)
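
Spelled out (a rough sketch in the proposed surface syntax, not normative):

    // A named structural function type -- the long-hand for what an
    // arrow syntax would write as int -> string:
    type StringOfInt = function (a: int): string;

    // The harder case from above, which an arrow form would write as
    // (int, [boolean, boolean]) -> string:
    type Harder = function (a: int, b: [boolean, boolean]): string;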

Hindley-Milner type inference can't be grafted compatibly onto JS1's edifice. Unannotated variables are not opportunities for H-M because lack of annotation means the same thing as annotation with the "any" type: var x = 42 is the same as var x: * = 42. "Local type inference" a la C# 3 is probably unnecessary given quality compilers and VMs (those tracing JITs).
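
Concretely (a sketch, not spec text):

    // Leaving off the annotation means the same thing as annotating
    // with the "any" type:
    var x = 42;       // implicitly  var x: * = 42
    var y: * = 42;    // the same thing, written out

    // Neither is an invitation for H-M to conclude "int"; with type *,
    // later writes like these remain legal:
    x = "forty-two";
    y = null;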

Inference aside, type annotations are not documentation -- they have real effects, throwing type errors by default, or doing shape-tests or assertions (like), or wrapping after a like test and then checking every access (wrap).
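
Roughly, and with surface syntax that may differ in detail (getValue below is just a stand-in for some untyped producer):

    type Point = { x: double, y: double };

    var p: Point = getValue();       // default: a type check on every write to p
    var q: like Point = getValue();  // like: a one-time structural shape test
    var r: wrap Point = getValue();  // wrap: a like-test, then a wrapper that
                                     // re-checks every subsequent access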

/be

Michael Greenberg (mike@weaselhat.com)

I don't think anyone is asserting that backwards compatibility will be a bust. In any case, I'm with Leo as regards backwards compatibility: I don't really care, but feel confident that implementors will do whatever works for them. I'm very excited about ES4 and the remarkable opportunity to do something really positive for the Web.

I'm in the middle of reading the ES4 typing tutorial, and I'm impressed by the engineering wins. wrap is probably the first contract I've seen that I'd actually like to use. (Sorry, PLT-ers.) Have the typing judgments been formalized somewhere I can look at them?

While standard type inference won't work in ES4, I'm sure static analysis (say, abstract interpretation over the powerset of types) could prove that certain values always have certain types. But for that to be a win, the analysis would have to be able to accurately capture types interprocedurally, since a JIT-compiling VM will do just fine on the intraprocedural cases.

leo (lmeyerov@eecs.berkeley.edu)

Brendan, I picked up on the funny syntax after rereading the spec given your comment. I don't entirely buy the parsing argument: if the concern is performance, what about parallel parsers and intermediate formats? Slow parsing in "eval" is an acceptable trade-off.

I haven't investigated what it means to name a type in this system using the type declaration syntax (nor do I even remember right now whether classes are structurally or nominally subtyped in the proposal) -- I picked (a flawed) structural subtyping for my students' assignment for pedagogic reasons.

Finally, I'm not convinced about the mismatch with TI; I just deleted a large block of text about technically why, because I don't want to get into it on Mike's blog. My view is that it boils down to the real debate being about purpose (documentation, checking, and speed) and pluggability, and, once those are picked, taste (over-generality and too much implementation freedom -- such as supporting a variety of typing systems -- can fragment the core developers and alienate normal developers). For this sort of thing, I like to see validation (the onus is on me to read the wiki notes), and I largely defer to tried-and-true wisdom. The decision to support type checking only is fine, though I do expect more of Adobe and others who will implement environments for it to automatically infer annotations for generated code.

Mike, Cormac Flanagan's work on gradual typing is probably the best place to start. In terms of fun with contracts and types, Phil Wadler has recently been working on adding blame to type systems, which is awesome. I have seen asserts/contracts be useful for static analysis as well (improving the performance of sketching, some separation logic work), since they provide hints about properties to verify (the latter case) and ways of escaping bad search paths earlier in synthesis (the former). While dynamic invariant-inferring tools turn up a lot of garbage, if you're going to bother writing an invariant as a comment, I've come to believe that it should at least be executable, or that a syntax-directed translator should be able to turn it into real code :)

leo (lmeyerov@eecs.berkeley.edu)

Hindley-Milner type inference can't be grafted compatibly onto JS1's edifice. Unannotated variables are not opportunities for H-M.... Inference aside, type annotations are not documentation -- they have real effects, throwing type errors by default, or doing shape-tests or assertions (like), or wrapping after a like test and then checking every access (wrap).

To be clear, I find the interesting point here to be not so much your first statement (the technical difficulty of doing TI on JS2) but the latter. For example, while an object may have some methods, some may be hidden due to namespaces, so guaranteeing this hiding at runtime would be important for capability-based security. Introducing such features is dangerous, because it locks down freedom in the type system later. Backwards compatibility with the previous notion of strictness, however, I don't think is as much of a concern (though it can be dealt with).

How to do this well is unclear, and that makes the whole pluggable type system camp more believable to me (moving them that much closer to my in-pile...). It raises the question of the cost of adding something to the type system of an evolving language -- thus, in a sense, namespaces are more contentious than type checking (they can be encoded with closures, but that is already a cost). Maybe going through "On the Expressive Power of Programming Languages" again as a lens for this issue would help both make more sense to me.

Brendan Eich (brendan@mozilla.org)

Michael@10:40am -- unfortunately, Microsoft and Yahoo! have been asserting backward incompatibility -- this was part of their FUD campaign, which led to the "rivalry" or "banter" Dustin mentioned. But let's get past it, I'm glad you don't buy it a fortiori. Type judgments were written up in Cormac Flanagan's ValleyScript paper, but he has been busy fixing bugs and improving the formalisms. If you can read past the staleness, that paper is linked from our wiki: http://wiki.ecmascript.org/doku.php?id=resources:resources&s=valleyscript

A revised version may be forthcoming. I'll check with Cormac when he's back from holiday travels.

ES4 will not mandate any complex and costly analyses (certainly not interprocedural partial evaluation or AA), since we are supporting mobile device targets. I should point out that tracing JITs inline for free, so they do inter-procedural, precise, runtime type-based specialization. This is akin to PICs in Strongtalk, but without the whole-method wastefulness.

leo@1:50am -- you are not the buyer paying for phones with slow CPUs, tiny caches, and not enough RAM. Seriously, we are not going to mandate fancy compile-time analyses. But we are proposing a standard strict mode, for the same results (minimally) everywhere, and normative rules about the runtime effects of strict mode (to preserve interop).

Nominal types (classes, interfaces) are related by names given in extends and implements clauses (a default extends clause names Object). The type name is the thing. Structural types may be named via type T = ..., but that does not make them relate by name -- rather, they are records, functions, and unions related by field names/types, argument/return types, and union arm types.
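
For instance (a sketch, not spec text):

    // Nominal: related by the names in extends/implements clauses.
    class Shape { }
    class Circle extends Shape { var r: double }

    // Structural: naming the type is just an abbreviation; compatibility
    // is decided by the fields and their types, not by the name.
    type Pt = { x: double, y: double };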

I'm not sure how you can mandate TI to get better interop than we will get by sticking with backward compatibility. We don't want ES4 code to require rewriting taxes due to implicit shifts in meaning (var x = 42 => var x:int = 42). We really are extending ES3 rather than changing its core runtime semantics incompatibly. But I'd be happy to hear more. Please feel free to use the es4-discuss@mozilla.org list.

Contracts and fancier static analyses are welcome areas for people to research and develop as add-ons to the ES4 type system, but we are not out to make that system pluggable. The web is a hot interoperation crucible, and it will melt down variations and force one default type system on us if our normative spec doesn't mandate one.

/be

leo (lmeyerov@eecs.berkeley.edu)

Brendan, I am not sure what you meant by "you are not the buyer paying for phones with slow CPUs, tiny caches, and not enough RAM." My interpretation is that ES4 aims to better support these devices, and your comment suggests you think that the features I just described would negatively impact such a goal. However, doing computation in parallel lowers power consumption (you can use cheaper chips and tweak them for power, not speed). IRs are a convenience for eliminating even more client computation. The assumptions about the intended family of VMs seem somewhat odd for a future-looking language. Asynchrony (whether it is officially in the language semantics or not) is an essential feature of the two most common flavors of ES4, and that just screams compiler-driven and VM-level optimizations.

Source parsing complexity is important, but worrying about it at the standards level for ES4 seems counterproductive. My recent readings on the difficulties of optimizing Python (particularly the paper on inferring atom types) have shocked me with how debilitating dynamic loading of scripts in an expressive language can be; my concerns for source files would be usability (already addressed), backwards compatibility (already addressed), and modularity (to prevent the Python problem). However, this is biased, as I view JS/ES4 as likely bytecode for popular declarative systems and, for the time being, the web's only realistic version of the CLR.

I had enough hands-on involvement with the migration to AS3 that I'm treating this as more of a learning opportunity :) Thanks for the details.

Brendan Eich (brendan@mozilla.org)

Hi Leo, I hope I am reading your comment @1:50 am correctly. The four paragraphs there consist of these six points from what I can tell (my interpretations and responses in parens, please correct me if I'm misreading you):

1a. parsing slowness is tolerable (eval, but also script tag handling source download)

1b. use parallel parsing (I'm with you, as much as we can without breaking the DOM run-to-completion execution model)

1c. use intermediate formats (over the wire, presumably -- here I'm less enthusiastic -- view-source is a virtue of the web, almost unique in widely used programmable systems)

2. Structural or nominal subtyping? (both, see the evolutionary programming tutorial at http://www.ecmascript.org/es4/spec/evolutionary-programming-tutorial.pdf)

3. TI uncertainty with deferred technical debate to avoid crowding Mike's blog, and then words that I interpret as meaning that you thought the ES4 type system was only for the strict-mode type checker, not for standard-mode runtime semantics -- that type annotations by default are merely documentation. (Not so: the type system is for both strict and standard modes: a type annotation in standard mode means runtime checking on every write to the annotated variable.)

4. References to Cormac Flanagan's work, also Phil Wadler's work (again seeming to suggest you think ES4 in standard mode ignores type annotations).

I hope this helps us understand each other. Mainly I hope to clarify that type annotations have runtime effects in standard mode, and that we have thought about contracts a bit (mostly Dave Herman has; Robby Findler pulled my ear convincingly at ICFP 2005 too), but we have deferred them to a future edition. We believe that the current ES4 proposals are future-proof -- there is room in the type annotation syntax for distinguished, statically undecidable value assertions. See http://wiki.ecmascript.org/doku.php?id=proposals:contracts.

Let me try to respond coherently to your latest comment:

  • We're agreed on wanting to target mobile devices. We are connected to Ras Bodik and his team at UCB (http://parallelbrowser.blogspot.com/2007/09/hello-world.html). More on this in due course.

  • Nevertheless, we do not intend to standardize a JS (ECMAScript) IR any time soon. If we do reach this promised land, it will probably be an arithmetically coded AST, so both the web-standard (and still crucial) view-source advantage and the "safe semantics for valid syntax" guarantee (see http://portal.acm.org/citation.cfm?id=1269696&dl=&coll=GUIDE) still apply.

  • ES4 preserves ES3/JS1's ability for late-loaded code to upset arbitrary invariants, but allows new code to "opt in" to immutable type and other bindings, 'like'-typed API arguments and results, type-annotated code, etc.

Hope this helps,

/be