
Week 3: Extension, scopes, and planning Select

I started work on the extension first. It needed two primary features: syntax checking, and compiling to ECL. So, I decided to go in order. For syntax checking, the basic process is:

  1. The language client keeps sending info about the text documents in the IDE to the language server.
  2. The language server acts on these and, at its discretion, can send back diagnostics.

There is a text document manager provided by the example implementation, but it does not emit any information about incremental updates; it only provides the whole document.

Thankfully, there is enough configurability to make your own text document manager. Using the standard document manager as a reference, I added the ability to register listeners for incremental updates.
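As a rough sketch of the idea, assuming the vscode-languageserver API (the manager and its listener names here are made up for illustration):

import {
    createConnection,
    DidChangeTextDocumentParams,
    ProposedFeatures,
} from 'vscode-languageserver/node';

const connection = createConnection(ProposedFeatures.all);

// Hypothetical manager: unlike the stock TextDocuments manager, it hands the
// raw incremental changes to its listeners instead of just the whole document.
type ChangeListener = (uri: string, changes: DidChangeTextDocumentParams['contentChanges']) => void;

class IncrementalDocumentManager {
    private listeners: ChangeListener[] = [];

    onIncrementalChange(listener: ChangeListener): void {
        this.listeners.push(listener);
    }

    listen(): void {
        // hook the raw notification instead of letting the stock manager consume it
        connection.onDidChangeTextDocument((params: DidChangeTextDocumentParams) => {
            for (const listener of this.listeners) {
                listener(params.textDocument.uri, params.contentChanges);
            }
        });
    }
}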

From there on out, I could check whether an incremental update warranted a change in diagnostics (currently I always update anyway), and then push the document to HSQLT for checking. With the reported issues in hand, one can map them to their files and to diagnostic severity levels; and then, done! Pretty errors!
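Mapping the reported issues onto LSP diagnostics is then mostly bookkeeping; a minimal sketch (the issue shape below is made up, not HSQLT's actual one):

import { Diagnostic, DiagnosticSeverity } from 'vscode-languageserver/node';

// hypothetical shape of an issue coming back from HSQLT
interface Issue {
    line: number;    // 1-based line of the problem
    column: number;  // 0-based starting column
    length: number;  // how many characters to underline
    message: string;
    severity: 'error' | 'warning' | 'info';
}

const severityMap: Record<Issue['severity'], DiagnosticSeverity> = {
    error: DiagnosticSeverity.Error,
    warning: DiagnosticSeverity.Warning,
    info: DiagnosticSeverity.Information,
};

function toDiagnostic(issue: Issue): Diagnostic {
    return {
        range: {
            start: { line: issue.line - 1, character: issue.column },
            end: { line: issue.line - 1, character: issue.column + issue.length },
        },
        message: issue.message,
        severity: severityMap[issue.severity],
        source: 'hsql',
    };
}

// then, per file: connection.sendDiagnostics({ uri, diagnostics: issues.map(toDiagnostic) });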

Syntax highlighting

The second part, a compile command, was rather simple. Adding a simple UI prompt for confirming whether the user wants outputs in case of warnings or errors, and writing the result to disk in that case, we get a nice little compile command (just remember to force VSCode to save all the files first).

Can now use the command to compile straight away!
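The command itself is plain VSCode extension API; a simplified sketch of the flow (the command id and the compile call are placeholders, not the actual extension code):

import * as vscode from 'vscode';

// placeholder for the actual HSQLT integration
declare function compileWorkspace(): Promise<{
    hasIssues: boolean;
    writeOutputs(): Promise<void>;
}>;

export function activate(context: vscode.ExtensionContext) {
    context.subscriptions.push(
        vscode.commands.registerCommand('hsql.compile', async () => {
            // force VSCode to save all files before compiling
            await vscode.workspace.saveAll();
            const result = await compileWorkspace();
            if (result.hasIssues) {
                // simple UI to confirm whether outputs are still wanted
                const choice = await vscode.window.showWarningMessage(
                    'There were warnings or errors. Write the ECL output anyway?',
                    'Yes',
                    'No'
                );
                if (choice !== 'Yes') {
                    return;
                }
            }
            await result.writeOutputs();
        })
    );
}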

We can now pause most of the work on the extension, as this will work for testing the majority of HSQL. Once further progress is made on the language side, we can try working on ECL integration or some datatype highlighting.

Pretty diagrams

So I finally got around to updating the general architecture diagram of what the current version of HSQL looks like, and here it is:

Pretty pretty

Lots of arrows and pretty ideas aside, the current repository is meant to be usable as an executable as well as a library. This should make it easy to create extensions based on the system, or even extend it later.

Packaging

Both hsqlt and the VSCode extension are intended to be packaged before use. hsqlt has been configured to use pkg, and producing executables is very easy.

The VSCode extension, though, is another story. I run vsce package and I am greeted with:

But why?

Why is it trying to access the other project? I thought it was covered under .vscodeignore. Here's some context: the extension and the compiler repo are located under a common parent folder, and the compiler is linked to my global npm scope, and then linked back into this repo.

Digging further into the code, I opened a JavaScript Debug Terminal and saw that the error comes from the ignore module. The module attempts to index which files to ignore, and follows symlinks. On top of that, it does not accept files that are outside the current folder (which is exactly what we have here). And voila, the error. I have filed an issue under vsce and I hope to see some way around this. Worst-case scenario, I can unlink the repo before packaging (which might even work).

Select – Playing with scope

Select is a bit of a complex statement in SQL. The fact that it can nest within itself scares me, and having to deal with aliases is also scary. With aliases in mind, I got an idea – scoping. My mentor had mentioned earlier to think about procedures and syntax, and I have been working on scoping, knowing that I'd have to use it eventually. Interestingly, SQL table aliases behave like locally scoped definitions; so perhaps table aliases can be mimicked with scoping.

Now, how to enforce scoping? Functions come to mind. ECL functions are similar to the usual program functions, save one critical difference – no variable shadowing. Once a definition is declared, it cannot be shadowed in any easy way. So, time to go about it step by step. How can I emulate a SELECT query in ECL? I came up with this program flow for what the generated ECL should be shaped like:

1. FROM - create a gigantic source table from all the sources.
  a. Enumerate all the sources - the usual code generation
  b. Apply aliases - redefine them
  c. Join them - JOIN
2. WHERE - apply a filter on the above table
3. SORT - sort the data from above
4. Column filters and grouping - use TABLE to do grouping and column filtering
5. LIMIT/OFFSET from SQL - use CHOOSEN on the above result
6. DISTINCT - DEDUP the above result

This method seems rather useful, as there is a natural “use the result from above” flow to it. Additionally, with this, there is no way that we will be referring to data that has been deleted by a previous step. Honing this, I came up with this simple pseudo-algorithm –

1. Create a function and assign it to a value.
2. Enumerate all the sources that have been changed or need processing - JOINs, select-in-select, and aliases.
  a. JOINs - do a join
  b. Select-in-select - follow this procedure and assign it to its alias
  c. Aliases - assign the original value to the alias
3. Now, take _all_ the sources, and join them all.
4. Take the last generated variable and then do the following in order:
  a. WHERE - apply filter
  b. SORT - sort
  c. Grouping and column filters - TABLE
  d. LIMIT/OFFSET - CHOOSEN
  e. DISTINCT - DEDUP all
5. Return the last generated variable.
6. The variable assigned to the function in step 1 is now the result.

Let’s check this with an example

Here’s a SQL statement:

select *,TRIM(c1) from table1 as table2;

Here’s an idea of what the generated ECL would look like:

__action_1 := FUNCTION
    // inspect the sources - define all aliases
    table2 := table1;
    // the base select query is done - * from SELECT becomes the referenced table name
    __action_2 := TABLE(table2,{table2,TRIM(c1)});
    // return the last generated variable - __action_2
    RETURN __action_2;
END;
__action_1; // this gives the output of the function

Seems pretty strange, yes. But I’ll be working on this next week, and we shall see how far things go.
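To make the flow concrete, here's a rough TypeScript sketch of how a code generator might emit those steps in order; all the names here are illustrative, not HSQLT's actual internals:

// Emit the ECL for one SELECT as a chain of steps, each step consuming
// the variable produced by the previous one (illustrative only).
interface SelectParts {
    where?: string;     // filter condition
    sortBy?: string;    // sort fields
    columns?: string;   // column filters / grouping fields
    limit?: number;     // LIMIT n
    distinct?: boolean; // DISTINCT
}

function emitSelect(source: string, parts: SelectParts, nextVar: () => string): string[] {
    const lines: string[] = [];
    let current = source;
    const step = (expr: string) => {
        const v = nextVar();
        lines.push(`${v} := ${expr};`);
        current = v; // "use the result from above"
    };
    if (parts.where) step(`${current}(${parts.where})`);             // WHERE -> filter
    if (parts.sortBy) step(`SORT(${current},${parts.sortBy})`);      // ORDER BY -> SORT
    if (parts.columns) step(`TABLE(${current},{${parts.columns}})`); // columns/grouping -> TABLE
    if (parts.limit !== undefined) step(`CHOOSEN(${current},${parts.limit})`); // LIMIT -> CHOOSEN
    if (parts.distinct) step(`DEDUP(${current},ALL)`);               // DISTINCT -> DEDUP
    lines.push(`RETURN ${current};`);
    return lines;
}

// emitSelect('table2', { columns: 'table2,TRIM(c1)' }, ...) would produce the
// __action_2 and RETURN lines from the example above.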

Wrapping up for the week

With quite a bit of work done this week, the plan is to pick up the following next week:

  1. Select AST + code generation. Getting them ready as soon as possible is very important to getting things to a usable state. This one point alone is probably really large, as it involves getting a lot of components right to function completely. (In fact, I expect this to take quite a while.)
  2. Looking at syntaxes from other SQL and SQL-like languages.


Week 2: Codegen, AST and VSCode!

My first task this week was picking up the codegen issue from last week.

So to summarize, I needed some good changes to code generation as a whole. A while ago, I had reused some concepts from yacc (or rather, any parser): the idea was to have a stack property, which would be modified accordingly by each statement.

code: ECLCode[];

However, this is difficult to visualize – each node of the parse tree has to make assumptions about the stack. For example, after visiting a definition statement, the select statement will assume that the definition statement has put its code on the stack, and that it can be used from there. This works, but it really does complicate the whole process.

So, it's easier to have each visit() call return an instance of ECLCode, i.e. everything that it has translated. Once last week's bug was fixed and the new parse tree visitor was set up, we could move on to the next step!
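Assuming an antlr4ts-style visitor (which is what a TypeScript ANTLR target would provide), the shape is roughly this; ECLCode here is a stand-in, not the real class:

import { AbstractParseTreeVisitor } from 'antlr4ts/tree/AbstractParseTreeVisitor';

// stand-in for the real ECLCode
class ECLCode {
    constructor(public code: string) {}
}

// Each visit returns the code it generated; there is no shared stack to reason about.
class CodeGenVisitor extends AbstractParseTreeVisitor<ECLCode> {
    protected defaultResult(): ECLCode {
        return new ECLCode('');
    }

    protected aggregateResult(aggregate: ECLCode, nextResult: ECLCode): ECLCode {
        // children simply concatenate their translated output
        return new ECLCode(aggregate.code + nextResult.code);
    }
}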

Output

Taking a detour, OUTPUT is a very important statement. Most programs process some input and give some output and, well, the OUTPUT statement allows for just that.

Working out the particular syntax, we can see one issue from the beginning: ECL has an oddity with OUTPUT statements –

// this works
OUTPUT(someSingularValue,NAMED('hello1'));
// this works
OUTPUT(someTableValue,NAMED('hello2'));

//this does not work
OUTPUT(someSingularValue,,NAMED('hello3'));
//this works
OUTPUT(someTableValue,,NAMED('hello4'));

// this should not work
OUTPUT(someSingularValue,'~output::hello5');
// but this doesn't work either!
OUTPUT(someTableValue,'~output::hello6');

// Although this works! What
OUTPUT(someTableValue,,'~output::hello7');
// this fails to compile too
OUTPUT(someTableValue,NAMED('hello8'),'~output::hello8');
// but this works!
OUTPUT(someTableValue,,NAMED('hello9'),'~output::hello9');

So, here’s the OUTPUT statement documentation, and on seeing it, it becomes rather obvious what is happening.

Tables require a record expression, which is optional in the sense that it may be left empty. For the first table output (hello2), the second argument gets recognized as an expression, so it passes as correct syntax. But for the latter ones, only the table-with-record variant can apply, and the syntaxes with only one comma fail.

For reference, we can use SQL. SQL deals almost exclusively in tables. So, following that, I decided that the easiest (and maybe the laziest) option is to not support singular (i.e. non-table) data outputs. Working that out will have two parts:

  1. Errors on trying to output singular values.
  2. Warnings on trying to output possibly singular values – e.g. values whose type is unknown (any).
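A sketch of what that check could look like (the data-type tags are made up; HSQLT's real type model will differ):

// illustrative data-type tags
enum DataKind {
    Table,
    Singular,
    Any, // e.g. an import whose type we haven't inferred
}

type Reporter = (severity: 'error' | 'warning', message: string) => void;

function checkOutputArgument(kind: DataKind, report: Reporter): void {
    if (kind === DataKind.Singular) {
        report('error', 'OUTPUT only supports tables; singular values are not supported');
    } else if (kind === DataKind.Any) {
        report('warning', 'OUTPUT argument may not be a table; only tables are supported');
    }
}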

All right, with all that done, we can get around to getting OUTPUT working.

ASTs working!

Codegeneration

With the codegeneration issues out of the way, it was a full-speed-ahead moment. And in no time at all, it's done: OUTPUT and assignments.

A nice little warning too! (Because we don’t infer types on imports yet)

VSCode

All right, this is the meat of what HSQL is about: an easy way to deal with HSQL code. And usually that means a not-so-easy way for us to deal with HSQL code. I'd done some language server work before (shameless plug here), and one of the things that was interesting was the recommendation to use Webpack to bundle the extension. The idea of the language server/client is very simple. There are two components:

  1. Language Client – the extension that plugs into the IDE. It feeds the language server the IDE's contents and whatever actions the user may be taking, and then takes any language server feedback and feeds it into the IDE.
  2. Language Server – a separate program that runs (maybe independently), and listens and communicates with the client over an API.

So, the way the extension is coded in VSCode is that you just start up the language server (or connect to it) and then communicate with it (using JSON-RPC).
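On the client side, that boils down to very little code. A minimal sketch with vscode-languageclient (the ids, names, and paths are placeholders):

import * as path from 'path';
import { ExtensionContext } from 'vscode';
import { LanguageClient, TransportKind } from 'vscode-languageclient/node';

let client: LanguageClient;

export function activate(context: ExtensionContext) {
    const serverModule = context.asAbsolutePath(path.join('dist', 'server.js'));
    client = new LanguageClient(
        'hsql',                   // placeholder language id
        'HSQL Language Server',
        {
            // spawn server.js and talk to it over JSON-RPC
            run: { module: serverModule, transport: TransportKind.ipc },
            debug: { module: serverModule, transport: TransportKind.ipc },
        },
        {
            documentSelector: [{ scheme: 'file', language: 'hsql' }],
        }
    );
    client.start();
}

export function deactivate() {
    return client?.stop();
}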

It was apparent that what I needed was two projects under a parent repo: the parent repo has all the extension-related content, and the child projects hold the language server and the client. With that, I had to create a webpack config to transpile the TypeScript projects and generate two outputs: client.js, which would be executed as part of the extension, and server.js, which could be executed separately, but was mainly going to be bundled with the client.
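Roughly, the shape I was aiming for is something like this (a simplified sketch; the actual paths and options differ):

import * as path from 'path';
import { Configuration } from 'webpack';

const config: Configuration = {
    target: 'node', // VSCode extensions run in a Node.js context
    entry: {
        client: './client/src/extension.ts',
        server: './server/src/server.ts',
    },
    output: {
        path: path.resolve(__dirname, 'dist'),
        filename: '[name].js', // -> dist/client.js and dist/server.js
        libraryTarget: 'commonjs2',
    },
    externals: {
        vscode: 'commonjs vscode', // provided by the VSCode runtime; never bundle it
    },
    resolve: { extensions: ['.ts', '.js'] },
    module: {
        rules: [{ test: /\.ts$/, exclude: /node_modules/, use: ['ts-loader'] }],
    },
};

export default config;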

I wrote up a nice webpack config, and it threw an error. Oh no.

This took way too long. The majority of the day, in fact; and I was knee-deep in trying to interpret what the compilation errors meant before the fix became apparent:

One entry.

transpileOnly: true

The trick was to tell ts-loader to not bother itself with anything other than transpiling, and finally, it worked!
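In other words, the ts-loader rule from the config sketch above becomes:

rules: [
    {
        test: /\.ts$/,
        exclude: /node_modules/,
        use: [
            {
                loader: 'ts-loader',
                options: {
                    transpileOnly: true, // skip type checking; just transpile
                },
            },
        ],
    },
],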

Pretending that didn’t happen, I pulled in older resources such as the TextMate grammar for HSQL (it works pretty well for now), added in some support code, and voila: a working extension. (I mean, it didn’t do much of anything yet, but at least it compiled.)

It knows when files are changing too! And syntax highlighting!

What was really interesting is that while the whole project consumed at least 100MB of space, Webpack easily pulled it down to less than 10MB for the two JS files combined. This is a good indicator of why bundling is important in JavaScript.

Wrapping up

With all this, I plan to move into the next week with the following tasks directly lined up:

  1. Have a compilation command for the HSQL extension.
  2. Explore some webpack optimizations – perhaps chunk splitting, since the client and server will share the hsqlt module in common.
  3. Get some syntax checking support in. This will remain a good way to allow people to test as the compiler tool evolves.
  4. Start work on SELECT’s AST. This has been long pending, and it’s time to start on possibly the biggest section of HSQL.