Ion Grammar & AST
0:09 Recap and set the stage for the day with some admin stuff [1][2], including a tease of the annotated episode guide
🗹
2:44 Summarise off-stream work
📖
4:33 Safe overflow checking
🗩
6:41 Continue to summarise off-stream work, including the addition of buf_end()
📖
10:11 Plan for the day, doing the Ion lexer parsing
🗩
12:13 Remove all the parse_expr() functions
13:13 Create syntax.txt and spec out Ion's syntax
23:31 Note the need to handle two-character tokens
🗩
25:06 Enable next_token() to check for integer literal overflows, and introduce syntax_error()
28:02 Step through lex_test() to see that it worked
🏃
28:11 Add an integer literal overflow test in lex_test()
28:22 Successfully hit our integer literal overflow
🏃
28:44 Change val to int_val of type uint64_t in the Token struct and change lex_test() to test overflowing this
32:00 Successfully overflow our integer
🏃
32:10 Introduce scan_int() to do the integer-based work from next_token(), and enable it to handle hexadecimal numbers
39:21 Step through lex_test() and see what scan_int() produces
🏃
39:54 Enable scan_int() to iterate through the character stream to the null terminator
41:26 Step through scan_int() to see that it works
🏃
42:16 Enable lex_test() to check for UINT64_MAX in hex, and next_token() to handle whitespace
45:01 Step through lex_test() to see that it works
🏃
45:09 Add a hex overflow test in lex_test()
45:55 Step through lex_test() and successfully hit our overflow error
🏃
46:37 Enable scan_int() to handle octal numbers, and next_token() to correctly start a new token after skipping whitespace
50:55 Step in to next_token() to see that octal works
🏃
51:14 Enable scan_int() to handle binary numbers
52:30 Step through lex_test() to see that all our integer parsing works
🏃
53:05 Spec out Ion's float syntax based on that of C [3]
56:43 Enable next_token() to establish the number prefix to pass to scan_int(), in preparation for parsing floats
1:01:56 Introduce scan_float()
1:06:22 Step in to next_token() to see what it does
🏃
1:07:01 Fix next_token() to position the stream at token.start rather than bookmark
1:07:12 Step through scan_int()
🏃
1:07:36 Make next_token() break
1:07:42 Step through lex_test() to see that it works
🏃
1:07:49 Introduce float literal tests in lex_test(), including assert_token_float()
1:08:33 Step through lex_test() to see that float literal testing works
🏃
1:08:40 Introduce more float tests in lex_test()
1:09:15 Run it to see that it works
🏃
1:09:18 Add an exponent test in lex_test() and enable next_token() and scan_float() to handle the 'e' character [4]
1:11:28 See that it works, and consider this number parsing complete enough for now
🏃
1:12:11 Enable next_token() to handle '\' and '"' and introduce a TokenMod enum for next_token(), scan_int() and scan_float() to use
1:16:39 Step through lex_test() to see that we pass
🏃
1:17:04 Implement scan_char() and add char literal tests in lex_test()
1:22:08 Find that these tests work
🏃
1:22:15 Add a '\n' test in lex_test()
1:22:26 Find that this doesn't work and investigate why
🏃
1:22:55 Escape the \ in lex_test()
1:23:03 Try again to see that it works
🏃
1:23:07 Add a test for '\x' in lex_test()
1:23:11 Correctly hit our error about \x not being supported
🏃
1:23:31 Implement scan_str(), adding a *str_val to the Token struct and string literal tests in lex_test(), and introducing assert_token_str()
1:32:05 Find that that works
🏃
1:32:09 Add tests for \-escapes in lex_test()
1:32:42 Find that that all works
🏃
1:32:59 Q&A
🗩
1:33:52 miotatsu: @pervognsen Are you going to handle a standalone 0 as a decimal integer literal?
🗪
1:34:19 garretttypes: The funny thing about recursive descent (especially with bookmarks like Per is using) is that it's not actually LL(1)
🗪
1:35:27 enemymouse: Did he fix the hex table uppercase 'E'?
🗪
1:35:33 Fix typo in char_to_digit
1:35:41 pmttavara: Yeah yeah, technically 0 is an octal literal, we get it
🗪
1:35:45 davechat: @pervognsen Do you think Ion will support multiline strings or would that be too much a deviation from C?
🗪
1:37:05 oaeui: He mentioned before that some things wouldn't actually be LL(1), but he definitely wants to avoid unbounded lookahead
🗪
1:37:39 orcnz29: @pervognsen For the second compiler pass, how will you resolve the declarations in an order-independent way? Will you register them on the first pass or is there another technique to achieve this?
🗪
1:39:24 c__jm: @pervognsen It's quite a bit simpler than I thought as well. I think people blow systems-level problems up to be much scarier than they are
🗪
1:41:11 Integer vs float scan-ahead
🗩
1:42:33 Stop the stream for now
🗩