Keno Fischer, Julia Computing Co-Founder and Chief Technology Officer (Tools), participated in a Quora Session March 18-23. One of Keno's responses was also featured in Forbes.
The first thing to think about in answering this question is: What is a programming language? If you ask Wikipedia that question, you will find that a programming language "is a formal language, which comprises a set of instructions that produce various kinds of output", which is of course true, but in true encyclopedia form also mostly unhelpful. It does give the right idea, though. Just write down some instructions and some rules for what they do, and voilà, you've created a programming language. If you write down these rules using slightly fancy language, you would call that the specification of your language and have a very good claim to have created a programming language.
Of course, in most instances, programming languages don't start as exercises in specification writing. Instead, one starts with a program that actually does something with the programming language. Generally, this will either be a program that reads in some code written in the programming language and just does what the code says to do as it goes along (an "interpreter" - think following a recipe step by step) or one that translates the source code to the sequence of bits that the actual hardware understands (though this string of ones and zeros could also be considered a programming language that the hardware then interprets). There are a couple of more exotic kinds of programs one could write to implement a programming language (e.g. type checkers, which just check that the source code is well-formed, i.e. allowed by the rules of the language, but don't otherwise execute it) and various variations on compilers and interpreters (hybrid systems, compilers to "virtual hardware", i.e. low-level languages that are designed to be easy to map to actual hardware, compilers from one high-level programming language to another, aka "transpilers"), but the key thing is that these programs "understand" the language in some way. The specification usually comes later, if ever.
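To make the "interpreter" idea concrete, here is a minimal sketch in Julia of an interpreter for a toy arithmetic language. The names `evaluate` and `interpret` are mine, purely for illustration, and I lean on Julia's own parser via `Meta.parse` just to keep the example short:

```julia
# Evaluate a parsed expression tree, "following the recipe" node by node.
evaluate(x::Number) = x                     # literals evaluate to themselves
function evaluate(ex::Expr)
    ex.head === :call || error("only call expressions are supported in this toy language")
    op, args = ex.args[1], ex.args[2:end]
    vals = map(evaluate, args)              # recursively evaluate the operands
    op === :+ ? sum(vals) :
    op === :* ? prod(vals) :
    error("unknown instruction: $op")
end

# Read source code in and just do what it says as we go along.
interpret(src::AbstractString) = evaluate(Meta.parse(src))

interpret("1 + 2 * 3")   # => 7
```

A compiler would walk the same tree but emit instructions for some target instead of computing the answer directly; either way, the program's "understanding" of the language lives in code shaped like this.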
Now, assuming you've started your own programming language, how does one decide what the language should be - what the available instructions are, what the rules and grammar of the language are, what the semantics of various things are, etc.? There are a lot of things to consider when making these decisions: How does it work with the rest of the system? Is it self-consistent? Does it make sense to the user? Will users be able to guess what's going on just by looking at the code? Are we able to efficiently have the hardware do what the language says it should do? Is there precedent somewhere, e.g. in mathematics or in other programming languages, that sets users' expectations for how things should work? If so, and we are deviating from those expectations, are there good reasons to do so [1]? If we are doing something different or unexpected, should we provide both, or should we at least add something to make sure that users expecting the legacy behavior will easily discover the difference? In the end, every decision you make needs to consider two things: 1) the computer that has to run it and 2) the human that has to read it.
Both are extremely important, but there is of course a trade-off between them, and languages differ in where they fall on this spectrum. In Julia, we try very hard to make a program well understood by both (this was actually one of the original motivations for Julia). This isn't easy, and there are hard trade-offs to be made sometimes (e.g. it'd be nice to check overflow for all arithmetic operations, but doing this by default is too slow on current-generation machines), but we try to make sure that a) we make reasonable choices by default, and b) whenever we make a trade-off in either direction, there are ways to let users make the opposite choice while still being able to use the rest of the system without trouble. Julia's multiple dispatch system is essential to making this work (though the details of that are a whole separate topic).
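The overflow example shows both halves of this concretely: by default, machine-integer arithmetic in Julia wraps on overflow, which is what the hardware does fast, but the standard library's `Base.Checked` functions let users opt into the opposite trade-off exactly where it matters to them:

```julia
using Base.Checked: checked_add

# Default: + on machine integers compiles down to the raw hardware
# instruction, so it silently wraps around on overflow.
typemax(Int64) + 1              # => -9223372036854775808

# Opt-in: checked arithmetic trades some speed for safety.
checked_add(typemax(Int64), 1)  # throws OverflowError
```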
[1] E.g. we have a policy of generally spelling out names rather than using short abbreviations, so you might consider "sine" and "cosine" more consistent names than "sin" and "cos", but you'd be fighting against 100 years of mathematical notation. As an example on the other side, a lot of languages like to use "+" to concatenate strings. However, we considered that a serious mistake: + is, on its face, commutative, while string concatenation is not, which is why we use "*" as our string concatenation operator.
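Both conventions are easy to see in a few lines of Julia:

```julia
# Mathematical names keep their traditional short spellings:
sin(pi / 2)                 # => 1.0

# String concatenation uses *, since, like multiplication in many
# algebraic settings, it is associative but not commutative:
"Hello, " * "world!"        # => "Hello, world!"
"ab" * "cd" == "cd" * "ab"  # => false: order matters, unlike +
```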
Well, this one's easy. The nice thing about having your own language is that anything you think is bad about it, you can just change, without having to go through the effort of designing a whole new language from scratch. A more interesting question is what kind of language I would design if I were to do something completely different from what Julia does. Julia's design optimizes for usability and performance. I think it would be fun to work on a language that aggressively optimizes for correctness. This probably means a language with a built-in proof assistant and formal verification capabilities (or something that statically enforces coverage of all corner cases). I haven't thought too deeply about it, but I have used various proof assistants before, and they have always felt to me like writing assembly in their level of tedium. Perhaps that's inherent to the domain, but I'd be interested in exploring what a clean-room proof assistant with a focus on usability would look like. Then again, there's nothing really preventing us from bolting that kind of technology onto Julia to supplement or replace test coverage for particularly critical code. Perhaps language designers are just doomed to keep working on the same language until legacy considerations make it impossible to change anything.