it will likely be a formal specification language. Instead of indicating the "how", as we do today, we'll specify the "what". Rather than describing step by step how to implement a certain algorithm, we'll state the requirements our program must satisfy, and the compiler will work out an efficient algorithm automatically.
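To make the what-versus-how distinction concrete, here's a minimal sketch in Python (the function names are my own, purely illustrative). An imperative sort spells out every step; a declarative "specification" only states the properties a correct result must have. A hypothetical spec-language compiler would derive the algorithm from the spec on its own; here we can only check the spec against a candidate output.

```python
from collections import Counter

# Imperative "how": spell out every step of the algorithm.
def insertion_sort(items):
    result = []
    for x in items:
        i = 0
        while i < len(result) and result[i] < x:
            i += 1
        result.insert(i, x)
    return result

# Declarative "what": state only the properties a correct answer must have.
# (In an imagined spec language, this predicate would BE the program.)
def satisfies_sort_spec(inp, out):
    same_elements = Counter(inp) == Counter(out)          # a permutation of the input
    ordered = all(a <= b for a, b in zip(out, out[1:]))   # non-decreasing order
    return same_elements and ordered

data = [3, 1, 2]
assert satisfies_sort_spec(data, insertion_sort(data))
```

Note that the spec never mentions loops, comparisons-in-sequence, or insertion positions; that is exactly the gap a future compiler would have to bridge.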
The spec language may or may not be text based. I do not believe we will ever overcome the problem of linguistic ambiguity, even with a computer of equal or greater intelligence to a human. Humans still misinterpret each other. Computers will always take what we say painfully literally. This is an unavoidable, though not necessarily intuitively obvious, consequence of various inviolable premises we hold about the operation of a computer. But higher-level, more intuitive ways to state a problem unambiguously do exist.
I don't think the Star Trek depiction of people programming holodecks is entirely far off. It will seem more obvious and intuitive, but computers will still make catastrophic errors rooted in the ambiguity of natural language.
The reason humans seem to understand each other easily in a way that computers can't is our shared culture, our shared understanding that we don't always say what we mean (sympathy), and our ability (exercised only occasionally) to continuously clarify and disambiguate our meaning. Our ability to communicate with one another is largely the result of the common shape of our bodies and perceptions, and of our ability to imagine ourselves as other people (an ability we can see more clearly in those who partially lack it, such as people on the autism spectrum, including those with Asperger's). This lets us understand, in an extremely intimate way, why someone else may be making a specific pattern of noises with their vocal mechanisms, or gesturing in a particular way with their face and body.
We derive meaning from these patterns of behavior only by imagining what we would be thinking if we were doing those things. Computers lack human bodies, vocal mechanisms, and faces, and consequently lack any ability to sympathise with a human. Any attempt to make a computer truly understand us without making the computer into a human itself will be largely fruitless.
Despite all that, human comprehension doesn't work quite as well as most of us imagine it does. Consider the challenge each of us faces as programmers in determining the shape of the program a client requires. It doesn't happen instantaneously. A successful program is the result of continuous revision over the course of weeks, months, and years. That revision process would be very challenging, perhaps impossible, to replace with an automatic one. We can better automate the repetitive work, but we will never be able to artificially generate a perfect sympathy for our intent, artistic vision, and personal and ethical needs.
Banking on a future AI is a bet I wouldn't make, even if it were possible. We should think just as hard about what we would lose in such a proposition as about what we would gain.