I think most of the effort should go into building Apertium corpora.
English is a Germanic language, and Apertium lets languages that share the same basic rules be more transparent to one another (although corpora work in one direction only).
The problem is that corpora are not easy to build: creating a working corpus takes many hours of work. It would be great if there were a simpler way of creating and maintaining them using AI, or even a graphical tool to aid in this mission.
Building "stupid" TMs with statistical data can be sometimes false and they need lots of AI to get better, building good corpuses with the right rules for every language will help computers understand human language in source and destination languages.
Why is it so important?
I want to present a problem I had translating text from Arabic to Hebrew. Google's mechanism does the following: the Arabic text is translated to English, and the English is then translated to Hebrew. You can't even begin to imagine how strong the phrase "lost in translation" is in this case.
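To make the problem concrete, here is a minimal Python sketch of that pivot scheme. The `translate` function is a hypothetical stand-in for any MT engine's API call (it is not Google's actual interface); the point is purely structural: whatever the first hop gets wrong becomes the "ground truth" input for the second hop.

```python
def translate(text: str, source: str, target: str) -> str:
    """Hypothetical placeholder for a real MT engine call."""
    raise NotImplementedError

def pivot_translate(text: str, source: str, target: str, pivot: str = "en") -> str:
    # First hop: source -> pivot. Any mistranslation here is now baked in.
    intermediate = translate(text, source, pivot)
    # Second hop: pivot -> target, compounding whatever the first hop lost.
    return translate(intermediate, pivot, target)

# Arabic -> Hebrew goes through English, so there are two chances to lose meaning:
# result = pivot_translate("...", source="ar", target="he")
```

Two hops mean the errors multiply rather than add, which is why the result can drift so far from the original.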
By the way, Microsoft's translator does a much better job than Google's when translating from English to Hebrew.
Hebrew and Arabic share basic rules; instead of using English in the middle, we can use a mechanism that takes advantage of their similarities to translate between them without going through a third language.
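For comparison, a direct Apertium pair is a single hop. The sketch below shells out to the real `apertium` command-line tool, which reads source text on stdin and writes the translation to stdout; note that the "arb-heb" pair name is hypothetical (no such released pair exists), it stands in for whatever Arabic-to-Hebrew pair would get built.

```python
import subprocess

def apertium_translate(text: str, pair: str) -> str:
    # The apertium CLI takes the pair name as an argument and
    # translates stdin to stdout, e.g. `echo "hola" | apertium spa-cat`.
    result = subprocess.run(
        ["apertium", pair],
        input=text,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

# One hop, with transfer rules written for this specific pair:
# print(apertium_translate("...", "arb-heb"))
```

One hop through rules written for the specific pair gives the related languages' shared structure a chance to survive the trip.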
The same goes for Czech and Slovak: apparently many Czech translators use the Slovak translation instead of translating from English, and vice versa.
Apertium website: http://www.apertium.org/