This paper describes a new technique for speech synthesis based on using speech databases at different stages of the text-to-speech process. Speech databases are used for storing, selecting, and concatenating speech segments; the database units are phones in different segmental and prosodic contexts. Pitch-synchronous segmentation and labeling of the databases allows both segmental and prosodic information to be stored. The unit selection algorithm is based on criteria derived from the categories of the phonetic-prosodic annotation of the speech databases and works without spectral matching. The output of the unit selection module is an acoustic phonetic-prosodic transcription, which the acoustic processor uses to generate the speech waveform. The described approach is implemented in an experimental Ukrainian TTS system. Several databases of non-professional speakers with different speaking styles have been created and tested.
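
To illustrate the idea of purely categorical unit selection without spectral matching, the following Python sketch picks a database unit whose phonetic-prosodic annotation categories best match a target label. The label fields (segmental context, stress, phrase position), class names, and scoring rule are illustrative assumptions, not the paper's actual annotation scheme or selection criteria.

```python
from dataclasses import dataclass
from typing import Optional


# Hypothetical annotation categories; the paper's actual label set is richer.
@dataclass(frozen=True)
class UnitLabel:
    phone: str       # phone identity
    left_ctx: str    # segmental context: class of the preceding phone
    right_ctx: str   # segmental context: class of the following phone
    stress: str      # prosodic category: "stressed" / "unstressed"
    position: str    # prosodic category: position in the phrase


@dataclass
class DatabaseUnit:
    label: UnitLabel
    waveform_ref: str  # reference to a pitch-synchronously segmented waveform region


def select_unit(target: UnitLabel, database: list[DatabaseUnit]) -> Optional[DatabaseUnit]:
    """Return the candidate whose annotation categories best match the target.

    Matching is purely categorical (no spectral distance): the phone identity
    must agree, and each further agreeing category adds one point to the score.
    """
    candidates = [u for u in database if u.label.phone == target.phone]
    if not candidates:
        return None

    def score(unit: DatabaseUnit) -> int:
        l = unit.label
        return (
            (l.left_ctx == target.left_ctx)
            + (l.right_ctx == target.right_ctx)
            + (l.stress == target.stress)
            + (l.position == target.position)
        )

    return max(candidates, key=score)


if __name__ == "__main__":
    # Request an unstressed, phrase-final /a/ with the given neighbouring contexts.
    db = [
        DatabaseUnit(UnitLabel("a", "t", "k", "stressed", "medial"), "seg_001"),
        DatabaseUnit(UnitLabel("a", "t", "k", "unstressed", "final"), "seg_002"),
    ]
    best = select_unit(UnitLabel("a", "t", "k", "unstressed", "final"), db)
    print(best.waveform_ref if best else "no candidate")  # -> seg_002
```

In such a scheme, the selected units' labels together form the acoustic phonetic-prosodic transcription that the acoustic processor would consume when concatenating the corresponding waveform segments.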