
python - How to decode unicode in a Chinese text

I have a file with some text in it, and I am trying to read the file, split the words by spaces, and save them into a list. Below is my code:

with open('result.txt', 'r') as f:
    data = f.read()

print 'What type is my data:'
print type(data)

for i in data:
    print "what is i:"
    print i
    print "what type is i"
    print type(i)

    print i.encode('utf-8')

Below is my error message (posted as a screenshot in the original, not reproduced here).

Someone please help!

Update:

I am going to describe what I am trying to do in more detail here, to give people more context. The goal is:

1. Take a Chinese text and break it down into sentences by detecting basic ending punctuation marks.
2. Take each sentence and use the tool jieba to tokenize the characters into meaningful words. For instance, the two Chinese characters 學 and 生 will be grouped together to produce the token '學生' (meaning "student").
3. Save all the tokens from each sentence into a list, so the final list contains one inner list per sentence in the paragraph.

# coding: utf-8

import jieba

cutlist = "。!?".decode('utf-8')
test = "【明報專訊】「吉野家」and Peter from US因被誤傳採用日本福島米而要報警澄清,並自爆用內地黑龍江米,日本料理食材來源惹關注。本報以顧客身分向6間日式食店查詢白米產地,其中出售逾200元日式豬扒飯套餐的「勝博殿日式炸豬排」也選用中國大連米,誤以為該店用日本米的食客稱「要諗吓會否再幫襯」,亦有食客稱「好食就得」;壽司店「板長」店員稱採用香港米,公關其後澄清來源地是澳洲,即與平價壽司店「爭鮮」一樣。有飲食界人士稱,雖然日本米較貴、品質較佳,但內地米品質亦有保證。"

# FindToken checks whether the character is an ending punctuation mark
def FindToken(cutlist, char):
    if char in cutlist:
        return True
    else:
        return False

def cut(cutlist, test):
    '''
    cut checks each item in a string list. If the item is not an ending
    punctuation mark, it is saved into a temporary list called line. When an
    ending punctuation mark is encountered, the complete sentence collected
    in line is saved into the list l. Each sentence in l is then tokenized
    with jieba and the resulting token lists are appended to final, which is
    returned.
    '''
    l = []
    line = []
    final = []

    for i in test:
        if i == ' ':
            line.append(i)

        elif FindToken(cutlist,i):
            line.append(i)
            l.append(''.join(line))
            line = []
        else:
            line.append(i)

    temp = [] 
    # This part iterates over each complete sentence and then groups its characters according to context.
    for i in l:
        # This is the function that breaks a sentence of characters down and groups them into phrases
        process = list(jieba.cut(i, cut_all=False))

        # This puts all the tokenized character phrases of a sentence into a list.
        # Each sentence belongs to one list.
        for j in process:
            temp.append(j.encode('utf-8')) 
            #temp.append(j) 
        print temp 

        final.append(temp)
        temp = [] 
    return final 


cut(list(cutlist),list(test.decode('utf-8')))

Here is my problem, when I output my final list, it gives me a list of the following result:

[u'\u3010', u'\u660e\u5831', u'\u5c08\u8a0a', u'\u3011', u'\u300c', u'\u5409\u91ce\u5bb6', u'\u300d', u'and', u' ', u'Peter', u' ', u'from', u' ', u'US', u'\u56e0', u'\u88ab', u'\u8aa4\u50b3', u'\u63a1\u7528', u'\u65e5\u672c', u'\u798f\u5cf6', u'\u7c73', u'\u800c', u'\u8981', u'\u5831\u8b66', u'\u6f84\u6e05', u'\uff0c', u'\u4e26', u'\u81ea\u7206', u'\u7528\u5167', u'\u5730', u'\u9ed1\u9f8d', u'\u6c5f\u7c73', u'\uff0c', u'\u65e5\u672c\u6599\u7406', u'\u98df\u6750', u'\u4f86\u6e90', u'\u60f9', u'\u95dc\u6ce8', u'\u3002']

How can I turn a list of unicode strings like this into normal strings?


1 Reply


Let me give you some hints:

  • You'll need to decode the bytes you read from UTF-8 into Unicode before you try to iterate over the words.
  • When you read a file, you won't get Unicode back. You'll just get plain bytes. (I think you knew that, since you're already using decode().)
  • There is a standard function to "split by space" called split().
  • When you say for i in data, you're saying you want to iterate over every byte of the file you just read. Each iteration of your loop will be a single character. I'm not sure if that's what you want, because that would mean you'd have to do UTF-8 decoding by hand (rather than using decode(), which must operate on the entire UTF-8 string).

In other words, here's one line of code that would do it:

open('file.txt').read().decode('utf-8').split()

If this is homework, please don't turn that in. Your teacher will be onto you. ;-)
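
The same idea written out with a with statement, using the file name from the question (a sketch, assuming Python 2 and that result.txt is UTF-8 encoded):

with open('result.txt', 'r') as f:
    words = f.read().decode('utf-8').split()
# each item in words is now a unicode string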


Edit: Here's an example of how to encode and decode Unicode characters in Python:

>>> data = u"わかりません"
>>> data
u'\u308f\u304b\u308a\u307e\u305b\u3093'
>>> data_you_would_see_in_a_file = data.encode('utf-8')
>>> data_you_would_see_in_a_file
'\xe3\x82\x8f\xe3\x81\x8b\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93'
>>> for each_unicode_character in data_you_would_see_in_a_file.decode('utf-8'):
...     print each_unicode_character
... 
わ
か
り
ま
せ
ん

The first thing to note is that Python (well, at least Python 2) uses the u"" notation (note the u prefix) on string constants to show that they are Unicode. In Python 3, strings are Unicode by default, but you can use b"" if you want a byte string.

As you can see, the Unicode string is composed of \uXXXX code points. When you read the file, you get a string of one-byte characters instead (which is equivalent to what you get when you call .encode()). So if you have bytes from a file, you must call .decode() to convert them back into Unicode. Then you can iterate over each character.
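
Applied to the list in the question, here is a minimal Python 2 sketch of turning those unicode tokens back into "normal" printable strings; tokens is a hypothetical name holding a small sample of that list:

tokens = [u'\u5409\u91ce\u5bb6', u'and', u'Peter']    # sample of the list in the question
byte_strings = [t.encode('utf-8') for t in tokens]    # unicode -> plain UTF-8 byte strings
print byte_strings[0]                                 # prints 吉野家 on a UTF-8 terminal
print u' '.join(tokens).encode('utf-8')               # or join first, then encode: 吉野家 and Peter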

Splitting "by space" is something unique to every language, since many languages (for example, Chinese and Japanese) do not uses the ' ' character, like most European languages would. I don't know how to do that in Python off the top of my head, but I'm sure there is a way.

