
Will Dwinnell  2010-03-10 22:02:14 
"Neural networks take in binary digits and output fits (fuzzy bits) which is a number between 0 and 1 but never absolute (e.g. 0.4323, 0.9, 0.1). For this reason, to make use of the output, we have to round off the fits to form bits (binary units)."
Most neural network architectures will accept real-valued inputs, and will produce real-valued outputs (and other data which can be represented using reals, such as dummy variables). I'm not sure what you're referring to as "fuzzy bits", since neural network outputs can have a variety of ranges, not just between 0.0 and 1.0.
"Because neural networks utilize fuzzy logic, the standard system architecture is slightly different."
Most neural networks do not use conventional fuzzy calculus elements (fuzzy sets, fuzzy membership functions, fuzzy inference, hedges, etc.).
"A reasonable threshold would be anything greater than 0.8 should be 1. Anything lower than 0.2 should be 0. Anything in the middle means the network is not smart enough and requires more training."
Many useful neural networks exhibit a majority of output values between 0.2 and 0.8. This may be all the differentiation which is possible, and still is very useful in many applications. Further, examining the neural network output distribution directly is a poor way to determine the level of training. This is much better accomplished by assessing neural network performance on a validation set.
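The point above can be sketched in code. This is a minimal illustration, not a real model: `network_output` is a hypothetical stand-in for a trained network, and the validation data is synthetic. It shows that most outputs can land between 0.2 and 0.8 while validation-set accuracy, the better measure of training, is still excellent.

```python
import random

random.seed(42)

def network_output(features):
    # Hypothetical stand-in for a trained network: a score in [0.0, 1.0].
    x, y = features
    return max(0.0, min(1.0, 0.5 + 0.4 * (x - y)))

# Synthetic held-out validation data: the true label is 1 when x > y.
validation = [(random.random(), random.random()) for _ in range(500)]
labeled = [(f, 1 if f[0] > f[1] else 0) for f in validation]

# Assess training quality by accuracy on the validation set, at a 0.5 cutoff.
correct = sum((network_output(f) >= 0.5) == (label == 1)
              for f, label in labeled)
accuracy = correct / len(labeled)

# Fraction of outputs falling in the "ambiguous" 0.2-0.8 band.
mid_band = sum(0.2 <= network_output(f) <= 0.8
               for f, _ in labeled) / len(labeled)

print(f"accuracy={accuracy:.3f}, outputs in [0.2, 0.8]: {mid_band:.0%}")
```

Here the great majority of outputs sit inside the 0.2-0.8 band, yet the network classifies the held-out data almost perfectly: the middle band does not, by itself, indicate insufficient training.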
I suggest the following as a starting place for artificial neural networks:
http://www.faqs.org/faqs/ai-faq/neural-nets/part1/


2. Re: Neural Networks 



Louis Stowasser  2010-03-11 02:38:47  In reply to message 1 from Will Dwinnell 
Yes, you are right, but I was writing in regard to how Neural Mesh manages neural networks (which I guess I should have clarified).
As the article says, Neural Mesh takes binary inputs and outputs numbers between 0.0 and 1.0. In the next version I will implement bipolar outputs (-1.0 to 1.0). I read the term "fuzzy bits" in a book about fuzzy logic, so I assume that author invented it.
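For the bipolar outputs mentioned above, a simple affine rescaling maps a [0, 1] activation onto [-1, 1], and `tanh` is the standard bipolar counterpart of the logistic sigmoid. This sketch is my own illustration, not Neural Mesh code:

```python
import math

def sigmoid(x):
    # Logistic sigmoid: output in (0.0, 1.0).
    return 1.0 / (1.0 + math.exp(-x))

def to_bipolar(y):
    # Affine map from a [0, 1] activation to the bipolar range [-1, 1].
    return 2.0 * y - 1.0

# tanh relates to the sigmoid by the identity tanh(x) = 2*sigmoid(2x) - 1.
for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(math.tanh(x) - to_bipolar(sigmoid(2.0 * x))) < 1e-12
```

In practice a bipolar network usually just swaps the output activation from sigmoid to `tanh` rather than rescaling after the fact.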
"Most neural networks do not use conventional fuzzy calculus elements (fuzzy sets, fuzzy membership functions, fuzzy inference, hedges, etc.)."
What do you mean by "do not use conventional fuzzy calculus elements"? Do you mean in the algorithms, in the inputs, or in the outputs?
"Many useful neural networks exhibit a majority of output values between 0.2 and 0.8. This may be all the differentiation which is possible, and still is very useful in many applications. Further, examining the neural network output distribution directly is a poor way to determine the level of training. This is much better accomplished by assessing neural network performance on a validation set."
This is true. I mainly wanted to convey how to turn these outputs into something more useful, i.e., binary.
Thanks,
Louis Stowasser 
