Following my previous post on a "phonetic basis" to language, I should make it clear that I consider signed languages to be close analogues of spoken languages. (From what I know, this is a commonly-accepted fact among linguists.)
However, one thing linguistics hasn't really provided is a unified theory of what happens cognitively in sign language -- part of the problem is that a lot of people still perceive sign languages as proxies for spoken language, or an alternative to speaking; that is, they take signed systems to be something like writing systems. This doesn't bode well for scientific attention or funding...
We know sign language systems are very similar to phonetic systems: they can be analysed like phonetic systems as far as dynamics like sound change, lexical diffusion and acquisition go, and signs can be broken down into equivalents of phonemes. The most pertinent difference is generally that a signed language is often capable of much more co-articulation than a spoken one. (Co-articulation is very common in all spoken languages, but we can usually co-articulate sound elements only if their places of articulation are far enough apart. The /t/ in "tea", for example, is a co-articulated phoneme, composed of a consonant not unlike the Spanish [t] plus aspiration [h].) Nor is sign language ideographic. The fundamental permitted gestures in a sign language do not represent "ideas" -- like phonemes, by themselves they are often meaningless until combined. The gestures obey the recombination principle: if you make one gesture and then another to form a word, that word is not necessarily related in meaning to either gesture, much like "party" is not a semantic fusion of the concepts "par" and "tea".
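The recombination principle can be sketched in a few lines of toy code -- a minimal illustration, where the lexicon, its sound units and its glosses are all invented for the example rather than drawn from any real phonological analysis:

```python
# A minimal sketch of the recombination principle: meaning attaches to
# whole stored forms, not to any fusion of the parts' meanings. The
# lexicon and its glosses below are invented purely for illustration.

lexicon = {
    ("par", "tea"): "festive gathering",   # the word "party", as sound units
    ("par",): "golf score",
    ("tea",): "hot drink",
}

def meaning(units):
    """Look up a whole form; no rule composes the parts' meanings."""
    return lexicon.get(tuple(units), "no stored meaning")

print(meaning(["par", "tea"]))                    # festive gathering
print(meaning(["par"]), "+", meaning(["tea"]))    # golf score + hot drink
# The parts recombine freely, but the whole's meaning is not built from theirs.
```

The same lookup works whether the stored units are sounds or gestures, which is the point: in both modalities the units are meaningless building blocks.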
Signed languages are so similar to phonetic languages that they are often used as an alternate means to research child language acquisition, especially in crosslingual settings, because it's very common for the parents of deaf children not to be native speakers/signers of the language the child learns to speak/sign. (The other research sources that have been useful in the field are migrant families and pidgin-creole communities.)
Similarities include a critical window for sign language acquisition: children who learn to sign early invariably become fluent signers, excepting cases like those with neurological disorders, while children who hit puberty before learning to sign generally never become native-level signers except with intensive study -- and even then, errors are frequent and signing is jerkier and less fluid.
A deaf infant, instead of babbling vocally, will do the equivalent with his hands. In fact, the timeline for sign language acquisition among children is so similar to spoken language acquisition -- e.g. we can expect a six-year-old signer to be very fluent, replete with different inflections, conjugations, declensions, irregular constructions and syntactical phrase-shifting -- that biologically the mechanism for sign language acquisition must be very similar to the mechanism for spoken language acquisition.
Signed languages even obey all the rules of Chomsky's Universal Grammar. I won't go into hardcore syntax here, but the idea is that there are universal rules that govern language, where expressions can be analysed as sets of verb phrases and noun phrases, with embedding rules, phrase-order rules, and rules for how the order of a phrase can be shifted in different situations. An example frequently used for English-speaking audiences to illustrate the principle of shifting word orders is question formation -- which verb and subject do you invert? When asked to turn a sentence with a relative clause like "The goat that is in the garden is eating the flowers" into a question, invariably all the young children fluent enough to understand the sentence invert the right phrases. They do not form constructions like "goat the that is in the garden eating the flowers" or "is the goat that in the garden is eating the flowers?" -- and Chomsky argues this must be a consequence of a natural rule of universal grammar, via what is known as the argument from the poverty of the stimulus. For one, children learning language-specific rules generally demonstrate their acquisition of those rules in stages, just as we regularly observe young children saying "he hitted her!", "I bringed juice for doggie" or "she giggled me!" -- evidence that they haven't completely learnt the rule. But we observe no children making wrong inversions. There are other arguments too -- like how would children learn such an elaborate algorithm, and use it so fluently and automatically?
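To make the inversion example concrete, here's a toy sketch of the two competing rules. Everything in it -- the word list, the index of the main auxiliary, the function names -- is hand-annotated for illustration, not the output of any real parser:

```python
# A toy contrast between the two candidate question-formation rules,
# using a hand-annotated parse of the goat sentence. All names and
# indices here are invented for illustration.

sentence = ["the", "goat", "that", "is", "in", "the", "garden",
            "is", "eating", "the", "flowers"]

MAIN_AUX = 7   # index of the main clause's "is"; the embedded "is" sits at 3

def linear_rule(words):
    """Front the first auxiliary found -- the rule children never use."""
    i = words.index("is")                  # blindly finds the embedded "is"
    return [words[i]] + words[:i] + words[i + 1:]

def structure_rule(words, aux):
    """Front the main clause's auxiliary -- the rule children do use."""
    return [words[aux]] + words[:aux] + words[aux + 1:]

print(" ".join(linear_rule(sentence)) + "?")
# -> is the goat that in the garden is eating the flowers?   (ungrammatical)

print(" ".join(structure_rule(sentence, MAIN_AUX)) + "?")
# -> is the goat that is in the garden eating the flowers?   (correct)
```

Note that the linear rule is by far the simpler one to state and compute -- yet it's the structure-dependent rule, which requires knowing where the subject NP ends, that children apply without being taught.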
Teenage and adult learners of sign languages, however, generally commit many violations of universal grammar, probably because they are not using their L1 cognitive machinery to process the language -- machinery which would automatically organise words into phrases (e.g. mentally organising "goat that is in the garden" into a noun phrase, or NP) and clauses -- just like learners of spoken languages. The ability to analyse a sign language along noun-phrase/verb-phrase lines wouldn't exist if it were an ideographic system; tell a linguistics student to find the noun phrase or verb phrase in a picture, painting or musical score and you will get protests about the silliness of the effort. Computer languages aren't organised along NP/VP lines either -- and not even veteran computer programmers can "think" in the computer languages they know well; it's always "code", and generally not a very effective medium of thought for humans, even though computers do very well performing vast amounts of symbolic operations in it.
Sign languages can be generated spontaneously, and are capable of undergoing a process called "creolisation", just like spoken languages. It's the spontaneous origins of many sign languages that make their study useful to many linguists working on spoken languages -- it sheds light on the origin of language in general. Children exposed to a stimulus composed of ungrammatical or pidgin language (e.g. migrant parents on a plantation, a colony composed of many ethnicities speaking each other's languages brokenly, non-native parents who sign imperfectly), without a good competing grammatical stimulus, will spontaneously generate corrective rules. The resulting language they produce obeys a grammar, complying with all the rules of universal grammar -- and is as complex and full-fledged as any other language. It's a good example of how grammar, though it may be culturally transmitted, is partly a consequence of universal biology. However, for spontaneous language generation to occur, you need at least two people -- a baby raised by wolves will not generate his own language, while there exist many documented cases of a secret, grammatically full-fledged language spoken between two close twins.
Spontaneous language generation can fuse different source languages -- the most prominent examples are creoles like Haitian Creole, which combined elements from French, Spanish, African and indigenous languages into a new grammatical system; I speak a creole called Singlish, with elements from Hokkien/Teochew Chinese, Malay and English, among others. With signed languages, the most prominent example is the emergence of a single "Nicaraguan Sign Language" (LSN) from the pooling together of home sign after the first schools for the deaf were opened in Nicaragua in 1979. Suddenly, previously isolated deaf children were pooling signs on playgrounds, school buses and in classrooms, then becoming fast friends and signing to each other in each other's homes... However, this LSN was like the pidgin languages that the parents of immigrant children speak, a mixture of the various languages of their new environment; the conventions were often irregular, with jerky, non-fluid signing, and trends rather than real grammatical rules. The younger children exposed to LSN weren't content; they took it and transformed it into Idioma de Señas de Nicaragua (ISN), spontaneously standardising many of the grammatical rules and making obligatory many patterns that were previously mere trends. The new language also simplified many of the common constructions, as a native language would, and showed all the classic assimilation rules found in natural spoken language. How a group of 4- to 8-year-olds achieves such efficient consensus on a matter like creating a unified "fusion" language has to be pretty fascinating, if you ask me; the process is probably unconscious and memetic. I imagine a single child having a particularly ingenious way of signing something; some other child sees it and with excited eyes gestures something to the effect of "oh, that's such a cool sign!", and soon everyone's copying it. It's such a good meme that it outcompetes all other signing patterns for the same concept or grammatical rule; some signs occupy a slightly different niche (slightly different connotations, etc.) and survive, becoming the signed equivalents of synonyms. Repeat for each convention. The process of creolisation is effectively identical for speaking children, only with spoken words.
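As a thought experiment, that memetic competition can be sketched as a toy simulation. To be clear, the variant names, "appeal" scores and copying rule below are invented assumptions for illustration, not data about LSN or ISN:

```python
# A toy memetic-competition model of how one sign variant might come to
# dominate a playground: children repeatedly observe a peer's variant and
# copy it with probability proportional to its "appeal".

import random
from collections import Counter

random.seed(0)

APPEAL = {"sign_A": 1.0, "sign_B": 1.3, "sign_C": 1.1}   # relative catchiness

# Thirty children start out with random variants for a single concept.
children = [random.choice(list(APPEAL)) for _ in range(30)]

for _ in range(500):
    learner, model = random.sample(range(len(children)), 2)
    seen = children[model]
    # Copy the observed variant with probability proportional to its appeal.
    if random.random() < APPEAL[seen] / max(APPEAL.values()):
        children[learner] = seen

print(Counter(children))   # after enough rounds, one variant usually dominates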
There's a good argument for the idea that, objectively, all languages are equally difficult for the infant, and ultimately all have the same complexity -- and are biologically constrained to be that way. Latin has a moderately complex inflection system that tortures high school students, but on the other hand its word order rules are loose and its phonology relatively simple; Chinese languages have little inflectional grammar but complex syntactical and phonological rules. When English dropped the vast majority of its gender, case and conjugation systems, it converted other previously-ordinary words into grammatical auxiliaries and suddenly gained a complex word order system; and when English dropped its vowel length distinction, it also started recognising a large number of vowel phonemes that previously didn't exist, so that English has many vowels compared to most other languages. English vowels are a torture for Arabic speakers, who have only three vowel qualities yet an abundance of exotic consonants. In fact, a general rule I'd like to propose is that when a language undergoes a change that simplifies its grammar or adds complexity to it, it must also undergo a complementary change that compensates in the opposite direction -- in a different area, of course.
So that would imply signed languages must be equally as complex as spoken languages -- just extracting grammatical information from the secondary visual cortex rather than, say, the auditory cortex. Curiously, the frontal cortex, auditory cortex and visual cortex converge roughly around the Sylvian Fissure, along which are located several important sites for language processing (though recent research suggests we've misidentified the actual locations of Broca's and Wernicke's areas -- partially because of the imprecision of 1800s methods). Research into signed languages may have implications for phonics and for children who learn spoken languages and then learn to read and write in them, because it shows the brain can process visual information grammatically. It is not the case, however, that the visual information in sign language is processed as a direct representation of symbolic thought and ideas: damage the same language centres that speaking people use, and sign language ability goes with them. In fact, Google tells me there's some good research out there on "sign language aphasia".
However, there's some further complexity for neuroscientific research into linguistic processing and the shared machinery that spoken and signed languages would both use, because of neuroplasticity. If you damage or lesion the areas commonly regarded as Broca's and Wernicke's areas in a young enough child (say, in a bad fall at two weeks of age), the child will probably still grow up into a normal, healthy, fast-talking young child. Why? The appropriate neural tissue couldn't develop in the place it usually develops, so it develops somewhere else, sometimes in radically different areas (the remnants of such lesions can show up 30 years later on medical imaging scans in an otherwise neurologically normal patient). But neuroplasticity is also probably part of the mechanism for native-level sign language processing: if a language centre is supposed to be receiving signals from the auditory cortex but isn't (or isn't receiving enough signals, as with a damaged ear or damaged auditory cortex), some neuroplasticity mechanism probably has it adapt to processing input from the visual cortex instead, growing and sending out networks and pioneer neurons accordingly. Notably, the degree of neuroplasticity is greatest in children -- if being fluent in a language requires your language processing centres to send out pioneer axons (or even new neurons or networks) to the appropriate auditory or visual cortex and vice versa, and you're 13 years old ... well, hard luck.