Wordsi by Fozzie The Beat

A blog about computational semantics, life, and anything else.

Puzzle Time: Transforming Names

This post starts what will hopefully be a new trend: computational solutions to NPR’s Sunday Puzzle. In this week’s puzzle, we have to

Think of a common man’s name in four letters, one syllable. Move each letter exactly halfway around the alphabet. For example, A would become N, N would become A, and B would become O. The result will be a common woman’s name in two syllables. What names are these?

Word puzzles can often be pretty hard for automated computer programs. This is in part what made IBM’s Watson so darn cool. But thankfully this week’s puzzle is amazingly simple for a program; in fact, I’d even say it’s ideal for one. To solve it, we just need two simple steps:

  1. Gather a bunch of male and female names
  2. Process the male names as noted to see if we have any matching female names

Step 1 is perhaps the hardest since it requires gathering not only names, but common names in English. However, this handy site has done the work for us. They have listings of common English names based on a variety of categories: region, ranking, gender, alphabetical, etc. For our purposes, we’ll just grab a long list of the most frequent male and female names from here and here. All the names are nicely laid out in a table with a pretty regular format. In fact, the format is so regular we can write a regular expression for it. Here’s an example of the table:

<table align="left" CELLPADDING="3" CELLSPACING="3" style="table1">
<tr bgcolor="F5FDD9"><td><b>Name&nbsp;&nbsp;</td><td><b>% Frequency&nbsp;&nbsp;</td><td><b>Approx Number&nbsp;&nbsp;</td><td><b>Rank&nbsp;&nbsp;</td></tr>
<tr bgcolor="white"><td>AARON</td><td>0.24</td><td> 350,151 </td><td>77</td></tr>
<tr bgcolor="white"><td>ABDUL</td><td>0.007</td><td> 10,213 </td><td>831</td></tr>
<tr bgcolor="F5FDD9"><td>ABE</td><td>0.006</td><td> 8,754 </td><td>854</td></tr>
<tr bgcolor="white"><td>ABEL</td><td>0.019</td><td> 27,720 </td><td>485</td></tr>
<tr bgcolor="white"><td>ABRAHAM</td><td>0.035</td><td> 51,064 </td><td>347</td></tr>
<tr bgcolor="white"><td>ABRAM</td><td>0.005</td><td> 7,295 </td><td>1053</td></tr>
<tr bgcolor="F5FDD9"><td>ADALBERTO</td><td>0.005</td><td> 7,295 </td><td>1040</td></tr>
<tr bgcolor="white"><td>ADAM</td><td>0.259</td><td> 377,871 </td><td>69</td></tr>

Regular, right? It’s so darn simple we can use some simple command line tools to clean it up:

cat female_names_alpha.htm | grep '<tr bgcolor="' | sed "s/.*<td>\([A-Z]\+\).*/\1/" | tail -n +2 > female_names.txt

If we downloaded the female names into female_names_alpha.htm, the first part simply prints the contents of the html page to standard out. The second part, the grep command, catches all rows in the table, which thankfully share the same start prefix. Part three, the sed command, extracts the name nestled in each row element. And finally, the tail part eliminates the first row of the table, i.e. the header row, which grep accidentally catches. This’ll grab us a ton of names.

So what’s next? Well, we have a bunch of male and female names. But we’ve got too many; namely, we have names that clearly won’t work because they have too many or too few letters. So let’s do some more cleaning, but this time in Scala:

val maleNames = Source.fromFile(args(0)).getLines.filter(_.size == 4)

This dandy line reads the text from the file and keeps only the lines that have exactly four characters. Since our initial processing placed each name on its own line, this in effect removes any names that don’t have exactly four letters. If we do this with the female names, we’ll have lists of common four-letter names for the two genders. For later use, we’ll also turn the female names into a set by calling .toSet on the list, as sketched below.
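
Here’s roughly what that female-name side looks like. This is a sketch under the assumption that the cleaned names were saved to female_names.txt and passed in as the second program argument, which the post doesn’t spell out:

import scala.io.Source

// Keep only the four-letter names and build a Set for fast membership checks.
val femaleNames = Source.fromFile(args(1)).getLines.filter(_.size == 4).toSet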

Last, we can do the transformation on each male name to see if it’s a valid female name and then print out any combinations that match. The next three lines do this slickly:

maleNames.map(w => (w, w.map(l => (((l-'A'+13) % 26) + 'A').toChar).toString))
         .filter( bg => femaleNames.contains(bg._2))
         .foreach(println)

This works in three steps. First, the map call transforms each male name into a tuple holding the original male name and a potential female name, built by sliding each letter forward 13 places. With ASCII characters, that’s done by first grounding the letter to 0 (subtracting the char value of 'A'), adding 13, wrapping back into the range of the 26 letters with the modulo operator (%), and finally bumping the value back into the range of real ASCII characters by adding 'A'. That’s all done on the first line. Step two, the filter on the second line, throws out any generated names that don’t show up in our set of female names. And the final foreach just prints out any matches we get. With our name lists, this turns out to be:

(ERIN,REVA)
(EVAN,RINA)
(GLEN,TYRA)
(IVAN,VINA)
(RYAN,ELNA)

The rules of the game stipulated that the male name had to have one syllable and the female name two. We didn’t code for this at all since that part’s slightly more complicated, and since our pre-processing reduced the set of options down to five, we can easily pick out a valid answer by hand. In this case, “Glen” and “Tyra” or “Ryan” and “Elna” look like valid responses. I’d vote for the first pair since I haven’t heard “Elna” in forever.
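
If you did want to automate the syllable check, a rough heuristic is to count runs of consecutive vowels. This isn’t part of the original solution, and it trips over silent e’s and names like “Ryan”, but it’s enough to sanity-check the five candidates above:

// Count runs of vowels (treating 'y' as a vowel) as a crude syllable count.
def roughSyllables(name: String): Int =
  "[aeiouy]+".r.findAllIn(name.toLowerCase).length

// e.g. roughSyllables("GLEN") == 1, roughSyllables("TYRA") == 2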

Making Agglomerative Clustering Run in N^2 Time

Let’s say you’re an amateur zoologist and you’ve got a bunch of data describing parliaments of owls, gaggles of geese, peeps of chickens, and a train of jackdaws, but you don’t really know that you have these bird types. All you really have are descriptive features for each bird, like feather type, beak type, conservation status, feeding preferences, etc. Using just this data, you’d like to find out how many bird species you have and how similar each group is to the others, preferably in a nice graphical fashion. How would you do it?

The classic solution to the problem would be to use Hierarchical Agglomerative Clustering. In this case, agglomerative clustering would group together birds that have similar features into, hopefully, distinct species of birds. And there are a lot of packages out there that kind of do this for you, like Weka, Scikit-Learn, or plain old R. However, you’ve got a lot of birds to deal with and these standard packages are just taking way too long. Why are they taking way too long? Because they’re doing agglomerative clustering the slow ways.

So what do the slow ways look like? Well, pretty much all ways of doing agglomerative clustering start by building an Affinity Matrix that simply measures how similar two birds are to each other:

Building the Affinity Matrix
// Similarities are doubles, so initialize the matrix with 0.0.
val m = Array.fill(numBirds, numBirds)(0.0)
for ((bird1, i) <- birds.zipWithIndex;
     (bird2, j) <- birds.zipWithIndex)
    m(i)(j) = similarityBetween(bird1, bird2)

This little snippet just compares each bird against all other birds and stores how “similar” they are to each other in terms of their descriptive features. What next? Well, you need another data structure to keep track of your bird groups. We’ll do this with just a map from group identifiers to sets of bird identifiers. And since this algorithm is named “agglomerative”, we gotta agglomerate things, so we’ll start by putting every bird in its own bird group:

Starting off the bird groups
    // First get each bird's id, then create a tuple holding the bird id
    // and a set with that id as the only element.  Then turn this list of
    // tuples into a map from group ids to sets of bird ids.
    var groupMap = birds.zipWithIndex.map( birdIndex => (birdIndex._2, Set(birdIndex._2))).toMap

So that was simple. Now come the complicated bits that lead you to either a slow version, a fairly slow version, or a super slow version of agglomerative clustering. Before we tackle any of these, let’s make two simplifications: assume we just want the final groups our birds end up in, and that we know how many bird groups we want to find. Agglomerative Clustering can give you not only this information but a whole tree showing how birds are linked together; the book-keeping for that is tricky, though, and doesn’t really affect the issues we’re focusing on here. With that out of the way, what does the super slow method look like?

The Super Slow Agglomerative Clustering
while (groupMap.size > numClusters) {
    val groupSet = groupMap.toSet
    // Find the two most similar groups in our map.
    var bestSim = 0.0
    var bestGroups = (-1, -1)
    for (Array((id1, birds1), (id2, birds2)) <- groupSet.subsets(2).map(_.toArray)) {
        // Get the similarity between the two bird groups using the raw
        // values in m.
        val sim = groupSim(id1, birds1, id2, birds2, m)
        if (sim > bestSim) {
            bestSim = sim
            bestGroups = (id1, id2)
        }
    }

    // Now merge the two groups together into a new group
    val newGroup = groupMap(bestGroups._1) ++ groupMap(bestGroups._2)
    // Now remove the two groups from the map
    groupMap = groupMap - bestGroups._1
    groupMap = groupMap - bestGroups._2
    // Update the map to store the new group.  Since the old id's are
    // removed, we can just re-use one of them without any change.
    groupMap = groupMap ++ Map((bestGroups._1 -> newGroup))
}

That’s it! It’s super simple and super slow. Why is it so slow? Well, in general, you’ll run the big while loop O(N) times, where N is your number of birds. Then the block inside compares every possible pairing of bird groups, and since the number of groups is proportional to the number of birds, that’s O(N^2) comparisons per iteration. Throw these two bits together and you get a runtime of O(N^3)! When you have 100,000 birds, that’s on the order of 10^15 group comparisons, which is super slow.
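
I never showed groupSim above, so here’s a sketch of one reasonable choice: plain average linkage computed straight from the raw affinity matrix. Note that it walks every pair of birds across the two groups, which only adds to the cost as the groups grow:

// Average pairwise similarity between two groups, read directly from m.
// (The group ids aren't needed for this particular criterion.)
def groupSim(id1: Int, birds1: Set[Int],
             id2: Int, birds2: Set[Int],
             m: Array[Array[Double]]): Double = {
    var sum = 0.0
    for (b1 <- birds1; b2 <- birds2)
        sum += m(b1)(b2)
    sum / (birds1.size * birds2.size)
}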

So what can you do to speed that up? Well, the main goal of the big loop over pairs of groups is to find the most similar pair of bird groups, so an obvious improvement is to use a priority queue. Let’s see how that looks:

Agglomerative Clustering, Priority Queue Style
import scala.collection.mutable.PriorityQueue

// Create a new priority queue that tracks the similarity of two groups and
// their id's.
val groupQueue = new PriorityQueue[(Double, Int, Int)]()
val groupSet = groupMap.toSet
for (Array((id1, birds1), (id2, birds2)) <- groupSet.subsets(2).map(_.toArray)) {
    // Get the similarity between the two bird groups using the raw
    // values in m.
    val sim = groupSim(id1, birds1, id2, birds2, m)
    groupQueue.enqueue((sim, id1, id2))
}

var nextId = groupMap.size
while (groupMap.size > numClusters) {
    // Keep dequeuing until both groups in the entry still exist; stale
    // entries refer to groups that have already been merged away.
    var best = (0.0, -1, -1)
    do {
        best = groupQueue.dequeue
    } while (!(groupMap.contains(best._2) && groupMap.contains(best._3)))

    // Now merge the two groups together into a new group
    val newGroup = groupMap(best._2) ++ groupMap(best._3)
    // Now remove the two groups from the map
    groupMap = groupMap - best._2
    groupMap = groupMap - best._3

    // Create a new id for this group.
    val newId = nextId

    // Next, add in the similarity between the new group and all existing
    // groups to the queue.
    for ( (id, group) <- groupMap )
        groupQueue.enqueue((groupSim(newId, newGroup, id, group, m),
                            newId, id))

    // Finally, update the map to store the new group.
    nextId += 1
    groupMap = groupMap + (newId -> newGroup)
}

Now this approach is definitely faster, but it’s also horribly memory inefficient. Even if you have an array-based priority queue that doesn’t allocate any extra memory for each entry beyond the tuple being stored, this approach has one major problem: after every merge step, O(N) elements in the queue immediately become invalid. Since the two merged groups no longer exist, any comparison between them and other groups is moot. However, the priority queue doesn’t easily permit removing them, so they just float around. This is precisely why the dequeue call is wrapped in a do-while loop that ensures the returned groups still exist; it’s highly likely that they won’t. I think we can do better, both in terms of speed and in terms of memory, so let’s blaze through a faster method in a couple of steps.

First, we need to change the setup of the algorithm. The previous two methods just depended on some simple data structures. This approach requires two additional structures for bookkeeping: a way to track chains of nearest neighbors and a way to track the set of clusters not already part of the chain.

Agglomerative Clustering, Blazingly Fast Style Step 1
import scala.collection.mutable.{HashMap, HashSet, Stack}

// A mapping from cluster id's to their point sets.
val clusterMap = new HashMap[Int, HashSet[Int]]()

// A set of clusters to be considered for merging.
val remaining = new HashSet[Int]()

// Create a cluster for every data point and add it to the cluster map
// and to the examine set.  adj is the affinity matrix we called m earlier.
for (r <- 0 until adj.size) {
    remaining.add(r)
    clusterMap(r) = HashSet(r)
}

// Create a stack to represent the nearest neighbor chain.  This is the real
// source of magic.
val chain = new Stack[(Double, Int)]()

// Add in a random node from remaining to start the neighbor chain.
initializeChain(chain, remaining)

The clusterMap structure replaces our groupMap and does the same job, but the remaining set and the chain stack hold the crux of this approach. Instead of trying to find the best link between any two clusters, we’re going to build chains of nearest neighbors. The first node in the chain can be anything; the second node is simply the nearest neighbor of the first. To add a third node, we find the nearest neighbor of the second among all nodes not already in the chain, and so on. We keep doing this until the last two nodes in the chain are reciprocal nearest neighbors, that is, two nodes that are more similar to each other than to any other node in or outside of the chain. Upon finding such a pair, we merge them immediately. Then we just repeat the process until we have the desired number of clusters. In Scala, this turns out to be pretty simple to do:

Agglomerative Clustering, Blazingly Fast Style Step 2
// Find the nearest neighbors and merge as soon as reciprocal nearest
// neighbors are found.
while (clusterMap.size > numClusters) {
    // Get the last link in the chain.
    val (parentSim, current) = chain.top

    // Find the nearest neighbor using the clusters not in the chain
    // already.
    val (linkSim, next) = findBest(remaining, adj, current)

    // Check the similarity for the best neighbor and compare it to that of
    // the current node in the chain.  If the neighbor sim is larger, then
    // the current node and its parent aren't RNNs.  Otherwise, the current
    // node is RNNs with its parent.
    if (linkSim > parentSim) {
        // The current node to push is more similar to the last node in the
        // chain, so the last node and the next to last nodes can't be
        // reciprocal nearest neighbors.
        chain.push((linkSim, next))
        remaining.remove(next)
    } else {
        // The current node is less similar to the last node than the last
        // node is to its predecessor in the chain, so the last two nodes in
        // the chain are just the kind of nodes we're looking to merge.

        // Pop the current node from the top. 
        chain.pop
        // Pop the parent of the best node.
        val (_, parent) = chain.pop

        // These are the two nodes we'll be merging.  The node we
        // found above is left in the remaining set and is essentially
        // forgotten.

        // Remove the current and parent clusters from the cluster map
        // and extract the sizes.
        val (c1Points, c1Size) = removeCluster(clusterMap, current)
        val (c2Points, c2Size) = removeCluster(clusterMap, parent)
        val total = c1Size + c2Size

        // Update the similarity between the new merged cluster and all
        // other existing clusters.  
        for (key <- clusterMap.keys)
            adj(current)(key) = updatedSimilarity(
                adj, current, c1Size, parent, c2Size, key)

        // Replace the mapping from current to now point to the merged
        // cluster and add current back into the set of remaining
        // clusters so that it's compared to nodes in the chain.
        clusterMap(current) = c1Points ++ c2Points
        remaining.add(current)

        // If the chain is now empty, re-initialize it.
        if (chain.size == 0)
            initializeChain(chain, remaining)
    }
}

And that’s it! By focusing on reciprocal nearest neighbors, the algorithm merges together clusters that would always be merged eventually, no matter how you find them. Furthermore, it’s remarkably easy and fast to find these nodes: as the chain grows, the pool of clusters that can still be added to it keeps shrinking.
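
The code above leans on two helpers that aren’t shown, initializeChain and findBest. Here’s a minimal sketch of what they might look like, assuming the mutable collections from step 1 and an affinity matrix adj that holds similarities:

// Seed the chain with an arbitrary cluster.  The sentinel similarity
// guarantees the first real neighbor always gets pushed on top of it.
def initializeChain(chain: Stack[(Double, Int)],
                    remaining: HashSet[Int]) {
    val start = remaining.head
    remaining.remove(start)
    chain.push((Double.NegativeInfinity, start))
}

// Find the cluster in remaining that is most similar to current.  This
// assumes adj stays symmetric; if you only refresh row adj(current) after a
// merge (as above), mirror the write with adj(key)(current) as well.
def findBest(remaining: HashSet[Int],
             adj: Array[Array[Double]],
             current: Int): (Double, Int) = {
    var best = (Double.NegativeInfinity, -1)
    for (id <- remaining) {
        val sim = adj(current)(id)
        if (sim > best._1)
            best = (sim, id)
    }
    best
}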

Oh, but there’s one other magic trick to making this super fast, and it depends on how you compute updatedSimilarity. The silly way to do it would be to traverse all the pairings between nodes in the new cluster and each other remaining cluster, but that in itself gets really slow as the clusters grow. Rather than doing that, these folks suggest recurrence relations that can be computed in constant time for any of the standard agglomerative criteria. Crazy, right? But just crazy enough to work correctly and be too fast to believe.
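
To make that concrete, here’s one way updatedSimilarity could look for plain average linkage (the other standard criteria have analogous recurrences): the similarity between the merged cluster and any other cluster is just the size-weighted mean of the two old similarities, so each update takes constant time.

// Average-linkage update: combine the two old rows using the cluster sizes.
def updatedSimilarity(adj: Array[Array[Double]],
                      c1: Int, c1Size: Int,
                      c2: Int, c2Size: Int,
                      key: Int): Double =
    (c1Size * adj(c1)(key) + c2Size * adj(c2)(key)) / (c1Size + c2Size)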

Moral Dilemmas

What is the nature of a moral dilemma? How moral can any dilemma be? Are some dilemmas inherently free of moral weight, or can any problem be cast into some level of moral quandary? And if that’s true, how do you get to this quandary state and where do you go from there?

These questions have been bubbling around in my mind for the past couple of months, ever since I read a New York Times op-ed about moral dilemmas and a new book called “Lost in Transition”. In the op-ed, David Brooks notes that the authors of “Lost in Transition” found that modern high school students are unequipped to think about things in moral terms. For instance, the authors posed some questions to students and said the following:

Not many of them have previously given much or any thought to many of the kinds of questions about morality that we asked.

I haven’t read the book in detail, so I can’t really comment on its full findings, proposals, or methodology, and I’m not about to try now. Instead, I got to asking myself the many questions up above. In particular, I wanted to know how I could pose daily activities as moral dilemmas. I don’t often put things in directly moral terms, but I was a Teaching Assistant for a class focused on Ethics in Engineering, so I should be equipped to do some pretty solid moral analysis if I apply myself.

So, I’d like to try and break down some seemingly non-moral problems in totally moral terms. And to start, let’s go with Doing the dishes.

Now, normally, you might look at the task of dishes and think that it’s just something you have to do. Often it’s a little boring, or even a little unpleasant based on your cooking forays, but it’s not too often that people feel morally conflicted about when and whether or not they do their dishes. But if you live with other people and share a common cooking space, I would argue that it is inherently a moral issue. Namely, it’s an issue about acting with empathy.

According to Wikipedia, the world’s coolest encyclopedia, empathy is defined as

the capacity to recognize and, to some extent, share feelings (such as
sadness or happiness) that are being experienced by another sentient or
semi-sentient being

So how does empathy relate at all to doing the dishes? Well, by living in an apartment shared with other people, your actions implicitly affect the livelihood and wellbeing of others. So let’s break down the task of dishes into two aspects: your choice of actions and the possible effects it can have on others. When it comes to doing dishes, there are really only two choices you can make:

  1. do the dishes
  2. don’t do the dishes

And everyone in the house will likely have one of two reactions to dirty dishes:

  1. dirty dishes are not a problem
  2. dirty dishes are frustrating

With regards to clean dishes, it’s fairly safe to assume that everyone enjoys and appreciates them; I’ve certainly never met anyone bothered by a clean plate or glass. So how do these reactions connect to whether or not you should make those dishes sparkly clean? Well, by leaving dirty dishes you’re doing one of two things:

  1. you’re assuming that everyone is ok with dirty dishes
  2. you don’t care how others feel about dirty dishes

I would argue that both cases are a failure in empathy. By assuming that others share your reaction to dirty dishes, you’re failing to recognize that others in the house may feel otherwise (the first half of our empathy definition). If, for instance, a roommate has a horrible case of OCD and emotionally breaks down at the sight of food-encrusted dishes, failing to do the dishes may leave that person deeply wounded. And by simply ignoring how others feel about dishes (the second, sharing, half of empathy), even if you recognize that they may feel differently, you’re actively choosing not to share those feelings.

What happens if you do clean the dishes and leave things sparkly, fresh, and lemon scented? You’re still making an assumption about how others feel about dishes, but you’re making the most conservative assumption you can: most people prefer clean dishes, so by cleaning you’re less likely to cause any emotional breakdowns or, in more normal cases, mild frustration. Assuming you’d like others to be happy, doing the dishes is the only empathy-filled action available; it makes the fewest and most conservative assumptions about what makes others happy.

So that’s one way to break down the task of dishes into a moral dilemma. The next time you make a mess in the kitchen and ask yourself “should I clean those?”, you can have a handy debate about it.

Morphological Analysis Made Easy With Scala

Ever since I got involved with distributional semantics, I’ve been perplexed about how to handle morphed words, which happen to be just about every noun and verb in English. What is a morphed word in English, you ask? It’s pretty much any word that’s been changed to reflect things like past tense, plurality, or ownership. They’re conjugated verbs and more! But they always pose a massive problem in distributional semantics.

Think about it for a second: what’s the big difference between “cat” and “cats”? Not too much; the second simply says there are multiple occurrences of a “cat”. But what do people typically do in distributional semantics? One of two bad options: leave the two words as separate things, and thus split the information gained about “cat” across two words, or stem every word and throw away something important, like the multitude of “cat”. Both seem totally wrong and unsatisfactory.

So on and off I’ve searched for a good tool to do morphological analysis for English. Ideally, the analyzer could recognize the word “cats” and split it into two things, “cat” and “<p>”, so that you retain all your information about a cat and still know that there were a multitude of them when you see “cats”. And for verbs? You should see the same thing: “ran” could become something like “run” and “<past>” so that, again, you still know everything about running and that it happened in the past. But until today, I’ve never found a tool that does this both quickly and easily in a usable language.

That all changes today. Today I found the Lttoolbox-java library. It builds a large finite state machine for recognizing morphed words and figuring out the root word, i.e. “run” in “ran” and “cat” in “cats”, along with how the word was morphed, i.e. past tense for “ran” and plural for “cats”. All you have to do is get a listing of how words are morphed, load it into the Lttoolbox-java system, and analyze away your sentences.

So let’s take a spin on how to do this. First, you read the wiki instructions for Lttoolbox-java, download it, and compile it. Compiling is easy:

mvn deploy -DskipTests

The -DskipTests part seems needed since their unit tests don’t pass. But after that, you can start using the code in your favorite JVM-based language. My personal fave is Scala, so let’s run with that. So what next? Now you create a Finite State Transducer using a dictionary file and tell it to analyze words:

Setting up the Finite State Transducer Source Gist
// Create the Finite State Transducer processor.
val fstp = new FSTProcessor()

// Load the finite state transducer with the compiled dictionary file.  
fstp.load(openInFileStream(args(0)))

// Set up the transducer to do morphological analysis and make sure it's valid.
fstp.initAnalysis

Now that you have a transducer ready to analyze words, you just feed it stuff like this:

Loading the transducer with input and output Source Gist
val in = new StringReader("cats, dogs and blubber all running quickly!")
val out = new StringWriter()
// Do the analysis.
fstp.analysis(in, out)

You gotta create a reader and writer for their transducer interface. It’s funky, but it still leaves you with a pretty flexible interface. So now what? What does the output look like? If you feed it “cats, dogs and blubber all running quickly!”, the output is pretty ugly at first:

This will never look pretty
^cats/cat<n><pl>$^,/,<cm>$
^dogs/dog<n><pl>$
^and/and<cnjcoo>$
^blubber/*blubber$
^all/all<adj>/all<adv>/all<predet><sp>/all<det><qnt><pl>/all<det><qnt><sp>/all<prn><qnt><mf><sp>$
^running/run<vblex><ger>/run<vblex><pprs>/run<vblex><subs>/running<adj>/running<n><sg>$
^quickly/quickly<adv>$

Beastly, no? So I figure it’s best to write some regular expressions to handle all of this. You’ll need a couple: one to split up words, one to match analyzed words, one for unrecognized words, and one to split up the tags. In Scala you can do this pretty easily:

Regular Expressions for handling output from Lttoolbox-java Source Gist
// 1: Recognize a fully analyzed word so that they can be tokenized.  In the
// above test case, "cats," will not be separated by white space so we require
// this more complicated splitting method.
val parseRegex = """\^.*?\$""".r
// 2: Recognize a word with morphological tags.
val morphredRegex = """\^(.+?)/(.+?)(<[0-9a-z<>]+>).*\$""".r
// 3: Recognize a word that could not be recognized.  The transducer prepends
// "*" to unrecognized tokens, so we match and eliminate it.
val unknownRegex = """\^(.+)/\*(.+?)\$""".r
// 4: A regular expression for matching morphological tags.  This is simpler
// than writing a splitting rule.
val featureRegex = """<.*?>""".r

Then all you need to do is run through the analyzed sentence and split it up into separate tokens, some for root words and some for morphed features. You can do that like this:

Parsing that business Lttoolbox-java Source Gist
val tokens = parseRegex.findAllIn(out.toString).map(parseMatch =>
// Match the current analyzed word as being morphed or unknown.  For morphed
// words, create a list of the lemma and the tags.  For unknown words just
// create a list of the lemma.
parseMatch.toString match {
    case morphredRegex(surface, lemma, tags) =>
        lemma :: featureRegex.findAllIn(tags).toList
    case unknownRegex(surface, lemma) =>
        List(lemma)
}).reduceLeft(_++_).filter(!rejectFeatures.contains(_))

This bit of code’s pretty sweet. You first iterate over each analyzed word with the first matcher. Then you match each word with the two word-level regular expressions: one for fully analyzed words and one for unrecognized words. After that it’s easy smeesy: you just split the tags up with the last regular expression and turn it all into a list. The last two bits at the end flatten the whole thing into a single list and let you filter out tags or tokens you don’t want.
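
As a concrete (and purely hypothetical) example, rejectFeatures could be a set of the tags you’d rather drop, defined before the snippet above:

// Hypothetical: toss the part-of-speech and comma markers, keeping lemmas
// plus the tags we actually care about, like <pl> and <ger>.
val rejectFeatures = Set("<n>", "<vblex>", "<adj>", "<adv>", "<cnjcoo>", "<cm>")

// For the sample sentence, tokens then comes out roughly as:
// List(cat, <pl>, ,, dog, <pl>, and, blubber, all, run, <ger>, quickly)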

So with that, you can now do simple and fast morphological analysis in Java, Scala, or even Clojure (but who’d do something silly like that?)!