Friday 15 January 2010

ios - Dragging Multiple Images -


I am trying to develop an analysis application that determines whether you are "smart": you take a picture of yourself and then drag markers onto your face where the nose, mouth, and eyes are. However, the code I have tried does not work:

  - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
      UITouch *touch = [[event allTouches] anyObject];
      CGPoint location = [touch locationInView:self.view];
      if ([touch view] == eye1) {
          eye1.center = location;
      } else if ([touch view] == eye2) {
          eye2.center = location;
      } else if ([touch view] == nose) {
          nose.center = location;
      } else if ([touch view] == chin) {
          chin.center = location;
      } else if ([touch view] == lip1) {
          lip1.center = location;
      } else if ([touch view] == lip2) {
          lip2.center = location;
      }
  }

  - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
      [super touchesEnded:touches withEvent:event];
  }

What is happening? When I use only a single image it works, but that is not useful for me. What can I do to make this work? The markers start at the bottom of the screen in a "toolbar" and the user then drags them onto the face. This is the end result I want:

 http://gyazo.com/0ea444a0edea972a86a46ebb99580b2e

There are two basic approaches:

  1. You can handle the various touch methods (touchesBegan, touchesMoved, etc.) in your controller or main view, or you can use a single gesture recognizer there. In that case, in touchesBegan or, if using a gesture recognizer, in the UIGestureRecognizerStateBegan state, get the locationInView: of the superview, and then check whether the touch is over one of your views by using CGRectContainsPoint, passing the frame of each of the views as the first parameter and the location as the second.

    Having identified which view the gesture began on, you would then move it in touchesMoved or, if using a gesture recognizer, in the UIGestureRecognizerStateChanged state, using translationInView:.
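
    For illustration only, a minimal sketch of this first approach (a single UIPanGestureRecognizer on the main view) might look something like the following; the handlePan: selector and the _draggedView / _originalCenter instance variables are names assumed for this sketch, not part of the original code:

      // Assumed setup, e.g. in viewDidLoad:
      //   UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc]
      //       initWithTarget:self action:@selector(handlePan:)];
      //   [self.view addGestureRecognizer:pan];

      - (void)handlePan:(UIPanGestureRecognizer *)gesture {
          CGPoint location = [gesture locationInView:self.view];

          if (gesture.state == UIGestureRecognizerStateBegan) {
              // Find which face-part view (if any) the gesture started on.
              // Assumes the face-part views are direct subviews of self.view.
              for (UIView *candidate in @[eye1, eye2, lip1, lip2, chin, nose]) {
                  if (CGRectContainsPoint(candidate.frame, location)) {
                      _draggedView = candidate;            // assumed UIView ivar
                      _originalCenter = candidate.center;  // assumed CGPoint ivar
                      break;
                  }
              }
          } else if (gesture.state == UIGestureRecognizerStateChanged && _draggedView) {
              // Move the selected view by the accumulated translation.
              CGPoint translation = [gesture translationInView:self.view];
              _draggedView.center = CGPointMake(_originalCenter.x + translation.x,
                                                _originalCenter.y + translation.y);
          } else if (gesture.state == UIGestureRecognizerStateEnded) {
              _draggedView = nil;
          }
      }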

  2. Alternatively (and easier, IMHO), you can create a separate gesture recognizer that you attach to each subview. That approach might look like the following. First, add your gesture recognizers:

      NSArray *views = @[eye1, eye2, lip1, lip2, chin, nose];
      for (UIView *view in views) {
          view.userInteractionEnabled = YES;
          UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self
                                                                                action:@selector(handlePanGesture:)];
          [view addGestureRecognizer:pan];
      }

    Then you can implement a handlePanGesture method:

      - (void)handlePanGesture:(UIPanGestureRecognizer *)gesture {
          CGPoint translation = [gesture translationInView:gesture.view];
          if (gesture.state == UIGestureRecognizerStateChanged) {
              gesture.view.transform = CGAffineTransformMakeTranslation(translation.x, translation.y);
              [gesture.view.superview bringSubviewToFront:gesture.view];
          } else if (gesture.state == UIGestureRecognizerStateEnded) {
              gesture.view.transform = CGAffineTransformIdentity;
              gesture.view.center = CGPointMake(gesture.view.center.x + translation.x,
                                                gesture.view.center.y + translation.y);
          }
      }
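
    Note the pattern here: while the pan is in progress the view is offset with a translation transform and brought to the front of its superview; only when the gesture ends is the accumulated translation committed to the view's center and the transform reset to identity.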
